From YouTube: IETF104-SECDISPATCH-20190325-1350
Description
SECDISPATCH meeting session at IETF104
2019/03/25 1350
https://datatracker.ietf.org/meeting/104/proceedings/
A: All right, welcome to the secdispatch working group. If this is not your intended destination, this is your last chance to get off the plane. This is the Note Well; you should have seen it at least a couple of times by now. These are the rules governing intellectual property rights in IETF contributions, so you should be aware of them and comply with them.
A: We are here to dispatch things, not to actually do work, so we're going to have some presentations about proposed new work for the security area, and we're going to decide what to do with it. We might drop it into an existing working group, or we might spin up a new working group to handle it — often a narrowly focused working group chartered to get a specific piece of work done.
A: We could recommend that an AD sponsor it, if the ADs are willing; we can kick it out for additional discussion; or we can say we shouldn't do the work here. That's roughly the spectrum of possible outcomes we have, so keep those in mind as you're hearing these presentations of proposed new work, because we're going to try to decide on one of those outcomes for each piece of new work proposed today. So here's the agenda.
D: So the problem that we're trying to solve is that large organizations typically need to keep their potentially tech-illiterate employees' internal key stores up to date across all devices. At the place where Micah and I work, we use PGP for email and for other purposes, such as code signing, and we have a lot of people coming and going all the time. So keys are frequently added or updated, and not all key identities, or the emails associated with them, correspond to a single control domain.
D: So we have certain employees whose preferred keys correspond to a Gmail address, others to MIT, others to The Intercept, and so existing solutions, such as Web Key Directory, aren't always workable in our organization. Next slide, please. In an email to the OpenPGP mailing list, where we've had some existing discussion, Micah wrote:
D: "One of the main purposes of key lists is to reduce the amount of work required to ensure that everyone in an organization, or anyone who wishes to communicate with members of that organization, has the correct public key for everyone else." So really, the purpose of the key list is to ensure that a certain set of keys is constantly kept in sync in the internal public key stores of either members of that organization or people who'd like to communicate with them. Notably, we are not distributing the keys themselves.
D: We are just distributing pointers to those keys, currently in the form of key fingerprints, which then allows clients to leverage existing OpenPGP key distribution systems, such as key servers or even Web Key Directory, to actually pull the public key content. Next slide, please. Thank you. So our solution is that organizations publish what is effectively a list of public key fingerprints, in the form of a signed key list.
D: This is a system that is currently in active use at First Look Media, The Intercept, and the Freedom of the Press Foundation, using a tool called GPG Sync, which Micah is the author of. It's completely open source, and it works incredibly well, at least for our use case. Next slide, please. So this is an example of a key list: you'll notice that there are a number of metadata fields and that it resembles some sort of key store.
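As a rough sketch of the structure being described (the field names here are illustrative assumptions, not the exact schema from the draft), a key list is a JSON document, published alongside a detached signature, whose entries point at keys by fingerprint:

```python
import json

# Illustrative key-list document; the real draft defines the exact schema.
keylist = {
    "metadata": {
        # Where a subscribing client finds the detached signature over this file.
        "signature_uri": "https://example.org/keylist.json.sig",
    },
    "keys": [
        {
            "fingerprint": "927F419D7EC82C2F149C1BD1403C2657CD994F73",
            "name": "Example Person",
            "email": "person@example.org",
            "comment": "code signing key",
        },
    ],
}

def fingerprints(doc):
    """Return the fingerprints a subscribing client would fetch from
    key servers or WKD -- the list carries pointers, not the keys."""
    return [entry["fingerprint"] for entry in doc["keys"]]

print(json.dumps(fingerprints(keylist)))
```

The point of the shape is that the list itself stays small and stable while the certificates it points at can be updated out of band.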
D: So, for example, the tech administrators at an organization have full control over the key list, but individuals can still push updates to their keys without having to change the key list or ask the administrators to update it. So the actual keys themselves remain in the full control of their original owners, and the tech administrators just ensure that every other person in the organization is constantly updating those keys. Next slide, please.
D: So there has been some discussion as to whether this is actually suitable to be an Internet standard, and it's an absolutely valid discussion — this is very much an application-level discussion. But the reason we'd like to turn this into some form of Internet standard — whether it would be Experimental is still absolutely up for discussion — is that we want multiple PGP clients to be able to import and subscribe to the same key lists.
D: Right now, GPG Sync only interfaces on top of GPG, and lots of other PGP clients, especially those that exist on mobile, currently cannot be used with key lists. So what we'd like is for there to be a single standardized format for key lists that allows other PGP clients to subscribe, whether they are traditional clients such as GPG or newer mobile clients such as, for example, ProtonMail. Next slide, please. We've also gotten a fair amount of discussion along the lines of "why not such-and-such" — and really, key lists are trying to accomplish something else.
D: So, briefly, on Web Key Directory and X.509 PKI: with key lists, we don't want to actually manage the distribution of keys themselves. We want to keep ownership of, and control over, those keys in the hands of the actual key holders. So what key lists do is simply point to the keys, and then individuals can push their own updates to key servers themselves.
D: So that's one fundamental difference between those two existing systems. And then finally, GnuPG keyrings: the same points as above apply, and also, as far as I know — Micah has pointed this out — GPG keyrings are not actually an Internet standard, and so keyrings for GPG 2 are incompatible with keyrings for GPG 1. And one final thing to note about why Web Key Directory, for example, does not work in this case is that all of the keys fall under different domains, not all of which are controlled by a central administrator.
D: So key lists allow an organization to distribute keys from several different domains within an organization, even if they don't themselves have control over those domains. Next slide. I spoke about this briefly earlier, but this is not just a theoretical prototype. It's implemented in GPG Sync, and it's been in active use at First Look Media, The Intercept, and the Freedom of the Press Foundation — with, collectively, probably more than 300 employees — for the past two or three years or so.
D: This is a system that has been tried and tested within our organization, and other organizations are actually looking to potentially use it themselves. Next slide, please. There are a number of areas that we'd like to further develop. We're considering adding a well-known location, and there's some discussion over whether this is potentially a security vulnerability, because the authority key needs to be inputted separately, but that is a discussion for elsewhere.
D: We would like to consider additional functionality for key lists. We'd like to better analyze the potential security implications and do general improvements to our draft. You may know that Micah and I are relative newcomers to the IETF community, so we are grateful to all of those who have engaged with us and provided useful feedback so far, and we look forward to even more of that. And then finally, we'd also like to achieve wider adoption by existing PGP tools — and much of the PGP community is also in the IETF community.
D: So the key list contains a fingerprint of the public key. So let's say I have a private key — yeah, if we go to that slide, thank you. Micah, you'll notice it's the second one. What I can do from my computer, because I have the private key, is update my key and then push it to the key servers, and that fingerprint will not change.
F: The fingerprint is for an OpenPGP certificate — also known as a key block, also known as a transferable public key — which is an Internet standard, RFC 4880, and which can be updated by adding subkeys, by changing expiration dates, that sort of thing. So this points to and identifies a certificate that can be updated without changing the fingerprint; it's the root, effectively. If you want to change which certificate it points to, you have to update this. Sorry.
F: This is dkg at the mic. So there's been a lot of back and forth about the way that OpenPGP fingerprints are formed. This draft is not the right place to do that. If this draft goes forward, we should make sure to use whatever the OpenPGP fingerprint standard is.
J: First, you sort of mentioned that Web Key Directory is not usable for your use case, because you've got all these people that have keys at different domains and the domains don't have a common administration, and you also mentioned that lots of other organizations are looking at this. So I'm kind of curious whether these other organizations have the same property — that they have these multiple domains for their keys and so they can't use WKD.
K: Hey, can you hear me? Yes? Okay. So with First Look Media, that wasn't really an option, because we have several different domains internally — we have firstlook.org and theintercept.com and fieldofvision.org and such. I think that WKD might be appropriate for some other organizations; there are many organizations where everyone has email addresses at the same domain. But a different issue is that if your organization starts growing and getting bigger, it's still kind of hard to scale people posting updates to their own keys.
J: Yeah, perfect, thank you. So my next question sort of relates to what exactly is being signed. If I can pull the example back up — I think it was in one of the slides — you've got the metadata with the signature, and then the array of keys has the key fingerprint, but also a field for the name, the email, and a comment. And so, are we claiming that this —
F: This is dkg again. I'm resisting the urge to get into the details here. I think this is an interesting solution to what I do think is a real issue for organizations that want to adopt protected email, so I would like to see work on it. I just want to observe, from a dispatch perspective, that this seems to depend heavily on the existence of the public key server network — and as someone who operates one of the key servers in the public key server network, I can tell you it is an unspeakable disaster right now.
K: And I think that's a good point about relying on the key server network, but, like Miles was saying, it really just needs to rely on some type of way to fetch keys, so it could be from WKD or other methods as well. But yeah — the key servers are currently the only way that GPG Sync implements it, and yes, there needs to be some public way of fetching keys; it could be from —
L: To reiterate the back and forth of this conversation: the initial feedback on the draft is interesting, but we have to pull on more threads than just this draft. If we were to proceed forward, we would need to discuss the robustness of the key infrastructure — of the key servers.
F: That was, you know, what Ekr was saying — and I wanted to — can we go back to the slide that said why we don't just stuff the keys in the list? There were a couple of different arguments for why. Your final argument I can't address; I think that's correct. And the top one obviously just doesn't work for the use case you're talking about. As far as GnuPG keyrings: there is an internet standard — a cross-compatible internet standard — which is just the transferable public key itself.
F: RFC 4880 actually says: "Transferable public-key packet sequences may be concatenated to allow transferring multiple public keys in one operation." So that is what the original GnuPG keyring was — just a series of transferable public keys — and so that would be a possible way around it.
K: And that's actually something — it's not in our draft yet, but we're discussing making it optional to include a copy of the actual public key inside the key list: basically making it required to include the fingerprint, but optional to include a copy of the public key. But, you know, with the First Look Media key list —
K: Okay, so — the purpose of the signature. This object is at a URL somewhere, and people's key list clients, like GPG Sync, download this URL, download the signature, verify it, and then import it. The purpose of the signature is so that, if the server on the internet that you're downloading it from is hacked and an attacker tries to replace this key list with a different key list, it prevents the attacker from being able to put an arbitrary set of keys on everyone's computer.
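The fetch-verify-import loop just described can be sketched as follows (the URLs, helper names, and JSON layout are made up for illustration; GPG Sync's real implementation differs):

```python
import json

def parse_fingerprints(keylist_bytes):
    """Pull the fingerprints out of a downloaded key list (assumed layout)."""
    return [k["fingerprint"] for k in json.loads(keylist_bytes)["keys"]]

def update_from_keylist(fetch, verify_sig, import_key):
    """fetch, verify_sig, and import_key are stand-ins for the network,
    OpenPGP signature verification, and keyring operations."""
    keylist_bytes = fetch("https://example.org/keylist.json")
    sig_bytes = fetch("https://example.org/keylist.json.sig")
    # Never touch the keyring unless the signature verifies: this is
    # what stops a compromised web host from injecting arbitrary keys.
    if not verify_sig(keylist_bytes, sig_bytes):
        raise ValueError("key list signature invalid; refusing to import")
    for fingerprint in parse_fingerprints(keylist_bytes):
        import_key(fingerprint)
```

The design choice being defended here is that the trust anchor is the authority key that signs the list, not the web server hosting it.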
J: Yeah, Ben Kaduk again. So I guess my take on this is that what's being attested is basically "these are keys you should be interested in, and they're probably people who are in your organization," but you're going to have to use the actual keys themselves for trust decisions about your communications and code signing and whatnot. And dkg is reminding us that we should not have the actual technical discussion here, so I'm trying to restrain myself.
A: Anyone besides the authors interested in this work? Okay, I see four, five, or six hands. Okay — that seems like a positive indication of some interest in this work, and I'm not seeing anyone saying it shouldn't be done here. So that brings us to the question of how to do the work. So we're back to the discussion of: should it be a working group, should an AD sponsor it? Well —
E: Someone would have to be willing to manage that as a working group, or AD-sponsor it. Yes — I mean, that does not seem like enough critical mass for a working group at this point, so I think — maybe dkg disagrees.
J: So dkg is pointing out, without standing up to the mic, that there's discussion going on on the OpenPGP list, which is pretty active, and this is getting actual discussion there. I agree with that — yeah, activity here is not really enough to spin up a working group just for this, but it seems like we might still be able to get enough review to still publish the document. All right.
N: I think that's a little bit too bad, because there are a lot of subtleties in this kind of bag-of-keys approach. What are the semantics around expiration, and how long are these valid? What does it mean when an entity used to be part of the list and now isn't? Stuff like that.
N: The abstract model has applicability in a bunch of places, and it's actually been tried in the IETF before — the approach for managing BGP keys is sort of similar in semantics, actually: you have a bag of a bunch of keys and you sign them. So I think it would have been a good idea to have a working group actually look at this a little bit more generally than just PGP, but I realize that's not going to happen right now.
E: So let me suggest a summary of this: the authors continue discussing on the OpenPGP mailing list and refine this proposal, and when they feel it's sufficiently baked to be publishable as, you know, a standard, they bring it back to the ADs or to secdispatch for publication. Does that say it all?
O: It was a little bit interesting to hear about this key list, because they mentioned detached signatures. In my opinion, that means they probably need canonicalization as well — because how can you do that without it? Okay. If you want to try out what I am talking about today, you can: there's a URL where you can play around to see that it exists and how it works, test the boundaries of it, and you can, of course, read the internet draft. Okay, once again —
O: Okay. Canonicalization can be done in different ways. One is to do it at the text level only. I skipped that idea, because you essentially get two streams: first you would parse using your regular JSON tools, and then you would have another layer where you do the canonicalization. That is technically possible, but it would integrate very poorly with existing tools.
O: The idea with this system is that it should integrate into existing tools with ease. This picture shows what canonicalization does: it normalizes numbers. When I started with this, a long time ago, I thought it must be very easy to write out a number. I was totally wrong — it's rocket science. Fortunately, I met a few rocket scientists who fixed this problem for me, which I definitely could not do myself. Normalizing strings, on the other hand, is completely trivial.
O: Sorting keys in lexicographic order is also trivial; that's nothing. What you end up with, at least if you follow the draft, is a limitation: JSON numbers must stick to IEEE double precision. That doesn't mean that your application is limited by that, but when using the canonicalization system you must stay within that limit.
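A first approximation of the scheme being described, in Python (a sketch only: real canonical JSON as in the draft also pins down number formatting to ECMAScript's rules and string escaping precisely, which `json.dumps` does not fully match):

```python
import json

def canonicalize(obj) -> str:
    """Deterministic serialization of a parsed JSON value:
    lexicographically sorted keys, no insignificant whitespace."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False)

# Two texts that parse to the same data model canonicalize identically,
# so a signature computed over one verifies against the other.
a = json.loads('{"beta": 2, "alpha": 1}')
b = json.loads('{ "alpha" : 1, "beta" : 2 }')
assert canonicalize(a) == canonicalize(b) == '{"alpha":1,"beta":2}'
```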
O: However — this is what this picture shows — essentially the only snag I am aware of is that if you parse something like this date stamp and then you serialize it, it must be serialized the same way, and that may affect some applications; it may affect some parsers. But if you know this, you won't have any big problems at all using this system.
O: One of the reasons is that I'm working with financial applications, and the financial people are rather unused to Base64; they don't want to see payment requests or things like that in Base64, and they don't have to. My mission here is to see if there's any room for this in the IETF, with the goal of making some kind of standard.
O: I understand. Well, if you stick to what I call native JSON — that means the literals true/false/null, the strings, and the numbers — I will say it's a completely solved problem. It actually works, like a DER encoding; it should work in ASN.1 too, but I know that ASN.1 is a much more complex system. So I think that the slide that I have up now shows the only known problem with this system.
Q: Matthew Miller. I mean, we did have some of this discussion earlier today in dispatch, for people that were there, so this is me partly reiterating my concerns from there. I think, for what this is doing, it's probably not harmful, but there are implications to doing canonicalization, and I think if we're going to do this work, we need to understand the shape of those before we try to standardize anything around this.
R: Yeah — so anything you can canonicalize you can then hash, and send a signature without sending the raw bytes, and it's all great. But why is it here? I mean, if you canonicalize email you get signable email; with HTML you could get signable HTML. I don't really expect all these things to come into secdispatch or, in general, security-area work. So to me this belongs in dispatch rather than secdispatch.
O: I have no opinion on exactly where this belongs, but to me it's designed to be a building block of secure protocols. The application is hashing and signing, rather than canonical JSON for its own sake — that's the reason. And so it is mainly of interest to people working with secure systems, at least as it stands today, I think.
O: Sure, it's not a cryptographic protocol; it's just a building block that could be part of a cryptographic system. Take a JWS container, for instance: you can open that up and make it readable using this system. I have done that, and it works fine. You don't have to Base64-encode the container for signatures.
S: It's really great, and I noticed that you've done a lot of implementations in various programming languages, and they all work — I've gone through and tested some of them. So yeah, I think this is very important and I would really like to see this take place. I know there are projects that I work on, in the IETF and outside the IETF, that could make use of this. So thank you.
E: Eric Rescorla, no formal status here. Yeah, I'm not super enthusiastic about this, for several reasons. I said some of these things earlier in dispatch, so I'll try to be shorter this time. First of all, the motivation for this seems extremely thin. As far as I can tell, it's that people don't like the Base64 encoding, and that doesn't strike me as a very good reason at all. The last several times we tried to design secure container formats, we deliberately avoided canonicalization.
E: We deliberately went with data-preserving transformations instead, precisely because every time we've tried canonicalization it has been a debacle. It's not just a matter of whether you can canonicalize or not; it's that people sign the non-canonicalized format a lot of the time, and then you end up not being able to use the canonicalized format — because, even though the conversion ought to be trivial, you end up with signatures over non-canonical forms.
E
Second,
it's
actually
not
desirable
to
have
the
DB
brought
the
raw
data
and
this
injure
intermixed,
because
it
encourages
people
to
make
use
of
the
data
before
the
student
validation
hasn't
happened,
so
that's
actually
quite
undesirable.
So
is
it
so
generally,
I
think
probably
this
is
not
actually
something
we
should
be
doing.
That's.
T: Hi, this is Sean Turner. So I think there are dragons here from a number of perspectives. One: JSON canonicalization is not really an IETF thing. There was a JSON working group, there was a JSONbis working group, and there's a whole bunch of history from people who talked about canonicalization. I think the consensus of the JSON working group was that they didn't want to do canonicalization.
T: So if you're going to take this on, you're going to have to go back through the archives to make sure that we're all on the same page; otherwise I think it's just going to be an endless disaster, with people fighting over these details — and there are people who can probably add a lot more context to this.
Q: Matthew Miller — one of the former JSON and JSONbis chairs. I wouldn't say that there was consensus not to do it. I would say that there was absolutely no consensus, in any way, shape, or form, to do anything, because there are so many dragons and nobody could agree on what could be done about them.
O: I understand. Well, you have to trust me that the work has been done to research the issues around this, and they are clarified in the draft. You are free to find holes in the specification — of course, that's very important — but I claim that the problems and the solutions are here; they are already documented. They were not available at the time the JOSE group started; nobody had done that work before. And actually, I got the information about numbers — which was the most complicated part — from people in the JOSE group.
U: As I have said in dispatch already, this is three different things. One is to define the data model of JSON, which is not defined at this point in time — which would be a really good thing to do. Second is to define a consistent, or deterministic, mapping from the data model to byte strings — which, once we figure out that we really have a data model, is also a good thing to have.
U: I have use cases for that. And the third thing is to define security protocols that make use of that in one way or the other — and I know of several security protocols one might build on this. I really want to have the first two things, but those are really application-area things, not really security things.
A: We shouldn't take on any work in secdispatch for this, but there may be some interest in the ART area. So, yeah, we kind of screwed this up — I had it in the wrong dispatch group. So I'm going to say, as most people don't object, that our dispatch outcome here is: no work in the security area, but feel free to try with dispatch. Okay. Sorry — thank you. And next, David Schinazi.
V: Hi, my name is David Schinazi, I'm at Google, and I'm here to talk to you about MASQUE. So, first off, apologies for the very convoluted acronym, but I thought an acronym would look cooler. This is very much not a full-on "we have all this work that we're absolutely doing and we want to do it in the IETF." This is very much earlier stage — more of a thought experiment, something that I've been toying with for a few years.
V: Thanks to a few people in Bangkok, I came up with a way to do something, and I wanted to present it here to have people tell me either, on one end of the spectrum, that they're interested and this is useful work, or, on the other, that I'm horribly wrong and I'm going to do terrible things. Because, at the end of the day, I'm kind of rolling my own crypto — and it's totally fine: I follow cryptographers on Twitter, so I know what I'm doing.
V: All right, so what's the idea here? As I'm sure you all know, internet censorship is on the rise. There are more and more instances of governments and other parties getting cleverer and cleverer about ways to either block access to things or to detect that people are accessing things on the Internet, and I personally feel very strongly that we should fight that, or give people ways to go around it — especially, you know, journalists and people doing things that we think are for the ethical good.
V: One of the "ins" that I think we still have is that the Internet is still a very, very big place, and so it's very hard today to maintain a whitelist, as a government, of what websites are allowed. So most enforcement is blacklist-based or heuristic-based. So the goal here is to come up with a way that anyone who runs any web site can run this kind of VPN server on it, and that won't necessarily land it on the blacklist, because it is hopefully very hard to detect.
V: So, in terms of a threat model, the main threat is passive observers. Let's say a government has a tap in the ISP and can see all the packets going back and forth. So we can't have any information in plaintext, because they'd see it. But we also want to protect against active attackers: there have already been papers — prior research — showing that some of these entities will actually do active probes.
V: So, for example, let's say they see traffic to my website: they will try VPN protocols against my website, and then, if they detect that they can connect, they'll block my website. That's been a problem for Tor, for example. So we want a way to prevent those kinds of active probes from detecting anything.
V: Another requirement is this impossibility to probe. One of the things that makes this a little harder is that a lot of cryptographic protocols rely on signing nonces, and therefore in a lot of them you start by getting the server to announce a nonce to sign — but you haven't authenticated yet. So you need to do this without having a way to ask the server for a nonce; we'll see how that factors into the design later. And another requirement is that, even today, and not necessarily for —
V: — censorship reasons, a lot of networks block QUIC, or really just any kind of UDP. So this needs to be able to fall back to something that looks like HTTP/2 instead of HTTP/3. When you're doing that, performance will be worse, but that's a fact we can live with, as long as it doesn't stick out and is not noticeable. So now, going more into the mechanism and how this works.
V: You start off by having your client initiate an HTTP/3 — so HTTP over QUIC — connection to the server and do a regular handshake, validating the server certificate. For this you can validate via the web PKI, but you can also — since the client is running a custom client — pin the cert. And so we don't need to build a new authentication mechanism for the server: we already have a very robust one for HTTPS, so let's just use that. Then, because we can't request a nonce from the server, there's a clever trick.
V: The trick is to use a TLS keying-material exporter to generate a key shared between both sides, and you actually use this key as the nonce. This is very similar to token binding in terms of mechanism, but very different in terms of what it tries to achieve, because, as we saw, it grabs this shared secret between the server and client without any explicit communication between them.
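A sketch of the exporter-as-nonce trick (the exporter here is faked with HMAC over a shared secret so the example is self-contained; a real client would call its TLS stack's keying-material exporter, per RFC 5705, and the label below is made up):

```python
import hashlib
import hmac

def keying_material_exporter(tls_secret: bytes, label: bytes) -> bytes:
    """Stand-in for a TLS keying-material exporter: a value both
    endpoints of the TLS connection can derive, and nobody else."""
    return hmac.new(tls_secret, label, hashlib.sha256).digest()

def client_proof(tls_secret: bytes, client_key: bytes) -> bytes:
    # The exporter output acts as the nonce: the server implicitly
    # contributed entropy to it, so it cannot be replayed on another
    # TLS connection, yet the client never had to ask for it.
    nonce = keying_material_exporter(tls_secret, b"EXPORTER-masque-sketch")
    # Symmetric (HMAC) variant; the talk also allows a signature here.
    return hmac.new(client_key, nonce, hashlib.sha256).digest()
```

Because the proof is bound to this connection's exporter value, an eavesdropper who captures it cannot reuse it on a fresh connection.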
V
And
then
you
have
a
nonce
that
the
server
has
de
facto
added
entropy
to
so
it's
not
replayable
between
TLS
connections,
but
that
the
client
can
still
use,
and
then
basically,
you
sign.
This
knots
so
use
then
send
an
HTTP
connect
request
inside
the
the
encryption
and
you
connect
to
a
specific
mask
protocol.
This
is
very
similar
to
how
WebSockets
work
for
HTTP
2.
So,
instead
of
connecting
to
a
further
host
like
Connect,
was
designed
for
an
HTTP
1.
V
You
say
connect
to
this
protocol,
which
is
just
allows
you
to
switch
this
byte
stream
to
a
different
mode
of
communication
between
a
client
at
server
and
as
part
of
this,
you
can
pass
in
a
username,
that's
and
an
OID
for
which
kind
of
either
a
symmetric
signature.
You
want
to
use
or
HVAC
if
you
want
to
use
symmetric
keys,
because
I
had
some
people
reach
out
to
me
that
their
deployment
model
couldn't
usually
symmetric
keys.
So
this
also
works
for
set
your
keys
in
hashing
and
your
basic
ste
for
encode.
V
All
these
things,
good
HTTP
is
great
that
way,
and
you
just
send
that
up
to
the
server
and
then,
if
the
server
validates
your
keys,
everything
looks
good.
It
sends
your
200
error
code
everything's
great,
and
then
you
have
a
byte
stream
to
be
able
to
talk
mask.
Otherwise
it
does
the
same
thing
that
any
HTTP
webserver
will
do.
V
If
you
send
a
connect
to
something
it
doesn't
understand,
it
sends
your
104
protocol
not
support
it,
and
so,
if
someone
tries
to
probe
you
and
they
don't
have
the
keys,
don't
get
the
exact
same
response
that
if
they
try
to
send
it
to
a
server
that
doesn't
support
mask
then,
once
you
have
this
channel,
you
can
negotiate
what
kind
of
mask
features
you're
using.
So,
for
example,
one
the
simplest
one
and
I
think
probably
the
most
widespread
is
a
connected
proxy.
So
then
you've
just
enabled
proxying
on
that
web
server.
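The probe-resistance property described above boils down to the server's CONNECT handler being indistinguishable from one that has never heard of masque. A sketch (the status code, header names, and protocol label are illustrative, not the actual wire format):

```python
def handle_connect(headers: dict, users: dict) -> int:
    """Return the HTTP status for an extended CONNECT request.
    `users` maps usernames to a callable verifying the client's proof."""
    if headers.get(":protocol") == "masque":
        verify = users.get(headers.get("user"))
        if verify is not None and verify(headers.get("proof")):
            return 200  # switch the byte stream to masque
    # Unknown protocol OR bad credentials: answer exactly like a
    # server with no masque support, so active probes learn nothing.
    return 501
```

The key property is that the "wrong credentials" branch and the "no such feature" branch are literally the same code path, so timing and content match.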
V
You
can
connect
to
any
other
server
on
the
Internet
and
that
just
creates
opens
up
a
byte
stream
to
port
443,
and
then
you
do
end-to-end
TLS
so
effectively,
you're
doubly
crypting,
but
then
I.
The
important
part
here
is
the
mask.
Server,
doesn't
sees
the
clear
text
so,
for
example,
on
my
personal
website,
I
could
run
this
service
as
like.
V
V
V: Whatever users are looking at through it, I don't know their usernames or passwords. And of course that works for any TCP-based protocol — you can also use it to SSH or whatever. Another feature is DoH: you can just say, okay, my web server is also a DoH server, and that gives you private DNS all the way to the client. In today's internet, apart from cleartext UDP, almost all the uses of UDP are connection-based, in the sense that you end up sending a bunch of packets back and forth to the same 5-tuple.
V
So, in order to make that a little bit more efficient, you do something similar to how SOCKS v5 proxies UDP: you negotiate, saying "I want to be able to send UDP back and forth to this IP and port." The server says "here you go" and gives you an identifier, and any time you send a QUIC datagram (which is a QUIC extension right now) with this identifier...
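The identifier-based UDP proxying just described can be sketched like this. This is an editorial simplification: the real negotiation and datagram framing are defined by the proposal, and the one-byte ID prefix here stands in for whatever encoding is actually used.

```python
# Sketch: the server hands out a small integer ID per registered target
# (IP, port), and each datagram is prefixed with that ID so one tunnel
# can carry UDP flows to several targets.
class DatagramMux:
    def __init__(self):
        self._ids = {}        # (ip, port) -> id
        self._targets = {}    # id -> (ip, port)
        self._next = 0

    def register(self, ip, port):
        """Return the ID for a target, allocating one on first use."""
        key = (ip, port)
        if key not in self._ids:
            self._ids[key] = self._next
            self._targets[self._next] = key
            self._next += 1
        return self._ids[key]

    def encode(self, ident, payload: bytes) -> bytes:
        # One-byte ID prefix; a real protocol would use a varint.
        return bytes([ident]) + payload

    def decode(self, datagram: bytes):
        """Return ((ip, port), payload) for an incoming datagram."""
        return self._targets[datagram[0]], datagram[1:]
```

The point of the ID is exactly what the talk describes: after one round of negotiation per target, each datagram carries only a tiny identifier instead of the full target address.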
V
Finally, if the previous features didn't do what you want, you can just bring up a full-blown VPN here, where you just send datagrams with full IP headers, and then the server treats them the same way as, let's say, an IPsec server handles the inner IP header: it either forwards them, or you can NAT them in such a way that it becomes like the end server, so the really far one doesn't necessarily know that there's a MASQUE client somewhere; it could all look like it's coming from your server. So: traffic analysis.
V
Traffic analysis is the main answer I've had from people: "oh, but you can still be spotted, you're in trouble," and I totally agree. So this proposal really focuses on the bits that obviously stick out as cleartext, and I would love help from anyone who's actually really knowledgeable in traffic analysis and the prior work that's been done there.
V
If you want to contribute to this, I would love that, because that is the main risk here and probably the main thing that censors will be using in the future. These two things I will skip; I'm presenting this in the transport area later this week, where we'll also be focusing more on the transport items. So, like I was saying, I'm not really asking for any kind of specific adoption or decision on what to do.
W
You actually mentioned a whole lot of things in here, and it seems like there are a number of things that you might work on in this area, and there's a lot of existing work in this area. I think we're going to be talking about using QUIC as a general substrate for multiple protocols at some point, mixing them together; are we talking about that in this context as well?
W
I don't know to what extent you're proposing to invent these things and put them under the same banner, or whether you're just talking about one thing. I would suggest concentrating on a smaller number of things and doing some more work on this before we talk about dispatching this properly, because there's this authentication piece that you're talking about, which is kind of interesting and maybe a little more thought out than some of the other aspects of this; and then there's the integration of that with all of these other features you're talking about, like a VPN.
W
You can have this, you know, this and this; all of those require extensive protocol work, and that's work that's going on elsewhere, in other drafts, with other people working on these things. I would encourage you to work with those people and work out what it is that those things can do before we get to the point of saying...
V
Yes, that's not what I'm saying. What I'm saying is: I don't know anything about that, please help me out; not "we can't solve this." Oh, absolutely, and to your earlier point, I totally agree. I just have had a list of requirements and motivations and finally came up with a way to kind of tie them in together; but if we were to bring this in and actually work on it at the IETF, I totally agree that it would be work kind of all over the place.
B
Moriarty. Just picking up on you saying those are the only risks, and on Martin's comments: it seems like, you know, how would I trust the specific servers? It would be easy for some government agency or whatever, to get at the exact point you're looking to prevent, to stand up one of these, capture the server, you know, the traffic, and then the point has been lost.
X
Now, if you look at different protocol specifications over time, for the last 30 years or something, many of these identifiers we got wrong. Think about, for example, predictable TCP sequence numbers (ISNs), or predictable transport-protocol ephemeral port numbers; we also did predictable IPv4 and IPv6 fragment identifiers, we also did predictable DNS transaction IDs, and we have also had predictable IPv6 interface identifiers. In most cases, over time, what happened was that eventually the security and/or privacy implications of these suboptimal identifiers, if you want, came to light quite a few times.
X
There were implementation-specific fixes for these issues; at times they were suboptimal, and, you know, the bad thing about this is that the lessons that we learned from some protocols were actually not applied to other protocols. One simple example is fragment IDs: we had predictable fragment IDs in IPv4 before, and essentially we had the same thing for IPv6, and that's actually still the case. I don't remember which protocol I had last checked about this, but there was an advisory like a few months ago.
X
I don't remember off the top of my head what protocol it was, but it was the same kind of thing. Okay, so for some of these, for example, Steve Bellovin had done work on TCP sequence numbers, and I did an RFC on how to randomize the transport-protocol ephemeral port numbers; there was some work on them, but it was always fixes to the same flawed schemes for picking these IDs.
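As one concrete flavor of the randomization work mentioned here, a keyed-hash ephemeral-port scheme can be sketched as below. This is a simplified editorial illustration loosely in the spirit of the hash-based algorithms in RFC 6056, not the exact algorithm from any RFC: a local secret plus the destination feeds a hash that offsets a per-connection counter, so ports look unpredictable to off-path attackers while still cycling through the ephemeral range.

```python
import hashlib

EPHEMERAL_MIN, EPHEMERAL_MAX = 49152, 65535
RANGE = EPHEMERAL_MAX - EPHEMERAL_MIN + 1

def ephemeral_port(secret: bytes, dst_ip: str, dst_port: int, counter: int) -> int:
    """Pick an ephemeral port: keyed hash of the destination, plus a counter."""
    data = f"{dst_ip}:{dst_port}".encode() + secret
    offset = int.from_bytes(hashlib.sha256(data).digest()[:4], "big")
    return EPHEMERAL_MIN + (offset + counter) % RANGE
```

The key property is that an observer who does not know `secret` cannot predict the next port, while ports for the same destination still advance monotonically through the range.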
X
A spec might say it should be okay if you use a counter, for example, instead of actually specifying what the interoperability properties are that you actually need for that ID; and then you also have some cases where the spec does the right thing but the implementation does not. So what we try to do in this respect is to prevent future protocols or implementations from actually falling into the same trap. Okay, so, essentially, if you wonder how we use or define the term "numeric ID"...
X
Sometimes it's what you would also call a protocol ID. Essentially, it's a data object that is employed in a spec to distinguish one protocol object from another. These IDs normally have different interoperability requirements: in some cases it could be uniqueness, so you just need a number that is unique; in other cases the requirement could be things like being monotonically increasing; and there are others in which the ID is required, for example, to be stable within some kind of context.
X
So, when we were trying to find a solution to this problem, one thing that we did was to look at the different numeric IDs that we have for different protocols. Here we are just mentioning a few; these are actually in the I-D. We tried to figure out, on one hand, what the interoperability requirements for the IDs were, and also to figure out what the failure mode for each ID was.
X
Okay, this is the list that we came up with, the IDs that we analyzed, and the reason we did this analysis is that we tried to produce, let's say, categories of numeric identifiers, both in terms of the interoperability requirements and of the failure modes. So, for example, here, for the IPv6 flow label, the interoperability requirement is that it should be unique; but, you know, if you happen to, let's say, generate an ID that collides with a previous one...
X
...well, there's not much of a big failure there, so that's a soft failure. Then you also have cases, like in category number two, where the requirement is that of uniqueness, but if there happens to be a collision, well, the failure can actually be bad; so that's why, you know, we marked those as hard failures. Then there are others, in category number three...
X
...where the requirement is for uniqueness and for the ID to be constant within some context; that is, for example, the case of IPv6 interface identifiers. And the last one we did was category number four, where the requirement is that the ID should be unique but also monotonically increasing, and if you fail to come up with proper IDs there's a hard failure. Now, the point of producing these categories was to then, you know, come up with possible algorithms.
X
This doesn't mean that this is the only way to do things or the best way to do things; but, let's say, for each of these categories we found what some implementations are doing, and algorithms that are kind of okay. Put another way: if you have a protocol spec, you identify the interoperability requirements and the failure modes, and if you cannot come up with a better algorithm, well, at least you have some algorithm that you can employ.
X
Another thing that our document does, besides the analysis that we mentioned before, is to try to come up with requirements for protocol specifications when it comes to numeric IDs, meaning that if your protocol has a numeric identifier, the spec should clearly state what the interoperability requirements for your ID are; then it should also do a security and privacy analysis of that ID, and then recommend an algorithm. We are providing some sample algorithms for those. It doesn't mean that a new protocol...
X
...spec should use one of these, but if you cannot come up with something better, then you might end up using some of the algorithms that we propose. We published this document a couple of years ago, I think, if I remember correctly; at the time there was work on 3552bis, and at the time we got positive feedback from the list.
X
Then there was also some other part, like the algorithms that we were describing; that could be like a BCP kind of thing. And then there were the requirements when it comes to the security and privacy analysis of numeric IDs, where at the time the idea was to wait for 3552bis and then try to, you know, see if those requirements could fit in there; but 3552bis, as far as I understand, was stalled.
X
So that didn't happen, and the other parts of this document were stuck too. So now, you know, we keep the original document as a whole thing, and we also split the original document into three parts, and we are curious about whether folks find this useful and, if they do, what they think might be a way to actually work on this stuff.
J
The documents have a different qualitative nature of content to them: there's a lot of sort of historical interest in how we managed to shoot ourselves in the foot so badly in the past, and then you mentioned that there might be some parts that are also more at a BCP level, and the main question that I had was: what do you see...?
F
So I think one of the questions is who you're asking: the vendors that we typically write BCPs for, the implementers of protocols, or the operators of the services that run these protocols, right? And in this case, I think what Fernando is offering here is best current practices for protocol designers. So it's a little bit meta, but I think it's actually really useful for us to think about the IETF itself as the consumer of the best-practices document, I mean.
L
To key off that a little bit, why I came to the mic is: what intrigued me about the draft is how it took a historical perspective of design issues, kinds of threats, how things were exploited, and then reflected that into protocol guidance; and then, thinking big hat, what else is happening in the IETF, or in the broader community? There is the SMART work, and it's an attempt...
E
Given all this, I mean, this seems useful; I guess I'm trying to figure out why the function it needs to serve isn't presently served by the Internet-Draft. I mean, generally, these BCPs that we often do are intended to constrain protocol designers, and what's in there is documentation and advice; it seems like people could just refer to the draft in the repository. So I'm trying to figure out why that isn't mission accomplished.
F
Dkg. Just to respond to Ekr: if we want to have a discussion about the fact that expiration doesn't mean anything, that's a meta-meta discussion, and I don't think we can say "don't worry about this question" just because we decided offhand at the mic line that we actually don't care about expiration. I've heard multiple times, "yeah, that draft's expired, you shouldn't use it," in other contexts. So, all right, that's just not... yeah.
L
Yeah, so what I heard at the mic line is that generally there's kind of interest in elements of this; you know, we talked about how to codify all of that, but what I didn't hear, other than in kind of one or two places, is that we have to package it up and publish it. So there's good design guidance, but I didn't hear a next step, though.
X
What we think is that at least part of the document should be a BCP, so that when you get a protocol spec that has a numeric ID, there's a document that you can direct people to and say: well, how did you come up with these IDs? What are their requirements? Did you do the proper analysis? And if you are employing some other algorithm, well, why are you using that instead of the algorithms that have already been used for this?
J
Thank you. So, again, I guess I have to confess: I was trying to do some research on this stuff earlier today, and there are like four documents floating around that I tried to look at, and I ended up sort of a little bit confused about which pieces go where and which things I'm supposed to be looking at. So, at least for me personally, it might be simpler to, like, publish the timeline and do it as informational.
AA
Stephen. So I don't think you necessarily need to get hung up on the BCP stuff, because you can always just put it in an RFC that's informational; with RFC 1984 we made it a BCP twenty-odd years later. So you can just, I mean, publish an RFC: see if you can find somebody to sponsor it through the IESG, then you can get an RFC, and if there's text in there that the community wants to turn into a BCP, they can just do that later.
J
Yeah, so there are a couple of things going on in what Stephen was talking about. One of them is that, you know, you can have an RFC published in one publication stream and class, and that can be changed at a later date just by IETF consensus and IESG action, I think; so you can have an IETF-stream informational document that later gets reclassified as a BCP.
L
Daniel, are you coming up to the mic, or am I wrapping up? All right, so, to kind of wrap up, to pull the thread on the follow-up discussion here: there's some interest. I talked to Ben; what we're going to commit to is to kind of sit down with you to figure out what makes sense: do we think one document, do we think not? And we'll help you shop it around in different places, and we'll work towards kind of getting something out there. Yeah.
U
Yes, as you can see, I'm wearing glasses, and I'm Carsten Bormann. I want to talk about concise IDs, and I want to do this quickly because John (where is John? oh, there) is going to use the other half of the slot to talk about something completely different, except that it almost looks the same, so I have to do a little work on keeping the two separate.
U
So you may be aware of X.509 certificates; these have been around for a while, and they are used for lots of authenticated assertions. And what we now have as a new thing is CWTs, CBOR Web Tokens, which are authenticated assertions, and they're used for various things; and again and again the question comes up: I'm using certificates right now to solve problem X; could I be using CWTs as well?
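Part of what makes CWTs attractive in constrained environments shows up at the encoding level: RFC 8392 assigns small integer keys to the standard claims (iss, sub, aud, exp, nbf, iat, cti), and CBOR packs small integers and short strings into very few bytes. The toy encoder below is an editorial illustration covering only the tiny CBOR subset needed here (unsigned integers, text strings, maps), not a full CBOR library.

```python
CWT_KEYS = {"iss": 1, "sub": 2, "aud": 3, "exp": 4, "nbf": 5, "iat": 6, "cti": 7}

def _head(major: int, arg: int) -> bytes:
    """CBOR initial byte(s): 3-bit major type plus length/value argument."""
    if arg < 24:
        return bytes([(major << 5) | arg])
    for ai, size in ((24, 1), (25, 2), (26, 4), (27, 8)):
        if arg < 1 << (8 * size):
            return bytes([(major << 5) | ai]) + arg.to_bytes(size, "big")
    raise ValueError("argument too large")

def encode(value) -> bytes:
    if isinstance(value, int) and value >= 0:
        return _head(0, value)                      # major type 0: unsigned int
    if isinstance(value, str):
        data = value.encode("utf-8")
        return _head(3, len(data)) + data           # major type 3: text string
    if isinstance(value, dict):
        out = _head(5, len(value))                  # major type 5: map
        for k, v in value.items():
            out += encode(k) + encode(v)
        return out
    raise TypeError(type(value))

def encode_claims(claims: dict) -> bytes:
    """CBOR-encode a claims set using the integer keys from RFC 8392."""
    return encode({CWT_KEYS[name]: v for name, v in claims.items()})
```

A two-claim set ends up around thirty bytes before signing, noticeably smaller than the equivalent JSON object with textual claim names.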
U
It's really... it could even be informational, because it just profiles what we already have and says how to actually build things that traditionally have been done using X.509 certificates but can also be done using CWTs; and, of course, COSE is the security scheme. So there is this pretty rough draft out there which calls these things concise IDs, because that's the main application we had in mind, in the sense that identities are sets of attributes, and once you authenticate these, you have concise IDs.
U
So that's an indication of the direction where we want to go, but certainly not the document that we will have in the end. So that's one track, and I quickly want to separate this from another track that's also very interesting, which is: could we encode an X.509 certificate in CBOR? And you would gain something; this is not really compression, it's just the fun of concise serialization.
U
So the advantage is everybody already knows what an X.509 certificate is; the disadvantage is you get all the baggage of 30 years of using and abusing X.509, and you actually need to convert the thing back to ASN.1 before you can sign it or before you can verify the signature. So it's the one thing that we, in the constrained environment, don't usually want to have. I should have said that we're most interested in this for small devices: not just small, but using very little energy, having very little processing power.
U
Unfortunately, there are only about five people in this room who want this, and everybody else is sitting over at 6lo; but, okay, this is interesting work, and John will talk about it. What I'm talking about is profiling CWT for authenticated assertions, with a view to having this available in places where traditionally X.509 certificates have been used. We had a side meeting about a month ago on WebEx, where we discussed...
U
...where could we put this work? And the problem here is not that we have no idea where to put it; we have too many choices. The COSE work, you could work on this; the CBOR working group, you could work on this; and so on. But in the end, after the discussion, we came out with two working groups that are kind of credible. One is in the security area: that's ACE, which is looking at authentication and authorization in constrained environments.
U
We could do a new working group (not sure that's worth it), or we could not do this at all and proclaim that X.509 rules; you'd probably want to use John's work then. But there is another problem: people are just going to go ahead and use CWTs for these things, and if we don't have a central document from which these applications can benefit, we will get much more diversity and then probably also many more security problems.
C
This work was presented in T2TRG at the last IETF, and it was quite well received, I would say. What this draft proposes is a lightweight X.509 certificate encoding, a lossless compression algorithm. As for the potential benefits and applications of this: it should not be compression at the gateway; it should be gateway-to-endpoint, so one application is gateway-to-endpoint compression when older versions of TLS are used, which is what they currently do in 6lo-type deployments.
C
This is not a long-term solution; hopefully people will switch to TLS 1.3 and DTLS 1.3, but devices deployed today will probably be used for 10 years, until the battery runs out, maybe even longer. The second thing it could be used for is TLS-client-to-TLS-server compression when TLS 1.3 is used; we think we can achieve better compression for this type of very constrained ECDSA certificate. And the third application, long term, as a migration path, would be to completely skip ASN.1 and just use this as a CBOR-native certificate format.
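To make the "skip ASN.1" argument concrete, here is a small comparison (an editorial illustration, not the draft's actual encoding) of how DER and CBOR spend bytes on the same non-negative integer, such as a certificate serial number.

```python
def der_integer(n: int) -> bytes:
    """Minimal DER encoding of a non-negative INTEGER (tag 0x02)."""
    body = n.to_bytes(max(1, (n.bit_length() + 7) // 8), "big")
    if body[0] & 0x80:          # DER needs a leading 0x00 to keep it positive
        body = b"\x00" + body
    return bytes([0x02, len(body)]) + body

def cbor_uint(n: int) -> bytes:
    """Minimal CBOR encoding of an unsigned integer (major type 0)."""
    if n < 24:
        return bytes([n])       # value fits directly in the initial byte
    for ai, size in ((24, 1), (25, 2), (26, 4), (27, 8)):
        if n < 1 << (8 * size):
            return bytes([ai]) + n.to_bytes(size, "big")
    raise ValueError("too large")
```

DER always pays for a tag and a length octet, and sometimes a sign-padding octet; CBOR folds small values into the initial byte and never needs sign padding for unsigned integers, which is one small piece of where the overall size reduction comes from.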
C
So you get compactness, you get compatibility and a migration path from X.509, and you get a smaller footprint for the compression than general lossless compression algorithms. The disadvantage is that you still need to support ASN.1, at least medium term, until you switch to a CBOR-native certificate format; and yes, the best would be to switch immediately, but then you have a chicken-and-egg problem: devices will not adopt a new certificate format unless CAs implement it, and CAs will not implement it unless devices do.
E
First, let me tell you a story of a scrappy little certification format designed about 25 years ago. It started out with, like, some names and, you know, keys and a validity period, and it grew up into what we call X.509 v3. So I think I'm not necessarily persuaded that just removing all the fields is going to end up being smaller, except temporarily. But, more seriously, I think this cannot go into some existing working group.
E
The reason we have working groups is to actually focus on things that they know about, and not to have this stuffed into some working group that's doing something else. So, without taking positions on the merits of this: maybe the CBOR encoding can be done in some small corner, but the profile of the CWT is like real security work; it's on the same order as PKIX, and it has to have its own working group with a charter.
G
...but, you know, what I've heard a lot, internally in my organization and externally, is that a lot of service providers would still be reliant on the existing infrastructure and may not want to actually change how they deal with certificates, and would like to stick to 509 encoding formats. So, if you have, for instance, resource-constrained devices that are producing encoding formats like this, would we need to actually standardize how intermediating proxies would work that would accept these formats yet produce something...
U
I have a personal view on this, which is that this is not trying to replace X.509 in places where it works well; it's trying to solve problems that X.509 cannot solve, in places where X.509 does not work well. So I wouldn't expect a giant gateway infrastructure to spring up for the concise ID work; of course, John's answer will be completely different.
T
Hi, Sean Turner. I'm going to maybe amplify a little bit what was just said about that.
T
We'd then turn around and remake it that way, because there are extension points that are still there, even in the profile, and I just feel like we won't be able to stop ourselves from extending it and adding things in later.
T
On the other hand, many years ago I probably would have been up here falling on a sword about coming up with a new encoding scheme; now I'm like, yes. There are like five encoding schemes: just do them; there's one for CBOR, this one, one for JSON, there's one for XML, and one for whatever; just make a new encoding format.
T
So the question is whether or not the DER thing is required, and people can get into that argument; but I'm like, yeah, as far as the profile goes and dropping out the fields, I think it's so much easier if you just stand on the shoulders of giants, as opposed to trying to selectively pick what field is there and what field is not there, because you're going to rehash all the reasons why those fields are there in the first place, and you're probably going to end up putting them all back.
T
If we do this right, it might... you could work with JSON, yeah. So, I mean, my whole idea is, if you just want a new encoding scheme: there are Basic Encoding Rules, Distinguished Encoding Rules, Packed Encoding Rules, and so now, you see, we have a CBOR encoding rule; just make a new one of those, like, whatever.
AA
So, yeah, I think X.509 is a pile of crap, that's obvious, but it's very hard to do significantly better than it in ways that are actually going to displace it, even in smaller areas. So I'm really not clear that this, at least as far as I read the current draft, is significantly better in terms of encoding size; it's not clear to me that it is, and, for example, I think that to be significantly better you probably need to have some different semantics.
A
Barnes. A couple of observations here. First, I don't know why everyone's hating on ASN.1; it's my favorite allegedly-canonical format, where you can take an object, deserialize it and re-serialize it, and you get different bytes. I was expecting Watson to be up here, because he made this observation this morning in dispatch, or some other context. Oh well.
C
I think this is doing better compared to the figures I have seen from the compression people, but that comparison would need to be done; such comparisons should definitely be done, not only of the sizes but also of the code and memory requirements on the devices. I don't have any clear answers; RISE, the research institute of Sweden, tell me that this is lower, and I believe them, but I don't have any numbers. Yeah, thanks.
N
So, first of all, this is not about rehashing the work of 30 years; there I would have to repeat Carsten's comment: if you find that this is not for you... We have new scopes and intents: all these assertion bundles, and the prominent one is the identity we're talking about, so I'm not going into what an identity is here. But the purpose here is this: we start the work, and there are actually entities out there, like a CA or a national CA, that are very interested in this.
N
So just saying "nobody wants this because it's changing things and it's hard" is not, I think, a really valid argument. Also, there are stories where I have experienced that trying to use ASN.1 again for a different purpose didn't work, while it would have worked from the start if we had used CBOR; so there is also basically an entire example where it would have been better to just not use ASN.1 from the beginning and not use the CMS structure for...
N
...representation. So that's just side information on this topic. We are starting with concise identities because people, I think, have a relatively good understanding of how to deploy them and work with them; but in the end these are signed assertion bundles, and not everything here is about identity.
N
This is a first step, or a thing that we want to establish here, and it is, again, to repeat Carsten's comment, about the constrained environments and for future use, and not for rehashing every extension that has been made or created in the last thirty years, because (and now I speak as an editor of the relevant draft) they are insanely...
AB
Max Pala, CableLabs. One comment and one question. The comment: just hearing what he was saying and what you said before, it seems to me that this is not just a change of encoding but a "let's remake what we did before," with possibly new features, etc., and this could be seen as "let's reopen it in another way," if we can work through it. The question that I have for you: certificates are one thing in a PKI; there are many other protocols, many other data structures, that need to be updated as well.
U
Just as an example: a signing request, a CSR, is a CWT, so we don't really need a separate data structure; this just falls out from doing the signed data structure as well. So in some cases we can just reduce the number of variations we need. But if you're talking about things like timestamps and OCSP and so on: yes, in some cases those will be needed, but I think in many cases they will just take a different form, because we have things like authorizations...
U
...and authorization servers, so that we do these things in a specific environment. So I don't know whether we would actually ever come up with a generic timestamp format, but we might come up with something that allows an authorization server to issue some freshness indicator that can be used by a client or a server to continue working.
T
Just to follow on that, Carsten, I think, for everybody here, for edification, knowing his background: there's also CoAP EST, right? So there's this...
AC
...the CBOR certificates draft. And I'm listening to the discussion here, and I have a general question to the security community: I mean, these are two examples of people wanting to make more compact certificates for the purposes of IoT, and we have plenty of use cases where people want to deploy lots of devices using some automated mechanism based on manufacturing certificates, enrolling new certificates while in the network, and so on. So there is...
AC
...this is a concrete problem, and it seems that these people are not happy, not delighted or enthusiastic, over these proposals. Some ideas are present, but I'd like to ask the security community here at the IETF: where do we go? What do we do? I mean, how do we address this problem, if someone has some...
A
So, thank you for the segue. I think what I've been hearing at the microphone here is that people don't think this is appropriate for kind of the fast-path options we have for dispatch, you know, AD sponsorship and things like that, but that there's enough meat here, between these two proposals and, you know, the general topic of doing more compact authentication assertions, and so I think that's probably the course we're going to recommend.
A
So the next step is just for the proponents to work with the ADs and the chairs and mix it up. All right, any comments on that outcome? Further discussion? All right, we'll call that the next steps. Thanks, guys.
AD
So I have no slides; I didn't even know this working group existed until about three or four hours ago, I apologize. At IETF 91 in Honolulu we started some very descriptive work on an Internet-Draft about techniques that global censors were using against protocols. We had a pretty drafty draft at that point, and we've worked on it since then; it's now split up into three things: prescription, identification and interference. Prescription, you could read, is how you decide what to block.
AD
Identification is how you identify what to block, and interference is actually performing the impairment or blocking; the impairment doesn't have to be exactly blocking, it can be slowing things down and stuff like that. It's now busted out into layers, and this is with collaborators at Princeton and folks at CDT.
AD
The whole purpose of this draft is to be purely descriptive for people who write protocols or implement protocols: if you care about people in places of power using your protocols in ways you didn't intend, or whatever, you can read this draft and get examples of how protocols have been used against users. There's a similar draft that I know Richard had worked on that talks about the effect of blocking and impairment on the Internet architecture; this is more about the effect on users and the actual availability of information.
AD
So it's been four years; part of what happened is that I did a 2.0 in the middle of that and I haven't had staff (excuses, excuses). It's pretty mature at this point; there are a couple of open issues in the repository. We're hoping that this is one of a series of descriptive drafts that are meant for the IETF community, or any protocol designers, to use simply as references, to see how people are doing certain kinds of things; and we've had some interest in traffic analysis.
AD
There are some very fledgling pointers at that kind of stuff too. At the time, we had sort of tentative AD sponsorship from Kathleen and Stephen when they were ADs; obviously that's very different now, and in fact AD sponsorship itself may have changed, but anyway. And, as people have said, this is very useful as an I-D in itself; we thought about doing it in an RG, in the HRPC research group.
AD
At the time, people felt like it would benefit from being an IETF document for IETF folks and going through the IETF process and consensus. Anyway, I can do a more polished presentation later in the week; you've just got to give me a... sorry, I just didn't even know I was going to do this. I could do it at SAAG, for people.
AD
If you want, pull me aside and we can talk about what's going on here. There are a couple of open issues that we're still dealing with, like, for example, South Korea blocking on SNI. That's something that definitely needs to be in here; I think it is sort of in here, but not in a perfectly polished way. With that, I won't waste any more of your time, and I'm happy to take questions. Yeah.
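The South Korea example works because the SNI hostname travels in cleartext in the TLS ClientHello, so an on-path middlebox can read it and decide whether to block the connection. A rough sketch of that extraction follows; it is a deliberately simplified parser for illustration only, skipping many validity checks a real implementation would need, and the helper that builds a minimal ClientHello exists only to exercise it:

```python
import struct
from typing import Optional

def extract_sni(record: bytes) -> Optional[str]:
    """Extract the server_name (SNI) from a raw TLS ClientHello record.

    Returns None if the record is not a ClientHello or carries no SNI.
    """
    # TLS record header: type(1) version(2) length(2); type 22 = handshake
    if len(record) < 5 or record[0] != 22:
        return None
    # Handshake header: msg_type(1) length(3); msg_type 1 = ClientHello
    hs = record[5:]
    if len(hs) < 4 or hs[0] != 1:
        return None
    body = hs[4:]
    pos = 2 + 32                                   # legacy_version + random
    sid_len = body[pos]; pos += 1 + sid_len        # session_id
    cs_len = struct.unpack(">H", body[pos:pos + 2])[0]
    pos += 2 + cs_len                              # cipher_suites
    comp_len = body[pos]; pos += 1 + comp_len      # compression_methods
    if pos + 2 > len(body):
        return None                                # no extensions block
    ext_total = struct.unpack(">H", body[pos:pos + 2])[0]; pos += 2
    end = pos + ext_total
    while pos + 4 <= end:
        ext_type, ext_len = struct.unpack(">HH", body[pos:pos + 4])
        pos += 4
        if ext_type == 0:                          # server_name extension
            # list length(2), name type(1) = host_name, name length(2), name
            name_len = struct.unpack(">H", body[pos + 3:pos + 5])[0]
            return body[pos + 5:pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None

def make_client_hello(hostname: str) -> bytes:
    """Build a minimal (not cryptographically valid) ClientHello for testing."""
    name = hostname.encode("ascii")
    sni = struct.pack(">BH", 0, len(name)) + name  # host_name entry
    sni = struct.pack(">H", len(sni)) + sni        # server_name_list
    ext = struct.pack(">HH", 0, len(sni)) + sni    # extension type 0
    exts = struct.pack(">H", len(ext)) + ext
    body = (b"\x03\x03" + b"\x00" * 32             # version + random
            + b"\x00"                              # empty session_id
            + b"\x00\x02\x13\x01"                  # one cipher suite
            + b"\x01\x00"                          # null compression
            + exts)
    hs = b"\x01" + len(body).to_bytes(3, "big") + body
    return b"\x16\x03\x03" + struct.pack(">H", len(hs)) + hs
```

A censor only has to compare the returned hostname against a blocklist and reset the connection, which is why encrypting the SNI (the work that became ESNI/ECH) matters to this draft.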
J
This seems like something that would be good for PEARG, the Privacy Enhancements and Assessments Research Group. We recently took on work describing data collection mechanisms and best practices around them, and this is in a similar vein. So I would encourage you to consider something there, and thanks for the presentation.
AD
An RFC may not be the best endpoint for that, and I think we, the authors, have sort of realized that, because we've had to change nearly every field every four or five months to respond to the actual global Internet environment. But yeah, I think that's a great idea; I'll start there. All right, thanks.
A
Comments? All right, thanks for the heads-up, Joe. I think that's probably no further action at this meeting, but please feel free to send me a little list and we'll see if we can get this dispatched there. All right, any other business in the remaining five minutes?
A
All right, that was the last business on the agenda, so... oh yeah, hold on one more second. So, yeah.
L
Okay, just to wrap up what we said (I'm reading from the Etherpad here): we talked about the protocol for PGP key subscriptions, and the recommendation was to go to the IETF OpenPGP mailing list and grow the interest there. As it relates to JCS, the feedback was to bring it to the ART area. As it relates to the Mesh, I think efforts are already in flight to create a non-working-group mailing list.