From YouTube: IETF104-DNSSD-20190325-1610
Description
DNSSD meeting session at IETF104
2019/03/25 1610
https://datatracker.ietf.org/meeting/104/proceedings/
E: All right, thanks. All right, welcome to DNSSD. If this is not the working group you're looking for, this is the wrong room, especially since this is Monday. This is the Note Well. Please note well the Note Well in particular, and seriously, do read it if you haven't, whether you're a new IETF attendee or a longtime IETF attendee who still hasn't read it. Just talking at the microphone actually creates liability for you in terms of patents and disclosing them.

E: So again, some reminders that should be obvious, but even the people who have been here the longest are often the ones who forget the most. Please state your name clearly any time you go up to the microphone, and please review documents and send feedback on them, even if the feedback is just "looks good", because when you write documents you also want other people to read them. And the blue sheets: this is a reminder for me to send out the blue sheets. All right, coming.

E: These are just links for people who are not necessarily in the room, or who are looking at these PDFs later; they're just the datatracker pages for what we're doing. As a quick reminder, we have a working group GitHub organization. We encourage document authors to move their documents there if they feel like it, but it is absolutely not a requirement. And we have a new area director: as of Thursday, we will have a new area director.

E: All right, so our goals for today are not just similar to where we were three months ago in Bangkok; we've made progress on all of those items, which is nice. We're going to start with the discovery proxy and push, Stuart's drafts, which we've been talking about for years but are getting very close to done. We're also going to talk about the update proxy, private subdomains, and timeout resource records, then keep making progress on privacy, which is still a really big topic for us, and then maybe, time permitting, spend a little time on rechartering afterwards. So, similar to what I just said, here's our official agenda. Would anyone like to bash the agenda? Is there anything you would like to add, or anything you think we should not be discussing? All right, thank you, and we'll start off with Stuart.
G: All right, thank you, David. We have three documents we've been working on for a while that have been slowly going through the process. Good news: DNS Stateful Operations was published as an RFC last Friday. So that's good; I'm really happy with that document. My name is on it, so I don't want to be too self-serving, but I'm really happy with it, and I think it can be a good foundation for future extensions, the first of which is push notifications.

G: We've been through working group last call with this. We got some good feedback, and Ted Lemon has been implementing it. In a way it's lucky that while we were putting DNS Stateful Operations through the publication process, we were sitting on this and holding it, because Ted discovered a small flaw. Before DSO existed, we had made the decision to adopt the DNS Update record formats, because we could reuse existing technology instead of inventing something new. That made sense; but since then, DSO came along and defined the TLV syntax.
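To make the contrast concrete, here is a minimal sketch (not from the draft itself) of what a DSO TLV looks like on the wire: a 16-bit type, a 16-bit length, and opaque data, with nothing resembling resource-record fields to overload. The type number and payload below are invented for illustration.

```python
import struct

def encode_dso_tlv(tlv_type: int, data: bytes) -> bytes:
    """Encode a DSO TLV: 16-bit type, 16-bit length, then the data."""
    return struct.pack("!HH", tlv_type, len(data)) + data

def decode_dso_tlv(buf: bytes):
    """Decode one TLV from the front of buf; return (type, data, rest)."""
    tlv_type, length = struct.unpack("!HH", buf[:4])
    return tlv_type, buf[4:4 + length], buf[4 + length:]

# Hypothetical TLV type number, purely for illustration.
SUBSCRIBE = 0x40
msg = encode_dso_tlv(SUBSCRIBE, b"_ipp._tcp.example.com")
t, data, rest = decode_dso_tlv(msg)
```

Because the length is explicit and the data opaque, nothing like a zone section is needed to interpret the payload, which is the point of the change being described.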
G: We were still using the old DNS Update format, which overloads the resource record formats. What Ted noticed is that in a DNS Update, the first record in the question section, which is called the zone section in an update, tells you the DNS class of the zone. We don't have that section in a push notification, and Ted ran into this when he was actually trying to implement it.

G: He found that there's all this weird overloading of what type and class mean in the records, to mean "add a record", "delete a record", or "delete an entire RRset", and what we were left with was that the DNS class was unspecified. Now, DNS classes are not really used, so maybe that's not important and we can just forget it, but I feel it's not our place to be making that decision. If DNSOP decides that classes are deprecated, I'm fine with that, but this is not the document to make that decision.

G: So with that, and the realization that we don't have to be constrained to look like resource records, we came up with a simple change. Semantically it's no different; it's just a different syntax. I will send a summary to the mailing list, but while I'm here in the room, we would like people to have a look at it and see if they agree that this is a trivial change that everybody's happy with. Along the same lines, Ted noticed that the reconfirm operation has a reply, but there was nothing a client could ever usefully do with the reply. It was just extra bytes being sent and extra code being written, waiting for a reply just so you could ignore it. So while we were making this change, it seemed like a useful simplification to get rid of the replies and make reconfirm a unidirectional message instead of a request-response. I think those are two worthwhile improvements, and we have updated the document with that new text. I think it's probably worth spending the two weeks to do another last call.
E: Then the other document, discovery proxy. Actually, Stuart, before we move on to that: to give people a little bit of context, the DNS Push document had gone through last call before, and it passed, and I think Stuart's approach of doing it again is really good. First off, does anyone in the room have any objections? If so, please come to the microphone now. Otherwise, we understand that maybe not everyone has read the latest document, and having Stuart's summary on the list will be helpful. So we'll start the last call now, or whenever I get to the datatracker, and it'll go for two weeks starting after the end of this meeting, because everyone's busy during this week. All things considered, I think it will all move very smoothly, because we have had a lot of review of this, and especially now that we have an implementation, which Ted will get into, we're in really good shape. So thanks to everyone who put a lot of work into this.
G: Yeah, we had some debate about this, and part of me just said: let's just leave it, not worry about it, add a sentence that describes the problem. But after we discussed it a bit more, it felt like we want to make a good protocol, not a compromise, and at this point a little bit of extra work now, to produce something better in the long run, is, I think, the right trade-off.
F: Terry Manderson, currently still AD. I concur with the last call. The semantic changes you made are substantive, and I would have pushed back anyway had I still been in the seat when it got to me. I would still advise Eric in such a situation to push back on those sorts of changes, and hopefully he would do the same thing, for consistency. So yes: great idea, thank you.
G: Ted and I have also been working on implementing this, and in the course of the discussions explaining it, it became clear that there were some things in the document that could be explained better. So I have made an attempt to improve the document to explain those things, and I have a summary here.

G: None of this changes the behavior or the messages that are sent; these are things we realized were implicit but not stated explicitly. One of them is that the delegated unicast subdomain and the .local collection of information on the local link are more or less a bidirectional mapping: one is mapped onto the other, and almost every query you do in the delegated unicast subdomain maps to a corresponding multicast query. But there are some exceptions, and I added some text discussing that.

G: So there is not a full bidirectional mapping. For example, the document recommends that if there are link-local addresses in the Multicast DNS address records, those link-local addresses are suppressed for queries coming from outside the local link, because the link-local addresses wouldn't be useful there. That's an example of where the .local multicast namespace and the exported namespace don't match.
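That suppression rule can be sketched in a few lines with Python's standard ipaddress module (an illustration of the recommendation, not code from any implementation): IPv4 169.254.0.0/16 and IPv6 fe80::/10 addresses are dropped from answers served off-link.

```python
import ipaddress

def suppress_link_local(addresses):
    """Drop link-local addresses from a set of answers, since they
    are useless to a querier outside the local link."""
    return [a for a in addresses
            if not ipaddress.ip_address(a).is_link_local]

answers = ["192.0.2.7", "169.254.1.2", "fe80::1", "2001:db8::7"]
offlink = suppress_link_local(answers)
```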
G
There
are
names
in
the
local
one
that
aren't
exported
and
in
the
other
direction
there
are
little
differences
and
I
have
one
on
the
slide
here
that
there
are
certain
metadata
records
like
the
SRV
records,
so
how
you
finally
push
notification
server
for
a
given
subdomain.
Is
you
look
up
the
DNS
Porsche
underscore
TCP
SRV
record.
That
record
does
not
actually
exist
in
the
in
the
dot
local
multicast
DNS
namespace.
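For concreteness, the query name is built by prefixing the service label onto the delegated zone, along these lines (a sketch following the naming used in the talk; the authoritative label is whatever the DNS Push draft specifies, and the zone below is a made-up example):

```python
def push_server_query_name(zone: str) -> str:
    """Owner name of the SRV record that names the DNS Push server
    for a delegated zone, per the discovery convention described
    in the talk."""
    return "_dns-push._tcp." + zone

name = push_server_query_name("building1.example.com")
```

The point being made is that this owner name exists only in the exported unicast zone; there is no corresponding record under .local for it to map to.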
G
So
that
was
something
that
the
document
didn't
discuss
and
it's
it's
a
potential
area
of
confusion.
So
we
wanted
to
clarify
that
and
in
particular
the
Woerner
that
is
described
here
after
Ted
had
finished
his
implementation
and
we're
using
it,
and
it
all
seems
to
be
working
fine
and
we're
very
happy.
G
It
was
a
bit
slow
and
when
we
and
to
start
with,
we
didn't
look
at
that,
because
we
were
just
happy,
it
was
working
and
Ted
was
working
remotely
from
Vermont
and
we're
doing
service
discovery
from
3,000
miles
away,
and
it's
kind
of
exciting
that
it's
working.
But
once
we
started
looking
more
closely,
we
figured
out
what
the
delay
was,
and
the
answer
is
when
I'm
using
a
shipping
version
of
this
code.
Now
this
in
two
years
time,
we
won't
care.
G
But
right
now,
I've
got
existing
code,
which
is
looking
for
the
old
udp-based
llq
protocol,
which
ted
has
to
implemented
because
that's
obsolete
and
it's
going
away
and
because
all
those
queries
map
into
the
corresponding
local
query,
it
would
try
and
retry
and
retry
before
it
gave
up.
And
that
would
add
a
six
second
delay
before
any
operation
succeeded
before
it
fell
back
to
the
old,
simple
queries
and
polling.
G
So
once
we
made
that
realization,
it
seemed
wise
to
point
out
in
the
document
that
there
are
some
of
these
names
that
are
special
and
don't
simply
translate
back
and
forth
in
the
obvious
way,
so
that
that
update
has
also
been
submitted.
I
will
run
RFC
DIF
to
make
a
comparison,
to
help,
seeing
the
differences
and
post
that
to
the
list.
G
I
think
what
we
want
here
is
consensus
from
the
group
about
whether
these
are
merely
helpful
editorial
changes
or
whether
they
change
anything
about
how
the
protocol
works,
and
if
we
decide
they
are
changing
them,
something
substantive.
Then
we
need
to
make
the
decision
about
what
level
of
last
call
we
need
to
do
to
review
these.
But,
as
with
the
last
document,
we
felt
that
letting
this
go
ahead
in
its
current
state
was
not
the
best
service
to
the
community
and
benefiting
from
this
implementation
experience
we
had
and
putting
that
in.
H: Ted Lemon. I've been working with Stuart, as he mentioned, on doing an implementation of all this cool stuff that we've written documents for, so we've implemented all of these specifications at this point: Stateful Operations, hybrid proxy (sorry, discovery proxy), DNS Push, Service Registration Protocol, and discovery relay. Stateful Operations is actually being used both for DNS Push and for discovery relay, so we have two different things using the same DNS Stateful Operations code; indeed, actually three things, if you count the client and server of discovery relay as separate, and actually four things if you count the discovery proxy and the client as separate things.

H: So that stuff is all done, with the exception that I haven't actually done the server-side keepalive code, because I was kind of under the gun to get something out the door and that was the lowest priority, but it shouldn't be too hard. The client-side keepalive code is done, so I will probably be hacking on that and testing it over the next couple of weeks.

H: There's a link in the meeting materials that will get you to the actual implementation; it's inside the hackathon repository that we were using. Discovery proxy, which is what Stuart was just talking about: we've got a standalone discovery proxy. It acts as an authoritative server for the zones that it's proxying, and conveniently it will also do DNS lookups for other things, so you can use it as a DNS resolver, which is handy for the Homenet application.

H: That's part of why I did it that way. It relies on mDNSResponder to do the mDNS resolution, and it implements DNS Push so that you can get timely notification of changes; again, the code is at that URL. Currently there are a couple of things missing. The code is able to decide what link to send a request to, but I haven't actually implemented the link-naming stuff that's in the discovery proxy document.

H: So that's still to do, and I'd really like to get it packaged up for OpenWrt, because I think that's one of the really nice use cases for this code, one that a lot of people could try out very easily; but that's relatively little work. So it's mostly done; all the protocol work is done. DNS Push: as I said, we have an implementation in the discovery proxy and then also in mDNSResponder, which is the Bonjour code.

H: What we used in the hackathon was actually a branch of the Apple code with some additional stuff in it, all of this new stuff that we're talking about here. So that is actually doing DNS Push now, and you can actually build it and install it on macOS, if you turn off some protections, and it works nicely. The source code is also available. I need to add the keepalive support, so I need to implement the server side and test the client side.

H: Service Registration Protocol: this is where we have a specially constructed DNS Update that is checked for self-consistency, and then, if there isn't a conflict in the zone for the update, it's just added. The update includes a key, which is used to sign it, so the update is validated against the key that was used to sign it; and then, if another update comes in for the same name that uses a different key, it isn't accepted.

H: So you get first-come, first-served naming as a result of that. I've implemented a very simple SRP client that generates an update, signs it, and sends it. I used ECDSA, because the target for the simple SRP client is an IoT device that's low-power and doesn't want to do a lot of work, and ECDSA is a good choice for that. It's also very compact, so the update is actually quite small.
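The first-come, first-served rule just described can be sketched like this: the server remembers which key first claimed each name, and later updates for that name are accepted only if they present the same key. This is an illustrative sketch of the rule, not the actual SRP server code; the key-hash bookkeeping is an assumption made for the example.

```python
import hashlib

class FcfsRegistry:
    """Sketch of SRP first-come, first-served naming: the first key
    to claim a name owns it; updates signed with a different key
    are refused."""

    def __init__(self):
        # name -> digest of the public key that first claimed it
        self.owners = {}

    def try_update(self, name: str, public_key: bytes) -> bool:
        key_id = hashlib.sha256(public_key).hexdigest()
        owner = self.owners.setdefault(name, key_id)
        return owner == key_id

reg = FcfsRegistry()
first = reg.try_update("printer.example.com", b"key-A")     # claims the name
refresh = reg.try_update("printer.example.com", b"key-A")   # same key: ok
conflict = reg.try_update("printer.example.com", b"key-B")  # different key: refused
```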
H
I
thought
I
put
the
size
of
the
update
in
there,
but
it's
it's
like
415
bytes
for
a
complete
service
advertisement
using
using
SRP,
it's
one
packet
to
the
server
and
one
response
back
and
the
response
would
just
be
a
DNS
header.
That
says
yes,
thank
you
so
and
then
I've
also
implemented
an
SRP
proxy.
That's
a
essentially
an
SRP
server.
It
receives
SRP
updates
checks
them
for
consistency,
checks.
The
signature
I
haven't
quite
gotten
it
to
the
point
where
it's
actually
doing
a
DNS
update
to
a
DNS
server,
but
that's
the
next
step.
K
H
Nine
is
doing
it
correctly
when
I
first,
when
I
first
did
the
code.
I
I
read
the
the
6/0
document,
implemented
it
sort
of
naively
and
had
successful
validation
between
the
proxy
and
the
simple
client,
and
then
I
tried
to
do
it
against
by
nine
and
by
nine
rejected
it
out
of
hand,
and
it
took
me
a
couple
of
I
think
a
day
or
two
to
figure
out
why,
by
nine
didn't
like
it,
it's
the
way
that
sig
zero
works
is
quite
obscure.
H
So,
but
that's
done
so,
the
reigning
work
is.
We
need
to
have
an
SRP
client
that
actually
does
the
full
SRP
protocol
and
not
the
simplified
SRP
protocol,
just
to
make
sure
that
then
all
that
stuff
works
but
I
don't
see
any
reason.
Why
I,
wouldn't
it's
very
straightforward,
be
nice
if
the
SRP
proxy
was
actually
updating
by
nine
and
it
would
be
kind
of
nice
to
hack
SRP
support
in
two
by
nine.
So
these
are
things
that
are
on
the
agenda,
but
none
of
these
are
things
that
I
think
are
sufficiently
I.
H
Think
we
I
think
we've
we've
exercised
the
protocol
well
enough.
That
I
feel
like
the
specification,
is
mature,
so
be
nice
to
do
some
more
work
on
it,
but
I
don't
think
we
have
to
before
we
publish
the
code
for
all
of
this
is
in
that
link
it's
all
open
source.
Apache
you're,
welcome
to
check
it
out,
discovery
relay
so
discovery
relay.
H
The
idea
is
essentially
a
discovery
relay
is
like
a
DHCP
relay
a
little
bit
more
complicated
than
a
DHCP
relay,
but
has
the
same
essential
function,
which
is
a
really
stupid
thing
that
doesn't
have
any
any
features
you
can
add
to
it.
It's
only
goal
is
to
essentially
act
as
like
an
extra
like
Network
link
for
a
centralized
discovery
proxy.
So
you
have
a
centralized
discovery
proxy.
It
connects
to
the
to
the
discovery
relay
using
the
discovery
really
protocol
and
effectively
it
has
a
it
has
access
to
the
link
that
the
discovery
relay
is
on.
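What makes the relay so featureless can be shown with a toy sketch: all it really does is frame each captured mDNS packet with a link identifier and a length and ship it over the TCP session to the discovery proxy, which does all the thinking. The real protocol carries this inside DSO TLVs over TLS; the framing below is invented purely for illustration.

```python
import struct

def frame(link_id: int, packet: bytes) -> bytes:
    """Prefix a captured mDNS packet with its link id and length
    so it can be sent over a stream to the discovery proxy."""
    return struct.pack("!IH", link_id, len(packet)) + packet

def unframe(buf: bytes):
    """Recover (link_id, packet) from one framed message."""
    link_id, length = struct.unpack("!IH", buf[:6])
    return link_id, buf[6:6 + length]

framed = frame(7, b"\x00\x00mdns-query")
link, pkt = unframe(framed)
```

Because the relay never inspects the packet, new features land entirely in the proxy, which is the deployment property being described.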
H
That
means
you
can
add
all
of
your
features
to
the
discovery
proxy.
You
don't
have
to
add
any
features
to
the
discovery
relay
you
don't
have
to
keep
the
discovery
relay
particularly
up
to
date
and
it'll
still
work.
So
we
have
an
implementation
for
that
in
mdns
responder,
both
the
client
and
the
server,
and
they
were
great
I've
used
them
to
do
discovery
over
3,000
miles
of
of
Internet,
so
remaining
work.
H
Discovery
relay
requires
TLS
support
and
it
doesn't
use
PK
I
uses
pre-shared
keys,
not
PSK
in
the
standard
sense,
but
just
like
you
know
a
self-signed
key
that
you
would
configure
as
opposed
to
using
PK
I
to
validate
and
I,
haven't,
actually
tried
that
bit
but
I'm
pretty
sure
that
we're
that
the
specification
is
correct,
I
updated
the
specification.
After
doing
all
the
TLS
work
for
DNS
push,
and
so
I
think
it's
correct
at
this
point,
so
I'm
reasonably
happy
with
it
and
feel
like
the
document
ready.
H
H
At
this
point
we
have
reason
to
think
that
these
drafts
are
mature,
so
action
items
would
be
really
nice
if
anybody's
interested
in
implementing
like
like
doing
a
real
full
implementation
of
some
part
of
this
work,
it
would
be
nice
to
get
another
person
to
do
that,
and
you
know
please
talk
to
me
if
you
want
to
try
to
do
some
Interop
testing.
I
would
love
to
do
that.
I'd
love
to
help
help
you
get
up
to
date.
H
But
don't
ask
me
any
questions
about
the
protocol,
because
I
want
you
to
to
read
the
specs
I
think
we're
ready
for
a
working
group
last
call
on
the
real.
A
document
passed
last
call
once
before,
but
I
just
wanted
to
hold
off
until
I'd
done.
The
TLS
work
and
I
feel
like
at
this
point.
The
TLS
stuff
is
good
enough.
L: Let me make a comment about the last call first. Because everyone's so busy this week, I don't think it would be a good idea to start last calls right after this meeting, but maybe next Monday we could start them, and then we'd have time to... yeah.
L: Okay, this draft came out in February, and it's the first time I've talked about it. We haven't actually gotten to the recharter work yet, but one of the things that I hope becomes a recharter item is: how do we transition to a unicast-only discovery model, to better serve the Wi-Fi clients that are a bigger and bigger proportion of our user base?

L: If you're not aware of the problems there, it's mostly to do with waking up clients that are on battery, making them spend additional power to receive multicast that they end up not needing, because nobody needs the service they provide, or they don't provide that service, or things like that. So the idea is that if we can transition to unicast, we get better utilization of the access points and better battery performance on the clients.

L: We'll be referring back to this diagram; it doesn't have anything update-proxy-specific in it. It's just so we can reference things and talk about how they work from a unicast perspective. Clients that are on this network, and external ones, will make queries for services on subnets, and we need a way to map those into a unicast DNS namespace. The discovery proxy is one way to do it, and it does it dynamically; the update proxy is an alternate way to do it.

L: One of the problems with the deployment of wide-area Bonjour was that all of the clients couldn't get a shared key that they could use to send updates; the authoritative servers didn't scale well, and you didn't want to distribute those keys. By having a few trusted boxes in your infrastructure that do have those keys act as a proxy, you can scale the updates better and do the service registrations all over unicast.

L: And then to the clients it's transparent. As long as the top-level domain is in their search domains, the update proxy can insert the browse pointer records at the domain level, so that it can add the subdomains as needed, and then to the clients this all becomes transparent and automatic.

L: It's quite neat, because of how easily the whole thing builds itself automatically. You configure the one shared key, and as the update proxy becomes aware of its IP subnets (they can come and go, or be reconfigured) it can add them, configure them, make them browsable, and then start sending updates for the services it discovers. The other nice thing is that each client on these individual networks can transition independently. Initially it's going to proxy the mDNS traffic, but as the clients learn about an update proxy (they search for an update proxy and find it), they can unicast their announcements to the update proxy, and each network transitions independently from multicast mDNS to unicast only.

L: The update proxy can start out by passively listening to mDNS announcements, and it can send a query for _services._dns-sd._udp.local to try to find services that are on that network. When you first announce a service, you send it twice, so that people who may have sent a query recently but weren't online at the time still get those updates, so the proxy can use those announcements, and then, once it learns about these things, it can send unicast queries. It listens for the goodbye announcements, where the TTL is zero, and it can then delete the entry and send the update that removes it from the unicast server. Along with the records that you send in the update, you can either include the lease lifetime option or actually add a timeout record at the same time (Tim's going to talk about timeout records later), but it's another way to convey the lifetime of that lease for the update announcement.
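The cache-maintenance rule just described can be sketched like this (an illustration of the behavior, not the draft's or the implementation's code): an announcement with TTL 0 is a goodbye and removes the entry, triggering a delete toward the unicast server; anything else adds or refreshes the entry and triggers an update carrying a matching lease.

```python
def apply_announcement(cache: dict, name: str, rdata: str, ttl: int) -> str:
    """Update the proxy's cache from one mDNS announcement and say
    what should be sent to the unicast authoritative server.
    TTL 0 is a goodbye: the entry is removed and a delete is due;
    otherwise the entry is added/refreshed and an update is due."""
    if ttl == 0:
        cache.pop(name, None)
        return "delete"
    cache[name] = (rdata, ttl)
    return "update"

cache = {}
op1 = apply_announcement(cache, "Printer._ipp._tcp.local", "host1.local", 4500)
op2 = apply_announcement(cache, "Printer._ipp._tcp.local", "host1.local", 0)
```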
L: As a refresher on how the subdomain name can be determined: both clients and the update proxy can search for what would be the default registration domain for the subnet, the registration domains, and the list of browse domains and the default browse domain, and try to determine what the subdomain is. If there's one already allocated for the subnet, it uses it; if not, it can create one on demand and register it, so that all the clients, and other update proxies on the same subnet, can find it.

L: What should I call this? Does it matter that it's the engineering department versus the marketing department, or building one floor one versus building two floor two? It doesn't really matter; you don't necessarily care about any of that. These names are not going to be ones that appear in UIs: typically you're looking for the actual service, the SRV instance name, not these names, so I'm not sure it really matters what they look like.

L: Because this puts all the services in the unicast authoritative server, when clients query the unicast server they get a response back immediately, so there's no "go search for the service": it's instant. It's either in the authoritative server or it's not. So we have to get everything in there; there are advantages and disadvantages to this.

L: In the end, if you look at a switch: when you do a multicast query on a switch, you've got all these ports with all these hosts on them, and a single multicast gets replicated on every port; and if you send multiple unicasts, those get replicated on all those same ports on the subnet. So as far as the number of packets going out, I'm not sure there's really a difference; the replication is done in the hardware of the switch for multicast.

L: As far as the complexity goes, I think we need feedback from implementers to validate this. The discovery proxy has to implement a full authoritative server, and, if it does the mDNS relay so that everything gets collected into a single discovery proxy, that's additional code to write; whereas the complexity of the update proxy is mostly just passive mDNS listening and doing some unicast queries, plus sending updates to an authoritative server.

L: It's the number of subnets times the number of queries plus responses, because when you do a unicast query it goes out to every subnet, right? The client will query, get a list of subdomains back, and send out queries to each one of those, each of which does an mDNS query on its link and gets however many responses back. So it's the number of subdomains times the number of responses, basically, plus one.
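As a back-of-the-envelope sketch of that estimate (the numbers below are hypothetical, and the formula is one reading of "number of subdomains times queries plus responses, plus one"):

```python
def browse_message_count(num_subdomains: int, responses_per_subdomain: int) -> int:
    """Rough message count for one browse across delegated subdomains:
    one query for the list of subdomains, then, per subdomain,
    one query plus however many responses come back."""
    return 1 + num_subdomains * (1 + responses_per_subdomain)

# Hypothetical example: 4 subdomains, 3 services answering on each.
total = browse_message_count(4, 3)
```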
G: Once we implement this and test it, we can get real operational data. I think you may be being a little bit optimistic with the O(1), which is another way of saying there is no change in the multicast traffic at all: it's one times what it used to be, which is no change at all. I think when you do the _services meta-query, what you're actually going to be doing is a multicast query to discover all the service types represented on the network, yes.

G: My intuition here is that the amount of traffic is probably going to be comparable, and it would probably be possible to show that in any case the discovery proxy is strictly less multicast traffic, because the update proxy has to exhaustively discover everything on the network that anybody might potentially ask for in the future, whereas the discovery proxy, because it's on demand, is by definition only discovering the subset of all possible services that clients are actually interested in. So I think it has to be strictly less, because it's discovering fewer things.

G: Then you talk about DNSSEC. I think there you want NSEC3, because the idea of NSEC3 is to prevent zone walking, which was an attack discovered with classic DNSSEC that some people were concerned about: from looking at the NSEC records, you could enumerate all the contents of a zone, which had privacy implications. NSEC3 added some cryptographic hashing to make that zone walking infeasible.
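The NSEC3 idea can be sketched as follows: instead of chaining the actual owner names (which lets anyone walk the zone name by name), the denial chain is built over salted, iterated hashes of the names, so an observer of the denial records sees only hashes. This is a simplified illustration in the spirit of the RFC 5155 hash; the real algorithm hashes the wire-format name, and the salt and iteration count below are made up.

```python
import hashlib

def nsec3_style_hash(name: str, salt: bytes, iterations: int) -> str:
    """Simplified NSEC3-style owner-name hash: iterated, salted SHA-1.
    (RFC 5155 hashes the canonical wire-format name; we hash the
    lowercased text form here purely for illustration.)"""
    digest = hashlib.sha1(name.lower().encode() + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest.hex()

h1 = nsec3_style_hash("printer.example.com", b"\xab\xcd", 10)
h2 = nsec3_style_hash("Printer.example.com", b"\xab\xcd", 10)  # case-insensitive
h3 = nsec3_style_hash("scanner.example.com", b"\xab\xcd", 10)
```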
G: In the case here, I believe the draft talks about this, and if it doesn't, we should fix it. The discovery proxy allows online DNSSEC signing, not offline, which is a little subtlety. It's not saying you can't use DNSSEC; you can, but the device has to know the key. What you can't do is the classic setup: keep a PC in a locked room with a floppy disk, sign the zone offline, and then walk the floppy disk out to the server. If you really want very strict isolation, so that that device can't be compromised, you can't have it when you're building the zone dynamically. The device needs to own the key, and that is an additional vulnerability, but that's not quite the same thing as saying you can't do DNSSEC; it's just that you're only doing online signing.

G: If a client does a query for a different name that doesn't exist, you send it an NSEC for that name. What you don't have is NSEC records that straddle a whole range of names, saying none of these exist; but that capability was actually a failing of NSEC, because it enables zone walking. So the inability to do range NSEC records is arguably not a deficiency, because that was not considered a desirable property of NSEC records. Okay.
L: I just need to walk through the implementation and match it up to the spec, and make sure that everything gets created correctly and is listed in the spec: what to create to make the auto-configuration all work transparently to a client. In the case where an update proxy fails and leaves subdomain information behind, and another update proxy takes over, I just want to document how to make sure that transition happens cleanly.

L: I had some questions, and Stuart and I can talk later, or now, if that's appropriate. If you want to register a service in multiple domains, and the registration-domains query comes back with a list of multiple domains, is it okay to just register that service in all those domains? Or, likewise, when doing address enumeration of the subnet...
G: It makes me sad that even today you can buy a network printer that advertises its service using DNS-SD, and the service name is "vendor printer" plus hexadecimal vomit, because they think that if you make every name unique out of the box, that solves the problem, right? You look at the list of printers available in AirPrint and you see 12 different printers, differentiated only by the hexadecimal string on the end, and as long as you memorize hexadecimal strings, you know which one you're using. Getting people to even put in a meaningful name for the printer, by going to the web UI and typing in a name, turned out in practice to be too much of a hurdle. So if you're not willing to give the thing a name, configuring it with a key is certainly not going to happen. That was overly optimistic; it never happened.

G: And the point I'm getting to here is: don't be overly constrained by what we were hoping would happen 10 to 15 years ago. If it's appropriate to define new mechanisms or new conventions here, that is totally on the table. Those old records were there with the intention of giving guidance, so that when you connect your new device to the network and go to its web UI, it says: I see that on this network the administrators are recommending the following domains: sales, marketing, engineering, or first floor, second floor, third floor, and you pick the one that's appropriate for your device. Since that never really materialized, we're not beholden to it; we can define new mechanisms if that's appropriate for what you want to do. Okay.
L
L
We
just
had
a
discussion
at
the
hackathon
about
the
reconfirmed
message
and
it's
not
possible
at
this
point
to
get
a
reconfirm
back
to
you
know,
there's
no
nowhere
to
send
a
reconfirm
if
you
received
one
at
the
authoritative
server,
but
what
the
reconfirmed
triggers
in
the
discovery
proxy
I
think
the
client
could
do
manually
when
it
would
have
wanted
to
do
that
when
it
recognizes
that
things
not
responding.
I
think
you
can
just
make
the
queries
that
it
needs
to
do
without
using
the
reconfirm
in
the
subscription.
L
As far as implementation, there's one I'm working on. It's partially working, but it's not actually sending the update out at the moment because of a concurrency problem I was having in the code as I learned Rust; I think I've worked through the issue in my head now, and it's a matter of reconnecting the code. It does listen to the mDNS announcements that are out there, builds a cache, and then sends updates for new entries into the cache, and times out
L
the cache entries. It listens as it gets the goodbye packets; it doesn't send the updates yet for those, to delete the entry. I'd like to add v6 support to it, and do it over TLS as well; right now it's all over UDP. Also do the subdomain discovery and autoconfiguration stuff, and add dynamic interface support, so that as addresses are added on an interface we automatically recognize those and, you know, start.
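The behavior just described (listen to mDNS announcements, build a cache, push updates upstream for new entries, honor TTL expiry and goodbye packets) can be sketched as a toy data structure. This is a minimal illustration, not the Rust implementation mentioned above; all names and the update representation are invented for the sketch.

```python
import time

class ProxyCache:
    """Toy cache for an mDNS-to-DNS-Update proxy: records announced
    services, expires them by TTL, and deletes them early when a
    goodbye packet (TTL 0) is seen. Illustrative only."""

    def __init__(self):
        self._entries = {}          # name -> absolute expiry time
        self.pending_updates = []   # updates we would send upstream

    def on_announcement(self, name, ttl, now=None):
        now = time.time() if now is None else now
        if ttl == 0:                # goodbye packet: remove, delete upstream
            if self._entries.pop(name, None) is not None:
                self.pending_updates.append(("delete", name))
            return
        if name not in self._entries:   # new entry: add upstream
            self.pending_updates.append(("add", name))
        self._entries[name] = now + ttl  # refresh expiry either way

    def expire(self, now=None):
        now = time.time() if now is None else now
        for name, deadline in list(self._entries.items()):
            if deadline <= now:
                del self._entries[name]
                self.pending_updates.append(("delete", name))

cache = ProxyCache()
cache.on_announcement("printer._ipp._tcp.local.", ttl=120, now=0)
cache.on_announcement("bulb._hap._tcp.local.", ttl=120, now=0)
cache.on_announcement("bulb._hap._tcp.local.", ttl=0, now=10)  # goodbye
cache.expire(now=200)                                          # printer TTL lapses
print(cache.pending_updates)
```

A real proxy would batch these pending operations into DNS UPDATE messages toward the authoritative server, over UDP, TCP, or TLS.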
E
G
The discovery relay is like a USB Ethernet interface with a really, really long USB cable on it that runs over TCP. It is a way for you to have an interface on a network that you're not physically connected to with hardware, but it's as if you're connected, and then all the stuff you would have done locally you can do remotely. And if we're successful in getting those relays deployed in routers, then they provide this capability.
G
H
It's a great point. Ted Lemon. So I don't want to make you answer this question right now, because I think it would be a little bit difficult to answer while you're just standing there in front of us, but I don't understand... so we have the hybrid proxy, the discovery proxy.
H
Looking at this problem, I think Stuart and I... I remember we were meeting at the IETF in Argentina and talking about this, and I actually proposed something really similar to this, because, like you, I wanted to have a complete zone file with everything in it. We had a pretty long conversation about that, and I came away from it.
H
I came away from the conversation concluding that actually that wasn't a really good idea. That doesn't mean that I was right or that Stuart was right, but it would be good to actually capture that discussion and be able to express clearly why it is that this is different and good, as opposed to the same thing and not really different. Yeah.
L
The short answer is that I implemented the discovery proxy and found it to be, I think, more complex than I would have liked, adding on the DNS relay and discovery relay and all those components. You know, you guys were able to bootstrap from mDNSResponder, and that got you to where you are today much quicker, right? And certainly that's an option for people. Some people may not be able to do that; some people are going to implement from scratch, and I find this solution to be simpler.
E
Thanks, Ted. Could we ask you to repeat this question on the list? Because we don't have time right now, but I would love to see maybe a comparison of the two solutions, like the one you had about the discovery proxy, you know, the nine-squares-tall complexity diagram, and getting to the bottom of this on the list would be really helpful.
H
E
L
Okay, so we've talked about different privacy options, and I thought I'd throw another one into the mix here. The idea is that we already know how to search for services in a subdomain. If we can make that subdomain private, then we just do what we normally do: we make normal queries and get normal responses.
L
And so the idea is that all the queries and responses are encrypted. They can be over TLS; that's what the current document says. The previous document said, well, we could do these queries in the clear, but the actual records would be encrypted, and that got too complex. So this version says: let's just use TLS.
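The point that a private subdomain keeps the query shape unchanged can be made concrete: DNS-SD browsing (RFC 6763) is just a PTR query for a service type under a domain, so a minimal sketch only has to compose names. The subdomain string below is invented for illustration.

```python
def browse_name(service, domain):
    """Compose the DNS-SD PTR query name for browsing a service type
    in a domain (RFC 6763). The private-subdomain idea keeps this
    exact query shape; only the transport and authorization change."""
    return f"{service}.{domain}"

# 'my-private-zone.example.com' is a made-up private subdomain.
private = "my-private-zone.example.com"
print(browse_name("_ipp._tcp", private))                 # browse printers
print(browse_name("_services._dns-sd._udp", private))    # meta-query: list service types
```

The same names would simply be carried inside a TLS session to the authoritative server instead of being sent in the clear.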
L
Have some policy to allow me to create that subdomain, based maybe on some credentials I already have, like my email credentials, or it could be first-come first-served, like Ted's Service Registration Protocol does, and only act if there's a problem or conflict. When you first create it, there's nothing in there, so you're not exposing anything. And the way you create one is you use UPDATE to create a KEY record at the apex, and that KEY record is the public key, and then from then on, all
L
operations that you do in that private subdomain have to be signed with the private key associated with that public key. So when you do a query, you would sign the query with a SIG(0) signature using your private key; it would go up to the authoritative server, which would then verify it using the public key you put in there, and make sure that you really have the private key.
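A rough sketch of that create-then-sign flow, under loudly stated assumptions: real SIG(0) (RFC 2931) uses public-key signatures verified against the KEY record at the apex, but to keep this runnable on the standard library alone, a symmetric HMAC stands in for the signature. Only the control flow is the point here, and all names are hypothetical.

```python
import hashlib, hmac

# Toy model of the flow described above: the first update installs a
# verification key at the subdomain apex (first-come, first-served);
# every later update must carry a signature that verifies against it.
# NOTE: an HMAC is a symmetric stand-in for the asymmetric SIG(0)
# signature; do not read this as the real cryptography.

class AuthoritativeServer:
    def __init__(self):
        self.apex_key = {}  # subdomain -> verification key bytes

    def handle_update(self, subdomain, message, signature, key=None):
        if subdomain not in self.apex_key:
            # First update creates the subdomain and installs the key.
            self.apex_key[subdomain] = key
            return "created"
        expected = hmac.new(self.apex_key[subdomain], message,
                            hashlib.sha256).digest()
        if hmac.compare_digest(expected, signature):
            return "accepted"
        return "refused"

server = AuthoritativeServer()
secret = b"holder-of-the-subdomain"   # stands in for the private key
server.handle_update("zone.example", b"", b"", key=secret)

msg = b"add _ipp._tcp PTR printer"
good = hmac.new(secret, msg, hashlib.sha256).digest()
print(server.handle_update("zone.example", msg, good))      # accepted
print(server.handle_update("zone.example", msg, b"bogus"))  # refused
```

With real SIG(0), the server would instead verify an asymmetric signature against the public KEY record, so it never needs to hold any secret of the client's.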
L
N
N
L
It's really... I'm not trying to dictate the policy and how that happens, but there are a variety of ways it could happen. They're not currently enumerated in the draft, but that's something we could do, as possible ways. I think current providers do that for you today. Now, you may not trust a provider to create the public-private key pair, give it to you, and not keep it, but that happens today, and a lot of people have rolled out services doing that.
O
L
M
L
I
I have one question. My name is Lucia Tosh. So the intent here, as I understand it (you can correct me if I'm wrong) is to keep these domains private, yes? But there are things like the certificate transparency log, which is very popular; you can get the subdomains from there, right? So if a certificate is issued for the private subdomains, one could go and get the private subdomains from there, assuming the owner of the website is also populating the certificate transparency log.
L
Someone brought up synchronizing the private key, and, you know, if you have a group, you might want a lot of people to be able to read it but only a few people to be able to update it. So we could have a write key, as opposed to a read key, at the apex as well. And then Tim and, well, they both brought up the point that maybe you want to separate encryption and authentication, so that if you wanted to provide public records, you could do that.
E
P
All right. Since Tom was talking quite a bit, I'm presenting our timeout draft. For those of you who haven't read the draft, a quick overview of what we are trying to do with it: it introduces a new TIMEOUT resource record, which essentially holds information about how long other resource records can be considered valid; it gives them a lifetime. After this lifetime, the authoritative server is supposed to remove them.
P
The idea of having this as a resource record is simply to have this information stored right in the zone, where you have all the other information about the records. This way you can transfer it to your secondary servers, and even if you have a multi-vendor setup it's not a problem: you just have this information in one place and can transfer it, regardless of any vendor-specific things you might have done.
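As an illustration only (the actual record name and RDATA format are whatever the draft defines, and are not quoted in this session), a zone holding a per-name expiry alongside the records it governs might look something like:

```
; Hypothetical presentation format, for illustration only; the real
; TIMEOUT RDATA format is defined in the draft, not here.
printer._ipp._tcp.example.com.  120  IN  SRV      0 0 631 printer.example.com.
printer._ipp._tcp.example.com.  120  IN  TXT      "rp=ipp/print"
printer._ipp._tcp.example.com.  120  IN  TIMEOUT  20190326101500  ; absolute expiry
```

Because the lifetime lives in an ordinary record, it travels with zone transfers and survives a textual dump of the zone, which is the portability argument made later in the session.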
P
In other cases, also: I mean, you could do an implementation which survives restarts or crashes of your server software, but if you don't, then this resource record will save the information about when the record is supposed to expire right in the zone, and you also have it if something fails. So what could fail?
P
For example, if you have services which are registering via, for example, SRP or the update proxy or anything else, the cleanup might not happen. And even though you might have an update lease option, so you know how long a record is supposed to be valid (you know, the EDNS0 option), this provides a mechanism to transport that lifetime and to save that state between different instances of your name servers.
P
So that's kind of the idea. During the hackathon this weekend I've been working on an implementation of a little daemon which provides this functionality by just looking through the zone, grabbing the resource records, and saving a timer to remove them later. When the expiry time has been reached, it just sends out an update to the authoritative server and the records are removed.
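The daemon's core loop described above (scan the zone, note expiry times, delete what has lapsed via DNS UPDATE) reduces to a few lines. This is an invented sketch, not the hackathon code; names and data shapes are illustrative.

```python
import time

def expired_names(zone, now=None):
    """Given (name, absolute_expiry) pairs harvested from TIMEOUT
    records, return the names whose lifetime has lapsed. A daemon
    like the one described above would then send a DNS UPDATE
    deleting those names from the authoritative server."""
    now = time.time() if now is None else now
    return [name for name, expiry in zone if expiry <= now]

# Illustrative zone state: expiry times are absolute timestamps.
zone = [
    ("printer._ipp._tcp.example.com.", 1000),
    ("bulb._hap._tcp.example.com.", 2000),
]
print(expired_names(zone, now=1500))  # only the printer has expired
```

A production daemon would rescan after zone changes and sleep until the earliest pending expiry rather than polling.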
P
H
Ted Lemon. You might consider having it be that any record is superseded by any more specific record. In other words... oh yeah, the whole thing's missing.
Q
Hi, Pieter Lexis, PowerDNS. I had a quick look at the draft just before I came in. We're not against implementing this; however, we don't have plans to do it anytime soon. And there was one little thing I didn't see in the draft: it does not mention whether the TIMEOUT record should or should not be queryable. Okay.
P
Q
C
L
So in the initial discussion of this, Joe Abley said that he wanted it to act like any other resource record, and when we had it not queryable, he said: we don't want special rules. So that's why it's now queryable, and it also allows you to do an update to add and remove it and know it's there.
P
G
Stuart Cheshire. A couple of comments on the draft. At the start, in the introduction, you give some motivation: that compared to using the EDNS0 option, the benefits of having this be a real DNS record are that it can be saved on disk and that it can be transferred to secondary servers. I think you can make those justifications a bit tighter, because we do have ways of saving data on disk that are not DNS records; my digital camera does that every time I take a picture. We do have this technology.
G
I think you've got a good argument that if you want to save it out as a textual zone file, in a way that is portable between different DNS vendors, then having this other metadata stored on disk in a proprietary format is not portable. So I think that would be a crisper argument: that it gives you portability. It's not that it's the only way to save data on disk, but it saves it
K
G
in a portable way. The other thing that was a bit anomalous, and I'm not sure how you'll clean it up: you start off by saying the great benefit of this is that it's transferred to secondary name servers, and then later in the document you say that secondary name servers must not do anything with this data. If they're never going to do anything with it, then the benefit of transferring it to them is less clear. So I think that language could be tightened up a bit.
P
E
P
E
B
O
O
Yes. So first, a little set of assumptions about what we assume in this design. When I speak of TLS, I speak of TLS 1.3, and the reason for speaking of TLS 1.3 is that it is the first version that has any hope of providing server privacy, because in the versions prior to that, the response from the server carried a clear-text version of the server certificate, and so much for server privacy. Now that is fixed in TLS 1.3, so that's a good thing.
O
O
The next thing that we have in the toolbox, which is actual work in progress, is encrypted SNI. The initial TLS request carries a Server Name Indication, which is effectively the domain name of the server, to simplify. Of course, it's sent in clear text, and if your initial request is sent in clear text, the hopes of privacy are very limited. But we have recent developments on how to encrypt that SNI, and it's encrypted by effectively having a public key of the server accessible to clients.
O
So they can use it to encrypt the SNI, and that way only the intended server can decrypt it. That's meant in particular for big servers that serve a lot of different domains: you can connect to the server, but an observer can't tell which particular domain you connected to. And the further assumption in the design is that we are going to use UDP-based transports. We are of course familiar with TLS over TCP, but there are variations that do not use TCP, and there are two interesting variations.
O
There is DTLS, which is basically straight TLS over UDP, and QUIC, which also uses a TLS exchange during its initial handshake to set up the encryption key, and then is encrypted and carried on top of UDP. So these are the assumptions in the design. Then, the basic idea: if we have these protocols that are carried over UDP, then I can send the first packet of the exchange over multicast, and that first packet carries the encrypted SNI of the target server.
O
So if I do that, it's received by every server, in fact everybody who hears us on the local network, and they try to decrypt it, and only one of them will succeed: the one that has the corresponding private key. And so that gives you a way to do a discovery request, followed by trial decryption, and if that matches, well, you have a one-to-one TLS-based connection. What attracted me to the design is that it has the property of doing minimal innovation in terms of encryption and processes.
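The multicast-plus-trial-decryption step can be modeled in a few lines. A heavy caveat: the real proposal uses the ESNI public-key machinery, while this sketch substitutes a standard-library HMAC tag so it runs without dependencies. It only shows the shape of the idea: exactly the holder of the matching discovery key succeeds, and everyone else fails. All names are made up.

```python
import hashlib, hmac, os

def seal(discovery_key, server_name):
    """Client side: protect the target's name under that server's
    discovery key and multicast the result. (HMAC tag over a fresh
    nonce stands in for ESNI public-key encryption.)"""
    nonce = os.urandom(16)
    tag = hmac.new(discovery_key, nonce + server_name, hashlib.sha256).digest()
    return nonce, tag

def try_open(discovery_key, server_name, nonce, tag):
    """Server side: every listener checks whether the sealed name was
    meant for it, using its own key and its own name."""
    expect = hmac.new(discovery_key, nonce + server_name, hashlib.sha256).digest()
    return hmac.compare_digest(expect, tag)

pump_key, bulb_key = os.urandom(32), os.urandom(32)
nonce, tag = seal(pump_key, b"pump.local")  # client targets the pump

print(try_open(pump_key, b"pump.local", nonce, tag))  # True: matching key
print(try_open(bulb_key, b"bulb.local", nonce, tag))  # False: wrong key
```

In the actual design the packet is a real TLS or QUIC first flight, so a successful trial decryption flows directly into a one-to-one encrypted connection.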
O
Etcetera. You get TLS, and you don't have to innovate or discuss which key you use or whatever; it's all specified. You benefit from a lot of investment in making TLS work, and you benefit from a lot of investment in the ESNI design, because there are pitfalls there, in verifying that this thing works. So that was the general idea. Now:
O
There's one difference with the ESNI design, the classic... well, it's a work in progress, so calling it classic is kind of weird, but the ESNI design relies on DNS. Basically, when you go to the DNS, a property of the server tells you which encryption key you shall use for the ESNI; it gives you the public key that you shall use.
O
O
So we don't do that. Instead of putting this ESNI key in a public space for everyone to read, we say no: we are going to use that as a control point, so that this public key is only delivered to the clients that are authorized to discover me. And so, inasmuch as I keep that public key secret (and here we start having a problem of vocabulary), inasmuch as I keep that public key disclosed only to authorized clients, I have in fact a private discovery system.
O
Now, in the document (we had discussion on this on the email list) we want to not call that a public key, because if we call it a public key, people assume it's public. So we call it the discovery key. Each service has a discovery key, which is only available to authorized clients.
O
So that gives you the property that only authorized clients can issue a discovery request, and then, when they get the response, they know with certainty that the response came from the authorized server, because the ESNI design has a proof in it: there is a proof in the response that the server is who they claim to be.
O
The design is resilient, because there is a fallback mode. Suppose you have given that discovery key to all your buddies, and one of them is a traitor, or one of them is careless and leaks the key. Well, in that case you go back to plain ESNI.
O
That means that, yes, it would be possible for someone to discover me, because the key has leaked, but at the same time it won't be possible to tell which clients are trying to discover me. So we have this fallback mode: if the key leaks, the server can become discoverable, but the client's identity is not discovered.
N
Yes, I kind of like the idea, at least from what I have heard so far. My question is: TLS, by its design, is a little bit asymmetric; you have a client and a server, and in DNS-SD maybe devices are more like peers that discover each other. So if my printer is discovering my lightbulb, I don't know who would be the client and who would be the server, and how they would decide on the roles to use.
O
In the scenarios that we are looking at (we have looked at Stuart's example, the key scenario in which the diabetes app on your cellphone is discovering the insulin pump in your body), these things are really peers, and the high-level response is that any one of them can act as either server or client; it's a matter of application design.
N
R
R
O
Okay, so let's go to where we have small issues. First, it's not clear that all applications that need private discovery will run over QUIC. They will eventually, because QUIC is going to take over everything, but that will take some time. So in that case, what we can do is have a two-phase scenario in which the application that has to be discovered also has a small implementation of a discovery protocol that runs over DTLS, the ESNI discovery, and provides something like DNS over DTLS or DNS over QUIC.
O
The target of the design is really the private, more or less peer-to-peer, application-to-device scenario, in which a device is paired with pretty few applications, pretty few clients. If you take the insulin pump, for example, you are not going to pair the insulin pump application with very many pumps, just a few. So in those peer-to-peer scenarios we don't have a really big scaling issue, because the normal set is very small.
O
But it is true that if you want to discover something, a client that comes onto a network is going to send a bunch of multicast requests, one for every server that it wants to connect to, every private server, and it is true that there may well be deployment cases in which that number is large. So that's the downside of that proposal.
O
The other downside, which is related, is that if the client arrives first, and only the client can do the discovery, then the client would have to repeat the attempt, and that's a downside. I think it's not a practical downside, because I believe that if you look at the scenarios we have in the requirements draft, the uses tend to be very much peer-to-peer, and so effectively whoever arrives last tries to discover the other one and finds them when the last one arrives.
O
O
R
So, Chris, the discovery key for announcements would be used for signing, and for being discovered it would be used for decryption of the ESNI? Okay, we'll probably have to figure that out; you know, you could use that and things like that, it's not impossible. No, I'm not saying it's impossible; I'm just worried about the use of a key for two different purposes in two different contexts. Okay, yeah, we can deal with it; that's a solvable problem, I think.
R
O
And the other point is that, basically, if you send a TLS message, that message will include your handshake proposal, and you really want to have separate handshake proposals for separate peers. Yes, I agree with that. So you tend to get this scaling issue anyhow, because of the security requirements. I guess that's my comment.
R
Okay, I did have a possible proposal for a variant of ESNI that might make it a bit easier to deploy. So currently in the draft you distribute the discovery key to all the clients, and you use that to encrypt the SNI, and everything works, but potentially rotating that discovery key is problematic; I don't know how you would actually do that in practice.
R
Perhaps that's another issue. Right, I was going to propose: potentially, instead of distributing the discovery key, you distribute a certificate, basically a trust anchor that the server will possess, like a certificate chain rooted at, or chaining up to, something that is distributed out of band. And then you can rely on the fallback mechanism that David Benjamin proposed for ESNI to distribute the new discovery key in band, and sort of rotate it a lot more quickly, without having to build any additional infrastructure. Yes.
O
And that would be a solution for this particular slide, which was my next slide: to say that there is one problem with anything that relies on, basically, discovery relying on publishing a public key. If you publish a public key, you have a failure mode.
O
What happens if the key leaks? You want to have some resiliency there. And it turns out to be an issue with ESNI in general, and with anything which is based on a public key: the private key can leak, stuff happens. In our case, what happens is that if the key is compromised, not only can someone impersonate you, but they can also go back to all the logs that have been captured by adversaries over the last 20 years and look at every place you've been, and that's not good.
O
R
O
R
No, the idea (and I apologize for bringing this up at the mic, because it's not really well-formed or anything) is that the client just assumes it doesn't know what the key of the server is whatsoever; it just happens to know how to authenticate a certificate that the server will present, because it's been given... That's no good either. Why?
R
O
I
R
O
O
N
Good, quick question: what's your thought on TLS session resumption? Does the server ever issue a ticket to the client, which you then use later on for discovery? I haven't thought about this, because, you know, I'm just now listening to you, but that would... I mean, if the server issues a new session ticket after the handshake? Yes.
O
Actually, I looked at that, and that's the reason why I kept the ESNI design, because the ESNI design is completely parallel to session resumption. So basically you do ESNI all the time, no matter what, because you need ESNI to understand why the ticket is valid for you, okay? And if you do that, then you can have either a full handshake or a resumption.
O
E
So, stepping in as chair: I've been noticing a pattern that any time we meet together on this topic, there is a lot of cryptanalysis at the mic and you actually do make progress, so I'm thinking maybe... but then, on the mailing list, three months go by and not that much happens. Which, you know, we're all busy, we all have jobs; no one is actually currently, you know, a hundred percent on this.
E
E
B
E
N
Sorry, sorry to interject; maybe still one clarifying question on how these keys are generated. Are these ephemeral keys? If there is a server that has a key pair, and one of them is the discovery key, is it ephemeral, or do you envision this being printed on the device?
I
O
N
O
B
Charter, or more specifically, possible recharter. I think this will work. So Tom did an excellent job of loading up our DNSSD GitHub with, well, the current charter (that was pretty easy), and you can see it at the dnssd-wg GitHub if you're curious. Also, there are seven open issues there that have received zero comments. So I'm thinking: are people willing, is there anybody willing, to work on this on GitHub?
B
Is there anybody willing to work on this? Okay, okay. I will send out an email with a link to the GitHub site, and I'll go ahead and list these seven issues that Tom has started there. And if people would engage actively... specifically, I think, as you can see from the issues that Tom started us with, there are milestones, which of course chairs can actually update without necessarily rechartering, but the refocus of the privacy work I think was a key one, and the privacy and data integrity. So, alrighty, we will send out an email to the list.
L
I think some of this is going to require discussion, because we have to decide as a group what new things we want to do. A couple of those things are new and a couple of those things are existing, and so I didn't feel comfortable, you know, writing up a new charter to say we should now do these things, because that's really for all of us to decide, whether we want to do them or not. And yet I can't get any discussion to occur.
L
E
E
So that is a good point; thanks, Tom. I think, yeah, we'll want to discuss this on the list. I think when Barbara sends out the list of issues, hopefully we can spark a conversation there, because you're absolutely right: the charter is just a reflection of what the room wants to be doing. So it's important to agree on what we're doing before we agree on the text, because otherwise it's kind of moot. Thank you. So this wraps up our DNSSD session in Prague.
E
It was kind of great to see some of our documents moving forward, hopefully going to be done, and to have published the discovery proxy before Montreal, so we can all have champagne there; it'll be great. A lot of the other documents are moving forward nicely, which is really cool, and there are always many ups and downs with privacy, but we're making good progress. It's good. Okay!
E
H
Yeah, so just FYI: I'm going to be sitting at the hack demo happy hour, and maybe Stuart will be there. Yes, we'll be there. Basically, if anybody wants to sit around and play with DNS-SD in one way or another, Mike and Stuart or I can help you understand the APIs that are available, we can help you build the source if you want to build the source, and we can help you do anything else.
G
Reinforcing what you said: at the Hot RFC session on Sunday night, Ted took that opportunity to talk about what we've been doing. One of the things that we have discovered, talking to people, including people at the hackathon, is that even though this service discovery technology is fairly mature now, a lot of people at the IETF don't know what it does, or in some cases have never even heard of it.
G
So Ted had this idea, which I thought was great: let's use that hack demo happy hour time slot as a chance to talk to people and answer questions. So Ted pitched this at the Hot RFCs and told the audience people would be there. Anybody in this room who wants to talk is welcome to come, but we were hoping to reach the people not in this room, who know nothing about this, and have an opportunity for some face-to-face chats. Thanks.