From YouTube: IETF109-HOTRFC-20201113-1348
Description
HOTRFC meeting session at IETF109
2020/11/13 1348
https://datatracker.ietf.org/meeting/109/proceedings/
A: Hi everybody, my name is Aaron Falk. This is the HotRFC lightning talk session.
A: The purpose of this session is to create an opportunity for folks who are looking for collaborators in the IETF to have a chance to speak briefly about their problem or their idea, and to give enough information that others who might be interested will know enough to follow up. So each talk should include some information on how to collaborate afterwards.
A: In the agenda for this meeting there are short abstracts for each of the presentations; they all include email addresses, and some include additional information on how to follow up. The goal here is really to try to connect people together. This is the first time that we've done this as a live online meeting.
A: I'm sure that most of you have done a lot of live online meetings recently, so the mechanics should be pretty familiar. But because these are intended to be lightning talks, I'm going to be pretty strict about the time allocation: you've got four minutes, and when your four minutes are up I'm going to interrupt you and ask you to wrap up if you haven't already.
A: So our first speaker is Mike McBride. I'm going to set him to be the presenter. Mike, if you would unmute yourself and start, and I will start the timer. Can you see my screen? Okay. Yes, now you're the presenter.
B: Yep, looks good. Okay, great. Thank you for letting me do this. So, we have this data discovery topic that several of us have been batting around, and we wanted to present it to this crew to get feedback; hopefully others can participate. Maybe someone will tell us we're crazy, or someone will say this has already been solved, or maybe this is a good idea and we need to figure it out. So this is data discovery. We do have a couple of drafts that describe the problem.
B: This did evolve out of a series of edge computing side meetings. It is not edge-computing specific, but in those side meetings we identified a variety of gaps, and one of those gaps was data discovery: discovering data that's distributed across the edge, finding that data across different edge databases, and evaluating it. That's how this got created. So what's the problem? The problem is that we want to locate distributed data in a standardized way.
B: Data may be cached, copied, or stored across multiple locations in the network en route to its final destination. We haven't come up with any solutions, just the problem: come up with a standards-based way to discover, first, where the databases exist throughout a network, and then where the specific data objects we're looking for are located. The location of each data store is the first-level discovery problem, and the details of the database directory are the second-level discovery problem.
B: So what's data? Data can be anything. The kind of use case we're looking at is finding statistics, measurements, temperature. One of them is an elevator, for instance: you have sensors all over the elevator and you're trying to gather that data, whether it's vibration or braking information or speed or capacity or whatever, and then find out where that data is located. Data can also be a program, a service, or a resource.
B: You know, CPU or memory, and things like that. So we're trying to find ways to find that data. What's next? We're trying to determine if existing protocols will work here; you may be able to find ways to extend existing protocols, DNS, for instance.
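The two-level discovery flow described above could be sketched as follows. This is a minimal in-memory stand-in (every name and record here is invented for illustration, and a real system might use DNS-SD style queries); it is not taken from the drafts:

```python
# Hypothetical two-level data discovery sketch.
EDGE_STORES = {                       # level 1: which data stores exist
    "_datastore._tcp.site1.example": "db1.site1.example",
    "_datastore._tcp.site2.example": "db2.site2.example",
}
DIRECTORIES = {                       # level 2: each store's directory
    "db1.site1.example": {"elevator/vibration": "obj-17"},
    "db2.site2.example": {"elevator/speed": "obj-42"},
}

def discover(object_name):
    """Resolve an object in two steps: enumerate the stores, then ask
    each store's directory whether it holds the object."""
    for store in EDGE_STORES.values():            # first-level discovery
        obj = DIRECTORIES.get(store, {}).get(object_name)
        if obj is not None:                       # second-level discovery
            return store, obj
    return None

print(discover("elevator/speed"))  # ('db2.site2.example', 'obj-42')
```

A real deployment would replace the two dictionaries with network queries, but the two-phase shape (locate stores, then query their directories) is the point being illustrated.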
B: We've presented this in COIN, the Computing in the Network research group in the IRTF, and we'll be having a discussion next week in COIN. If this turns out to be valid work, then maybe someday we'll try to create a working group, first going through a BoF. That's really it. Please look at the drafts; this is how you can contact the authors.
A: Thanks a lot, Mike, that was great. I'm now making Yaron Sheffer the presenter; he's going to give the next talk. Start whenever you like.
C: So, hello everyone. I'm Yaron Sheffer, and I'll be talking about ciphertext formats. This is actually related in a way to the same problem domain as Mike's: he's looking to discover data overall; I'm looking to be able to find encrypted data, and to attribute encrypted data back to whoever generated it.
C: There are a lot of standards for the ciphertext, the raw ciphertext: what do you get when you encrypt something with AES, for example? Where does the nonce go, where does the authentication tag go, and so on. But if you have a huge amount of encrypted data around the enterprise, in many, many locations, many databases, many files, it's just not enough to have a standard for the ciphertext.
C: You need something else, something extra: usually a set of headers prefixed to your raw ciphertext to determine where it's coming from. And, surprisingly enough, there is no standard for this kind of metadata around encrypted data. There are some standards for data in motion, and some standards in the PKCS#11 and KMIP world, but no standards for data at rest that apply to more than one key management system or more than one library.
C: So the goal is to have, to some degree at least, self-identifying encrypted data, and to enable interoperability of encryption libraries and the key management systems related to them. The standard format should include at least the identity and the version of keys, so as to allow for key rotation.
C: Of course, it needs to be extensible, just like any other broadly used format. It's very important for the data header to allow for detection of encrypted data: there must be a way for someone who doesn't have the keys to look at a column in a database and say, hey, this is all encrypted data, and it follows this standard. In fact, one fixed byte is good enough if you want to detect encrypted data at scale. It also needs to support granular key management, where people use key wrapping or key derivation to have a very large number of keys, possibly a different key for each encrypted field. There is an early draft of the proposal. I'm looking for partners to work on this with, and if you're interested, please reach out to me; email is simplest.
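A minimal sketch of such a self-identifying header might look like this. The magic value and field sizes are invented for illustration (a real standard would define them), and the payload merely stands in for raw AEAD output:

```python
import struct, os

MAGIC = 0xC5  # assumption: one fixed byte marking "encrypted, this format"

HDR = ">BIH12s"  # magic byte, 4-byte key id, 2-byte key version, 12-byte nonce

def wrap(key_id: int, key_version: int, nonce: bytes, ciphertext: bytes) -> bytes:
    """Prefix raw ciphertext with a self-identifying header so that data
    at rest can be detected and attributed without holding the keys."""
    return struct.pack(HDR, MAGIC, key_id, key_version, nonce) + ciphertext

def unwrap(blob: bytes):
    """Parse the header back out; the magic byte lets a scanner detect
    encrypted data at scale without decrypting anything."""
    magic, key_id, key_version, nonce = struct.unpack_from(HDR, blob)
    assert magic == MAGIC, "not encrypted data in this format"
    return key_id, key_version, nonce, blob[struct.calcsize(HDR):]

nonce = os.urandom(12)
blob = wrap(7, 2, nonce, b"...raw AES-GCM output...")
print(unwrap(blob)[:2])  # (7, 2)
```

The key id plus key version fields are what allow key rotation: old blobs keep naming the key version they were written under.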
D: Hi. I'd like to provide an update on the Blockchain Governance Initiative Network, which I have mentioned several times in the past few IETFs.
D: Let me provide an update on BGIN, which is a multi-stakeholder community for addressing issues in the blockchain ecosystem among several stakeholders: not only the engineers, but also the regulators, consumers, and the commercial sector.
D: We think that we need to solve certain kinds of problems related to, for example, development: how we can secure blockchain systems, or how we provide some way to tackle anti-money-laundering schemes, for example. So we finally launched the group in March, and currently we have two active working groups. One is the governance working group, which tries to design the community itself; I'm one of the co-chairs of the governance working group. And the other one is...
D: The governance working group is currently focusing on the mechanisms and organization of BGIN itself, and we are drafting two documents: one is a process document and the other is the IPR policy document. This is the usual stuff for this type of community, but we need versions adjusted for us. We are preparing them for the next general meeting, and they will be ready soon.
D: The other thing is the privacy and key management study group, which has two work streams. One of them is the key management work stream, and the other is the decentralized financial technologies and privacy, identity, and traceability work stream, which is a very long name. Both of them are currently developing documents, and there are drafts already on Google Docs, so you can read them if you like, and it is possible to join and speak up.
D
If
you
have
interest
or
really
like
to
comment,
we
are
currently
planning
our
first
leo
general
meeting
in
called
block
number
one
meeting
in
the
late
november,
that
is
the
20
november
23rd
and
25th,
and
the
team
time
is
adjusted
to
bachelor
of
mumbai,
so
the
it
will
be
the
2
at
noon
to
3
pm
utc.
D
So
the
agenda
will
be
tutorial
on
the
working
group
and
study
groups,
discussions
of
course,
and
the
information
that
on
the
registrations
are
available
from
this
link,
which
you
will
be
you
can
find
from
the
pdf
I'm
going
to
upload
so
to
begin
to
join
the
beginning
of
in
general.
As
a
member,
please
visit
the
website
above
or
just
contact
me
to
find
out.
D
If
you
want
to
discuss
on
this
thing
and
that
you
you
want
to
know
more
a
little
more
about
this,
then
please,
let
me
know
I'm
happy
to
communicate
with
you.
Thank
you
very
much.
A: Our next speaker is Pascal Urien, and I am going to share the slides from my machine. Pascal, you can, let's see, go ahead.
E: What is the concept? You want to put your application online, but for privacy reasons you want to keep control, so the application is embedded in a secure element. Secure elements have several form factors, like SIM cards, and can also be integrated in a SoC. The secure element Evaluation Assurance Level is about five to six, given a maximum value of seven. Your app server works over a TLS 1.3 embedded server; TLS-SE stands for TLS Secure Element and is described by an IETF draft.
E
Your
app
client
works
over
tls
1.3
clients,
which
means
that
client
credentials
are
stored
and
used
in
a
secure
event.
Tls
em,
im
with
1040s
identity
module,
is
described
by
an
itf
draft.
You
see
on
your
right,
your
app
working
on
the
tds
1.3
embedded
server
over
an
external
tcp
interface.
Next
slide,
please.
E: Channel security relies on authenticated encryption with associated data. Server and client authentication are based either on a public key infrastructure (PKI) or on pre-shared keys (PSK). TLS-SE 1.0 works with AES-CCM cipher suites, an elliptic curve, and 32-byte pre-shared keys; the next version will support PKI. Next slide, please.
E: So this slide shows a TLS 1.3 basic exchange using TLS-IM and TLS-SE. PKI is optional; you can choose a pre-shared key based on a password, the hash of a password, for example. Your app is running over the TLS-SE application embedded in a secure element, and the PSK acts as a kind of super PIN code. So, next slide please. This slide shows an application example: a blockchain keystore.
E
The
right
part
show
ascii
command
used
to
generate
t
to
set
key
to
compute
key
according
to
the
bib
32
specification
and
to
sign
transaction.
The
left
part
is
a
locus
prototype
board,
an
arduino
mini
issue
that
is
used
as
reader
for
secure
elements.
It
manages
the
iso
786
in
protocol
and
provides
you
interface
with
the
wi-fi
sock,
the
wi-fi
sock
as
a
tcp
stack
and
perform
network
network
exchange.
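The BIP32 derivation mentioned above can be sketched in simplified form. This omits the secp256k1 point and scalar arithmetic that the full specification requires, so it only illustrates how each HMAC-SHA512 step splits into key material and chain code:

```python
import hmac, hashlib

def bip32_master(seed: bytes):
    """Derive the BIP32 master secret and chain code from a seed
    (HMAC-SHA512 keyed with the string "Bitcoin seed")."""
    digest = hmac.new(b"Bitcoin seed", seed, hashlib.sha512).digest()
    return digest[:32], digest[32:]   # (master key material, chain code)

def bip32_ckd_hardened(key: bytes, chain: bytes, index: int):
    """Hardened child derivation: data = 0x00 || key || ser32(index),
    with the hardened bit (0x80000000) set on the index. Simplified:
    returns the left half directly as child key material, skipping the
    modular addition a full implementation performs on the curve order."""
    data = b"\x00" + key + (0x80000000 | index).to_bytes(4, "big")
    digest = hmac.new(chain, data, hashlib.sha512).digest()
    return digest[:32], digest[32:]   # (child key material, child chain code)

k, c = bip32_master(bytes(16))
child_k, child_c = bip32_ckd_hardened(k, c, 0)
```

Because every step is an HMAC over the parent's chain code, the secure element can hold one master secret and derive an entire key tree on demand.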
E
So
next
slide.
Please
squarability
is
an
important
issue
still
open.
Is
it
possible
to
deploy
your
arp
in
the
internet?
That's
the
question
on
the
left
a
tribal
use
case,
a
simple,
socket:
ipn
port
is
used,
as
illustrated
in
the
middle
parts.
Multiple
ports
may
be
used
with
single
ip
address
and,
finally,
on
the
right,
a
single,
socket,
ipn
port
is
used
by
several
several
name.
E
A: Our next speaker is Li Zhao, and she's going to be talking about Dyncast. I'm making you the presenter, and you can start whenever you're ready.
F: Okay, hello. This is Li Zhao speaking. I'm going to give a briefing on Dyncast, or dynamic anycast.
F: Okay, this page tries to illustrate the problems in edge computing. With the increased deployment of edge computing by the operators, we find that there could be a very large number of edge sites; for example, there could be a couple of edge sites for each district within a big city. Another feature of the edge is that sites have very limited computing resources, and the computing resources vary all the time at each of the sites. So naturally it comes to a question: which edge is the best to route a computing demand to?
F
So
here
we
try
to
focus
focus
on
the
computing
service.
So
there
are
three
aspects
we
want
to
consider
the
first
one,
which
one
is
the
best.
We
need
to
consider
the
computing
resources
and
the
load
attached
to
a
particular
edge
site,
probably
want
to
choose
the
most
lightweighted
one,
and
the
second
to
consider
is
the
what's
the
net
network
path,
quality
to
a
particular
edge
and
what's
the
network
status,
and
the
third
aspect
is
since
the
computing
resources
and
the
network
status
all
vary
over
time.
F
So
we
want
to
know
all
this
information
in
real
time.
So
that's
the
problem
we
are
trying
to
tackle
on
so
to
illustrate
the
concept
of
dying
cast
here
here,
I'm
using
a
like,
like
a
5g
5g
deployment
here
as
for
illustration
purpose,
but
it's
not
necessarily
to
be
the
5g
backhaul
deployment.
F: Basically, we have two MEC sites, and a client comes in. With the most current practice, this edge computing request would normally be handled by the local MEC site, which is site 1 here. But it is possible that, because in working hours the industrial park usually has a much higher load than the residential area normally does, in some cases the CFN node here, which is a mini data center gateway, can determine that this...
F
This
computing
request
would
be
best
handled
by
the
mec
site
too.
So
that's
the
green
line
shows
where
the
data
flow
goes.
So
the
client
normally
use
the
anycast
address
to
access
a
service,
and
this
package
will
be
routed
to
the
best
edge
in
terms
of
both
the
computing
resources
and
the
load
and
also
the
network
status
and
which
one
to
be
chosen
is
transparent
to
the
client.
All
clients
is
the
adding
cast
address
to
access
the
service.
That's
why
we
call
it
the
dynamic
ending
cost,
so
that
would
we
we.
F
Presumably
there
should
be
some
protocol
change,
both
the
data
plane
and
the
control
plane.
So
the
the
brown
arrow
lines
there
indicate
there
should
be
some
control,
plane,
information
change
from
the
server
to
the
to
the
data
center
gateways
and
also
between
between
the
between
the
data
center
gateways,
all
the
cfl
nodes,
which
are
the
blue
lines
here.
So
we
want
to.
Actually
we
want
to
propose
three
features
to
be
supported
in
dyncast.
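An edge selection of this kind can be illustrated with a toy scoring function; the metrics and weights below are pure assumptions for illustration and are not taken from the Dyncast proposal:

```python
# Hypothetical scoring: combine load and path metrics to pick an edge.
def pick_edge(sites):
    """sites: list of dicts with 'name', 'load' (0..1 utilization) and
    'rtt_ms' (measured path delay). The 0.6/0.4 weights and the 100 ms
    normalization are invented; lower score wins."""
    def score(s):
        return 0.6 * s["load"] + 0.4 * (s["rtt_ms"] / 100.0)
    return min(sites, key=score)["name"]

sites = [
    {"name": "mec-1", "load": 0.9, "rtt_ms": 5},   # local but heavily loaded
    {"name": "mec-2", "load": 0.2, "rtt_ms": 20},  # farther but idle
]
print(pick_edge(sites))  # mec-2
```

The point of the example is the trade-off the talk describes: the nearby site loses when its load outweighs its path advantage, and the choice stays invisible to the client, which only ever sees the anycast address.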
F: This slide shows the activities we are going to have for the coming IETF week. We are having a side meeting on Wednesday; it starts five minutes after the IETF plenary ends, and it will be a 75-minute session. The WebEx link can be found here, and also on the side meeting wiki, whose URL is here. So, the purpose of it...
G: What are the goals? Our goal was a mobile tool for traffic performance monitoring of encrypted transport protocols, using explicit flow measurement methods. The measurements employ a few marking bits inside the header of each packet for loss and delay, and the metric is protocol independent. It is valuable particularly for encrypted headers, for QUIC.
G: The first idea was already introduced at the last IETF hackathon in July, and it was about measuring network performance with user devices. There is a YouTube video that you can see.
G
There
are
also
two
related
drafts
that
describes
the
details.
Our
the
idea,
the
ideas
behind
and
the
details.
One
is
about
explicit
flow
measurements
and
the
second
one
that
is
just
a
new
draft
is
a
user
device.
Explicit
monitoring
here
are
here
is:
is
the
link
to
the
project
to
the
library
something
on
github.
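As one concrete example of an explicit flow measurement, loss can be inferred from alternate marking: the sender flips a single header bit every N packets, and an observer compares per-block packet counts. The sketch below is a simplified single-observer model, not the exact scheme from the drafts:

```python
# Alternate-marking loss measurement sketch (simplified, single flow).
def count_blocks(bits):
    """Count packets per marking block: a change in the observed
    marking bit closes the current block and opens a new one."""
    blocks, current, n = [], bits[0], 0
    for b in bits:
        if b != current:
            blocks.append(n)
            current, n = b, 0
        n += 1
    blocks.append(n)
    return blocks

sent     = [0]*5 + [1]*5 + [0]*5          # sender marks in blocks of 5
received = [0]*5 + [1]*4 + [0]*5          # one packet of block 2 was lost
lost = [s - r for s, r in zip(count_blocks(sent), count_blocks(received))]
print(lost)  # [0, 1, 0]
```

Because only the marking bit is inspected, the observer needs no access to the encrypted transport headers, which is why the approach suits QUIC.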
G: Exactly. For example, when you launch the YouTube application, with video that flows continuously, you can see directly, in real time, what happens to your connections on the network: the measurements of delay or loss, and what kind of connection you are using. Operators, with the customer's permission, may use this information to identify network problems and improve their customers' experience.
G: The second feature is that the customer chooses whether to share the performance data that, for example, the mobile phone has collected. The third feature is that it should be possible in the future to put performance thresholds on the probe, in order to signal connections with problems to the network operator.
A: Thank you. So, that was our last talk. I'd like to thank everybody for attending. Presenters, if you haven't already done so, please either upload your slides yourself or send them to me and I will do so. Folks, if you heard anything that was interesting, feel free to follow up directly with the presenters, and also send me feedback on the format.
A
For
this
I
thought
it
moved
along
at
a
nice
clip
and
I'm
happy
to
continue
doing
it
this
way,
but
I'm
interested
in
hearing
from
other
folks
as
well.
So
thanks
everybody
have
a
great
ietf
week
and
enjoy
the
rest
of
your
day.
Thank
you.