From YouTube: IETF-ICNRG-20211210-1600
Description
ICNRG meeting session at IETF
2021/12/10 1600
https://datatracker.ietf.org/meeting//proceedings/
A: Carlos, thanks for joining. Yeah, I had the same message here. I actually still have it, but it seems to work nevertheless.

A: Okay, let's maybe wait for two more minutes. I think people are still trying to create their datatracker accounts.

D: Okay, so I got the CodiMD page, but I don't see where... I had a pane to enter notes, but now I don't. I killed it somehow. Okay.

A: Were... actually, no? No, no, okay. You're at the right page, so you're looking at the, say...

A: Yeah, I think everything is a bit rusty, but I think we will manage.

A: So, are you connected? I can't hear you, I think.
A: Welcome, at the end of this week, to another ICNRG meeting, unfortunately still online, but I think everybody understands why. I'm Dirk Kutscher and my co-chair is Dave Oran. Just a second before we start, I have to point you to some housekeeping items, namely the IRTF Note Well statement. This basically pertains to IPR: if you submit anything to our group that has IPR associated with it, you're expected to let us know. If you have any questions on this, just talk to Dave and me.

A: The IRTF also follows the IETF privacy and code of conduct rules. So, first of all, please be aware that these meetings are typically recorded.

A: We also have a privacy policy; please check that. There are also policies regarding code of conduct and anti-harassment in the IETF, and these rules also apply to us here in the IRTF.

A: To those of you who are attending ICNRG or IRTF meetings for the first time: the IRTF is what is called a sister organization of the IETF. The IETF sets standards for the global Internet; the IRTF, the Internet Research Task Force, conducts research. What we are doing here is not standards development. We may be producing specifications and so on, and then also publish those as RFCs.

A: But this is just to enable experimentation: to write down specifications that people can use for conducting research, and of course the hope is that eventually some insights are generated that may guide future standardization work in the IETF. Check out RFC 7418 for an IRTF primer.

A: We always take notes and produce minutes of our meetings, because not everybody makes it, and we have a mailing list where we try to do most of the work of the group. Today we are really grateful that Ken Calvert volunteered for taking notes. Thanks, Ken. What this typically involves is just trying to capture the gist of the discussion, and so for these online meetings...
A: Some discussion takes place in the audio channel and maybe some other discussion in the chat. What we often do is just copy the chat into the notes later.

A: We'll just give you a status update after this, and then we have three really cool research presentations: one by Carlos on Zenoh, one by Rhett and Jamie (I hope that's pronounced correctly) on GT Systems' SPAN network architecture, and then one by Junxiao Shi on his thoughts and his work on NDNts and an API design for that platform.
A: Okay, just a quick update on what has happened in ICNRG recently. We are very happy to report that the group has published two new RFCs, so a round of virtual applause: thanks to everybody, especially the authors, but also everybody who helped in reviewing and improving these documents over time.

A: If you've been around, you probably know that these documents have been developed over quite some time and have seen quite a few iterations. RFC 9138 describes design considerations for name resolution services in ICN: what role name resolution could possibly play and what you would have to consider for that. And RFC 9139 describes an adaptation of CCNx to low-power wireless personal area networks.

A: This is the equivalent of the IETF work in 6lo, so IPv6 over these low-power WPANs, and this actually has quite some potential for enabling further usage of ICN in constrained networks, IoT and so on. It's not necessarily restricted to personal area networks: this technology could be used in all kinds of networks that are resource-constrained and where you need a really compact representation.
A: Okay, and in the datatracker you can also see our various documents. These are the active research group drafts that we adopted earlier and have been developing over time. So, just quickly, where we currently are and what we plan to do next.

A: The other name resolution document is completed and is going to be published soon; that's the one called NRS architecture considerations. We also have a document on doing ICN in LTE, so 4G networks, which is also about to be published fairly soon.

A: So hopefully... I mean, we still haven't...

E: ...been reviewed. The one reviewer we're waiting for comments from happens to be the chair of the IRTF, so he's been a bit slow in getting his reviews in. We're hoping that will happen relatively soon, but we're blocked on that at the moment.
A: Exactly. So we have one other specification-type document on a technology called CCNinfo: a tool that allows you to collect information about elements on a path in a CCN network. We last-called this document earlier in the year and got some minor comments that were addressed by the authors some weeks ago, and we think this document is now ready. There haven't been any technical changes, so Dave has just notified Colin that we want to submit this for IRSG review.

A: Then we have other important in-progress work that we adopted earlier, for example the FLIC specification.

A: FLIC is a manifest technology for ICN that we think is quite relevant, and we would like to get it published early next year. But for that we need more eyes on it: a thorough review by more people in the group.

A: So, everybody who has an interest in this, please help in progressing it. We really want to publish it soon.
E: Let me chime in real quickly. This is a pretty important document, because manifests are becoming more widely used throughout the panoply of ICN things, and this has been implemented.

E: It's had a lot of work done, but the new draft that I submitted about a month ago has major sections rewritten, so it really needs a top-to-bottom review from everybody, as opposed to a quick skim, and the quicker we get that done, the quicker we can get it to last call. There are still a few open issues, and those are noted in the document, and we particularly want people to weigh in on the areas where we actually think there are holes that need to get filled technically.
A: Yeah, thanks, Dave, exactly. Moreover, we have these two other specifications, on traceroute and ping. They have also been around for quite some time, and we think they are technically mostly done, so I think next we would do a last call on the mailing list and then ask for final comments.

A: Then, lastly, we have a couple of other documents that still need a bit of work but would probably also benefit from feedback from the group. Path steering, if you remember, is, I think, quite an elegant way to make ICN communication stick to some path with a soft-state approach, which is beneficial in many scenarios.

A: Reflexive forwarding was a technology to have, basically, some soft state in the network that establishes symmetric forwarding information, which we used for the RICE system, for example, the remote method invocation system.

A: We are still planning some updates on that one, but hopefully it could also be finished next year, maybe early next year. Then the Time TLV: this is a new TLV structure for representing time that we really wanted to reuse for the LoWPAN specification. But then the LoWPAN document got finished earlier, so we still want to publish this as a standalone RFC.
A: And that's a quick summary of where we currently are. Of course, it's also a very good time to think about what could be useful or interesting upcoming work. In the area of IoT, we now have this efficient encoding for LoWPAN networks; we earlier talked about ICN over LoRa, so low-power, long-range radio systems. I think there's quite some potential there.

A: Personally, I'm also quite interested in that. And then, as most people know, what's really interesting in ICN is new ways to support applications better, and we will hear from Junxiao Shi, for example, in a bit about his thoughts.

A: Then we have been talking a bit about distributed computing, in ICNRG but also in COIN, the Computing in the Network research group, and many of us think that ICN is a really useful enabler for rethinking how computing is done in the network. The RICE work or the CFN work that we published earlier are just two examples, but people may have many other ideas as well, so that could be a good general topic for us. And then quality of service.

A: If you remember, we earlier published RFC 9064, which talked about the potential QoS considerations in ICN. I think it was a really nice description of what could be qualitatively much better in ICN compared to IP when you want to do QoS, and so the question is: what follows from that? Do we want to leverage these ideas, maybe specify some mechanisms? There has been some discussion and other work in ICNRG before, so maybe now is a good time to...
A: Okay, and with that we would start our presentation program. First on our list is Carlos, with his presentation on Zenoh. Carlos, I think the best way is if you just share your desktop or your application window yourself.

B: Yeah. Could you repeat the question? I missed it, since I was muted for a couple of seconds.

A: Okay, no problem. So yeah, please start your presentation, and if you could just share it yourself; I think that's the only way.

E: Okay, and there it is. Except you want to change to... there you go, yeah.
B: Okay, okay, so hi everybody. First of all, I would like to thank you for the invitation and to say that it is a pleasure to present Zenoh here in the ICNRG.

B: I think there are some shared aspects with what Dirk just mentioned as new work here in the ICNRG.
B: The first thing that happens is that data gets produced by the widest variety of entities: robots, satellites, sensors and so on. Then this data is distributed, perhaps just inside the same physical node, or within the same local area network, or eventually across different geographic locations. This distribution of data happens either because we want some computations to be applied over this data, or just because we want it to be stored.

B: And finally, we store data because sooner or later we plan to retrieve it again; we don't store data just for the sake of it. So we can say that on this journey, data goes through a series of alternations between data in motion, data at rest and data in use, and traditionally there are different communication patterns used for each of these cases.

B: For data in motion, push patterns are traditionally the predominant ones, while for data at rest, pull patterns are the most common approach. And what is the catch? The catch is that, while data goes on this journey, every time we need to cross from in motion to at rest, or from at rest back to in motion, we clash against a technology barrier, meaning that these two technology ecosystems are really fragmented.
B
So
on
one
side
we
have
the
techno
technology
to
push
data
and
on
the
other
side,
we
have
the
technology
to
store
and
to
retrieve
data,
and
the
problem
becomes
even
worse.
If
we
look
at
the
way
the
current
systems
are
being
designed
just
because
we
are
moving
away
from
these
monovitic
systems,
where
everything
runs
either
locally
or
in
the
clouds,
and
we
are
witnessing
a
decentralization
and
distribution
of
the
the
different
functions
that
compose
these
same
systems.
B
And
the
problem
is
that
this.
So
the
current
technologies
fail
to
provide
data
data
management
that
is
not
only
unified,
that
is
vocation,
transparent
and
also
that
is
simple
to
put
and
to
put
in
place
and
to
to
use,
and
this
is
the
gap
that
zeno
is
trying
to
fill
in.
Okay.
B: In a nutshell, Zenoh tries to unify data in motion, data in use and data at rest with computations, by blending pub/sub with distributed queries, while providing built-in support for geo-distributed storage and computation. To some extent, we can say that Zenoh is a kind of data-centric solution, and it achieves this not only by defining its own network protocol, which can operate over different network technologies at different stack layers, but also by providing a set of high-level APIs.

B: Then, in the middle, Zenoh defines a set of data transportation primitives that provide support for publish-subscribe and for query-reply communications; note that this layer is content-agnostic. On top, Zenoh defines a set of data-oriented abstractions, which is our API, allowing more complex semantics on top of the raw content in order to support, for example, distributed storages, filtering, transcoding of data and even remote computations.
B: And then, when we deploy several Zenoh nodes together, we end up with a Zenoh network overlay, just like the one in this slide. Note that Zenoh can either run on the devices where the user application is being executed, or it can run as a network node, and for that Zenoh defines three different types of entities.

B: And then we have the routers, which are software nodes that can route messages between the clients and the peers. These three roles allow us to implement not only peer-to-peer communications over any type of connected-graph topology, but also brokered communications, which is especially useful for constrained devices like microcontrollers, because they are not so powerful and cannot run all the capabilities that are required, for example, to do routing. And we also have routers that are capable of forwarding data to and from other peers and clients.
B: For example, we have the star operator, which is used to replace a single name segment in the key, just like the first example at the bottom, or we have the double-star operator, which is used to replace zero or more name segments in the key, as in the second example at the bottom.

B: With temperature, I'm getting the values from all the temperature sensors that exist in a home, no matter what the room is: this key expression will match them all, so I can get the temperature of my kitchen, but also the temperature of the living room, and so on and so forth.
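The single- and multi-segment wildcard rules described here can be sketched as a small matcher. This is only an illustrative reading of the semantics as stated in the talk, not Zenoh's actual implementation, and the key names are invented:

```python
def key_matches(expr: str, key: str) -> bool:
    """Return True if the concrete key matches the key expression.

    '*'  stands for exactly one name segment.
    '**' stands for zero or more name segments.
    """
    def match(e, k):
        if not e:
            return not k  # expression exhausted: the key must be too
        if e[0] == "**":
            # '**' matches zero segments, or absorbs one segment and retries
            return match(e[1:], k) or (bool(k) and match(e, k[1:]))
        # a literal segment or '*' consumes exactly one key segment
        return bool(k) and (e[0] == "*" or e[0] == k[0]) and match(e[1:], k[1:])

    return match(expr.strip("/").split("/"), key.strip("/").split("/"))

# '*' replaces one segment: any room directly under /myhome
print(key_matches("/myhome/*/temperature", "/myhome/kitchen/temperature"))     # True
print(key_matches("/myhome/*/temperature", "/myhome/f1/kitchen/temperature"))  # False
# '**' replaces zero or more segments: any depth
print(key_matches("/myhome/**/temperature", "/myhome/f1/kitchen/temperature"))  # True
```

The recursion on `**` is what makes a single expression cover a whole subtree, which is how one subscription can match every temperature sensor in the home regardless of the room.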
B: And, at the same time, it's also possible to fine-grain the selected data at declaration time, by defining predicates, projections and a set of properties. However, I would like to highlight that only the key expressions, the part I showed in the previous slide, are used for routing throughout; these predicates, projections and properties are only executed in the node that executes the query or publishes the content.

B: These allow us to filter some data. For example, if we take the second example here, my car dynamics, basically what we are saying is: return only the acceleration value every time the speed is above 25 kilometers per hour.
B: And then, once we have defined these keys for naming our data, how and who can make use of them? For this, Zenoh defines a set of entities that can handle keys and values in different ways.

B: We can have resources, which represent named data: a key and its corresponding value. We can have publishers, which represent a spring of values that match a given key expression; in other words, the entity that produces data. We have subscribers, which represent a sink of values that match a given key expression; in other words, the entity that consumes data. And we have queryables; a queryable represents a well of values for a given key expression.
B: So a queryable is also a producer of data, but the difference is that it needs to be explicitly requested by a query. And then, how can these entities make use of Zenoh? For that, we have the set of operations listed on this slide; let me go over them very quickly. We have scout, which basically acts as a discovery mechanism to see who is around, which entities are around, and this is very similar to the Bonjour protocol.
B: The thing here is that a Zenoh session is not end-to-end but is established between adjacent nodes, so between every hop we have a Zenoh session. Then we have primitives to declare and undeclare the different kinds of entities; of these declarations, only the ones for subscribers and queryables are actually mandatory.

B: The declarations for publishers and resources are optional and basically serve to optimize specific mechanisms within Zenoh. For example, we can use a resource declaration to map a key, or part of a key, to an integer, and by doing so we can save some bytes on the wire every time we need to send data.
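To illustrate why declaring a resource saves wire bytes, here is a toy encoding where a declared key is replaced by a 2-byte integer ID in every subsequent message. The framing (tag byte, 2-byte ID, length prefix) is entirely hypothetical and only stands in for the idea, not Zenoh's actual wire format:

```python
class ResourceRegistry:
    """Session-local mapping of declared keys to small integer IDs."""

    def __init__(self):
        self._next_id = 1
        self._by_key = {}

    def declare(self, key: str) -> int:
        # Assign a stable small ID the first time a key is declared.
        if key not in self._by_key:
            self._by_key[key] = self._next_id
            self._next_id += 1
        return self._by_key[key]

    def encode(self, key: str, payload: bytes) -> bytes:
        rid = self._by_key.get(key)
        if rid is not None:
            # 1 tag byte + 2-byte resource ID instead of the full key string.
            return b"\x01" + rid.to_bytes(2, "big") + payload
        # Undeclared key: 1 tag byte + 2-byte length + the key itself.
        return b"\x00" + len(key).to_bytes(2, "big") + key.encode() + payload

reg = ResourceRegistry()
key = "/myhome/kitchen/temperature"
before = len(reg.encode(key, b"21.5"))  # full key on every message
reg.declare(key)
after = len(reg.encode(key, b"21.5"))   # only the 2-byte ID from now on
print(before, after)  # 34 7
```

Every message after the one-time declaration carries 7 bytes instead of 34 here, which is the kind of saving that matters on constrained links.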
B: Then, one particular thing about the way Zenoh implements queries is its consolidation strategies, and I believe we will not have the time to dig into the details. Let's just consider that we have a distributed database, so several nodes can reply to a given query. Because the databases might not be aligned, the querier might get different replies from the different databases, and the consolidation strategies mitigate such cases.

B: We have the lazy strategy, where only replies that are more recent than the previously sent one are forwarded, and we have full, where only the most recent reply for each resource is sent back to the querier. These consolidation strategies are applied at three different stages while delivering the query reply: at the first router, at the last router and at the receiving node. And if we look at the default values applied at these different stages, lazy, lazy, full...

B: ...these give us eventual consistency by default when using Zenoh.
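The two consolidation modes just described can be sketched as reply filters. The `(key, timestamp, value)` tuples and the timestamps are invented for illustration; this is a reading of the stated semantics, not Zenoh's implementation:

```python
def consolidate_lazy(replies):
    """Lazy/monotonic: stream replies through, forwarding one only if it is
    newer than the last reply forwarded for that resource."""
    last = {}
    for key, ts, value in replies:
        if ts > last.get(key, -1):
            last[key] = ts
            yield (key, ts, value)

def consolidate_full(replies):
    """Full/latest: buffer all replies and keep only the most recent one per
    resource, returning a single consolidated answer."""
    best = {}
    for key, ts, value in replies:
        if key not in best or ts > best[key][0]:
            best[key] = (ts, value)
    return [(k, ts, v) for k, (ts, v) in best.items()]

# Two storages answer the same query with slightly misaligned histories.
replies = [
    ("/bldg/f1/r42/temperature", 2, 21.0),
    ("/bldg/f1/r42/temperature", 1, 20.5),  # stale duplicate, dropped by both
    ("/bldg/f1/r42/temperature", 3, 21.5),
]
print(list(consolidate_lazy(replies)))  # forwards ts=2 then ts=3
print(consolidate_full(replies))        # single reply, ts=3
```

Lazy keeps latency low by forwarding as replies arrive, at the cost of possible intermediate values; full waits and returns one answer per resource, which is why the final stage defaults to full.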
B: Now, going a bit lower in the network stack: Zenoh is transparent to the underlying network topology, and it has been tested over a diversity of protocols at different network stack layers. We have successfully tested it over UDP, TCP, TLS and QUIC, and then, on microcontrollers, over the Thread protocol, Bluetooth and also on top of Ethernet, and we are planning to provide support for Zenoh to run on top of more protocols.

B: If we look at the example on this slide, we have a physical network composed of IPv4, IPv6, Bluetooth and Ethernet, and the Zenoh overlay is able to abstract all this heterogeneity of protocols and technologies in the physical network and to expose it, in a very simple way, as an overlay that only needs to care whether the transport is unicast or multicast.
B: The way resources are organized in this example is that the first segment of the key expressions identifying the data identifies the building, the second name segment identifies the floor, the third the room, and then the type of sensor.

B: Okay, I hope you can see it again, because some macOS notification just popped up. Okay, perfect; sorry for that.
B: So yes, we also have an eval function, and these eval functions can be used, for example, to run additional analytics on top of the data, and the results of these computations can be retrieved by means of queries. So, whenever data is produced, Zenoh will make sure that it gets delivered where it needs to be, depending on the subscribers and the storages that are known at that moment.
B: So here we have a sensor that is publishing on that key expression, floor 1, room 42, sensor type temperature, and all the subscribers that declared their interest in such content are going to receive it. The one at the bottom explicitly requested this data, while the one a bit to the right asked for all temperature values that are published, no matter the floor and the room, and at the top we have the storage that is getting all the information for the first floor.

B: Here the lines represent the distribution paths. Then suppose that we have a second sensor, the one on the second floor, that is also publishing data. In this case the data will flow to the other storage and not to the first one, just because they registered their interest in this data with different key expressions. And then also notice that at the bottom we have a subscriber that is a pull subscriber.
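The distribution behaviour in this example, where exact subscriptions, wildcard subscriptions and a per-floor storage each receive only matching samples, can be mimicked with a toy in-memory bus. The keys and the matching rule are illustrative; this is not Zenoh's API or routing logic:

```python
class ToyBus:
    """Minimal in-memory pub/sub: a put() is delivered to every subscriber
    whose key expression matches the sample's key."""

    def __init__(self):
        self.subs = []  # list of (key-expression segments, callback)

    def subscribe(self, expr, callback):
        self.subs.append((expr.strip("/").split("/"), callback))

    def put(self, key, value):
        k = key.strip("/").split("/")
        for e, cb in self.subs:
            if self._matches(e, k):
                cb(key, value)

    def _matches(self, e, k):
        if not e:
            return not k
        if e[0] == "**":  # zero or more segments
            return self._matches(e[1:], k) or (bool(k) and self._matches(e, k[1:]))
        return bool(k) and (e[0] == "*" or e[0] == k[0]) and self._matches(e[1:], k[1:])

bus = ToyBus()
exact, anywhere, floor1 = [], [], []
bus.subscribe("/bldg/1/42/sensor/temperature", lambda k, v: exact.append(v))
bus.subscribe("/bldg/**/temperature", lambda k, v: anywhere.append(v))
bus.subscribe("/bldg/1/**", lambda k, v: floor1.append(v))  # per-floor storage
bus.put("/bldg/1/42/sensor/temperature", 21.5)  # floor 1 sensor
bus.put("/bldg/2/42/sensor/temperature", 19.0)  # floor 2: skips floor-1 storage
```

After the two publications, the exact subscriber saw only the floor-1 sample, the wildcard subscriber saw both, and the floor-1 storage saw only its own floor, mirroring the slide's distribution paths.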
B: And now let's assume that we have a querier that wants to know, across all floors, what the temperature in room 42 is. This becomes pretty simple to do in Zenoh, as the querier only needs to issue a query, which will get hits from the multiple storages. And here, note that the querier doesn't need to know the location of this data: it just asks for this information, and the network is responsible for forwarding the query to all the nodes that are able to provide it.

B: This is the kind of location transparency we can get from Zenoh, as the network is able to forward the query to all its destinations.

B: And then, when the databases or the queryables reply with some data, the data is consolidated on its way back and delivered to the querier in a single response.
B: So where do we see the main application domains for Zenoh? Basically, any application or use case that requires the distribution of data, either via publish-subscribe mechanisms or query-based mechanisms, and this includes smart farming, robotics, smart cities, transportation, energy, aerospace, and so on and so forth. We already have use cases in some of these application domains, and I have a couple of examples to show, so let's move to them, because they are very interesting.

B: Here, Zenoh was integrated in some cars in this autonomous challenge, in order to provide vehicle-to-infrastructure communications.

B: Then here we have an example of an online video game that is being developed by one of our users and that basically leverages Zenoh to implement the networking.

B: And here we have a short demo, where we have Zenoh running on a microcontroller in the network and a bridge between Zenoh and DDS. Zenoh is being used to send the values that are published by a gyroscope, and then they are translated and sent as commands to the robot.
B: Now some final thoughts before finishing this presentation, some highlights that we think are worth mentioning about Zenoh. As far as we know, it is the most wire-, power- and memory-efficient protocol on the market. We can support the whole cloud-to-things continuum, meaning devices that can range from cloud servers down to microcontrollers.

B: Also, we can support ordered, reliable data delivery; we support fragmentation of messages and also batching of messages; and in our analysis we have a minimal wire overhead that can go down to four to six bytes.

B: We also have some demos in our GitHub repository; you can find this on our homepage or on GitHub, where we can have further discussions, or if you just have thoughts on how to use Zenoh. And this is it. Thank you very much for your attention, and if you have questions or comments, I would be happy to discuss them.
E: Thank you, Carlos. If you have questions, there's a raised-hand icon and you can put yourself into the queue. I notice that Jose is already in the queue to ask a question; I'm going to put myself in the queue as well, and Ken's in the queue. So, Jose... let me see how I enable you.

F: Okay, I think this addresses a lot of pain points for people like me who are trying to do distributed applications in IoT.
F: However, one question I have for you is that a lot of these elements, the sensors and everything, do not even have an IP address, and a lot of them are exposed through things like industrial buses, like BACnet or anything like that, and so in between Zenoh and those low-level sensors...

B: Okay, so I'm not aware of BACnet, but Zenoh can run on top of data-link protocols, which means that we don't really need IP; we can run on top of, for example, Ethernet.

F: Some of them are just like people entering yes/no in a text file, and BACnet is another big thing, because it's actually a very, very widely used industrial bus. So I think what you presented is extremely interesting once you have organized the lower layers, which, frankly, is the biggest job.
F: So I think it would be interesting, if you want to continue this, to have the lower-layer hooks that you need to get into Zenoh, because if I had the lower-layer links to get into Zenoh, I could see how I could use it very well, and it would save me some time. And Dirk is aware that we've done work on how to establish a better way of doing federated applications.

F: So those are your own systems; you haven't worked with industrial systems that are already installed? No? Okay. That's exactly the crux of my point: if I had access to my own controllers, I would be really happy, but right now I think a lot of the pain points and the use cases that you are presenting mean dealing with existing industrial systems that expose their data only through really weird means.

F: That's my nice way of describing it: weird. So I would say, maybe for your next generation it would be great, and you can contact me and I could give you some examples of this, because I think what you're doing is very important for what I'm doing, except that I don't know how to communicate with it. I'm taking all the time here, so I'm going to stop, but we can communicate offline.
B: If you need to ensure the integrity of your content, you should do it in your application. I mean, I stopped following NDN and CCN for a couple of years before joining ADLINK, but we are not aiming, as NDN and CCN were, at providing the integrity of the content or at making it part of the design.
G: Okay, one question on your slide where you said you have tried this on top of a lot of traditional Internet protocols: did you implement it on top of any ICN protocol, like NDN or CCN?

B: Well, not really, because, I mean, we are targeting the same type of functionality as NDN, CCN and others are trying to provide, and since we intend to have a very low wire overhead, on top of NDN we would already have a big overhead just with all the layers that would be underneath Zenoh, let's say.

B: It would be possible, I would say; I'm just thinking what the benefit of doing it would be.
A: So, just one question: this looks really interesting, but I was a bit puzzled because you didn't mention security at all. Is there a security framework and an authentication framework?

B: So we have user access control on our roadmap, and we have some kind of... when we are establishing a session, we have some kind of negotiation and link authentication, but, let's say, these are still things that are on the roadmap.

B: We are not tackling them in full mode yet, but they will be there. It's something that we are keeping in mind.
A: Great, thanks a lot, Carlos, thank you very much. I think we should move on to give our other speakers enough time, but Carlos is also on the ICNRG mailing list, and if you look at...

A: ...his presentation material, you'll also see an appendix where he has provided additional content, also code examples and so on. So please check that out; that's really interesting. Again, thanks a lot for bringing your work here. It seems really interesting, and I think we could discuss this for a longer time, but maybe this is not the end, so I'm looking forward to that.
A: Thanks, Carlos. Okay, so next up are Rhett Sampson and Jamie Locker, and they are winning the prize for the most inconvenient time, as they are joining from Australia. I really thank you a lot, guys, for making this. They will talk about a system that they developed at their company, GT Systems.

C: Okay, I should say Jamie is in New York, so he's pretty cool; I'm the guy at four o'clock in the morning. So I think I should do "share slides" rather than "share screen", yeah.
C: And then I'll just... so you should have my first slide up now.

A: I think, if you want, you have to click on presentation view, then, I think, you see everything again; it's in the top-right line of buttons.

C: I still can't change the page... that's right, I'll just go to the presentation and assume that I look half decent for you guys, I think. Okay, all right. So, Carlos, some of the things you said were very similar to the things that we say, I think with one fundamental difference, which is that you're working at a transport layer over different network layers, and we came to this from a slightly different perspective. So let me just leap into that and it'll become clear.
C: This is a timeline, which you can use to put things into perspective, but I think the most useful slide is this one, because what we really set out to do, back in the early days, was to fix adaptive-bitrate buffering and the spinning wheel of death, which we thought was a pretty ordinary thing. In doing that, we took a fairly fundamental view of things and came up with what turned out to be a fundamentally different concept: very simple, but very different from the way...

C: ...you know, the Internet ran at the time, which was a lot more crazy back then than it is now, trust me. We sort of borrowed from some work that we did with CSIRO and Peer Assist, where they had done something very similar to a BitTorrent patent application: they combined peer-to-peer with super-servers, for want of a better word, and they got some very encouraging results. And the particularly important thing to bear in mind is that we've always been application-driven in this.
C
We were driven by video, because we were in the manufacture-on-demand of CDs before this. We then had a customer who wanted to do movies online, and so we were driven by basically building a video streaming system — and this was before Netflix existed, by the way; or sorry, Netflix was still mailing DVDs when we started this. And so we did what everyone did: we stood on the shoulders of giants — BitTorrent and GitHub, etc. — and sort of thought about it and said:
C
well, okay, what if we do this properly and start again? And we came up with a very simple concept, which we called, well, secure peer assist, and it was just this simple thing of ingesting — and it was based around, as I said, peer-to-peer and OTT distribution systems.
C
Like Named Data Networking and Content Centric Networking sometime later. But then one of the fundamental ideas we had was to route on those tagged slices or packets, and to do that intelligently with optimization — and, as I said to someone the other day, the "I" in AI for networks isn't that much of an "I".
C
You don't have to be that smart to make massive improvements on this. And then we came up with the same idea as the original peer-assist and BitTorrent systems, which was peer-to-peer with super POPs, but we left the door open for everything to be a peer, and for it to be a mesh network, a distributed storage network and a distributed computing network. So that was kind of the starting point, back when we started the R&D.
C
We did the work with CSIRO in the early 2000s, then we did MOD for a while, and then we did the work with the customer and came up with this bit. We then discovered a thing called IPFS from Protocol Labs — you know, the InterPlanetary File System, what a great name — and of course that was an implementation of our idea, which was the file system I'd been running around looking for and saying, you know, we need this. But it really was.
C
It was a great idea when we found it, but as time went on it became clear to us that there were some problems associated with it: it didn't scale for distribution. It was great for storage, but it didn't scale for distribution, and in particular we became aware of that when they published some RFPs — early last year, I think — for some R&D work, basically to help them scale.
H
I was at Bell Labs for a long time, and joined them when they were working on this problem, yeah.
C
Yeah, and so we formed this project team with a bunch of other really smart guys and sort of merged all the work together to respond to Protocol Labs' call for help. But, you know, given that we both really liked their distributed storage system — really, a lot of it was merging the work that Jaime had done in modeling and optimization at the Labs with the work that we'd done.
C
You know, around content distribution and peer-to-peer, and sort of our version of that. And in the end we kind of came up with a solution. First of all we had to learn how it all worked, and then we discovered the limitations. There were some very strong points, as Jaime says, to the storage side of IPFS, but it had some limitations in distribution — which we'd addressed from the beginning, in distributing video.
C
And then there was a paper — I was trying to remember, whilst I was listening, who wrote it, but I think it was someone in either CCNx or ICN or NDN — that said it might be really interesting to smash together IPFS with Named Data Networking or Content Centric Networking. I think they said Content Centric Networking, and we were like, "what's this content-centric networking thing?", I'm ashamed to say. And then we dived into that whole thing and we went: wow, this is nirvana.
C
But then, you know, what if we put those things together? And then I think we were on a call one day, Jaime, and you said something and I said something, and we said: well, what if we smash together that deterministic hash Merkle tree system of IPFS with the less deterministic system of Named Data Networking — what would that need? And we kind of worked out that what you have to do is to have a unified namespace. We think that's one of the most interesting areas that we could potentially all work together on: really defining and standardizing that unified namespace, because that then kind of draws everything together. And, as I've said in the white paper — which is up on the website; I'll give you the URL for it later — we could actually unify kind of everything.
C
And so we ended up with a thing we call this hybrid adaptive routing system, which does both name resolution and name-based routing — and when I say routing, it's an NDN concept of routing. So we're essentially a Layer 2 protocol, but with the NDN bits to do feedback and load balancing, to stop loops, etc.
C
I should say — combining all of the best bits of a Layer 2 protocol. And so the fundamental thing — which I should probably go down to with you guys; I'd have to go too far back, never mind — but you've all seen the narrow waist picture. Basically we were saying: okay, we're going to switch/route at the network level with these tagged packets, or frames, or whatever you want to call them. And that was the fundamental difference.
C
It's that we kind of said — we didn't even consciously say "IP has reached its limits", but that was kind of the outcome of what we were doing. And then you get all these other benefits — location-based delivery and multicast, and it's fully distributed — and then we combined Jaime's work around elasticity and virtualization. And I remember, Jaime — I don't know if you remember when we first spoke, the very first time ever:
C
you asked me one question. You said: do you have a global optimizing system as well as a local optimizing system? And I said yes, and you said: good, that's the only thing that works. And I guess when we make our claims here — saying this stuff works and it's very efficient — it's Jaime who has kind of proved that for us. He's been modeling this stuff for a very long time, and we might get him to talk about that in a second.
C
So, in fact, we might do that very soon. But in essence — and this only kind of dawned on us over time — what was happening is that the network is becoming the cloud. When you take this fundamentally different approach and sort of smash the two things together, then the network becomes the cloud, and therefore the computer. So we're talking really about distributed cloud and distributed computing.
C
So this is what we do. I won't go into too much detail about that, except it turns out — having modeled it, using Jaime's very sophisticated models — that doing it this way is incredibly cheaper and incredibly more efficient. And when I say incredibly, I mean up to 50% — we'll get to those numbers in a second. You push things out to the edge, and of course you improve latency significantly, as well as
C
your backhaul costs, etc., etc. It's also very secure. I mean, we were working with Hollywood — we don't even say "BitTorrent" in the presence of Hollywood; we call it "the other protocol" — so we were secure from day one, and it just got more secure with the things that have come along since. Fundamentally, the high-value content is encrypted at ingest, and that's what makes it secure, using Hollywood-approved algorithms, DRM, etc. But also, you know, Named Data
C
Networking and content-centric networking add another level of security, particularly around signing of packets. And we always had this idea and concept of sovereign identity. Then, during the IPFS period in that timescale, we met a guy called Jonathan Holt — he's one of our advisors as well. John's very involved in IPFS at the development level, but also as an investor in Filecoin, and he's a very smart security guy. He has come up with a sovereign identity system based on the W3C distributed identity (DID) standard.
C
We're very keen to find ways of working with everybody, and although we've applied for patents, that's about trying to redress the balance between —
C
you know, the people who built the internet for free and the people who make a lot of money out of the internet today — trying to find a space somewhere in the middle, where people get rewarded for their work, but the internet gets built and taken care of. So we're very, very keen to explore those models more. I think I'm probably going to hand over to Jaime at this stage and let him just run through the stages.
H
So yeah — one of the first concepts that we try to exploit is the idea that not just the content lives in the network, but the applications live in the network, right? So now we have all these available resources that we want to utilize.
H
Everything is getting distributed, and we have compute and storage in the network, and we need to understand first what is the optimal use of the available resources at core, edge and peer levels. So the first work we did was just modeling different applications running over this network and seeing whether it really makes sense where those functions end up running: what is the optimal place where they run, and what is the optimal place where data gets stored?
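As a toy version of that placement question, one can compare the per-request cost of serving a workload from each tier given its bandwidth and compute demands. This is a purely illustrative sketch with made-up tier names and prices, not the actual optimization model described in the talk:

```typescript
// Per-request cost of placing a function at a tier:
// network cost grows with bytes moved over hops, compute cost with
// how expensive cycles are at that tier (edge capacity is scarcer).
interface Tier {
  name: string;
  hopCost: number;     // cost per byte transported to/from this tier
  computeCost: number; // cost per compute cycle at this tier
}

function requestCost(t: Tier, bytes: number, cycles: number): number {
  return bytes * t.hopCost + cycles * t.computeCost;
}

// Pick the tier that minimizes total cost for a given workload shape.
function cheapestTier(tiers: Tier[], bytes: number, cycles: number): Tier {
  return tiers.reduce((best, t) =>
    requestCost(t, bytes, cycles) < requestCost(best, bytes, cycles) ? t : best);
}
```

With numbers like these, bandwidth-heavy workloads (video delivery) land at the edge, while compute-heavy ones stay in the core — the same qualitative outcome the modeling exercise reports.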
H
So when we start modeling these networks, we ask: if we unconstrain the use of storage and compute at the different layers, where do these applications and data end up moving, and how much cost reduction can we get? The first stage was just looking at that and saying that, as we move towards more low-latency applications — and even for just content distribution —
H
of course we get a lot of bandwidth cost reductions, and you start moving both data and functions to the edge. Just for on-demand video streaming and live video streaming, we end up getting numbers of around 40% for network cost reduction, and we think it can go even greater when we really go for industrial automation and augmented-experience services, where you really have that low-latency requirement.
H
So that's the first stage: even without going into changes in the networking protocols, what can you get by just putting functionality and data at the right place — purely from a placement perspective?
H
The second idea, of course, is to exploit programmability and virtualization. As you use virtual machines and can program the network in between, you get additional cost reductions, essentially from over-provisioning: you reduce over-provisioning because you now have a network that is more elastic.
H
Data and functions live in the network, but they consume virtual resources, so you can scale the use of resources up and down whenever you need them. You get an extra 20% in the simulations that we run when you add that elasticity to the optimization. And then, finally, we have a stage where we say: okay, if applications and data really live in the network, the first thing you want to do is optimize the use of resources.
H
The second thing is to make sure that that use is elastic and dynamic. The next thing is to change to and use more efficient networking protocols. So the idea of the final stage is this integration of name resolution routing and name-based routing: using name resolution routing for persistent, available distributed storage, and using name-based routing for fast access and delivery. We think that integrating both systems is really the best way to get the benefits of both worlds.
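The hybrid lookup being described — fast name-based forwarding where forwarding state exists, falling back to a slower name resolution overlay — can be sketched as a two-step lookup. This is an illustrative stand-in using string prefixes and plain maps; real systems operate on FIBs and a name resolution service:

```typescript
// Hybrid adaptive routing sketch:
// 1. name-based routing: longest-prefix match in a forwarding table (fast path);
// 2. name resolution: a (conceptually slower) directory mapping full names to
//    locators, standing in for the persistent-storage overlay.
function hybridLookup(
  fib: Map<string, string>, // prefix -> next hop (limited forwarding state)
  nrs: Map<string, string>, // full name -> locator (overlay resolution)
  name: string,
): string | undefined {
  // Fast path: walk the name from longest to shortest prefix.
  let prefix = name;
  while (prefix.length > 0) {
    const hop = fib.get(prefix);
    if (hop !== undefined) return hop;
    prefix = prefix.slice(0, prefix.lastIndexOf("/"));
  }
  // Slow path: resolve via the overlay.
  return nrs.get(name);
}
```

The division of labor mirrors the limitations mentioned next: the FIB can only hold so much forwarding state, so anything not covered by a prefix falls through to resolution.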
H
Right — I mean, they do have limitations on their own. Name resolution routing is an overlay solution; it's a slow system for resolution, but it's really great for persistent distributed storage. Name-based routing is great for efficient delivery, but it also has limitations in the amount of forwarding state you can keep. So putting them together, we think, is a great solution. That's the functionality side, and then the last aspect is adding an element of AI.
H
Still on that same slide — in this case it's a modeling exercise, but we think that when things get so complicated, when data really lives in the network and you have so many options to access data and to run functions and computations, an element of AI that combines global optimization with local AI agents can help optimize both the resource allocation and the routing and caching decisions in a better way. And then, yeah — maybe back to you, then.
C
You join it all up, yeah. So then, once you get to that level, you say: okay, if we do that, how does it have to work? And we very quickly got to the point — another colleague of ours, Matt Moran, who's very, very experienced in OTT and networking stuff, said: you know, no one's going to trust us, because if you have an optimizing AI, then someone has to run it, and of course that's not going to work.
C
So we think that, again, this is an area which is very fertile for cooperation, where you have this concept of distributed local agents, which are optimized and perhaps even trained locally — which is a kind of different idea — for local optimization and autonomous behaviour, and maybe, hopefully, some emergent behavior. And then you feed information about the network up to an optimizing intelligence.
C
But there isn't just one — there are multiples of these — and then you have to join them up at a couple of levels, which we'll have some slides on in a minute, in order to have an "AI API", I guess, for want of a better word, where you're sending stateful information between sub-networks, and it joins up into a universal content distribution network, which is this concept that we have.
C
But, as I keep saying to the guys, it's really just about finding someone who wants what you've got. And so there's a bunch of guys putting some satellites up in medium earth orbit as a backbone, and they want to do this. First of all, they're greenfields, which is fantastic; secondly, they want to do it properly — literally their words; and thirdly, they've come to an understanding, which seems to be growing at various places around the world — and I'm sure there are plenty of people listening who may have come to the same conclusion — that IP may be getting towards the end of its useful life, and maybe it might be time to think about re-architecting stuff.
C
So these guys were open to that possibility, and when we came along, as they said: we're the big fat pipes, and you're the guys who fill the pipes. They had kind of realized they needed to do that, but they didn't quite know how, and so it's turning out to be a bit of a marriage made in heaven, and it could get quite interesting in the next very short space of time.
C
So that's a system block diagram of merging the two things together, which talks a bit more about that. We won't go into too much detail about the rollout: it's global; we're doing Africa first; we'll hopefully do Australia very soon. We're working with these guys — and this is a really interesting thing.
C
These are the people who are building existing networks, or maybe building future networks in the case of NVIDIA. I think that's the system block diagram out of the patents — similar but different to some of the earlier, more involved ones. This is the one I wanted to get to: you say, okay, how are we going to do this? And it ends up being reasonably simple.
C
Nokia now have an open, SR Linux-based service router offering, which is what we're going to do this on initially, and then with some of the NVIDIA AI stuff at the edge. You can in fact put a custom protocol stack alongside a classic TCP stack and connect them both to the switching and routing bus. Then, if it's a SPAN-enabled packet, the SPAN stack deals with it, and if it's a TCP packet, the TCP stack deals with it. Obviously there are some things you have to do about tunneling, etc., but that's all beautifully dealt with in NDN already.

Then you say: no presentation would be complete without a slide on the metaverse. Once you have that fundamentally different architecture, where the network is the cloud and the distributed computer, then you say: okay, how do we make this work?
C
— in terms of joining up all of these. So video: kind of tick, we've done that and got some amazing results. Now we need to look at the virtual worlds and graphics and computer design, etc. And, you know, the guys who deal with that
C
the most, of course, are NVIDIA — or one of them — and they've come up with the same concept and said: well, we need a set of standards, which Pixar have developed, for Universal Scene Description, and then we need a set of tools and some brokers, for want of a better word, which they've come up with in Omniverse and Nucleus.
C
But at the moment they're focusing that on corporates — guys like BMW — so that the CAD systems that design the cars can talk to the CAD systems that design the factories. We've talked to them and said: have you thought about pushing that out over the network? And they said: well, no, but that seems like a pretty good idea. So we're having those discussions, and of course the essence of the metaverse is perfectly suited to this —
C
to kind of our view of the world. There's a fundamental thing in all of this — which Carlos got to — which is that the network now is the cloud and the computer, and it therefore needs a set of instructions, the two most fundamental of which are publish and subscribe. A publisher would simply publish their content to the network, and the network takes care of the rest; and then you as a consumer would say "I want to subscribe", and the network takes care of the rest as well.
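Those two primitives can be sketched as a minimal in-memory broker — a conceptual toy only, not the actual protocol being presented: publish hands named content to the "network", subscribe registers interest, and the network does the rest (delivery of current and future content):

```typescript
// Minimal publish/subscribe "network": publishers push named content,
// subscribers receive the current content and all future updates for a name.
class PubSubNetwork {
  private content = new Map<string, string>();
  private subs = new Map<string, Array<(data: string) => void>>();

  // Publisher hands content to the network; the network delivers it.
  publish(name: string, data: string): void {
    this.content.set(name, data);
    for (const cb of this.subs.get(name) ?? []) cb(data);
  }

  // Consumer registers interest; the network replays what is already
  // published and keeps delivering future versions.
  subscribe(name: string, cb: (data: string) => void): void {
    let list = this.subs.get(name);
    if (!list) {
      list = [];
      this.subs.set(name, list);
    }
    list.push(cb);
    const existing = this.content.get(name);
    if (existing !== undefined) cb(existing);
  }
}
```

The point of the abstraction is that neither side names a host or a path through the network — only the content name.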
C
This thing needs to be standardized — obviously that's a huge thing — and these, as we say, are the areas. There's a nice set of rules from Tony Parisi: there's one metaverse; it's open; (in brackets are our additions) it's enabled by a network; that network is the internet; and this is how these things roll out. Some thoughts for the future: as I talked about, we perhaps need some new funding and operational models that combine the best of everything.
C
Clearly quantum computing is a huge thing. Transport — I don't know much about this, but it seems that lasers might be kind of good for that; they're certainly good for quantum key distribution, which the network we're working with does. And then, of course, distributed artificial intelligence, where you're doing local training, perhaps, and machine learning — like you would train a Tamagotchi, you'll train an AI agent — and then, of course, the golden fleece of general artificial intelligence.
C
So these are the guys we're kind of working with, and this is our claim. And this is my favorite saying at the moment: one day you go to sleep and you're the czar of the whole of Russia, and you wake up in the morning and you're not. So thank you very much for the opportunity to talk — and sorry, I think we're going to run a fraction over time — but I'll kind of open it up to questions.
A
No worries, thanks a lot.
A
So this is really quite intriguing — I think you raised many interesting aspects. There are also some thoughts that have been discussed a lot in this community, and also in the computing-in-the-network community. We don't have that much time, so I think what I would like to do is just ask people who have questions to reach out to you, if that's okay (yep), because I think there would be many quite detailed technical questions, and it would probably take a long time to answer those.
C
I'd say the white paper is up on the website — gtsystems.io — and then if you add slash white paper, you'll get the white paper, and there'll be an open link to it in the next day or so.
A
Yeah, that's great. So again, I think many of us have thought about the connection of IPFS and NDN, for example, and so on. So I think you —
A
Great, yeah — thanks a lot, super interesting, and especially thanks for making the time.
F
He's in Australia, so I understand the time differences between the east coast, Europe and Australia — it's quite challenging.
A
Thanks. Next is this presentation about NDNts API design.
I
Okay, I'll go. Hi — I'm a researcher at NIST, but NDNts is my personal project, so this will reflect personal opinions. Today I'm talking about NDNts API design. NDNts is a set of NDN libraries for the modern web: modern JavaScript libraries that can be used in both TypeScript and JavaScript projects.
I
It works in Node.js and in browsers, and has more than 90 percent test coverage, with automated and manual browser testing on desktop, Android and iOS. It works standalone without a forwarder, or it can connect to forwarders such as NFD and NDN-DPDK. The project is actively maintained so far: new features are added regularly, and it supports the latest NDN specification.
I
So let's first get into the low-level API. Many people would say the low-level API is boring, and this is probably true, because the NDN team's official position is that we encourage application developers to use high-level APIs, such as application toolkits and the common client libraries, because those high-level APIs can abstract the complexity away from the application developers.
I
However, my belief is that interacting with the low-level API is unavoidable: if you try to build a high-level API, as a library developer you have to deal with the low-level API; and also, high-level APIs are unlikely to cover all possible application needs, so sometimes applications still need to interact with the low-level API. Therefore, I believe it's still important to design a good low-level API, and NDNts has some unique opportunities here. First, NDNts is not the first library — I am rarely the first to implement a particular feature.
I
Instead, my preference is to write some applications with existing libraries and look at how other developers are using those libraries to make applications. As a result, I can feel the pain points of the existing libraries — especially which APIs are cumbersome to use, and which code snippets have been copy-pasted in multiple places in the application because they are missing from the library — and then I can improve those areas in NDNts.
I
The other opportunity is that NDNts is a personal project, so I have the freedom: I don't need to promise backwards compatibility; I can take my time to refactor the code without worrying about deadlines; and I can ask invited people to watch videos that are delivered over the NDN testbed, then collect the video playback metrics to improve my congestion control implementation. But back to the APIs. The first one I mentioned is TLV decoding — with TLV evolvability considerations.
I
I will do an example. The example is the NDN link state routing daemon NLSR's LSA info structure. The structure looks like this: it has a TLV-TYPE number and a TLV-LENGTH, and it contains a name, a sequence number and an expiration time field — those three are sub-TLV elements. It looks simple. But then we look at the NDN specification: there is a section called "Considerations for Evolvability of TLV-Based Encoding".
I
That section dictates that if the decoder encounters an unrecognized or out-of-order sub-TLV element, there is a specific behavior to follow, based on the TLV-TYPE number: if the TLV-TYPE number is less than 32 or is an odd number, the decoder must abort decoding and report an error; if the TLV-TYPE number is an even number, the decoder should ignore that TLV element and continue decoding.
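The critical/non-critical rule just described can be stated in a few lines of TypeScript. This is an illustrative re-implementation of the rule from the NDN packet specification, not NDNts's actual decoder code:

```typescript
// NDN TLV evolvability rule for unrecognized elements:
// TLV-TYPE < 32 or odd  -> critical: abort decoding and report an error;
// TLV-TYPE even and >= 32 -> non-critical: skip the element and continue.
function isCriticalType(tlvType: number): boolean {
  return tlvType < 32 || tlvType % 2 === 1;
}

function handleUnrecognized(tlvType: number): "abort" | "ignore" {
  return isCriticalType(tlvType) ? "abort" : "ignore";
}
```

Burying this rule in every hand-written decoder is exactly the kind of repetition the EvDecoder described next is meant to eliminate.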
I
So how do we write the decoder in the libraries? NDNts adopts a semi-declarative structure, so the code looks like this: we have a thing called the EvDecoder — the evolvability-aware TLV decoder. To use it, you declare each sub-TLV using this add function: I add the TLV type together with a small lambda function saying how to handle the TLV value — how to handle that sub-TLV.
I
The lambda function can include actual logic, such as saving the signed portion boundary, which is useful when I decode data packets or decode certificates — but that's not needed here; here it's very simple. And the evolvability consideration I mentioned earlier is handled automatically by the EvDecoder. With ndn-cxx —
I
it can be added, but then it will make the code even more complicated. python-ndn so far has a shorter history, because it's using a reflection-based decoder that is purely declarative. But python-ndn's decoding function is less flexible, because — as I said earlier — there could be actual logic, such as saving the signed portion boundary; python-ndn can also do that, but you have to add extra code. Also, python-ndn's decoder forces a class structure to follow the TLV structure, which means the application has to be exposed to encoding details — sometimes you don't want that. This kind of reflection-based decoding is not readily possible in JavaScript, but I can do something similar, and that's something I want to explore in the future.
I
Then the other API is the face, which is common in many other libraries, but in NDNts I named it endpoint. Traditionally, the face is a central concept of many NDN client libraries — e.g. ndn-cxx and the NDN Common Client Libraries; python-ndn has it slightly different. But what does a face do? A face does two things: one is performing interest-data matching; the other is, through the management protocol, doing prefix announcement. A face normally corresponds to a single transport. But the face only does these two things.
I
If you have a socket error in the transport, the application has to handle it manually; and if you want interest retransmission, data signing, data verification, or data buffering with in-memory storage, everything is manual. NDNts evolved the face into a new concept called the endpoint, which is more powerful. With the endpoint, on the consumer side, the library can provide automatic interest retransmission and automatic data verification; for the producer, it
I
can do prefix announcement, data buffering and data signing.
I
You will see some examples later. Each endpoint can have more than one transport uplink, which can connect to other NDNts applications, to forwarders, or to IoT gadgets, and any transport error will be handled automatically if possible. Having those in the library allows the application to focus on the application logic.
I
So, three examples. First is the consumer's interest retransmission. In NDNts, if you want interest retransmission, you just need to set one option. This is how you write a consumer: you prepare the interest and call the consume function, and the data will come back later, if possible. Then retransmission can be enabled by saying:
I
I
want up to two retransmissions — and the rest will be done by the library. If, after retransmission, the data still doesn't come back, there will be errors, so error handling is still necessary. But compare this to other libraries, where the developer has to implement the interest retransmission themselves: basically, there's this flowchart they have to implement manually, and, as I observe in the implementations, most developers get it wrong in some places. On the producer side, NDNts can support automatic data buffering.
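The flowchart in question — express the interest, wait, retransmit up to N times, then surface an error — is roughly the following generic wrapper. This is a self-contained sketch of the logic the library takes over, not NDNts's actual API (there, it really is a single retx option on consume):

```typescript
// Generic retransmission wrapper: try `express` once, then up to `retx`
// more times on timeout/failure; if every attempt fails, propagate the
// last error to the application's error handling.
async function consumeWithRetx<T>(
  express: () => Promise<T>, // one interest expression attempt
  retx: number,              // e.g. 2 => up to three attempts total
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt <= retx; attempt++) {
    try {
      return await express(); // data came back
    } catch (err) {
      lastErr = err; // timeout or transport error: retransmit
    }
  }
  throw lastErr; // retransmissions exhausted
}
```

Getting the loop bounds and the final error propagation right is exactly where hand-rolled versions tend to go wrong, which is the argument for putting it in the library once.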
I
The use case for automatic data buffering is when the producer wants to prepare a multi-segment response to one interest and then wait, with the subsequent segments answering the later interests. A concrete example is NFD's management protocol dataset publisher: you can send an interest to NFD asking it to return all the RIB entries, but the RIB can be larger than a single segment, so there will be multiple segments, and buffering is needed. In NDNts, it's automatic.
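The producer-side buffering pattern — answer the first interest, park the remaining segments, serve them when the later interests arrive — can be sketched with a plain map. This is an illustrative stand-in that uses plain strings as names; the real NDNts facility operates on actual NDN names and data packets:

```typescript
// Minimal data buffer: the producer inserts all response segments up front;
// each incoming interest is then answered from the buffer by exact name,
// without re-invoking the producer handler.
class DataBuffer {
  private store = new Map<string, Uint8Array>();

  insert(name: string, payload: Uint8Array): void {
    this.store.set(name, payload);
  }

  // Called for every incoming interest; undefined means "nothing buffered".
  find(interestName: string): Uint8Array | undefined {
    return this.store.get(interestName);
  }
}
```

In the RIB-dataset example, the producer would insert segments like `/rib/seg=0`, `/rib/seg=1`, ... once, and interests for the later segments are satisfied straight from the buffer.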
I
With other libraries, the developer has to manually query the in-memory storage for every incoming interest, which would be more complicated than this. NDNts can also automatically perform data signing and verification: all you need to do is provide the signer and the verifier to the endpoint constructor, and then your producer handler or your consumer does not need to do that anymore, because the library will handle it for you. NDNts supports both signers and verifiers.
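Centralizing signing and verification in the endpoint, so that individual producer and consumer handlers stay crypto-free, looks roughly like this. It is a toy sketch with a fake checksum "signature" standing in for a real key-based signer; actual NDNts endpoints take real Signer/Verifier objects:

```typescript
interface Packet { name: string; payload: number[]; sig?: number }

interface Signer { sign(pkt: Packet): void }
interface Verifier { verify(pkt: Packet): boolean }

// Toy "signature": sum of payload bytes. A stand-in for real cryptography.
const checksumSigner: Signer = {
  sign: (pkt) => { pkt.sig = pkt.payload.reduce((a, b) => a + b, 0); },
};
const checksumVerifier: Verifier = {
  verify: (pkt) => pkt.sig === pkt.payload.reduce((a, b) => a + b, 0),
};

// The endpoint holds the signer/verifier once; handlers contain only
// application logic, and signing/verification happen around them.
class Endpoint {
  constructor(private signer: Signer, private verifier: Verifier) {}

  produce(make: (name: string) => Packet, name: string): Packet {
    const pkt = make(name); // application logic only
    this.signer.sign(pkt);  // signing applied by the "library"
    return pkt;
  }

  consume(pkt: Packet): number[] {
    if (!this.verifier.verify(pkt)) throw new Error("verification failed");
    return pkt.payload;     // application sees only verified data
  }
}
```

The design choice is the same one described in the talk: passing the signer and verifier once, at construction time, eliminates the per-handler keychain/validator calls that otherwise get copy-pasted everywhere.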
I
They can be either a fixed key, or a trust schema that chooses a key based on the data packet name. With other libraries, the developer has to call the keychain and validator manually, and all those small things need a lot of code repetition; with NDNts you can eliminate the repeated code. And, as I mentioned earlier, one of the challenges in NDNts is that code size is a primary concern on the web, because every kilobyte of code must be downloaded over the network.
I
Visitors will expect your webpage to load within five seconds — that metric is called Time to Interactive — and to download within five seconds, your code size budget is 170 kilobytes compressed. To address this in NDNts, I try to reduce the core features that are always included in the compiled code; if an application needs extra features, it has to import the module and pay the overhead. For example, the data buffer feature I mentioned earlier:
I
its implementation so far is quite complicated and has a lot of code. Therefore, by default it's not there, but if you want it, you have to import the data buffer module and write this code. It's a little bit of boilerplate, which is not really desirable, but it's a trade-off between API simplicity and web page performance. NDNts also supports a lot of transports underneath the endpoint layer.
I
This is a simple comparison chart. Notably, I recently added HTTP/3, which runs over QUIC, through a Python-based proxy program, so that NFD can also support HTTP/3 transport. This allows me to do video streaming more efficiently than using WebSocket, since WebSocket has a TCP-over-TCP problem, whereas HTTP/3 runs over QUIC, which is over UDP.
I
It is different, but in the browser it's those two. This is how I use the keychain; it's some code. First, I can open the keychain. Effectively it will open IndexedDB, which stores the keys and the certificates in the browser. IndexedDB is able to store a Web Crypto private key without exposing the key bits; you will see that later.
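The "without exposing the key bits" property can be demonstrated with Web Crypto itself. This sketch uses Node's `webcrypto` so it runs outside a browser; in the browser, NDNts would additionally persist the same kind of key handle in IndexedDB.

```typescript
import { webcrypto } from "node:crypto";

// A non-extractable Web Crypto key can sign, but exporting its private
// bits fails: the application holds a handle, never the raw key material.
async function demo(): Promise<string> {
  const pair = (await webcrypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // non-extractable
    ["sign", "verify"],
  )) as any;
  const sig = await webcrypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    pair.privateKey,
    new TextEncoder().encode("hello"),
  );
  try {
    await webcrypto.subtle.exportKey("pkcs8", pair.privateKey);
    return "exported"; // should not happen for a non-extractable key
  } catch {
    return `signed ${sig.byteLength} bytes, key not exportable`;
  }
}

demo().then((msg) => console.log(msg));
```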
I
Then, NDNts has a client for the NDN certificate management protocol, so I can call requestCertificate. It will communicate with the remote certificate authority and obtain a certificate; after it succeeds, I can finally save the certificate into the keychain. But there are a few limitations with Web Crypto. First, Web Crypto requires a secure context, which effectively means a web page must be delivered over HTTPS to use Web Crypto.
I
This is a requirement of the Web Crypto specification, but not every page is over HTTPS. As someone from the Philippines wrote to me, in their country they don't have a lot of internet, and locals communicate over coffee shop hotspots. In that case they don't have internet, they don't have DNS, and they have no way to get a certificate from Let's Encrypt or similar services, so the NDNts security feature will not work in that environment. I have not found a solution to this. The other limitation is with Web
I
Crypto's crypto algorithms, which are limited. Some newer NDN specifications want certain crypto algorithms that are not in Web Crypto, which means I cannot really implement them. One of them is FLIC: FLIC gives one option that is AES-CCM, and Web Crypto doesn't have it. Of course, if I design the application, I can pick the other option, AES-GCM, which is there; but if an existing application chose to use AES-CCM with FLIC, then NDNts will not be able to interoperate with that application.
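For contrast, the AEAD that Web Crypto does offer, AES-GCM, is straightforward to use. This is a minimal encrypt/decrypt round trip running on Node's Web Crypto implementation; the same calls work in a browser's secure context.

```typescript
import { webcrypto } from "node:crypto";

// Minimal AES-GCM round trip: AES-GCM is available in Web Crypto,
// unlike AES-CCM.
async function roundTrip(plaintext: string): Promise<string> {
  const key = (await webcrypto.subtle.generateKey(
    { name: "AES-GCM", length: 128 },
    false, // non-extractable
    ["encrypt", "decrypt"],
  )) as any;
  const iv = webcrypto.getRandomValues(new Uint8Array(12)); // 96-bit nonce
  const ct = await webcrypto.subtle.encrypt(
    { name: "AES-GCM", iv }, key, new TextEncoder().encode(plaintext));
  const pt = await webcrypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ct);
  return new TextDecoder().decode(pt);
}

roundTrip("hello NDN").then((msg) => console.log(msg)); // hello NDN
```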
I
There are some alternatives to Web Crypto. The first is a JavaScript crypto library such as asmCrypto. The other option is that I can compile a Rust crypto implementation as a WebAssembly module. But both have drawbacks. First, the code size increases by at least 10 kilobytes, maybe 200 kilobytes in the case of asmCrypto, and the keys are not protected anymore.
I
Remember that in Web Crypto the key is non-extractable; if I use JavaScript crypto, that protection is not there anymore, and there is also no effective way to cleanse memory. And if the code is delivered over plain HTTP, it's even worse. The first problem is the man-in-the-middle attack, since it's no longer encrypted HTTPS; also, there is no secure random number generator, which basically means I cannot generate a private key safely. So far, NDNts is still limited to Web Crypto only. I'm not sure whether I want to relax that, but currently no.
I
The last issue is not really about the API, but it's closely related: how do you name a browser? The name is the secret sauce of NDN, as Alex said. For anonymous users it's somewhat simple. This is the code I showed earlier: when generating a signing key, I need to give it a name. My current web applications only serve anonymous users.
I
If I want real user authentication, that's a different challenge. User experience is important, so what users are most familiar with is either username plus password or an "email me a magic link" flow. Internally, it can be implemented as obtaining a short-lived certificate from a certificate authority that is controlled by the web server.
I
Then we could also try to port OpenID, OAuth, or similar specs, but do them over NDN. Internally, that would interact with an NDN authenticator app that the user could download or self-host on their own server, and the authenticator app would contain certificate authorities that are controlled by the user. In either case, we should have a streamlined user experience, because visitors do not care whether the web page is using NDN or DNS; they just care about the browsing experience.
I
A
Probably the same issue as before, yeah. Maybe in the meantime, one question: in your API, how do you deal with situations when things go wrong? For example, when the transport protocol has a failure, do you have some kind of exception system?
I
If the transport fails, it will get reconnected automatically. In most cases there will be event emitters you can hook into on the transport to find out whether it's up or down, but every transport will try to reconnect indefinitely, following exponential backoff. There are some transports where you cannot, though: with Web Bluetooth, if your gadget is offline, the browser doesn't allow you to reconnect.
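The reconnect policy described in this answer can be sketched as a capped exponential backoff schedule. The base delay and cap below are illustrative, not NDNts's actual constants.

```typescript
// Sketch of retry-indefinitely-with-exponential-backoff: each attempt's
// delay doubles until it hits a cap. Constants are illustrative.
function backoffDelays(attempts: number, baseMs = 500, maxMs = 30000): number[] {
  const delays: number[] = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(baseMs * 2 ** i, maxMs)); // cap the growth
  }
  return delays;
}

console.log(backoffDelays(8).join(", "));
// 500, 1000, 2000, 4000, 8000, 16000, 30000, 30000
```

A real transport would keep generating the capped delay forever rather than stopping after a fixed number of attempts; the finite list here just makes the schedule visible.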
A
So it looks like, I mean, one of the interesting features here is that this can run in a browser. So are you trying to experiment with building web applications on top of this, or what is your main motivation here?
I
A
Okay, wait, there's something going on in the chat. Okay.
A
Yeah, so personally I think this is a really nice idea, and we have started using NDNts for some experiments. I think one of the interesting possible next work item topics in ICN could be exactly this: how would you really build web applications?
A
And you also pointed to how you would name the browser and, you know, provide something like an endpoint or a reachable name to the web application. I think these are really interesting questions; it's really ongoing work and needs further thought. So thanks again for introducing this to us.
A
And yeah, okay, if there are no further questions.
A
Yeah, okay, I mean, we can quickly discuss this.
A
Okay, yeah, thanks again everybody for presenting your work and for attending the meeting. I think it was quite interesting, and many presentations today will probably lead to further discussions; let's do that on the mailing list, or reach out to the individuals here. Just some thoughts from Dave and myself about next meetings.
A
So personally, we think it's a good model to have these interim meetings, kind of avoiding the online IETF meeting weeks, because those weeks are typically really busy for everybody, and we also have less control over the time slot.
A
So our thoughts were: if the next IETF in March is actually happening in person, we would try to meet there. If not, we would go for another online meeting before or after the IETF week, similar to this one.
A
So let's keep our fingers crossed that travel will be possible then. You can expect the call for papers and other info fairly soon. With that, thanks again for attending; I wish you a really nice holiday season, and let's be in touch. Looking forward to further discussions with you. Thank you.
G
A
Thanks, you too, goodbye. And special thanks to Ken for taking notes; it was super useful.