From YouTube: IETF115-COINRG-20221108-0930
Description: COINRG meeting session at IETF 115
2022/11/08 09:30
https://datatracker.ietf.org/meeting/115/proceedings/
A: Good morning, everyone (or good afternoon in some other places). We're going to start in just a few minutes. This is Computing in the Network. We're going to give people a few minutes to arrive in the room, so welcome, and we'll be starting soon.
B: Good morning, everybody. I'm not one of the chairs; I'm the body standing in, since they need a physical person to be here. My name is Cedric Westphal, I'm with Futurewei, and I have no idea what I'm supposed to be doing to start this. I don't know if the chairs, Marie-José or somebody, are on the line.
A: You can just stay there; we have everything loaded.
F: It should be the case at this point that you appear as a delegate once you log in; maybe not on that computer, but on your own computer you will appear.
B: Yeah, I'll log into the computer; you go ahead, because I think it's 9:34, so we've started.
A: So, if you want to start.
A: Okay, then we'll start. So this is COIN, Computing in the Network. I would like to really, really thank our proxy Cedric, who introduced himself, from Futurewei, for being there in the room, since none of us could travel. Cool.
A: Obviously, the usual Note Well and the policies against harassment and everything apply, and a reminder that this is the IRTF: we do not do standards, we do research. You will see today that we have very interesting research papers presented, plus a number of updates and new drafts.
A: Jeffrey, you want to take over?
G: Yes. Before I introduce the agenda, let me remind you of two things. For people physically in London: you need to follow the masking policy, so you have to wear your mask during the meeting in the room. Second, we maintain just one single queue, in Meetecho, so if you want to go to the mic, please also join the queue in the tool.
G: As for the agenda, we have three research papers to introduce today. The first comes from Oxford, from the team of Noa Zilberman; he will introduce some lessons learned from their in-network classification implementation. Second, Ike will introduce their paper about the end-to-end transport layer, where he will discuss the very interesting topic of the end-to-end argument. And then David, from TU Munich, who is working together with Dirk Trossen.
G: He will discuss DLTs in provider networks. After that, we have Pascal to introduce a new draft about secure elements implemented in the internet and how the programmable data plane can be used in that architecture. And if we have time, we can then discuss some of the future of this research group; there are some initial considerations from the chairs that we can discuss.
F: Sure you are all here, so thank you for being here. For those of you who have already signed on, there's the pointer here to Meetecho, which has many good features, not least of which is that it has many things integrated, including automatically keeping track of who's participating, so it generates a blue sheet for us. But for those of you inclined to do so, we'd really appreciate your involvement in the shared note-taking; there's a pointer both here and in the tool itself.
F: We will also try to monitor the chat and the physical queue for questions, and, as was mentioned, we're going to use Meetecho to maintain the queue. If everybody defaults to everything being off unless you're speaking, that will be great. In fact, in terms of the masking policy, if you are a speaker you are allowed to remove your mask while you're speaking.
F: Otherwise, please maintain the policy. We welcome you to participate in our mailing list, and you can subscribe; we include the pointer here. We also welcome you to peruse the archives for the ongoing conversation. All of the materials for the session are available through the agenda page and through the link here; also, again, if you go to the top icon and look for the folder, you can look at all of the materials and peruse them at your leisure.
F: We have many documents that have been contributed to the discussion. For our next meeting, we are very likely to ping you if you have written one of these documents, in order to refresh those that we would still like to consider as under the charter, and maybe even advance to being adopted by the group.
A: With that, we go to the presentations, and I would ask the first presenter to load. Oh, maybe I can actually preload; I actually can pre-load. Stop my share, and share the preload.
A: Which speaker are you, the presenter from Oxford? Could you please load your slides, or share your slides, because...
B: It loaded for me. He's going to share the presentation from his laptop, I think. Or do you have a different one? Use the one on there.
J: Okay, so good morning, everyone. My name is [inaudible], and today I would like to talk about our research related to practical in-network classification, and the lessons we've learned during the past three years. I'm sorry, yeah: I would like to talk about the lessons we've learned during the past three years. This work is joint work with many of our colleagues from several different institutes, as shown here. Yeah, that's the story, and it begins in 2019.
J: We began to do in-network machine learning research, and at that time we had a project named IIsy, also titled "Do Switches Dream of Machine Learning?", which was presented at HotNets 2019. It mainly demonstrates the mapping of trained machine learning models to programmable network devices. That work mainly introduced four types of machine learning models, which are decision tree, support vector machine, K-means, and Naive Bayes, and it mainly focused on two types of targets, which are bmv2 and NetFPGA-SUME. Next page.
J: After three years we have made a lot of progress. Currently we have a work named Automating In-Network Machine Learning, and it provides us the Planter framework, which can help us realize end-to-end automated in-network machine learning deployment. With this framework we are able to automatically train the machine learning model and also auto-generate the P4 file, which is different from previous work.
J
And,
finally,
after
all
this
process,
the
design
will
be
finally
Auto
loaded
to
the
hardware
selected
Hardware
and
currently
our
Frameworks
support,
more
than
12
models
and
supplementation
of
models,
and
it
support.
Besides
the
easy
support
for
type
of
model,
it
also
supports,
for
example,
season
3,
actually,
random
Forest
actually
boost
knife
base,
k-means
or
an
auto
encoder,
and
and
actually
for
for
this
model.
J: Actually, these are not all the models that we can support; we can even support more, but we think what we have is currently enough for this stage. We also generalize all these mapping solutions into encode-based, lookup-based, and direct-mapping solutions, which can help us support new types of in-network machine learning algorithms. And, most importantly, we support many different targets.
J: For example, we support commodity switch ASICs, purely commodity or with basic modification, for example Tofino and Tofino 2. We also support targets like P4Pi, which runs P4 programs on a Raspberry Pi, and we currently support two types of compilers on it, which are T4P4S and bmv2. We are also working on NVIDIA Spectrum and FPGA.
J: The WIP here means work in progress. Yeah, next slide. So in this talk I would like to mainly talk about the challenges and solutions we faced when we implemented these in-network machine learning classification algorithms.
J: It mainly shows the typical realization of this tree model in the data plane. We can see that each step of the tree generalizes into different levels, and each level will consume a certain number of stages, which means that a data plane program with a limited number of stages will limit the types of tree models that can be deployed. Our solution is to use parallelization, which means that we execute the independent functions in our in-network machine learning classification algorithm in parallel. Take this example tree model.
J: For example, we can use parallel lookup tables as feature tables: the feature inputs are mapped to codes for each feature. Then, in the next stage, there are the tree tables, and these tree tables also execute in parallel.
J: Each tree table will just collect the input codes from each feature table and output the vote of this tree, or the probability, or the depth of the tree. In the third stage, the decision table will just combine all the votes from the previous tree tables and output the final classification result.
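To make the mapping concrete, here is a minimal Python sketch of the encode-based layout just described: per-feature range tables map raw values to codes in parallel, and a tree table combines the codes into the tree's vote (for an ensemble, a decision table would then combine the votes of several trees). The tiny example tree, the table contents, and the function names are illustrative only, not Planter's actual code.

```python
# Stage 1: per-feature range tables (value range -> code); in hardware these
# lookups run in parallel. Stage 2: a tree table maps the code tuple to this
# tree's vote. Stage 3 (decision table, omitted for a single tree) would
# majority-combine the votes of all trees.
feature_tables = {
    "f0": [((0, 4), 0), ((5, 15), 1)],
    "f1": [((0, 7), 0), ((8, 15), 1)],
}
tree_table = {(0, 0): 0, (0, 1): 0, (1, 0): 1, (1, 1): 1}

def lookup_code(feature, value):
    for (lo, hi), code in feature_tables[feature]:
        if lo <= value <= hi:
            return code
    raise KeyError(f"no entry for {feature}={value}")

def classify(f0, f1):
    codes = (lookup_code("f0", f0), lookup_code("f1", f1))
    return tree_table[codes]

assert classify(3, 9) == 0   # f0 maps to code 0, f1 to code 1 -> class 0
```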
J: Next slide. The second challenge I would like to talk about is limited memory. For limited memory we have two types of solutions. The first is to use a more efficient mapping solution, as shown in the first figure.
J: On the left-hand side we compare, for the K-means solution, the IIsy implementation versus the Clustreams implementation in terms of table entry consumption. We can see that the IIsy solution uses a static number of table entries, while the Clustreams table entry consumption increases by orders of magnitude as the model size increases. In terms of accuracy, we find that the IIsy solution is independent of the model depth, while the Clustreams solution depends on the model depth.
J: The second is about the encoded codes from the feature tables, which means that we need to use our self-designed algorithm to map the exact-match table to either a lookup-based table, an LPM table, a ternary table, or a range-match table.
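For intuition, the standard trick behind such a mapping is range-to-prefix expansion: one exact-match range entry becomes a small set of LPM or ternary prefixes. A minimal sketch, with an invented 8-bit key width:

```python
def range_to_prefixes(lo, hi, width=8):
    """Cover the inclusive range [lo, hi] with (value, prefix_len) pairs,
    the form an LPM or ternary table can store directly."""
    prefixes = []
    while lo <= hi:
        size = lo & -lo if lo else 1 << width   # largest block aligned at lo
        while size > hi - lo + 1:               # ...that still fits the range
            size >>= 1
        prefixes.append((lo, width - size.bit_length() + 1))
        lo += size
    return prefixes

print(range_to_prefixes(3, 12))   # [(3, 8), (4, 6), (8, 6), (12, 8)]
```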
We can even use smart pruning to further reduce the number of entries in the table; for example, we can just drop the less significant table entries.
J: For example, there are some singleton table entries, different from the others, that stop a large range of table entries from merging into a single LPM table entry. We can selectively remove those table entries, and from the figure we find that we can remove around 20% of the total table entries without significant influence on the machine learning classification accuracy.
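A minimal sketch of that pruning idea, on a flattened table where the entry at index i carries a label: a lone entry wedged between two runs of the same other label is absorbed, letting the neighbours merge into one wide (for example LPM) entry. The run-length representation and the threshold-free rule are illustrative, not the exact published algorithm.

```python
def prune_singletons(labels):
    """labels: per-key labels of a flattened table. Returns merged
    (label, run_length) entries after absorbing blocking singletons."""
    runs = []
    for label in labels:                       # run-length encode
        if runs and runs[-1][0] == label:
            runs[-1][1] += 1
        else:
            runs.append([label, 1])
    pruned = []
    for i, (label, n) in enumerate(runs):
        prev_l = runs[i - 1][0] if i > 0 else None
        next_l = runs[i + 1][0] if i + 1 < len(runs) else None
        if n == 1 and prev_l is not None and prev_l == next_l:
            pruned[-1][1] += 1                 # drop singleton, widen left run
        elif pruned and pruned[-1][0] == label:
            pruned[-1][1] += n                 # merge with widened neighbour
        else:
            pruned.append([label, n])
    return pruned

# Five entries collapse into one: the lone 'B' is sacrificed.
assert prune_singletons(["A", "A", "B", "A", "A"]) == [["A", 5]]
```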
J: Next page. Also, when implementing in-network machine learning classification, we should make sure that it coexists with the normal functioning of the network device.
J: As shown in this framework, we have a P4 block here which can help us generate the P4 file not only with the classification logic, but also with the use case. This use case can be, for example, switch.p4, the L2/L3 reference switch designed by Intel for Tofino; this program is currently used as a reference program for in-network computing algorithms. And actually, our in-network machine learning classification algorithm executes in parallel with the normal switch logic.
J: With this L2/L3 switch, it does not consume a lot of resources. So we compared the resource consumption of these in-network machine learning model realizations against the reference.
J: We can see it's only five percent to 65% of this reference program, which is relatively small and makes it possible to coexist with the normal switch functions. We can also see the latency of our in-network machine learning algorithms: if we only implement the machine learning algorithm, shown in the pink bar, the latency is relatively small compared to this reference program; and if we run the machine learning algorithm together with this reference program, the L2/L3 switch, then for the implementable models we see the results shown in the blue bar.
J: So no matter how we use mapping techniques, or use LPM and different table types to reduce the table entries and save memory, or use parallelization to save stages, we still need to make some trade-offs in parameter selection when we try to use machine learning algorithms for in-network classification. For example, consider this radar graph.
J: If we want to make sure that the generated models have the same level of memory consumption, then, for example, if we want to use more features, the number of trees and the depth are limited; if we want to have more trees, then we can only use a limited number of features; and if we want a large depth, then the features and trees are limited accordingly.
J: So actually, when using this framework, we can play with it and...
B: [inaudible]
J: Yeah, thanks a lot. So actually, when you have the framework you can play with it, and based on the use case you've selected, you can select the set of hyperparameters that has the best performance on your use case. When we look at the maximum number of features that we can support, for example for the tree models, for example the ensemble models, random forest, or the random forest hybrid.
J: If the features are stored in ASCII format inside the packet, then we can support only around 30 features, and if it is jointly implemented with the reference switch program, then we can only support around 15 features. And if we use the models that use different mapping techniques, for example lookup-based solutions like support vector machine and K-means, then we can only support fewer than 15 features. This is also use-case dependent.
J: We should also make sure that we can update the model, doing runtime retraining and updates; our follow-up project mainly focuses on this problem. We mainly solve it by using digests and shadow updates, which means that when in-network machine learning is deployed in the data plane, it will automatically and continuously send digest information to the control plane. The control plane will collect this information, combine it with the existing training data, and use an unsupervised learning algorithm to relabel this data set, and this generated data will be fed to the Planter framework to retrain and regenerate the table entries.
J: These generated table entries will then be loaded into the data plane by using shadow updates. When loading them, we only update the table entries and do not touch the P4 program, and by using shadow updates we can do the runtime update without interrupting the normal functioning of the network functions, as well as the normal functioning of the classification service.
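A minimal sketch of the shadow-update pattern as described: keep a standby copy of the match-action entries, rebuild it offline, and switch lookups over atomically, so classification is never interrupted. This mimics the behaviour in plain Python; it is not the actual Planter or switch runtime API.

```python
import threading

class ShadowTable:
    """Two copies of the table entries; lookups always hit the active one,
    updates rebuild the standby copy and then flip the active index."""
    def __init__(self, entries):
        self._tables = [dict(entries), dict(entries)]
        self._active = 0
        self._lock = threading.Lock()

    def lookup(self, key):
        return self._tables[self._active].get(key)

    def shadow_update(self, new_entries):
        standby = 1 - self._active
        self._tables[standby] = dict(new_entries)   # offline rebuild
        with self._lock:
            self._active = standby                  # atomic switchover

table = ShadowTable({(0, 0): "benign"})
table.shadow_update({(0, 0): "benign", (1, 1): "attack"})
assert table.lookup((1, 1)) == "attack"
```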
J: Next, next. Thank you. Also, we should make sure that our in-network machine learning classification has good performance. This is mainly guaranteed by the Planter design, which fits the commodity programmable switch pipeline well. In all our designs, we do not use recirculation or resubmission, we have no control plane dependencies, and we do not use special modules or customization; we purely use the commodity switch ASIC.
J: Even so, the model size of in-network machine learning classification is still limited, and it is still impossible for us to implement a model like a random forest with 200 trees, 200 input features, or a depth of 100.
J: But what we can do is use hybrid deployment to achieve a high inference accuracy. Hybrid deployment means we deploy a small model inside the network, in the data plane, and use a large model on a backend server; in the switch, based on the decision confidence, we can decide whether the decision is made directly on the switch or the packet is forwarded to the backend for further processing.
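A minimal sketch of that hybrid decision, assuming the in-switch model exposes a per-packet confidence; the stub models and the 0.9 threshold (the value discussed on the next slide) are placeholders, not the actual implementation.

```python
SWITCH_CONF_THRESHOLD = 0.9   # switch confidence threshold from the talk

class StubModel:
    """Placeholder for a trained classifier; infer() -> (label, confidence)."""
    def __init__(self, label, confidence):
        self.label, self.confidence = label, confidence
    def infer(self, packet):
        return self.label, self.confidence

def hybrid_classify(packet, small_model, large_model):
    label, conf = small_model.infer(packet)   # runs in the data plane
    if conf >= SWITCH_CONF_THRESHOLD:
        return label                          # decided directly on the switch
    return large_model.infer(packet)[0]       # low confidence: send to backend

small, large = StubModel("benign", 0.55), StubModel("attack", 0.99)
assert hybrid_classify(b"pkt", small, large) == "attack"
```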
J: Next slide. For example, for an anomaly detection use case, in the upper right-hand figure we can see that if we select the switch confidence threshold on the x-axis to be 0.9, then the decisions for more than 70 percent of the traffic can be made directly on the switch, while the hybrid system accuracy, the blue line in the left-hand figure, is almost identical to the baseline; the baseline model is deployed on the server without the resource constraints.
J: So in summary, our work shows that in-network machine learning classification is feasible: we can run these machine learning models on a commodity switch at full line rate, we are able to make sure that the machine learning algorithm coexists with the use case and the normal functioning of the network device, and most of the models are scalable. And if you want to have an extremely high accuracy or a very large model, then we can use hybrid deployment to deal with this problem. We have validated this for several use cases.
J: For example, the anomaly detection use case and the smart IoT gateway, and we also tried a high-frequency trading use case. We are also looking for new use cases, so if you have any ideas, we are really happy to discuss them.
K: Thanks for the talk. I have a rather specific question. I think on one slide, with the resource consumption and all that, you mentioned that you could implement more features with one approach than with the other approach, and that if you combine it with the switch program, you got to basically the same level for both versions. I was wondering what kind of resource, or resource constraint, is actually the limiting factor there.
J: Okay, thanks. So for the constraints: for our solution of in-network machine learning algorithms there are several types of constraints, and the most common one is stage consumption. If we increase, for example, the number of features used or the number of trees (I'll just use random forest as an example), if we increase the number of feature inputs, or the depth of the tree, or the number of trees, then the number of stages will increase.
J
This
is
because
for
each
stage
there
is
a
limit
amount
of
supported
tables.
If
you
use
more
than
this
number
of
tables,
then
it
will
cause
extra
stage,
no
matter
how
you
paralyze,
IQ,
execute
these
tables
and
also
the
Mind
Race
is
another
constraint
because,
for
example,
for
some
of
the
table,
for
example,
as
I
mentioned
in
the
The
Ensemble
three
models
we
can
see
there
is
a
decision
table
and
usually
if
there
are
too
many
trees
and
the
decision
table
will
cause
loads
of
memory.
J: And if too much memory is needed, it will also cost extra stages. Also, we can see that if you use customized headers for the tree models, we can support more than 60 features, but if the features are stored in ASCII format, then we can only support 30 features. This is because the parser also has resource constraints, and we cannot support an unlimited number of protocols or fields in the header.
J: So the stage, memory, and header and parser constraints are the main constraints for our in-network machine learning algorithms. Of course, generally speaking, for in-network machine learning classification realization there are more constraints: for example, most of the switches do not support floating-point numbers, and they do not support some types of mathematical operations, for example multiplication and division. These are other constraints, but our solution can simply avoid using these operations.
A
Any
other
questions
we
can
send
the
questions
on
the
list,
I'm
particularly
interested
by
the
way
by
the
iot
implementation,
so
I'm
going
to
read
the
paper
for
sure.
Thank
you.
So
very.
C: Okay, thank you for your good presentation and your good paper. In South Korea we had a similar project; in that case we utilized deep learning, CNNs or RNNs, to classify traffic, and we found it works. So I'm wondering how you handle a new feature, a new picture, of the traffic, because, for example, a hacker or some bad actor may create some new traffic to attack the network. In this case, how can you handle this new feature, this new picture, of the data?
C: You use machine learning, so it means it learns from training data and finds some wisdom in the existing data. So how about new data, a new feature, new traffic? Can you handle that case?
J: Feature, or picture: currently we didn't test a use case with such features, but generally speaking, inside the network the feature is not, you know, packet-level but flow-level, and also in the real use case the packets will go through different routing paths. That's the reason why we didn't touch it yet, but there is some related work.
A: Okay, for moving along with time: thank you, thank you again, and we'll make sure to follow up. Great work.
B: Ike is doing it. So just a quick reminder: if you want to ask questions, use the Meetecho queue, just so that Marie-José, who is managing the queue remotely (it's not done from here), knows who is lining up to ask questions. Thank you.
K: Okay, hi again. I'm Ike Kunze, and this is joint work with Dirk and Klaus.
K: For those of you who have been following the research group a little bit longer: this is actually something that has come out of our transport issues draft that we had at this research group earlier. What we now try to do is basically, yeah, have a bit more thought on how we can actually combine, or what is the interplay between, transport protocols, the end-to-end principle, and computing in the network. This has resulted in the paper that we presented last week at the New IP and Beyond workshop, and yeah.
K: Today I would like to give you the main ideas that I presented there, as well as some additional thoughts that came up in discussions at that workshop; basically, an improved version of the talk that I held last week. Okay. I think in this context, here, it's pretty easy to say that we can see networks evolving from being dumb networks to being smarter networks.
K: Previously, or in the early days, we could assume that if we had a packet coming into the network, maybe sent from host A (this yellow packet), it would also come out on the other side mainly unchanged. The network is then often seen as just a dumb pipe that forwards the packets, and this is somewhat encompassed in the end-to-end principle, and it is typically also used as a basis for transport protocols.
K: For example, in TCP, when we think about the reliability aspects, TCP simply assumes that the packets are unchanged in the network. Now, with COIN, this changes, or can change, as we can do more stuff in the network, so maybe change the color of the packets.
And thus we can no longer speak of the network as being just a dumb pipe, and this also breaks assumptions for the transport layer, now that we can make changes in the network. This has already been discussed, for example, in a recent HotNets paper.
K: So we are not the only ones thinking about this topic, but what we now try to do is to think about this more generally, or have more general considerations regarding it: basically, that we would actually need a COIN-enabled transport protocol that ideally also respects the end-to-end principle. And just to set your expectations right for this talk: I won't provide a lot of answers, but mainly I will raise a lot of questions that perhaps we as a research community have to answer eventually.
K: In the following, I would first like to go back to the end-to-end principle, then talk a bit about a few considerations that we had on how such a solution, such a transport protocol, could look, and then afterwards also share a few more thoughts on this whole topic.
K: So, starting with the end-to-end principle: it goes back to a paper from the 1980s, so this principle is actually a bit older than myself, and probably some people in the room have known it longer than I have. The end-to-end principle basically states that a function can be completely and correctly implemented only with the knowledge of the applications at the endpoints.
K: So it basically says that the endpoints have to know what's going on inside the network as well, and this is then seemingly at odds with COIN, because with COIN we would assume that something might happen in the network. However, there is also a second sentence, actually two sentences further down in that paper, which states that an incomplete version of the function can also be useful as a performance enhancement if it is provided by the network.
K: And now, if we think about COIN as being such a performance enhancement, then this could again align COIN with the end-to-end principle. In this context we then wondered a bit more about the relationship between COIN and the end-to-end principle, and we mainly focused on two aspects: first, the location of computations, and second, what kind of computations can actually be performed.
K: Regarding the location: I think, even in this room, if I asked you what kind of computations you would consider to be COIN computations, I would probably get, let's see, maybe 30 different answers. A strict definition of COIN would be that we can, for example, only perform computations on networking devices, so really on switch hardware.
K: Basically, as we've seen in the previous presentation, for example. However, there are also freer definitions of COIN, basically seeing COIN as a subset of edge computing or cloud computing, maybe only enriched with additional functionality in the network, but basically having some computations between the end hosts. In our paper we actually tried to get around making a strong statement at this point and just generalized to COIN elements.
K: So, just saying: okay, we now consider any capability that we have there between the endpoints as a COIN element, and, yeah, it's up to anyone else to decide what exactly is COIN or not. What we did distinguish, though, is where we perform the computations, in relation to the endpoints.
K: Here we have, in red, the typical or the fast end-to-end path, and we then distinguish between two types of computations. One is on-path COIN elements, directly on this shortest path, for example, and the other computations would be on off-path COIN elements, where we would then need to reroute the packet, or where the packets would need to take a slight detour.
K: Then, as a second aspect, we thought about what kind of functionality can be provided by the networking device, or by the COIN elements, and here we took a rather functional view, I would say, also stemming from the end-to-end argument, where Saltzer et al. are always talking about the function that is provided.
K: In this example, we have a function, capital F, consisting of a few sub-functions, that is computed between host A and host B, and we then thought about what kind of functionality can be provided by the COIN element in the middle. The first option would be, for example, an F1-prime functionality: an incomplete version of the original function, maybe also tweaked a bit so that it can actually be performed here on the COIN element. As this is still part of the end-to-end functionality, we call this an end-to-end-function-internal computation.
K: So basically we have a functionality that was originally part of the function, and we can now also place it on the COIN element. The other alternative would be to have a function that is not part of the original functionality, symbolized here as a function G, and we call this an end-to-end-function-external computation. Here we were wondering whether this is still something that we would like, or that can be end-to-end compliant, or not.
K: In the following, we consider end-to-end-function-internal computations, which we can then place on either on-path or off-path COIN elements. We then thought about how and where we could actually place such functionality, because so far I have only discussed the general aspects and not how we would actually use this in practice. There we basically came up with two design principles, as we called them, and the first one concerns the location where we can place the functionality.
K: So, considering we have two locations here where we could place functionality, F1-prime and F1-double-prime, the question is: do we use F1-prime, or do we use F1-double-prime? The first two aspects that we considered here are rather straightforward, I would say.
K
I
would
say
so
we
think
that
if
we
use
additional
coin
functionality,
then
we
should
still
adhere
to
the
original
requirements
of
the
functionality,
so
basically
don't
break
anything
that
has
been
working
before
and
then
as
a
second
aspect,
and
that's
basically
now
belongs
to
the
second
part
of
the
end-to-end
principle
that
I
mentioned
earlier.
K
We
also
think
that
it
should
then
also
enrich
the
original
functionality,
but
then
we
could
still
have
the
case
that
both
of
these
function-
placement
that
we
have
would
yeah
be
valid
for
both
of
these
aspects,
and
then
we
thought
about
what
kind
of
tirebreaker
could
we
have
there
or
how
we
could
then
actually
derive
the
decision
that
we
want
to
have
and
for
the
inspiration,
we'll
then
look
at
the
Simplicity
principle,
which
basically
states
that
we
should
always
Thrive
to
the
simplest
solution
or
always
try
to
reduce
complexity
as
much
as
possible,
and
we
then
basically
translated
this
into
that
the
com,
the
coin
functionality
should
optimize
functional
complexity
against
a
key
communication
requirement.
K
Okay,
fancy
words:
what
do
we
actually
mean
with
that?
So
if
we
have
now
here
these
two
functions
and
we
then,
for
example,
consider
the
latency
as
key
communication
requirement.
Then
we
would,
for
example,
use
the
lower
functionality
if
it's
possible
to
deploy
the
function
that
we
need
there,
because
in
that
case,
we
would,
if
we
consider
that
the
rerouting
to
the
upper
function
would
take
a
little
bit
longer
time.
Then
we
would
have
the
lower
latency
at
the
lower
path
and
thus
would
choose
this
location.
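A minimal sketch of that tiebreaker reduced to code: among candidate placements that meet the original requirements, pick the one that is best on the key communication requirement (latency here). The candidate data and field names are invented for illustration.

```python
# Two valid placements of the same sub-function; the off-path one pays a
# detour penalty. Values are invented.
candidates = [
    {"name": "F1' (on-path)",   "extra_latency_ms": 0.2},
    {"name": "F1'' (off-path)", "extra_latency_ms": 1.5},
]

def select_placement(candidates, meets_requirements):
    # First filter: do not break the original requirements.
    valid = [c for c in candidates if meets_requirements(c)]
    # Tiebreaker: optimize against the key communication requirement.
    return min(valid, key=lambda c: c["extra_latency_ms"])

best = select_placement(candidates, meets_requirements=lambda c: True)
assert best["name"] == "F1' (on-path)"
```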
K: We then also had a different view on this whole topic, basically regarding the first part of the end-to-end principle, so the knowledge of the endpoints. Here we again have the overall functionality F, and now we place a new functionality in the network.
K: It is also an incomplete version of the function, but we place it there without the knowledge of the endpoints, and in this case it could very well introduce a lot of problems, because now the endpoints are still computing the whole functionality, but in the network we also compute some part of the functionality, and then stuff can break, also performance-wise, I think.
K: We then thought about considerations for the transport protocol, obviously mainly coming from our previous work that we collected here in the context of this research group. Why did we actually choose the transport protocol? Well, it's a function that is traditionally implemented in the endpoints only, and it's also the one layer that translates from the network to the applications, so it will also be the layer that is affected by any changes that are induced in the network. In this context we then considered several aspects.
K: In this paper we focused on addressing, so basically: how can we choose which function or computation to execute in the network? The second part is flow granularity.
K: That is, deciding whether we need a stream notion, as for example in TCP, or whether we are okay with a datagram notion, as in UDP. And then, finally, also more evolved communication concepts, like collective communication: basically not only having two endpoints in the communication, but a few more of them. For today I would like to focus on the addressing part; if you're interested in more details on those three aspects, have a look at our paper.
K: Now the question is: how can we do this if we want to have a function F1-prime somewhere in the network? The first option that we have is some form of implicit integration: we don't really address it explicitly, but just place the functionality somewhere in the network. In this case we would try to guess, maybe, where the packets will go and then place the functionality strategically on that device, so that the packets will actually be processed by that computation.
K: This works in a lot of cases, especially in smaller networks. It's actually also something that is typically used for research prototypes; I've done that myself in quite a lot of projects. However, if we now scale up the networks, then at some point this becomes really hard to maintain, and in some networks we don't even know where packets really go, so it's really tricky to do it this way. Additionally, it also only allows for the on-path notion of COIN, as we, yeah, just place the functionality on the path, so off-path is not possible.
The second option is then an explicit steering mechanism. Here we really apply some kind of tag to the packet, to say, for example: okay, we would like to have the F1-prime functionality up there. In this case we would also have the off-path notion. However, it's really unclear how you would actually like to do this addressing.
K
Addressing
and
I
will
come
to
a
few
possible
solutions
later
on
as
well,
but
this
is
then
really
something
that
we
we
need
to
think
about
how
we
would
like
to
implement
this
and
then,
assuming
that
we
have
some
form
of
addressing
so
mainly
perhaps
focusing
on
the
explicit
addressing
for
a
moment.
If
we
then
have
two
different
locations
where
we
have
functionality,
then
we
would
might
want
to
decide
which
kind
of
this
or
which
of
these
two
functions.
K: And then the question is: how would we like to do this? Would we always specify the exact location, for example saying, okay, we would like to have the upper F1-prime functionality? Or would we only specify some kind of constraints, for example saying we would like to have the one with the lower latency, or with other requirements? Or can we, as an end host, not do anything about it at all, and just let the network handle it, with the network deciding which function is selected?
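Those three options can be read as three shapes of an addressing field carried by the packet. A minimal sketch, with invented field names (real candidates, such as SFC or ICN mechanisms, come up later in the talk):

```python
from dataclasses import dataclass, field
from typing import Optional, Dict

@dataclass
class CoinPacket:
    payload: bytes
    # Option 1: name the exact function instance to execute.
    requested_function: Optional[str] = None          # e.g. "upper F1'"
    # Option 2: give only constraints and let the network pick an instance.
    constraints: Dict[str, float] = field(default_factory=dict)
    # Option 3: leave both empty and let the network decide entirely.

explicit = CoinPacket(b"data", requested_function="upper F1'")
by_constraint = CoinPacket(b"data", constraints={"max_latency_ms": 1.0})
network_decides = CoinPacket(b"data")
```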
K: So, as I said, we mainly raise questions and don't answer them; quite a lot of them here. And then, if we have some form of instance selection mechanism, we might also want to keep affinity to those instances, because, for example, if we would like to build up a certain level of state at some point, then we would always want to go to the same service instance. And here the question then is: how do we actually realize this affinity?
K: Do we set that up already during an orchestration phase, before we actually have the first computation? Is this done on the fly? So again, a lot of questions arise here, and I think in the paper we raise even more of them. So if you would like to read a lot of questions, or get a lot of, yeah, ideas for questions, then have a look at the paper.
K: Yeah, then, maybe trying to summarize this a bit: we also had a look at existing solutions, because obviously there's a lot of internet technology already out there that might be applicable to these problems. One is, for example, source routing. Here we can, as an end host, already define which path we would like the packet to take through a network. However, it is not directly something that we can use for defining the functionality; consider, for example, that we might have different functions on one device that we would need to address.
K: Yeah, we might need multiple IP addresses, for example one for each of the functions, which might become quite a lot. However, there is service function chaining, which already allows for steering traffic through functions, so if we interpret COIN as some form of network function or service function, then this might be applicable. Similarly, we have information-centric networking as well, where we address information rather than endpoints, and thus, again, if we interpret COIN as some form of information, then these might also be applicable concepts.
K: The main takeaway from our paper is that there are ways in which COIN can be aligned with the end-to-end principle, but we then have to really carefully think about which solutions we actually pick for all the questions that we've raised. So this is the part that was basically the same last week at the New IP and Beyond workshop, and now maybe something more provocative that was discussed in the context of that workshop.
K: There was actually another talk there as well, on this Tintin protocol, and we then thought about, yeah, how you would actually do this in practice now. We phrased a lot of questions regarding transport protocols, but we already see that there are two rather specialized protocols, and the question that we then discussed was, yeah: do we now want to have one global protocol that basically solves the end-to-end principle problems of COIN?
K: Or do we perhaps have a couple of core features, so basically a core protocol that we then extend for the different specialized domains, so that these protocols have some way of interacting that is perhaps standardized? Or does each of these protocols really only apply to its specific limited domain? And then, maybe, you know, as a really provocative last question...
K: Might it also be possible to somewhat bend the end-to-end principle in these limited domains? Because, after all, if you think about industrial networks, for example, then the whole network is basically on the premises of one entity, and so we could already think that this end-to-end principle aspect is covered, because if someone deploys solutions there, then they basically are the endpoints as well as the network.
K: So, yeah, really trying to be a bit provocative, maybe, here at the end, to, yeah, inspire some thoughts and arguments on all of these topics. And with that I would now like to wrap it up. In the beginning, I showed you why many people think that COIN is at odds with the end-to-end principle, but then, I hope, I've shown you a way of thinking about it.
K: We then talked about two design principles for COIN, and, finally, I had a brief discussion about considerations, at least for the addressing part. As promised, here are the two QR codes. And now: thank you a lot for your attention, and I'm happy to have a discussion with you, or to hear your comments on this. Thanks.
A: Thank you. You raised a lot of the questions that we've asked ourselves in the past few years. There are three people in the queue; actually, now four people. Dirk, you want to start?
E: Yes, my pleasure. Hey, Ike, nice presentation; you are raising many questions here. I think this is a very comprehensive approach that you present here, but I was thinking you are probably making your life unnecessarily hard. So the question of how you should design this, or what the services provided by certain protocols should be, really depends a lot on what you want to do, and I think it's really hard right now to, you know, conceive a hypothetical COIN protocol without actually knowing what you want to do.
E: The end-to-end principle discussion is a good example for that. I mean, these principles were formulated with a certain goal, and so they might not necessarily be the same goals that you have when you think about computing in the network.
E: When your objective is to build an internetwork based on packet switching and statistical multiplexing, yeah, then you need to think about the control power of the end systems, and about enabling different kinds of applications, transport functions, and so on; this is a model that fits very well the internet, and IP, and so on. But for computing in the network we don't necessarily have the same goal, and that means we don't necessarily have to constrain ourselves by these principles.
E: Just one example: you could conceive something like a protocol for computing in a network like a dataflow system, where you connect functions and each function does something, produces new data, and so on.
E: That would, of course, be very much at odds with the end-to-end principle, but sometimes it's just what you need, and so I think it's probably a bit too much, or maybe also premature, to think about, like, the grand unifying computing-in-the-network protocol for all kinds of applications.
E
I
think
this
needs
to
come
from
experience
or
yeah
on
experiments
for
them,
especially
in
different
areas,
and
then
you
could
think
about
so
what
does
use
case
a
need,
use,
case,
b
and
so
on,
and
is
there
a
need
at
all
for
an
internet
level
protocol?
That's
another
question.
K: Thanks for your thoughts on this. So, actually, our goals were not to have this one unified protocol. As you said, we don't have a lot of practical experience with the large-scale deployment of these things.
K
To
think
about
from
our
standpoint
of
today
how
we
could
actually
align
these
aspects,
maybe
also
to
give
a
lot
a
bit
of
guidance,
maybe
for
the
first
larger
deployments
of
solutions
like
that,
so
that
we
can
then
afterwards,
maybe
in
a
few
years,
come
back
to
these
conservations
considerations
and
then
think
about
whether
they
actually
made
sense
or
whether
we
actually
need
such
a
large
scale
protocol.
And
this
was
actually
then
maybe
also
why
we
had
these
discussions
that
I
presented
on
the
second
to
last
slide.
K: So, do we even need this large-scale protocol, or are we fine with having specialized protocols for specific applications? Because those two papers that I referenced there actually provide solutions for specific problems, and I think they, yeah, solved them quite well and could already be used as a first step in this direction.
E: Yeah, thanks. Let me just quickly add: I think it's also a question of how you frame this problem. So if you have the mental model of, say, getting data from one end to the other and then doing some computation in the middle, this may give you some kind of TCP-like framing or something; I'm not sure that's necessarily the best, say, mental model for thinking about computing in the network.
E: So again, thinking about, like, dataflow systems: instead of it being all about, you know, carrying bits, with some modification in the middle, from one end to the other, it's more like you have discrete steps of computation, and each step produces something completely different. And so maybe then this whole connection, or transport, metaphor is not exactly helpful.
I: Yeah, hi, thanks. Well, I like the approach of trying to think about how to align more with the end-to-end argument. To me, the end-to-end argument has basically two advantages if you apply it. One is robustness, because you have less functionality inside the network. So one aspect is maybe to think about how those partial functions influence, or may influence, each other; I mean, there could be some kind of interdependencies or side effects that should be avoided.
I: I didn't read your draft and, you know, all your work, so that's kind of new to me; maybe you discussed it already. The other thing is innovation, protection of innovation: I mean, the argument is that it's hard to change the network, but now we have functionality in place that allows that, like P4 and what have you.
I: One more observation: when you were talking about addressing, I think it complicates things a lot if you require applications to have knowledge about your network. One option is that the network tries to figure out what should be done in order to support the application.
I: Another approach would be to require that the application has some knowledge of where to locate the functions, or where to invoke them, and I think this is maybe not a good direction to take, because it makes things more complex, and I can remember discussions here in the IETF where the application developers said: please, no, don't do that.
K: Thanks for your thoughts. Maybe to quickly answer a few of them: regarding the robustness aspects, or the interaction between the functions, I'm not sure if we actually included it in the paper, but at least in the draft we discussed stuff about this, for example regarding reliability, so basically having retransmissions, or what to do if we would like to have some kind of reliability.
K: For example, how would we handle it if we have two or three functions and the packet gets lost after the second function, when we've already changed state in the first and second function? How can we actually handle this? So we have at least, I think, discussed these problems in essence, and, as I already said, we mainly raise questions and don't provide many answers. But that's a very valid point.
K: And then, exactly: that's why we've basically discussed these different possibilities that we have there. I think at least the source routing aspect would require knowledge by the endpoints to route it through there, while if we take the information-centric networking aspect and only state that we would like to have this functionality, then this would, of course, again reduce the complexity. So, of course, there are different trade-offs.
H: A very interesting paper, and I think the connection between the transparency of the end-to-end principle and addressing seems to be a really central issue. It's very similar to some of the aspects that we addressed in the paper presented at the last IRTF meeting. I'd be very happy, very keen, to develop that further; I think we started from slightly different starting points and came to some fairly similar conclusions, and I think understanding how you define your addressing and how you define your transparency is particularly important.
N: Hi, Lars Eggert. First, yes, Public Service Announcement: if you want to be in this room, you've got to wear a mask, you've got to wear it over your mouth and nose, and it's got to be an FFP2, N95, or better mask. If you don't want to comply, you can leave the room and join the session from your hotel room or somewhere else. So please comply; I know everybody's ears are falling off.
N: I know it's not consistent with what's happening outside the session, but it is the community consensus that this is the policy that we have. So thanks for complying, and provide input for the next consultation on what we're going to do about masks. That's it. Nice talk, thank you! I've got sort of two points that correlate to the two parts you had in your slides.
N: So, I work for a company, and we are in, you know, the very early stages of COIN: we put stuff on FPGAs and we're thinking about what we might want to put further into the network. And so the outline here in the beginning about the end-to-end principle sort of matched what I thought intuitively, but I hadn't really written it down or thought about it formally, in a structured way. So that was very useful, thank you.
N: I think you've got something there that makes sense, so think further in that direction; very good. And second, on the transport protocol side...
So, again from the sort of practical view: one, I want to take stuff that's expensive on the CPU and put it somewhere else where it's cheaper, so I want to do the minimal thing to make my current workload, my current application, faster.
N: That means I don't want to bother with new protocols; I want to take stuff that costs money now, put it somewhere where it's cheaper, and have everything else be the same. And specifically, I want to do it for stuff that is sort of internet-related, right?
N: So the more you change, and the more it becomes custom, although it might be optimized in some way, the harder it gets for me to actually start doing this. And so, since this is the IRTF, which is close to the IETF, I would sort of encourage people to maybe think about the low-hanging fruit that is sort of easy to get to, provides a bunch of benefit already, and sort of moves us in a direction where, later on, we might be able to pick up some things.
D: Yeah, Eric Norman. So I really like sort of going back and thinking about this stuff in terms of what the implications are, and sort of framing it the way you've done it. One thing I didn't see here, which ties into robustness, is this notion of, indeed, the issues around state in the network, right, and the sort of fate-sharing aspects we have when all of the session- and transport-related state lives at the endpoints. So it might be useful to add that to the sort of list of things to consider.
D: In this picture, you raised an interesting question about sort of a common transport protocol, whatever that means in this context, as opposed to something specific. I think we already have examples where people are proposing things that, you might argue, are not really compute; there are these things called in-band, inline OAM, where your compute is just extracting state from the routers as you pass through them, right?
D: So you can actually now measure things in more interesting ways. So we have that, and then we have ICN at the other end, right? Yeah. What are the things, from a research perspective, sort of ignoring Lars's "you know, we would like to deploy something tomorrow": what's the sort of spectrum over there, right? Yeah, I think that's something interesting to consider and continue to explore.
K: Perhaps we'll need more practical experience first, as Dirk mentioned on the first question. And then, regarding the robustness aspect: as I said in response to the earlier question, I think we have some discussions on that in the draft. I'm not sure if we had discussions about state specifically, but we at least had it in mind when we wrote about this stuff; so it was there, but it is also definitely something that we need to consider when deploying things in these directions. Yeah.
A: Okay, so, for the sake of time, we'll go to the distributed ledger. Thank you very much, Ike, and Dirk also, who was part of this. So, yes, next presentation.
M: Thank you, yeah. So, hello, everybody; thank you for attending this session. I'm talking today about insights on the impact of DLTs on provider networks. This is joint work with Dirk Trossen, Mike McBride, and Xinxin Fan. Next slide, please.
M: We have been working on this before: we published some related material in this ISE white paper. We started with a simple experiment, trying to measure how DLTs behave in the internet: we realized a sort of passive measurement, we did some basic analysis, and we also wrote this draft that is published there. The upcoming paper has a little bit more analytical, structured analysis; we compare with other studies, and it is about to be published. Next slide, please.
M: First, for an understanding, at a high level, of how permissionless distributed consensus systems are thought of and how the designers approached the permissionless property: they basically take the so-called distributed hash tables, which are nothing else than a place where files live and where you can push and retrieve content. The goal of these distributed hash tables was, in the beginning, to decentralize and distribute a file system, in a permissionless fashion, over a faulty network.
M: Then, over these network files, one can agree on the status of a state machine: the goal of the distributed consensus systems is to decentralize and distribute a state machine, in a permissionless fashion as well, over a faulty network. And after that came the distributed ledger technology, which is nothing else than a consensus-oriented system that agrees on distributed content.
M: We identify in these systems three basic interactions, which we call the DLT service interactions: a client, for example, commits a transaction or a request to the distributed consensus system; a miner, or another peer, can commit a found block to the truth, voting in the distributed consensus system using previously discovered peers; and any client, at the end, can read the block. How these interactions are realized: next slide, please.
M: Actually, we identify a key mechanism to realize these interactions: the so-called atomic broadcast. For the case of these larger-scale systems, this atomic broadcast is randomized over a set of receivers, mainly to avoid possible collusion of a stable receiver set and to ensure the distribution of ledger information across all of the peers.
M: Over time, this approach provides the permissionless property, but it also has to deal with the scale of these distributed systems, and it has been deployed as a peer-to-peer system on top of IP networks, using UDP, TCP, and QUIC. Next slide, please. We identify in these deployed systems these communication patterns: on the left-hand side, for example, we identified the discovery part of the protocol, and, on the right-hand side, the pool establishment part of the protocol.
M: The pool is necessary to execute the randomized broadcast which, as said before, is the key, core mechanism to diffuse information over the entire system, to agree on the state of the system. The discovery part starts with a load of bootstrap nodes from a list of DLT peers that is hosted at specific IP addresses all around the world. We randomize this list and try to contact the peers, executing UDP pings and pongs; we identify which peers are reachable and which ones are not.
M: To the ones that were reachable, we sent queries to request more nodes. With these nodes, we again randomize this list of DLT peers to establish upper-layer communication, something based on TCP and transport security; we execute a capabilities exchange, and we end up adding the peer to the pool that we are going to use for executing a broadcast, or trying to execute a broadcast, to the entire system. Next slide, please.
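A minimal sketch of that two-phase pattern, with all network I/O replaced by deterministic stubs; the peer names and helper functions are invented, so this shows only the control flow, not a real DLT client.

```python
import random

BOOTSTRAP = ["198.51.100.1", "198.51.100.2", "203.0.113.9"]  # example IPs

def udp_ping(peer):              # stand-in for the UDP ping/pong probe
    return hash(peer) % 3 != 0

def query_neighbors(peer):       # stand-in for the 'give me more nodes' query
    return [f"{peer}-n{i}" for i in range(4)]

def tcp_tls_capabilities(peer):  # stand-in for TCP + TLS + capability exchange
    return hash(peer) % 5 != 0

def establish_pool(pool_size=8):
    candidates = random.sample(BOOTSTRAP, len(BOOTSTRAP))  # randomize the list
    discovered = []
    for peer in candidates:                  # discovery phase, over UDP
        if udp_ping(peer):
            discovered += query_neighbors(peer)
    random.shuffle(discovered)               # randomize again before upper layer
    pool = [p for p in discovered if tcp_tls_capabilities(p)]
    return pool[:pool_size]                  # the broadcast targets

print(establish_pool())
```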
M: Each peer needs to maintain this constantly changing pool, so it means that it is always, constantly, exchanging information about signatures, TLS, TCP, and sessions in general. We also identified, with regard to resilience and reliability, how the failing nodes cause latency in the pool establishment and hence also delay the distributed consensus.
M: And for the case of content retrieval, for the cases of distributed hash tables, we identified that matching capabilities across these peers at scale is very costly. We also identified the unicast replication needed for the DLT to work, as well as some issues with IP address privacy, because when you try to join these networks, your IP address is exposed to the entire system.
M: So we set up a large-scale experiment, and we classified the peers that we observed; we compare, and we also identify some geographical distribution and some centrality properties. But for today we are presenting the more network-oriented results, and we will start with the pool establishment time.
M: This is the time that a single peer, from a local computer, will need to build the pool of peers that it is willing to broadcast information to. In the first plot, on the left, we identify, for a single sample, how long it took to complete the pool establishment.
M
We identified the time to fill one-third of the total pool size, T at N over 3, and we treated this as a single random variable. We analyzed our entire experiment and, plotting this probability distribution, we compared it with and approximated it by a log-normal and a power-law distribution for the two random variables. Next slide, please.
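That kind of comparison can be reproduced along the following lines; this is a hedged sketch assuming SciPy, and the sample data is synthetic rather than the talk's measurements:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for measured pool-establishment times T(N/3), in seconds.
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=3.0, sigma=0.8, size=500)

# Fit a log-normal and a power-law (Pareto) model to the same data.
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(samples, floc=0)
pl_b, pl_loc, pl_scale = stats.pareto.fit(samples, floc=0)

# Compare the fits, e.g. via the Kolmogorov-Smirnov statistic (lower is better).
ks_ln = stats.kstest(samples, "lognorm", args=(ln_shape, ln_loc, ln_scale))
ks_pl = stats.kstest(samples, "pareto", args=(pl_b, pl_loc, pl_scale))
print("log-normal KS:", ks_ln.statistic, " power-law KS:", ks_pl.statistic)
```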
M
Please. Why is this time somewhat huge, and what are its components? We tried to analyze it from the point of view of identifying what is happening when discovering a peer, for outgoing and for incoming requests. As shown in the plot, for example on the left side, we show the number of attempts that our node tried to execute over the internet and, out of these, how many were reachable.
M
M
Yeah, after we discovered the peers that we are willing to communicate with, we start by executing an attempt at TCP socket initialization, and that is plotted for outgoing requests on the left-hand side as the red curve. Out of these, for example, we were not successful in a certain number of transport security negotiations or capability checks of the capability protocol.
M
Next slide, please.
M
So we were thinking about these observations: for example, that miners provide a service capability to other miners.
M
The communication is somewhat constrained, and so we need to negotiate TLS capabilities and a certain sort of hardware, and we need to identify the blockchain checkpoint to get to the right miners. What is more, this group of peers is instantaneously randomized to ensure protection against collusion, or what are called eclipse attacks.
M
So the pool creation is done at every peer and is the core mechanism enabling this operation of trying to execute a broadcast to the entire system based on unicast operations. These operations are done on a fixed group size, which is defined through a heuristic; there is some theory balancing this heuristic to define when the system is going to converge and when it is not going to converge.
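One way to get intuition for that convergence heuristic is a toy gossip simulation; this is my own illustration, not the speaker's model, and it simply shows that with a fixed fan-out k the number of rounds needed to cover n peers grows roughly like log n:

```python
import random

def gossip_rounds(n: int, fanout: int, seed: int = 0, max_rounds: int = 1000) -> int:
    """Simulate randomized broadcast: every informed peer forwards to
    `fanout` peers drawn uniformly at random in each round."""
    rng = random.Random(seed)
    informed = {0}                         # peer 0 starts with the message
    rounds = 0
    while len(informed) < n and rounds < max_rounds:
        new = set()
        for _ in range(len(informed)):
            new.update(rng.randrange(n) for _ in range(fanout))
        informed |= new
        rounds += 1
    return rounds

for k in (2, 4, 8):
    print(f"fanout={k}: {gossip_rounds(10_000, k)} rounds to cover 10k peers")
```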
M
M
The routes become a pool of service instances to enable instantaneous randomization at the endpoint; the reachability can be improved through the encoded constraints, or in an in-network structure, and we can replace the randomized unicasts with a forwarding multicast capability that is built into the network for a fixed set of peers.
M
Some thoughts are very welcome, and some questions as well; I am happy to discuss and take your questions. Thank you very much.
A
Are there any questions? If not — actually, we are getting late, so maybe we want to move to the next presentation, which I am going to load right now.
L
And so the subject of this talk is a presentation of this draft, which is called IoSE, a system for the internet of secure elements; it is an architecture of secure elements in the internet whose resources are identified by URIs. So, next slide, please — oh, okay, you can control it, I don't know. So, two words about secure elements: secure elements are tamper-resistant microcontrollers with embedded software, and they have an Evaluation Assurance Level of up to EAL6+, on a scale running from one to seven according to Common Criteria.
L
There are a lot of secure elements produced every year — nine billion last year — with a small CPU and a very modest quantity of RAM and non-volatile memory. There is a next generation, and in the next generation you have more RAM and more flash, and it is worth noticing that all these chips include a crypto processor.
L
The legacy communication uses a serial interface normalized by the ISO 7816 standard, but you can also find an I2C interface or an SPI interface. They exchange small packets, which are named APDUs by ISO 7816; small packets means about 256 bytes. They have an open programming environment of their own, for example Java Card, with the six billion Java Cards produced and deployed every year; let's say that most SIM cards, for example, use it.
L
It means that you can write a program in the Java Card language, which is a subset of Java, or you can instead use a usual programming language like C and so on. And last but not least, there is a secure software management framework, which is standardized by the GlobalPlatform consortium and supported by almost all secure elements; it is used to list, delete and upload applications in a secure element. For example, mobile operators use this over-the-air technique in order to download applications into SIM cards. So, next slide.
L
L
So, in this graph, we want to connect the secure element to the internet, and why do we want to do that? We want to deploy online cryptographic resources for internet users. The idea is that when you need to store some keys or cryptographic resources in an offline mode you may use a secure element, and so it may be useful for internet users to have the same level of trust for online resources; and so we want to identify these resources by a Uniform Resource Identifier.
L
The issue is that, obviously, we will need an additional processor to do that, with a network interface and TCP/IP connectivity. We need to ship the GlobalPlatform support for on-demand applications — not mandatory, because you can use a pre-loaded application, but for on-demand applications the first step is that the user will ask the provider to download a new application into a secure element, so we need this support.
L
We need a protocol to access the secure element resources, and in this draft we chose basically TLS as the protocol for the user-facing service interface. We need to define a secure element naming in order to identify these secure elements over the internet, and we need an attestation procedure for on-demand applications. The goal of the attestation procedure is to give the user a sufficient level of trust that he is really using the secure element he believes he is using, with the right application inside and the right hardware and provider. So, next slide, please.
L
It uses at this moment a pre-shared key, that is to say, a symmetric secret associated with a server name and a port according to a given scheme. In the current GitHub open application we simply use a command line over TLS — it is a kind of shell secured by TLS — and according to this scheme it sends some queries to the secure element and gets the response.
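A hedged sketch of such a PSK-secured command session follows; the endpoint, the identity string, the key-derivation scheme and the `sign` command are all invented for illustration, client-side PSK in Python's ssl module needs Python 3.13+, and while the draft profiles TLS 1.3, the ssl PSK callback is documented for TLS 1.2 and below, which is what this sketch uses:

```python
import hashlib
import socket
import ssl

SERVER, PORT = "se.example.net", 4433           # hypothetical secure-element endpoint
MASTER_SECRET = b"demo-master-secret"           # assumed provisioned out of band

def derive_psk(server: str, port: int) -> bytes:
    """Toy scheme: bind the pre-shared key to the (server name, port) pair."""
    return hashlib.sha256(MASTER_SECRET + f"{server}:{port}".encode()).digest()

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE                 # the PSK replaces certificate trust
ctx.maximum_version = ssl.TLSVersion.TLSv1_2    # ssl's PSK callback targets <= TLS 1.2
ctx.set_ciphers("PSK")
ctx.set_psk_client_callback(lambda hint: ("client-id", derive_psk(SERVER, PORT)))

with socket.create_connection((SERVER, PORT)) as tcp:
    with ctx.wrap_socket(tcp) as tls:
        tls.sendall(b"sign sha256 deadbeef\n")   # shell-style command line to the SE
        print(tls.recv(1024))
```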
L
So, in the draft, you see the server components. First, for the administration plane, the protocol is RACS, which was designed, let's say, a few years ago, and which uses its own daemon on the server side. For the service plane we use TLS for secure elements, which is a TLS 1.3 server with pre-shared key and which manages the exchanges for computing with the secure element grid; this also uses a TCP daemon, plus the attestation procedures.
L
L
So, this is a short review of the administration plane. RACS, basically, is something that works over TLS using certificates both for the server and the client.
L
It is a PKI model, and RACS is able to transport ISO 7816 packets; in order to do so, it uses something called a secure element identifier, which can be, for example, a physical slot, an I2C address, or a name. Because RACS transports ISO 7816 APDUs with policy access, it is able to transport the GlobalPlatform protocol, and so it is able to perform the delete and upload application operations in secure elements. So, next slide, please.
L
For the service plane, we use something called TLS for secure elements, let's say a particular profile of a TLS server that at this moment uses a pre-shared key and a server name. For the TLS server name, we put the server name in a field of the Answer To Reset, which is obtained when you physically reset the secure element: when you use the reset pin on a secure element, you collect something called an Answer To Reset, and there is an API to put data into this Answer To Reset.
L
In it you have something called the historical bytes, up to 15 bytes, and you have some APIs that enable you to put whatever you want in the historical bytes. So, really, at this physical level we put the server name, and then we define an interface to transport TLS packets over the ISO 7816 interface.
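To give a feel for that transport mapping, here is a hedged sketch of chunking one TLS record into short APDUs; the CLA/INS byte values are invented for the illustration, and the real draft defines its own encoding:

```python
# Assumed, illustrative APDU header: CLA=0x80, INS=0x54 ("transport TLS"),
# P1 = more-fragments flag, P2 = sequence number, Lc = fragment length.
MAX_DATA = 255  # short-APDU data field limit in ISO 7816

def tls_record_to_apdus(record: bytes) -> list[bytes]:
    chunks = [record[i:i + MAX_DATA] for i in range(0, len(record), MAX_DATA)]
    apdus = []
    for seq, chunk in enumerate(chunks):
        more = 0x01 if seq < len(chunks) - 1 else 0x00
        apdus.append(bytes([0x80, 0x54, more, seq & 0xFF, len(chunk)]) + chunk)
    return apdus

# A 600-byte TLS handshake record becomes three APDUs of at most 260 bytes.
for apdu in tls_record_to_apdus(b"\x16\x03\x03" + bytes(600)):
    print(apdu[:5].hex(), "...", len(apdu), "bytes")
```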
L
So there is a client-facing server, and this server first finds, according to the Server Name Indication sent by the client, the server name of the backend server. If this server is present — if the secure element is present on the system — then it routes the incoming and outgoing packets to and from this TLS backend. On the client side, as you see on this diagram, everything is based on TLS and TCP.
L
It means it is a pure network interface, and you may use some identity module in order to compute the procedures required by the pre-shared key used in TLS, but it is not mandatory. So, next slide, please. And finally, this is the on-demand application illustration — and attestation, sorry. You see on the left the application provider, on the right the user and, in the middle, the IoSE server, which is the infrastructure that holds the set of secure elements.
L
So, first, the application provider uses RACS to download the application into the secure element and then binds the server name to the secure element identifier. At this step the secure element has an application, and this application can be remotely used as a TLS server; it stores the pre-shared key defined by the application provider.
L
When this application starts in the secure element, it creates a pair of public and private keys, and the public key is the identity of the secure element. Then, after a while, the application provider delivers to the user the public key of the component, a certificate, the pre-shared key of the component and the server name of the component. The user opens a TLS connection with this component, with the public key.
L
It checks the certificate, and afterwards it verifies that the secure element knows both the handshake secret of the TLS connection and the public key. If the secure element knows both parameters, it means that there is no man in the middle, because only one TLS session can be managed at a given time and because the secure element cannot be cloned: it is the only component that stores this pair of public and private keys. At this level the user can modify the pre-shared key, and so it means that now he is the only user.
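A rough sketch of that attestation check, under my own assumptions about the message format (the draft's actual procedure may differ), could look as follows; here the secure element signs the TLS session secret together with its public key, using the third-party `cryptography` package:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- inside the secure element (simulated) --------------------------------
se_key = Ed25519PrivateKey.generate()      # created at first application start
se_pub = se_key.public_key()               # the SE's identity

def se_attest(handshake_secret: bytes) -> bytes:
    """Prove knowledge of both the TLS session secret and the private key."""
    return se_key.sign(handshake_secret + se_pub.public_bytes_raw())

# --- on the user side ------------------------------------------------------
def verify_attestation(pub, handshake_secret: bytes, signature: bytes) -> bool:
    try:
        pub.verify(signature, handshake_secret + pub.public_bytes_raw())
        return True
    except InvalidSignature:
        return False

secret = b"tls-exporter-secret"            # stand-in for the real session secret
print(verify_attestation(se_pub, secret, se_attest(secret)))
```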
L
There are no patents, and all the code is open: the code for TLS-SE is on GitHub. This code works with many Java Cards; the current level of the Java Cards that you can buy on the internet is 3.0.4 or 3.0.5, which is the level of the Java Card API. So if you go to GitHub you will find these implementations, which should work with most of the Java Cards on the market. And as for the scheme's syntax:
L
it simply uses a command line, which means that when you want to create a key or perform a signature, you just open a TLS session with the secure element, using OpenSSL or whatever you want, and you just send the command line — create, or sign, or whatever. The source code of the server, at this moment at version 5, works with Windows.
L
It works with Linux, and it runs on things like the Raspberry Pi. This is an open implementation of the server that includes the two TCP daemons, one for RACS and one for TLS-SE. Inside the software, when you use a secure element on a PC, on Linux or whatever, you use an API called PC/SC, which means Personal Computer/Smart Card, and inside the software there is an emulation of this API. It means that, by doing that,
L
the software can be adapted very quickly to many kinds of communication interfaces with secure elements — obviously PC/SC, or I2C, or something similar that exists today in the market, like the real arrays of SIM cards that are used for roaming purposes and therefore have specific interfaces. So, next slide, please — and that's it. As a last point, you see here a list of papers that describe this and more, and, because of COVID and due to the fact that most conferences were online,
L
these last times there are videos on YouTube that explain the papers and give illustrations of the processes and so on. And so my opinion is that this is something we could begin working with; I believe it is open, and I believe it is in the scope of the COIN research group. And that's it, I am done. Thank you.
F
I'm just going to echo what was just asked as a question in the chat, which is: can you help us, particularly those of us who are in other time zones, so we're only half awake? The talk was very interesting — thank you for your talk — but kind of the fundamental question is: help us connect the dots between what you were talking about and how this relates to in-network compute. I mean, for me,
F
I definitely appreciate that there are potentially constrained devices that need help, or processors that help to secure compute and transmission, but I'm not sure if that was sort of how you would connect the dots. So can you help explain to us, more pointedly, how this relates to COIN?
L
When you compute cryptography — I'm thinking of the previous presentation, for example, which was speaking of blockchain issues and so on — when you look at the network, you need to have some safe place to store keys and compute.
L
L
This could apply here, with some open stuff — it's not so simple to get open stuff today — and provable stuff. I mean, what is important with secure elements is that you have the EAL levels, and these levels are certified by national security agencies, usually managed by governments; this means this is your root of trust, manufacturers manufacture from that, and many standards apply to this kind of component.
L
So, naturally, you have a lot of these components — ten billion secure elements are deployed every year, so it's very huge — and as for the level of trust, everybody knows that it's not so easy to hack a banking card with a chip inside: if you could do that you would get some money, but in reality this does not happen.
L
It's very difficult to download software into a banking card; it's very difficult to recover the keys that are stored in one card. So, relating to COINRG, it means it's a way to have procedures — computing for procedures, or storage — in the internet with, I believe, a not-so-bad level of security and trust for the user.
F
There's another person in the queue, and in the interest of time perhaps we need to take that to the list, Emmanuel. But Charlie, if you don't mind, send it either in the chat or to the list, or both.
A
I think we have... okay, yeah. Oh my God, yes, I will just put up the next slide — I will go back to... well, maybe we don't even need to have slides in wrap-up time. Yeah, I'll load the chair slides.
A
Okay, we're at the last slide.
A
Okay, so these are some of the group topics. Actually, the first one, the question about the interim, goes with the second bullet, which is that we wanted today to have a presentation about the chairs' reflections after three years.
A
What has been the evolution of the group? I think today we had, you know, some presentations that went back to the original intent, which was looking at transport, looking at security, but this field has, you know, exploded in the past three years, and we wanted to have this reflection, maybe, and we're thinking we should maybe have an interim where we would first go through the whole list of publications, drafts and related work, and see,
A
you know, where we want to move things. Pascal asked, you know, whether we should have his draft as a working group item — we actually put that on the list, by the way — and we would go through the current publications; there are some that have expired and probably need to be rekindled. So we'll do that, probably maybe in January.
A
A
There was an email on October 25th about a 5G project — I think it's an EU project — and I looked at the program of HotNets next week, and there are a number of interesting papers that are related to this community. And we're out of time, and we're going to send Eve to bed, since it's like 3:30 a.m. local time for her. Thank you so much for attending; we had a bunch of people online, and we had a bunch of people in the room. Thank you to the people who presented, in particular.
A
Thank you for your dedication, for taking the time to do these presentations. Thank you to the people who asked questions, because that shows that you follow what's going on, and it's really great. Thank you very much to Cedric for having been our proxy in the room — thank you so much for having done that — and thanks to Jeff. And Jeff, for you it's indeed... well, it's early evening now, so you're probably okay. Thank you,
A
everyone, and we'll have this interim, so we'll probably see you remotely sometime in January. Thank you, and have a good rest of the week. Thank you so much.