From YouTube: IETF114-ICNRG-20220728-1400
Description: ICNRG meeting session at IETF 114, 2022/07/28 14:00
https://datatracker.ietf.org/meeting/114/proceedings/
A
Okay, so give us one minute: we are trying to make the COIN co-chairs delegates so that they can actually run the meeting later.

A
Marie, Jose and Jeffrey, I think if you would kind of log out and log in again, you should have delegate privileges, like Lixia — yes — and then that could allow you to control the session.

A
So we have some common topics, some mind share, in distributed computing and networking, so we thought it would be a good idea to try a joint session, and that's what we're doing today. The first session will be on ICN topics and the second half will be on COIN topics. I'm the co-chair, my other remote co-chair is Dave Oran, and we have Lixia Zhang in the room, who kindly agreed to help us as a local co-chair today. Thanks a lot.
A
As all meetings, this one is recorded, so just a few quick housekeeping announcements. First of all, if you are in the room, please make sure you're wearing an N95, or at least an FFP2, mask. Also, we are, like all other sessions, using the Meetecho queue management, so please sign in to the session using the Meetecho client or lite client, and make sure you have your audio and video off. I think remote participants by now probably know how to use their headsets and microphones.

A
So please do that, and okay, let's skip this. We are following the IETF IPR disclosure rules; in essence that means you will be expected to let us know, within a short timeframe, if you hear, see or say anything that has IPR relevance. This is obviously recorded, and there's also a privacy policy and code of conduct.
A
Please consult these links if you are not familiar with these rules. And finally, we'd like to remind you that we are here to do research, although we are using the same mechanisms for managing documents, and some research groups are also publishing RFCs.

A
Okay, so this is ICNRG; please join our mailing list if you haven't done so. We already agreed on having Jose as a notetaker for the first half and me for the second half, so that's all sorted.

A
And this is the ICN agenda for today: we're going to hear an update on ping and traceroute, and an update on the delta time encoding spec, and then Nikos is going to present research work on selective content disclosure using zero-knowledge proofs. That's it for now; let's start with the ping and traceroute update, unless there's anything else you want to suggest for discussion today.
E
Okay, so this will be really quick. Let's see, I think I can drive the slides, right?

A
Left and right arrow.

E
Okay, here we go. So these two specs have been around for quite a while; they've been advancing slowly. We got them through a research group last call a few months ago. They went into IRSG review; Colin did a pre-review, and Chris Wood did the IRSG review, so thanks to both of them. During the review cycle we actually got the attention of some folks who hadn't participated earlier — thank you very much — and Jude Chao offered us some comments.
E
We've gotten a few more since then, mostly relating to some technical problems with how we decided to do the fields.

E
So there's one other area where we seem to have blown it on the packet encoding for NDN, which is how the path steering capability — which is not actually in that spec, it's in another spec — gets placed in the traceroute packets.

E
So we need to do a little bit of technical work there, on both the ping and traceroute specs, to point to the path steering spec. And then, since the path steering spec is mine, I'm going to do an update of that to get the NDN packet encoding correct. Moving forward, we're going to update both of those drafts plus the path steering draft in the next few weeks, and hopefully that will allow us to move everything forward. I'll make one quick note that the path steering draft is still an individual draft, and Dirk is going to talk, probably at the end, about whether we are ready to adopt that as an RG item, so that we can move it forward in parallel.
E
The ping and traceroute can go forward without path steering, since they're useful in the absence of it, but they work — I hate to use the word "synergistically", but that's probably the right word — together, so having them all in the game here is probably a good idea. I'll take any questions folks have; this is pretty straightforward.

E
Going once, going twice. Okay, anything you come up with, please either contact Spyros and me directly or post on the list — preferably post on the list. So, thanks.
A
Thanks, Dave. Actually, since you mentioned path steering, I jumped ahead a bit too fast; let's just quickly look at our documents.

A
So here's a list of our current active documents. We just heard about ping and traceroute, and we are going to hear about the delta time encoding in a bit.

A
And we have CCNinfo, which has also been around in IRSG review for some time. So, Colin, if you're listening: do you know what's the latest state of this?

B
As far as I know — last time I checked — we're still waiting for responses to the ballot. I think it's got two yeses, but I think one of them is you, and you're the chair, so you would have to recuse. I think it doesn't quite have enough positions.
B
I'll remind people again after this meeting. I've sent a couple of reminders already, but I will do, yeah.

A
Okay, great, thanks. And yeah, Dave just mentioned path steering, which is not really required for ping and traceroute, but it's in the same mental model of using ICN. And just to remind everyone that there has been an IPR declaration on path steering.

A
I think we shared this on the mailing list as well, just for completeness. Last time we discussed adopting it as a research group document, and we didn't really follow up on that so far; just checking if there are any opinions on that.
E
What's the deal with this IPR declaration? So this is Cisco IPR, and Cisco, of course, has the standard IETF-type IPR disclosure, which says you can use it, with no royalties or anything like that, for IETF documents.

E
Now, the problem is that Cisco has no policy for IRTF documents, since they're not destined to be standards, so Cisco did not put in an IPR declaration on path steering because they didn't know what to say. This is a third-party declaration by me. So, just so people understand: this is in this sort of nether world where Cisco doesn't know what to say about it, since they don't have a policy for non-standards documents.
A
Yeah, thanks, that's good to know. Okay, with that let's move on. Next on the agenda would be Thomas, who I think is in the room, and I think I should bring up the slides. All right.

I
This is an update of the alternative delta time encoding for CCNx using compact time formats. This was adopted as a research group document shortly after the last IETF, so this is actually version 00 of the research group document I'm presenting, which has addressed a couple of, let's say, cosmetic things and also editorial aspects. So I will give a brief recap. Next slide, please. What is the objective? We are looking at constrained IoT networks here, and in this context the research group has produced RFC 9139, which is ICN LoWPAN, and there was one open question there: how can we make the time encoding more efficient? Because in these constrained environments the bandwidth is low and the latency is high.

I
We have slow links, lossy links, so the more data you put on the link, the more you lose; and generally the node processing capacities are larger, and less battery-energy-consuming, than using the links. So we want to pre-process, or process on the nodes, to save link bandwidth. Next slide, please.
I
And for this we look at the TLVs that represent time. This is the way it is encoded currently in the CCNx specs. We have relative times — that is, an offset given in milliseconds with a variable length, so it can be larger than one byte; there's a length field, and then you can have a correspondingly longer time value — and then there are absolute times, which always have a length of eight: this is a UTC epoch time encoding in a length of eight bytes.
I
So, next slide, please. The mechanism we want to introduce here is a compact time encoding that supports a dynamic range. It is built, in milliseconds, from an exponent and a mantissa, so that you have a relatively high precision for the small time values and a relatively coarse-grained precision for the larger values. If you look at the formulas, you have the subnormal part, where you actually start with a zero.

I
I mean, the distinction between the two formulas is that the upper formula starts with an exponent of zero: basically you divide the mantissa, and then you divide further, for an exponent value of zero. If the exponent value is larger, then you actually go into an exponential representation, which is of course coarser-grained for larger values.
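To make the exponent/mantissa scheme just described concrete, here is a minimal Python sketch. The 4-bit exponent / 4-bit mantissa split and the scaling constants are illustrative assumptions only; the normative field layout is defined in the draft itself.

    def decode_compact_time(code: int) -> float:
        """Decode a 1-byte compact time code into milliseconds (sketch).

        Assumed layout (illustrative, not the draft's normative one):
        high nibble = exponent e, low nibble = mantissa m, with an
        IEEE-754-style subnormal range for e == 0.
        """
        e = (code >> 4) & 0x0F
        m = code & 0x0F
        if e == 0:
            # Subnormal part: linear in the mantissa, so the smallest
            # time values get the finest precision.
            return (m / 16.0) * 2.0
        # Normal part: implicit leading one and exponential growth, so
        # precision gets coarser as the values get larger.
        return (1.0 + m / 16.0) * 2.0 ** e

    def encode_compact_time(ms: float) -> int:
        """Encode milliseconds as the nearest representable 1-byte code
        (brute force is fine over a 256-value space)."""
        return min(range(256), key=lambda c: abs(decode_compact_time(c) - ms))

    # The "error you are producing", mentioned below for the -05 diffs,
    # can be estimated with a simple round trip:
    v = 1500.0
    c = encode_compact_time(v)
    print(c, decode_compact_time(c), abs(decode_compact_time(c) - v) / v)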
I
So, next slide, please. The existing time values are in Interest and Data messages. In the Interest, you have a relative time, the Interest lifetime, in the hop-by-hop header, and the Interest message itself has a signature time, which is an absolute value. For the Data, you have an absolute value in the hop-by-hop header, which is the recommended cache time, and you once again have a signature time, which is also an absolute value. So the idea is to convert the recommended cache time to relative times and to compress the Interest lifetime.
I
So what changes, given the currently existing protocol: in the case where it is encoded with a length of one — that means a short length field — we want to put in the compact time offset, and in the case where the length value in the encoding is larger than one, we just keep the regular time encoding intact, to make the protocol plug into the existing protocol encoding, because, as we understand, not all CCNx packets are on constrained devices. So, next slide, please.

I
This is the protocol integration for the recommended cache time. Before, it was an absolute value of length eight; now we change to this duality: if it's encoded with a length of one, then we use a compact time offset of one byte; if it continues to be length eight, it's the regular time values. Next slide, please.
I
Sorry, we already covered the cache time. Sorry, yeah, I'm fine — I was just confused. So what are the diffs to version 05 of the individual draft? We updated the integration of the Interest lifetime and the recommended cache time, to interpret the corresponding values.

I
We added equations to approximate the conversion, and to estimate the error that actually occurs — because you cannot represent every value anymore, there's an approximation formula which helps to see what error you're producing; it's a simple formula. We also rearranged the references and the acknowledgement section. From our side this is a pretty mature document, but please feel free to provide feedback to get this document finalized.
J
Rick Taylor, Taylor Industries. Just a sort of general question on this one: how far does this deviate from IEEE 754 formatting?

J
I know there's an 8-bit float format used quite heavily in graphics. If this deviates too far away from that, are we losing the ability to use silicon that already understands how to do this work? I'm just interested in the diff. I know you don't need negative times, and I know you don't need unrepresentable times — you know, division by zero, infinity and so on.
I
I mean, we had a longer discussion about the actual format, and we were also discussing the IEEE format. I'm not sure I recall everything correctly at the moment, so maybe we should have a follow-up on the list if we need to recap all these discussions; it was actually something like two years ago or so.
K
Okay, so I'm sharing my screen now and I don't see the conference tool. So if there is a comment, or someone wants to intervene, please let me know. I'm going to present to you our ongoing work on selective content disclosure using zero-knowledge proofs.

K
The purpose of Tobias's talk at this meeting was to propose the adoption of this signature scheme by the IETF, and in particular by CFRG, and if this happens it's going to be very exciting. Many related works are investigating this signature scheme, mostly in the context of digital credentials.
K
This is in cooperation with the University of Memphis, and in this project we are exploring the application of this signature scheme in NDN; as a matter of fact, we have scheduled some experiments to take place in the NDN testbed.

K
We want to allow users to request portions of this data, and at the same time we want to enable these storage providers to provide proofs of integrity, but without sharing the signing keys with them.

K
For example, a user may want only these two fields included in this file — or, even more importantly, the user may be authorized to access only these two fields. We are working on a solution that allows, firstly, users to perform such queries, and we do that in a way which is compatible with the NDN API; and secondly, we allow storage nodes to respond to these queries in a way that allows users to verify the integrity and the correctness of the generated response.
K
So I will first introduce the signature scheme, which is the main building block of our system. I guess you are all familiar with digital signatures: traditional digital signatures allow a signer to sign a message, and then a verifier can validate the digital signature of this message using the public key of the signer. Group signatures are very similar.

K
The only difference is that the signer, instead of signing a single message, is capable of signing a group of messages, and then the verifier can, in a similar way, validate the digital signature of this group of messages. So far there is no significant difference, but these group signatures have a very nice property.

K
They allow a third party, which we call the prover, that has access only to this group of messages and to the digital signature, to hide some elements of this group of messages and, at the same time, provide a proof that the revealed items are correct. More formally, this proof is a zero-knowledge proof that proves that the prover knows a digital signature that covers both the revealed and the hidden messages — and this digital signature can only be generated by the signer.
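Real BBS proofs need pairing-based cryptography, which is beyond a short example, but the reveal/hide mechanics can be illustrated with a simpler, weaker stand-in: salted hash commitments per message with one signature over the commitment list (the SD-JWT-style construction — it is not zero-knowledge and, unlike BBS, the presentation still carries one digest per hidden item). An HMAC with a demo key stands in for the signer's public-key signature; everything here is a sketch, not the authors' implementation.

    import hashlib, hmac, os

    SIGNING_KEY = b"demo-key"  # stand-in for the signer's key pair

    def commit(salt: bytes, message: str) -> str:
        return hashlib.sha256(salt + message.encode()).hexdigest()

    def sign_messages(messages):
        """Signer: commit to each message under a fresh salt, then sign
        the ordered list of commitments once."""
        salts = [os.urandom(16) for _ in messages]
        digests = [commit(s, m) for s, m in zip(salts, messages)]
        tag = hmac.new(SIGNING_KEY, "|".join(digests).encode(),
                       hashlib.sha256).hexdigest()
        return salts, digests, tag

    def derive_presentation(messages, salts, digests, tag, reveal):
        """Prover: pass along the signature and all digests, but open
        only the chosen indices; hidden messages stay behind digests."""
        opened = {i: (salts[i], messages[i]) for i in reveal}
        return opened, digests, tag

    def verify_presentation(opened, digests, tag) -> bool:
        """Verifier: recompute the opened commitments, then check the
        signature over the full digest list."""
        if any(commit(salt, msg) != digests[i]
               for i, (salt, msg) in opened.items()):
            return False
        expected = hmac.new(SIGNING_KEY, "|".join(digests).encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, expected)

    msgs = ["deviceId:sensor-1", "temperature:21.5", "location:lab-2"]
    salts, digests, tag = sign_messages(msgs)
    opened, digests, tag = derive_presentation(msgs, salts, digests, tag,
                                               reveal={0, 1})
    print(verify_presentation(opened, digests, tag))  # True; index 2 stays hidden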
K
So in our work we focus on data items encoded as JSON objects, and we are using a mechanism called canonicalization; this mechanism transforms a JSON object into an array of messages. You can see in this example how the JSON object on the left has been flattened to this list of messages on the right. This is a very simple and straightforward example, but for more, let's say, advanced objects — like those that include arrays — the canonicalization approach is not straightforward, and we are using a canonicalization algorithm that we have created ourselves. The security properties of this algorithm have been formally verified, and they are included in the publication located at the bottom of this slide. But there are also other working groups working on similar canonicalization algorithms, so there is a great space for research in this area.
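The flattening step for the simple, array-free case can be sketched in a few lines of Python; the authors' formally verified algorithm, which also handles arrays, is in the cited publication, and the path:value syntax below is just an assumption for illustration.

    def canonicalize(obj, prefix=""):
        """Flatten a nested JSON object into an ordered list of
        'path:value' messages, one per leaf, sorted by key. Sketch only:
        no array handling, unlike the authors' algorithm."""
        msgs = []
        for key in sorted(obj):
            path = f"{prefix}.{key}" if prefix else key
            value = obj[key]
            if isinstance(value, dict):
                msgs.extend(canonicalize(value, path))  # recurse into objects
            else:
                msgs.append(f"{path}:{value}")
        return msgs

    item = {"deviceId": "sensor-1", "temperature": 21.5,
            "metadata": {"created": "2022-07-28"}}
    print(canonicalize(item))
    # ['deviceId:sensor-1', 'metadata.created:2022-07-28', 'temperature:21.5']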
K
A frame is a JSON object that includes the keys in which we are interested. So in this example frame we indicate that we want to extract, from a stored item, the device id, the temperature, and the "created" key of the metadata field. If we apply this frame to our example JSON object, we'll get a new JSON object which is the same as the original one, minus the fields that are not included in the JSON frame. We are working on a framing mechanism that allows us to make more advanced requests. So, for example, here we have a JSON object which includes an array called "measurements", and this array includes key-value pairs, and we can have a frame like this one. This frame in essence says that from the measurements array we want to learn all values whose id equals "temperature". (A toy sketch of such a frame follows below.)
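A hypothetical frame of that shape, and a toy interpreter for it, might look as follows; the field names and the matching semantics are assumptions for illustration (loosely modeled on JSON-LD framing), not the authors' actual mechanism.

    def apply_frame(obj, frame):
        """Apply a JSON frame: keep only the keys named in the frame;
        for arrays, keep the entries matching the frame's pattern.
        Toy semantics only."""
        if isinstance(frame, dict):
            if not frame:          # empty frame == reveal value as-is
                return obj
            return {k: apply_frame(obj[k], sub)
                    for k, sub in frame.items() if k in obj}
        if isinstance(frame, list):
            pattern = frame[0]     # match entries against key/value pairs;
            return [entry for entry in obj          # None acts as a wildcard
                    if all(entry.get(k) == v
                           for k, v in pattern.items() if v is not None)]
        return obj

    item = {"deviceId": "sensor-1",
            "measurements": [{"id": "temperature", "value": 21.5},
                             {"id": "humidity", "value": 40}]}
    frame = {"deviceId": {},
             "measurements": [{"id": "temperature", "value": None}]}
    print(apply_frame(item, frame))
    # {'deviceId': 'sensor-1',
    #  'measurements': [{'id': 'temperature', 'value': 21.5}]}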
K
Then the producer assigns an identifier to this item and advertises it in the NDN network. Then any user can send an Interest message that includes the identifier of this advertised item; moreover, this Interest message includes, in the ApplicationParameters field, the JSON frame that we want to apply to this item. And, by convention, the identifier included in the Interest message is appended with the hash value of everything included in this ApplicationParameters field. So this Interest will end up at the producer; the producer will extract the JSON frame, it will derive the new item, it will perform the canonicalization algorithm, it will calculate the zero-knowledge proof, and then it will send the output as a Data packet back to the consumer.
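The naming convention can be sketched like this; the component syntax below mimics NDN's parameters-digest name component, but treat the exact serialization and name layout as assumptions.

    import hashlib, json

    def interest_name(item_id: str, frame: dict) -> str:
        """Build the Interest name for a framed request: the advertised
        item identifier with the hash of the ApplicationParameters
        content appended, per the convention described above (sketch)."""
        app_params = json.dumps(frame, sort_keys=True).encode()
        digest = hashlib.sha256(app_params).hexdigest()
        return f"{item_id}/params-sha256={digest}"

    print(interest_name("/store/sensor-1/readings",
                        {"deviceId": {}, "measurements": [{"id": "temperature"}]}))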
K
Some performance results. We have implemented and evaluated a scenario in which we have a JSON object that includes 100 fields; these fields represent measurements from IoT devices. In this graph we show the time required to generate a zero-knowledge proof, as well as to verify a zero-knowledge proof, as a function of the number of revealed items. As you may observe, the time required to generate the zero-knowledge proof is almost constant.

K
It is not affected by the number of revealed items, whereas the more items we reveal, the less time we need to verify the zero-knowledge proof. In any case, the time required for these operations is less than eight milliseconds, and I have to say here that for these measurements we are using an unoptimized Python implementation of the signature scheme on an Ubuntu machine with two cores and four gigabytes of RAM. So it's an ordinary machine, and these times can be greatly improved.
K
We also compare against an alternative in which each measurement in this file is individually signed using an EdDSA digital signature scheme. So we assume that we sign each record of this file individually and that we achieve the same security properties — which of course is not the case — and we measure the computational and the storage overhead.

K
Of course, it's straightforward that having 100 different digital signatures requires more storage space, as opposed to having a single digital signature. But also, when it comes to the communication overhead, the bandwidth required to transmit the EdDSA signatures is of course proportional to the number of revealed items, since we reveal one digital signature per item, whereas when it comes to the zero-knowledge proof it is the opposite: the size of a zero-knowledge proof is 272 bytes, plus 32 bytes for every hidden item.

K
So the more items we're revealing, the smaller the size of this zero-knowledge proof. If we reveal 32 items or more, we need less bandwidth for the zero-knowledge proof compared to the EdDSA digital signatures.
K
This approach uses a new key type — this key is required by the BBS signature algorithm — and, moreover, it defines two new signature types: one generated by the data owner, and the other is the zero-knowledge proof generated by the prover. So we can research how these new concepts can be integrated into NDN or into other related architectures.

K
As I told you, data framing and canonicalization is still an open problem where many things can be done, especially if we consider non-JSON-encoded objects; also, many similar activities are taking place in other working groups, not only in the IETF but also in bodies such as the Decentralized Identity Foundation and the W3C.

K
This is also relevant in the context of the COIN RG. So that's all. As I told you, we are working actively in this area; please contact us if you want to learn more — we can provide the source code and many other helpful pointers.
A
Okay, great, thanks. I was wondering about this JSON object integration — what you call framing: is this actually a CRDT-like operation, so like an addition which is commutative?

K
It is a very nice problem: if an intermediate node has cached a previous response, how does it generate a new object that can satisfy a new request which, in essence, requests a smaller portion of the original message? And of course, combining such responses and creating a new object is also another very interesting open topic.
A
Yeah, I think so too. Great — thanks for bringing this work to us; I think it fits really nicely into the spirit of this session, ICN and COIN RG. I see no other questions. Let's follow up on the mailing list if people have questions, and we're looking forward to the paper. Thank you. Thanks, Nikos. So, just quickly:

A
I realized that some people seem to be having connectivity issues, or audio/video issues, which I cannot explain directly — it works perfectly for me — but it could be a good idea to just share your experience in the chat, just to maybe figure out what this is. I'm connecting from Germany and I don't have any issues today, but I heard from others that it doesn't really work so well.

A
Okay, let me just bring up our slides again.
A
Okay, so just a few things that we shouldn't forget about. We really wanted to get the FLIC specification finished and published, and it looks like it's currently stalled again. Dave and I have been trying to motivate the authors and the group, to see how we can get this to last call. We think it's a really important specification and it should really be published.

A
So if you have any idea how to move this forward, please let us know, or also feel free to suggest text or anything, if you think there's something missing, and so on. And then, of course, we don't have that much time today, but potentially there could be many interesting work items to discuss here, and here are just a few examples that we came up with. You're probably aware that there is a media-over-QUIC discussion in the IETF.
A
So
a
buff
is
this
week,
and
so
this
is
essentially
a
proposal
to
do
something
like
yeah
named
data
networking
you
can
say
over
quick
or
like
an
overlay
network
of
quick
relays.
You
can
say,
of
course,
this
could
be
done
much
better.
Maybe
that's
an
interesting
topic
for
this
community
as
well.
A
Kicked
off
a
discussion
on
self-learning,
auto
configuration
and
also
potentially
nd
and
switch
design
on
the
main
list,
and
so
there
is
some
interesting
work
that
has
happened
in
ndn
cxx
as
a
code
base.
I
still
think
this
could
be
optimized
and
maybe
that's
an
interesting
topic.
If
people
are
interested
to
discuss
further
then
so
we
we
often
you
know,
explain.
Icn
is
a
good
way
to
to
access
arbitrary
content
in
network,
which
is
true.
A
However,
if
you
think
about
doing
something
equivalent
like
web
protocol,
so
like
http
3,
for
example-
and
there
are
a
few
other
things
you
have
to
care
about
so
things
like
name
privacy,
for
example-
maybe
setting
up
something
like
a
tls
security
context
and
so
on
and
maybe
other
things.
So
we
think
that
that's
actually
an
interesting
topic
and
maybe
something
to
work
on
so
yeah.
A
We just saw an example of, say, ICN security work that is connected to distributed computing and COIN, and we have talked about others before. So we would also encourage you, if you're working in this field, to please share your ideas in this group. And I see Lixia in the queue.

G
So this kind of suggestion list gives me an impression — sorry if I'm saying the bad words again — that you are really looking for work to do.

A
Yeah, point taken. Okay, I didn't explain this well enough, and of course some of these points are actually motivated by actual problems.
A
Just for example, this first one, media over QUIC or ICN: there clearly is a problem in distributing real-time multimedia content over the internet, and these media-over-QUIC ideas directly stem from this change in the CDN environment.

A
And so you could say that this is the problem, and I see solution proposals right now that, let's say, leave room for optimization.

A
Thanks — no, I fully agree. So, just quickly: this first item here is not intended as just a protocol drop-in replacement or something; it's more referring to the bigger picture, the architectural problems. But we don't have time today to discuss this; I think there's a lot to unwrap here, but we just wanted to inject a few ideas for maybe deeper discussions, or actual work, later in this group.
A
So we are done with the ICN part, but there's more to come on computing in the network. We have had this first bullet item for a couple of meetings now, and we are really hoping things will clear up, but no promises at this point. Just quickly: please mark September 19th to 21st in your calendar; this is when the ICN conference happens, in Osaka, Japan, this year.

A
Okay, thank you very much. Now let's just directly continue with COIN RG.
D
Okay, so it's strange being on this side of the presentations. Okay, hello everybody, we're thrilled to be part of this session. I'm Eve Schooler, and my co-chairs are Jeffrey He and Marie-José Montpetit, who are also here. The catch is that we are all remote.

D
I don't believe we have to go through any of these slides, because that was already stated at the outset of the ICNRG part, but we have three very interesting presentations today. We'll begin with Dirk, who will be talking to us about traffic steering at layer 3; Andy, who will discuss namespaces, security, and network addressing; and Tushar Swamy on building adaptive networks with machine learning. And then we will discuss the need for an interim. There are several things that we've had on our to-do list for quite some time, namely a more pointed scoping discussion, really synthesizing
many of the conversations, discussions and debates we've been having on the mailing list for many months, and really to take a step back and try to scope, as our charter states — you know, re-scope, or scope more articulately or deliberately. That's the other task that we would like to accomplish in an interim.

D
So with that, let's see. I think Marie-José wanted to — oh well, you are here. It's kind of funny, you know: if you're not here, this is how you get to Meetecho; you won't get to see that. But nonetheless, because this is a shared meeting, it's a little funny how Meetecho tracks, or doesn't track, the history of this.

D
It will appear in the ICN meeting minutes and under the data tracker, the meeting directory, but not in the COIN RG. So that, I think, is an oversight in Meetecho, and hopefully we will submit that as something we'd like to see changed.
C
Okay, yeah. I just wanted to say that we have two RG documents that we would like to move forward; one is expired and the other one, I think, needs updates. We have a ton of other documents and, as Eve said, there are a lot of them — maybe they're still good, maybe some of them should not be continued; maybe, you know, they just expired and we don't want to continue anything — but it would be good for the authors to tell us what they intend to do. And again, I think this is going to be very much what we want to do at the interim, where we want to take some more time.

C
We also want to re-look at some of the charter. I don't think we need to be rechartered, but there are goals that we had that we need to look into, and we will do that again, in a September timeframe. We've been hit pretty hard by Covid, and I think we're all a bit under the weather.
C
So, I think, without taking more time, because we're already running late: Dirk Trossen will present some work on, actually, routing and addressing. And I would like to mention that Tushar, who's going to give our last presentation — but not least — was also an Applied Networking Research Prize winner. And at the last session, Dirk said that he was looking forward to us having some work in, you know, machine learning, intelligent networking, that type of stuff — and this is exactly what Tushar is going to present. And of course the namespace topic is always a common thing between ICN and us. So I think those three presentations will be very complementary to what was presented before. So, Dirk, please, we're waiting for your presentation.
L
Slide sharing... let's see.

A
Can you see "share preloaded slides" at the top, or is that just for chairs? Oh, okay, yeah, right — I think it has to go through me now. Let me just bring your slides up, and then I can drive them if you like.

L
Thank you, yeah. Yes, as I mentioned, this is some work that's related to what you could call advanced packet forwarding or, if you will, routing. This is joint work with my colleagues Karima, Zoro and Artur, and also with George and Khaled — hence the two logos at the bottom, as you can see. If you can go to the next slide, please.
L
So, the problem of runtime scheduling that we looked at — and there is an accompanying draft — was on the list of topics we have seen before; scheduling, and joint network-compute optimization, has been talked about in contributions to COIN before. The environment that we're talking about here is the execution of services in a distributed service environment, where in particular virtualization drives the distribution of a service implementation into one or more server instances.

L
These are available in one or possibly more network locations, and you have to make a choice about which of the instances you would like to use for your computation. That's essentially the runtime scheduling problem. The additional problem comes in when we couple this with the notion of a service transaction: it may require an affinity to a service instance after you made an initial decision, because of ephemeral state that has been created.
L
So the problem that we outlined is to find the "best" service instance to serve the client transaction at runtime, while we also preserve the affinity after the decision has been made. The solution is called CARDS — compute-aware distributed scheduling — and you can see the two key aspects: one, it is compute-aware, so the "best" that we put in quotes there is an awareness of the compute capability of the service instance; and two, it is distributed scheduling.

L
It is not done via an indirection point; it's done at the ingress to the network.

L
So we're basing the CARDS idea on a system that routes service requests based on service identifiers — there's a certain proximity there to ICN, if you will.
L
If
you
will
in
in
the
sense
that
the
so
we
have
distributed
geographically
distribute
sites
over
the
service
instances,
they
are
shown
in
red
on
the
more
right
hand,
side
of
the
picture,
client
issue
service
request,
which
is
destined
to
service
identifier
and
the
incoming
semantic
router,
which
are
the
the
the
red,
boarded
one
sr
one
two
and
three
for
each
of
the
clients
forward.
The
service
request
towards
a
suitable
destination,
which
is
one
of
the
possibly
many
substances.
L
The
five
that
you
can
see
here
on
the
slide,
that's
three
different
locations,
so
you
have
three
different
locations
but
five
different
instances.
L
It
performs
an
on
path,
forwarding
decision
and
that's
the
the
key
part
that
we
proposed
and
that's
compared
to
an
existing
dns
plus
ip
of
pass
decision
that
you
could
do
as
well.
The
affinity
is
insured
in
the
system
by
using
iplocator
for
the
subsequent
request,
so
the
service
request
is
a
special
request
to
serve
identifier.
It
makes
its
way
to
let's
say
this
instance
and
then
the
subsequent
requests
for
the
transaction
are
using
the
ip
locator
of
that
instance
to
root
the
request
directly
to
the
instance.
L
Where does the compute-awareness come in now? One of the things we wanted to achieve is that we wouldn't need to signal permanently — there's a lot of work on compute-aware forwarding decisions, which is also cited in the paper, but we wanted to avoid very frequent signaling. So we attached the compute-awareness to something you can derive from the deployment of a service: each service instance is assigned a normalized compute unit.

L
All the compute units are flattened and joined in an identifier-specific routing interval; you can see this on the right-hand side. In this case it means, you know, you have exactly one compute unit for the first one, two for the second one, again one for this one, four for that one.
L
The scheduling that you then implement is a distributed round robin: you essentially run through this interval in a round-robin fashion. You have incoming requests, and the interval of identifiers exists at each of the ingresses, so each ingress is independent from the others in its decision of sending the first request to the first one, the next two requests to the second one, the fourth one to this one, et cetera, until it wraps around when it reaches the last one. That's what's being implemented, and it can be implemented at line speed.
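A minimal sketch of that flattened interval and the independent per-ingress round robin, with made-up instance names and unit counts:

    from itertools import cycle

    def build_interval(compute_units: dict) -> list:
        """Flatten per-instance compute units into one routing interval:
        an instance assigned N units occupies N consecutive slots."""
        return [inst for inst, units in compute_units.items()
                for _ in range(units)]

    class IngressScheduler:
        """One per ingress, with no coordination between ingresses: the
        cursor just wraps around at the end of the interval."""
        def __init__(self, interval):
            self._slots = cycle(interval)
        def pick(self):
            return next(self._slots)

    interval = build_interval({"site1/inst-a": 1, "site1/inst-b": 2,
                               "site2/inst-c": 1, "site3/inst-d": 4})
    ingress = IngressScheduler(interval)
    print([ingress.pick() for _ in range(9)])
    # inst-a once, inst-b twice, inst-c once, inst-d four times, then wrap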
L
We have a separate paper, which we published last year, where we did a very similar mechanism in P4 and showed what the issues are in doing that successfully.

L
So we implemented this in a simulation — and we also have, by the way, a real implementation in eBPF now, but at the time we published the paper it was a simulation, and these results are from our simulation. We used five sites, four servers, and a service instance per server; you can also have multiple instances per host, but we didn't do this in the simulation.

L
The compute units are assigned at the start, really emulating a deployment of these compute units, and the instances run for a certain amount of time until you may redeploy or re-orchestrate your service setup. You have five ingress semantic routers as well, and the service requests only go to one service function; they are single-packet requests. The main metric that we're interested in is the request completion time.

L
So, how can we improve the actual latency, at the request level, of these service requests?
L
Scenario one we looked at — there's a one-a and a one-b, and one-b I would skip over in the interest of time — covered different design aspects. So: what's the impact of centralization versus distribution? We distributed the scheduling over a growing number of ingress points, in the end even into the clients themselves — a client could be seen as an ingress point very close to the actual application — versus an idealized central scheduling.

L
Idealized means we neglected the actual path latency of moving to a central point and only centralized the actual logic. Obviously you'd have a different latency there. And the observation we got from the simulation is that the effect of the distribution on the mean RCTs — the fact that these schedulers run independently from each other — is not particularly large, even when you grow this to very many ingress points.

L
You see an increase in RCTs when the system load approaches 100%, but generally the impact is relatively small. So that was one of the aspects; and again, this obviously does not take into account the latency of going via a centralized point — if you took that into account, the numbers would get better again for the distributed scheduling, where you do not have this additional latency.
L
I skipped this one; it was actually hidden in the PPTX that I sent, so you can look at it separately if you download the slides. We then compared against other network-level solutions: we wanted to compare CARDS' performance against other distributed scheduling mechanisms, in the sense of both factoring compute capabilities into the scheduling decision — so they needed to be compute-aware — and performing scheduling at the ingress versus at the sites.

L
So we wanted to compare this design aspect of CARDS, and we also wanted to evaluate the impact of distributing compute units across sites and within sites: if there are imbalances in the distribution of compute units, what's the impact of that? We used two schedulers. One is a random scheduler; it's positioned at the ingress node, it is not compute-aware at all, and it performs random load balancing by selecting an instance uniformly at random and then just sending the request to the actual network location. The second one is called Steam.

L
That's from an INFOCOM paper in 2020, so it was one year before we started our work; one of our co-authors is also on that paper.
L
Steam is positioned at the site ingresses, not at the client ingresses: the actual network ingress nodes forward the request to the sites uniformly at random, so it is compute-unaware at the network ingress, but at the site ingress it uses load estimation to find the right compute instance within the site. So it is kind of a mixture: compute-unaware in the network, but compute-aware at the site.

L
What we found in the comparison — not surprisingly — is that CARDS significantly reduces the RCT, in particular in high-load settings, because it takes into account, even in the distribution to the sites, the compute units of the individual instances, which Steam doesn't do. Steam, on the other hand, has issues when the system load goes significantly above eighty percent: as you can see on the right-hand side, it jumps quite significantly, even above the random scheduler.
L
We then looked at imbalance in the compute distribution. So we created imbalances across sites: the normal configuration was roughly the same number of compute units per site, and within the sites the service instances were roughly evenly provisioned. We changed this by making one site very big and the others relatively weak in compute units; and equally, in the second sub-scenario, we created the same imbalance within a site, so one of the service instances was very big compared to the other service instances. What we could see is that Steam handles the contention quite well when there's an imbalance within the site, because it uses load balancing within the site.
L
This is kind of a media scenario — they mentioned media before in the ICNRG as potential work as well — where we have individual service requests, for instance for content retrieval, which could be video, or it could be software updates, where I'm getting a larger chunk back. So it's kind of a rather ICNRG-ish scenario, if you will.

L
And we compared this to existing long-lived approaches. What we mean by long-lived approaches is an approach where the decision of which server is chosen lives relatively long — we chose a one-minute transaction, so after one minute we essentially recalibrate: that would mean, after a flush of the DNS cache, we issue another DNS request, hopefully now getting another choice. And we used random as the comparison there.
L
We also compared a random choice at the packet level, where you would make a change even per packet. What we can see is that the performance of CARDS holds up even to very high load — the vertical line here is 100% system load — even when the system load approaches 100 percent. And what's more important: in the use-case-driven analysis, we set the inter-arrival time at two seconds.

L
I said, well, what about if you have an upper-bound latency of 1.5 seconds — meaning I still have enough time to receive the packet and do whatever decoding I need to do, but one and a half seconds should really be the upper-bound latency — and what is the number of clients where this upper-bound latency is being exceeded? We can see that CARDS, in comparison to the other mechanisms, serves significantly more clients.
L
If you draw a horizontal line, which would be very much at the bottom of this graph, you can see that CARDS serves 24,000 more clients than Steam, and 162 percent more clients for the packet-level scheduling versus the long-lived affinity scheduling. So the packet-level compute-aware decision — a little bit as expected — has a significantly higher performance.

L
What's the conclusion? Well, the conclusion is that we tried to show with this work that we can integrate compute-awareness into the steering decision at the data plane level. There are two pieces of work: the system that we outlined in the paper, but also the accompanying P4 work that we've done before, to show that you could implement this at the data plane level through programming frameworks like P4. The compute-awareness here is a relatively static compute-awareness, so it doesn't require frequent load signaling; it just uses what you can derive from the deployment.
L
So where do we want to push this work? This was, as I mentioned, an IFIP Networking paper — and my apologies for the mix-up in getting the proceedings: I accidentally disclosed the author link, as I was told later, and the website had to be shut down, so the link I sent to the COIN list was only live for a couple of minutes. The proceedings should be available now, even though I don't have the link handy at the moment. What this paper did is a horizontal comparison.

L
You could use CARDS at L7 as well, right — and that's a vertical comparison. We did this in a new paper that we published at the upcoming FIRA workshop, where the namespace paper that you will hear about next is also going to be presented. It positions CARDS as on-path traffic steering at L3 against an off-path, indirection-based resolution at L7, and that allows us to somewhat compare the usage of the same compute-aware mechanism, but with different systems. You can find those results in that paper.
A
I don't want to start a big discussion here, but why call this semantic routing? What do the forwarders have to know about semantics? Isn't this just names that they need to know, and then they make the forwarding decisions based on that knowledge?

L
Yeah, so the semantic here is the service identifier. We actually changed the name in the FIRA paper, so you won't find "semantic" in there anymore.

L
This paper, I think, was written initially for INFOCOM last year, when we called it semantic routing, and we have since named it after the service identifier, because that's a bit more descriptive as to what the semantic is — for the same reasons; you weren't the first one to ask "if it is a specific semantic, why don't you just put the semantic into the name or the description", which is what we did. I just used "semantic routing" here because that's what's written in the paper. Okay, thank you.
C
Yeah — this is Marie. There's also a question from the chat, Dirk: what is the advantage of doing this at L3 versus L7? We will put the paper that is cited there in the minutes. So maybe you can answer Ken.

L
Yeah, so this obviously goes to the paper that I didn't talk about, the one that's coming at FIRA. We looked at this, and of course the first thing you have is the initial resolution latency, which you can quantify — we do this actually in the paper.
L
What's the typical initial resolution latency in DNS? With optimization in DNS, it has significantly gone down. But the other thing that we outlined in the paper is actually not necessarily an improvement of the average latency, but of the variance of the latency. And this is quite clear if you think about the problem: instead of distributing a smaller number of clients to a fixed server, you now have the option to pick among a larger number of servers. You have an M/M/1 system with n divided by k clients, versus n clients in an M/M/k system. Queueing theory will already tell you the average latency is about the same; it's the variance that is impacted — the variance is significantly reduced.
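That pooling argument can be checked numerically. A small simulation sketch — the parameters are made up, and it relies on the fact that, with FIFO, handing each arrival to the earliest-free server is equivalent to one shared queue in front of k servers — lets one compare the mean and the spread of sojourn times in the two setups:

    import random, statistics

    def mmk_sojourn_times(k, arrival_rate, service_rate,
                          jobs=100_000, seed=7):
        """Simulate a FIFO M/M/k queue; return per-job sojourn times."""
        rng = random.Random(seed)
        free_at = [0.0] * k          # when each server next becomes idle
        t, out = 0.0, []
        for _ in range(jobs):
            t += rng.expovariate(arrival_rate)       # Poisson arrivals
            i = min(range(k), key=free_at.__getitem__)
            start = max(t, free_at[i])               # FIFO start time
            free_at[i] = start + rng.expovariate(service_rate)
            out.append(free_at[i] - t)               # wait + service
        return out

    lam, mu, k = 0.9, 1.0, 4          # 90% load per server in both setups
    split = mmk_sojourn_times(1, lam, mu)        # one of k separate M/M/1s
    pooled = mmk_sojourn_times(k, k * lam, mu)   # one shared M/M/k
    for name, s in (("M/M/1 split", split), ("M/M/k pooled", pooled)):
        print(f"{name}: mean={statistics.mean(s):.2f} "
              f"stdev={statistics.stdev(s):.2f}")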
L
So if you have use cases like the ones we have in the FIRA paper, like AR/VR, the reduction of the latency variance is actually a very, very good thing, right? What you also have is a scenario that we have in the FIRA workshop paper, about resilience: you share the damage. We're overloading one of the servers; if you have been attached to that server with a long-lived affinity, obviously you're affected immediately, and you will be affected for the duration of the outage, while with the L3 mechanism, the scheduling happening across the servers allows you to distribute the impact of that.

L
It's not a failed server — we actually increased the latency of that server — and the impact is distributed across all of the clients, and hence you get overall better performance: still good-enough performance for everybody, while nobody is really badly affected. So these are some of the takeaways we have in this FIRA paper; but thanks for the reference — we'll certainly look at this one as well.
C
I just wanted to make sure — who else is in the queue? Yes, Dirk.

D
I would also say that, in the interest of time, this should be the last question, or maybe we even take this question to the list, so that we have time for the next couple of talks.

D
Okay — I think, Andy, over to you. Thank you, that was easy. Dirk, you're driving the slides, great.
H
Okay, this is all fairly new to me. Oh, and it's come up on the wrong camera as well, so you're getting a side view; I won't try and fix that. This is some work put together fairly recently. Maybe go on to the next slide.

H
And we've got — that's the reference for the workshop. We think the agenda hasn't actually been fully published yet, but presumably it will be very shortly.

H
One of the nice things about this is we've had a pretty broad range of input, including from automotive and video surveillance partners, which I'll come to in a couple of slides. The background is the question that we were asking ourselves, looking at some of the applications we've got.
H
The
general
go-to
solution
that
exists
at
the
moment
is
microservices
based
architecture
normally
based
on
on
very
one
or
other
form
of
container
modularization
and
hosting
and
arrangements,
and
there's
an
awful
lot
of
traction
of
that
in
in
the
industry
at
the
moment,
and
we
started
by
noting
that
there's
a
lot
of
good
reasons
why
the
modularity
associated
with
containers
seems
to
be
a
good
thing,
covering
quite
a
wide
range
of
things,
so,
starting
even
from
the
code
development,
if
you're
following
the
agile
agenda
and
looking
for
a
modularity
ability
to
refactor
and
so
on.
H
It
provides
a
very
good
way.
A
good
unit
of
modularity
in
your
application.
Development
is
the
basic
point
of
abstraction
service,
abstraction
by
which
the
point
of
which
you
don't
see
inside
to
the
implementation
again,
which
goes
alongside
some
of
the
ability
to
to
refactor
without
impacting
the
wider
system.
H
It
provides
a
heterogeneity
between
development
or
different
runtime
environments,
different
language
environments
and
so
on.
It's
also
a
point
of
integration
and
module
test
and
end-to-end
system
tests
in
ci
cd
pipelines.
H
So
that's
the
way
we'd
encountered
it
at
the
moment
and
certainly
what
we
were
taking
is
there's
a
lot
of
momentum
to
say
this
is
a
jolly
good
thing.
All
these
things
come
together
at
one
point,
but
it
does
still
ask
the
question:
does
one
size
really
work
for
for
all,
and
so
we
were
identifying
we'd
be
identifying
a
number
of
issues
where
it's
not
immediately
clear
that
this
is
the
best
long-term
answer
or
it
can't
be
improved.
H
H
For
some
of
these
other
aspects,
not
necessarily
distribution
as
in
physical
distribution,
and
when
we
think
about
physical
distribution,
there
are
some
more
demanding
constraints
that
don't
present
a
great,
more
concern:
more
complexity
in
working
out,
the
distribution
of
the
management
of
the
distribution,
the
scale
of
the
service
abstraction,
the
point
of
service
abstraction
is
largely
fixed,
so
once
you've
decided
your
web
services,
for
example,
your
web
services
interface,
it's
hard
to
go
back
and
then
break
it
open
into
sub
components
that
are
within
it
and
say
I
want
to
distribute
a
part:
that's
within
a
container
module,
take
that
out
and
distribute
it
somewhere
else,
which
they
also
sets
up
a
a
trade-off
that
you
have
to
decide
fairly
early
on
in
the
in
your
in
your
application
development.
H
Some of the background to the way we've been looking at this: there are a couple of use cases that have been really interesting — certainly I found they caused a lot of good thinking — in understanding what drives the potential for physical distribution.

H
The first, video surveillance: if you bring all the information up to the cloud, well, firstly you require the full video bandwidth all the way up to the cloud. You've also got the potential issue that what you've uploaded is the full raw video stream, with all sorts of other information which you don't necessarily want exposed and cross-correlated in a way that you never intended for the particular application.

H
The other one — in many ways completely different — is looking at the automated production facilities in a smart factory, where the sort of things that we understand are developing there are going for a greater scale of production automation, changing from an environment where you've got a lot of legacy or existing interfaces for sensors and actuators...
H
...to an architecture that's a lot more modular, where the compute facility may be very small boards — where a Raspberry Pi may even appear fairly heavyweight — that are connected directly to the actuators and sensors, but are also doing a lot of the compute in a distributed way. And when you start looking at this, it looks architecturally very similar to some of the applications we've been looking at that are much more large-scale, WAN-type distribution.

H
So it seemed to us that there's a growing convergence between these: what are at the moment very specialist, very small network environments in production facilities are coming to look much more like the sorts of distributed applications that we might see in a wider network.
H
I
think,
coupled
with
that
is
one
of
the
things
that
drives
this
is
that
as
the
production
gets
more
complicated,
the
time
to
reprogram
is
a
big
concern
and
the
modularization
of
the
compute
potentially
can
help
that
very
considerably.
H
Key
points
are
that
at
the
moment
there
is
almost
complete
isolation
between
application,
namespaces
and
network
addressing,
and
we
end
up
with
these
fairly
heavyweight
adjunct
devices
that,
amongst
other
things,
are
essentially
mapping
the
application
namespace
to
network
addressing
so
the
sidecars
and
the
proxy
load
balancers,
and
so
on
the
so,
what's
what
we're
looking
at
and
what
and
what
we've
been
and
what
paper
comes
to
is
what
ways
can
we
improve
the
way
the
network
works?
H
That
can
give
a
much
greater
visibility
and
connection
between
the
name,
spaces
of
the
application
and
network
addressing
and
so
three
things
that
we've
been
they've
been
looking
at
and
the
three
things
that
the
paper
concentrates
on
that's.
Firstly,.
H
So
the
the
the
first
one
is
bringing
together
the
compiler
and
the
orchestrator,
which
are
the
things:
the
compiler
maps,
the
application
name
space
to
addressing
in
the
computer,
architect,
architecture,
the
orchestrator
maps,
application
things
to
or
the
services
to
network
addresses.
H
If
we
could
bring
the
two
together,
then
the
application
could
see
much
closer
to
the
way
the
network
addressing
works,
and-
and
this
is
one
of
the
things
that
we've
been,
that
would
potentially
avoid
the
need
for
the
side,
cars
and
the
proxies.
H
There's
a
there's,
a
lot
that
would
look
into
this
and
happy
to
to
take
further
discussion
on
the
next
one.
Is
that
a
an
efficient
way
of
of
trying
to
bring
this
into
a
common
framework
is
defining
all
the
layering,
whether
there's
layer,
seven
layer,
four
layer,
three
whatever,
rather
than
by
an
intent
of
what
should
be
we're
actually
looking
at
what
a
function
executes
on
and
what
it's
transparent
to
by
an
observation
of
what
it
does.
That
then
gives
a
clear
framework
for
both
the
application
and
for
the
network.
C
Can
you
please
we
we
have
another
presentation
and
we
promised
the
person
full
15
minutes,
so
we
really
need
to
conclude
now.
H
Okay, this is the very last point here. The final one, which is the most radical, is that, in order to join up the network addressing much more coherently with the namespaces of applications, the network addressing would work much better if it started as fundamentally private addressing that can then be extensible and contextualizable, in the same way that namespaces work. This would also facilitate security.

C
We really need to move on. We can move the discussion on this to the list. I see there's discussion in the chat; maybe you want to have a look at that and respond, and we're going to put that in the minutes. Thank you very much. And Tushar —

C
please, we're waiting for you.
M
Can you hear me? Yes. Is it possible to give me control of the slides?

A
Yeah, I'm just trying to find you in the list... here we are, okay. Sorry — you have control.

M
Awesome. So hi everyone, I'm Tushar Swamy, and I'm going to be talking about building adaptive networks with machine learning and, more generally, the role of machine learning in networking infrastructure.
M
More and more we're seeing network complexity increase, and networks can benefit from data-driven decisions rather than the many hand-tuned heuristics that we find in the network today. Machine learning is a good solution here, in the sense that we're essentially customizing our algorithms to the traffic and data that we're seeing in the network.

M
This isn't, in and of itself, a novel idea; there's been a bunch of papers published on everything from security to control to analytics — different kinds of machine learning applications.

M
But the issue that we found was that a lot of these are just running in something like TensorFlow or PyTorch, essentially in software, and that means it's not really feasible to deploy them into a network, because it's not clear how exactly they would fit into the network and where they would run. That led to the first piece of our project, which was Taurus — and I'll go over this quickly, because I talked about this on Monday.
M
But the general idea is that we're going to take our software-defined network and slightly modify it, so that policy creation produces not just flow rules but also machine learning training, and then in the data plane, in addition to our typical packet forwarding with match-action tables, we're also going to do decision making with machine learning inference. So the control plane will be developing new models based on information taken from the network, and it's going to be installing model weights into the data plane, similar to flow rules.
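[Editor's note: to make that control-plane/data-plane split concrete, here is a minimal sketch in Python of a control loop that treats model weights the way SDN treats flow rules. This is an illustration only: the `switch` and `trainer` objects and every method name are assumptions for this sketch, not the actual Taurus interfaces.]

```python
# Hypothetical sketch, not the real Taurus API: a control plane that installs
# ML model weights into the data plane alongside ordinary flow rules.
import time

class ControlPlane:
    def __init__(self, switch, trainer):
        self.switch = switch    # data-plane device handle (assumed interface)
        self.trainer = trainer  # offline training component (assumed interface)

    def run(self, interval_s=60):
        while True:
            # Develop new models from information taken out of the network.
            telemetry = self.switch.collect_telemetry()
            model = self.trainer.retrain(telemetry)
            # Install the results like flow rules: rules for forwarding,
            # weights for per-packet inference.
            self.switch.install_flow_rules(model.flow_rules)
            self.switch.install_model_weights(model.weights)
            time.sleep(interval_s)
```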
M
So the issue here was: if we're operating in the data plane, we need to be doing our ML inference fast, to keep up with typical data plane operations.
M
And so that led to Taurus, which is a switch architecture, a pipeline for enabling ML inference at line rate at a per-packet level, essentially giving you a programmable fabric so that you can put in different kinds of machine learning applications while still meeting your per-packet line-rate operation, and so we're reusing...
M
So the takeaway there was that the robustness of your network is going to be based on the quality and speed of your reaction, and that means that machine learning inference should happen at a per-packet level in the data plane, and this is what the Taurus architecture aims to do. So we published that in ASPLOS, but that brought up some follow-up issues.
M
Namely, how do we program a Taurus-like architecture? Because now what's happening is that we're asking network operators to be familiar not just with networking but with machine learning as well.
M
So it's actually kind of a lot to ask, and this was one thing we found by talking to different people in the networking community. They said: if you gave us an easier way to program this, an easier stack, we'd be more inclined to use it. And so that led to our next project, which was Homunculus, essentially a high-level compiler, or framework, for generating these data plane models depending on what kind of hardware you have available.
M
So the idea was that the user gets these very high-level directives that they can use to essentially express their applications, and they can provide the different network and resource constraints that are available in their environment, and then the compiler will simply generate binaries for your different data planes, in this case a Taurus switch with optimized ML models.
M
So this is the general architecture of the Homunculus compiler. There's some more complicated stuff on the inside, but it can really be broken down, on the left here, into three core pieces: the front end, the optimization core, and the back end. The idea here is that the user inputs some sort of data set for their application.
M
Let's say, in the case of anomaly detection, maybe you're using the KDD intrusion detection data set. Then there are constraints: they're saying my switch has this many resources, maybe some limitation on on-chip SRAM or DSPs, match-action tables, and whatnot, and also network constraints, like: I need to run at one gigapacket per second, or I have a latency requirement from my SLO objectives, so each switch needs to run in under, you know, 500 nanoseconds, or something like that.
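[Editor's note: as a concrete illustration, the sketch below captures the resource and network constraints just listed in plain Python. The field names are invented for this example; they are not Homunculus's actual schema.]

```python
# Illustrative constraint spec (field names are this sketch's invention).
from dataclasses import dataclass

@dataclass
class SwitchConstraints:
    sram_kb: int               # on-chip SRAM budget
    dsps: int                  # available DSP blocks
    match_action_tables: int   # match-action table budget
    throughput_gpps: float     # required throughput, gigapackets per second
    latency_ns: float          # per-switch latency bound, e.g. from an SLO

constraints = SwitchConstraints(
    sram_kb=4096,
    dsps=1024,
    match_action_tables=16,
    throughput_gpps=1.0,   # "run at one gigapacket per second"
    latency_ns=500.0,      # "under 500 nanoseconds"
)
```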
M
So, a little bit more concretely, you can imagine the user is programming their switch in, say, P4, and then the machine learning portion is programmed by providing this data and configuration information, and then we're going to generate these ML models. And this is sort of the key here: we have so many different constraints between the network and the physical resources available that it makes it very difficult for a human to program.
M
But that also means that it actually reduces the search space, in an AutoML fashion, to the point that we can reasonably traverse the AutoML space and come up with optimized models. And to do that, we use multi-objective Bayesian optimization with feasibility constraints generated from the network and hardware constraints; so we're generating different ML models, testing their feasibility, using that to guide the AutoML search, and then generating whatever back-end-specific code you need for your switch.
M
So, just going through the different pieces here quickly and in a little more depth: at the front end, which we call Alchemy, it's really just a Python library. This is the full code that you need for generating a simple model; in this case we're doing anomaly detection. You can see at the top there, you import your Alchemy library.
M
In the second block here, you provide some user function for loading your data; in this case we're just loading it from a CSV file, and it sits under this data loader annotation, which allows the compiler to wrap it and figure out what to do with it.
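[Editor's note: the slide code itself isn't in the transcript, so the following is a hedged reconstruction of what such a front end might look like. A local `data_loader` decorator stands in for the real Alchemy annotation, and the function and file names are guesses.]

```python
# Hedged reconstruction of the Alchemy-style front end (names are stand-ins).
import csv

def data_loader(fn):
    # Stand-in for the Alchemy annotation that lets the compiler wrap the
    # user's function and figure out what to do with the data it returns.
    fn.is_data_loader = True
    return fn

@data_loader
def load_kdd(path="kddcup.csv"):
    # User-provided loader: here we just read features and labels from a CSV.
    with open(path) as f:
        rows = list(csv.reader(f))
    features = [row[:-1] for row in rows]
    labels = [row[-1] for row in rows]
    return features, labels
```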
M
And then our constraints: we have performance constraints, you can see throughput and latency here, and resource constraints, and all of these constraints are being placed on a Taurus platform. Then, finally, we ask it to generate the binary, or a bitstream in this case. So, moving to the optimization core: once the user has provided all of this information, we actually want to generate our models. This is the piece where, as I mentioned earlier, we're doing Bayesian optimization using the HyperMapper package, and HyperMapper is suggesting batches of hyperparameters.
M
So this is everything from, say, the number of neurons and layers in the DNN, or the different unrolling factors on different layers in your DNN, to different parameters in the hardware that would allow it to be mapped more efficiently. These will be sent to Homunculus, which is then going to start producing candidate models based on these hyperparameters; it's going to test models and see how well they're performing, say how good or bad your accuracy is, and then do these feasibility checks.
M
So if someone requested one gigapacket per second of throughput: did this model that I tried out actually meet that? Does this model map properly onto these resources? And so all that information is going to be sent back to HyperMapper, which is going to continue the Bayesian optimization process, and based on that information it's going to refine its search more and more.
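[Editor's note: schematically, the loop just described looks like the sketch below. Random sampling stands in for HyperMapper's Bayesian optimization (its real API differs), and the accuracy and latency formulas are toy stand-ins; the point is only the shape of the search: suggest hyperparameters, build and test a candidate, check feasibility against the requested constraints, and feed the results back.]

```python
# Toy version of the feasibility-guided search loop (not HyperMapper's API).
import random

def search(rounds=50, throughput_goal_gpps=1.0, latency_budget_ns=500.0):
    best = (None, -1.0)
    for _ in range(rounds):
        # "Suggested" hyperparameters: neurons, layers, unrolling factors...
        hp = {"layers": random.randint(1, 4),
              "neurons": random.choice([8, 16, 32, 64])}
        # Toy stand-ins for training/testing a candidate model.
        accuracy = 0.70 + 0.04 * hp["layers"] + 0.001 * hp["neurons"]
        latency_ns = 100.0 * hp["layers"] + 2.0 * hp["neurons"]
        throughput_gpps = 2.0 / hp["layers"]
        # Feasibility checks: did the candidate meet what was requested?
        feasible = (latency_ns <= latency_budget_ns and
                    throughput_gpps >= throughput_goal_gpps)
        if feasible and accuracy > best[1]:
            best = (hp, accuracy)  # a real optimizer would refine its model here
    return best

print(search())
```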
M
And that's really just built from a template library. This is all supposed to be modular, so you can slot different back ends into Homunculus, but in this case we're building a multi-class classifier with a DNN, with packet parsing and deparsing. So we're just building it out of these smaller component functions, and these will all be customized based on those hyperparameters that were suggested by the Bayesian optimization process.
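[Editor's note: a minimal sketch of that template idea follows. Small component functions for parsing, classification, and deparsing are composed into one pipeline and specialized by the chosen hyperparameters; these Python stand-ins only mimic the structure, while the real templates emit hardware back-end code.]

```python
# Stand-in "templates": compose a packet pipeline from component functions.
def packet_parser(fields):
    return lambda pkt: [pkt[f] for f in fields]   # headers -> feature vector

def dnn_classifier(layers, neurons):
    # Placeholder for a generated multi-class DNN, specialized by the
    # hyperparameters suggested during the Bayesian optimization search.
    return lambda features: {"class": hash(tuple(features)) % 3,
                             "shape": (layers, neurons)}

def packet_deparser():
    return lambda result: result["class"]         # write the decision back out

def compose(stages):
    def pipeline(pkt):
        out = pkt
        for stage in stages:
            out = stage(out)
        return out
    return pipeline

pipeline = compose([packet_parser(["src_ip", "dst_port"]),
                    dnn_classifier(layers=3, neurons=16),
                    packet_deparser()])
print(pipeline({"src_ip": "10.0.0.1", "dst_port": 443}))
```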
M
So, just some quick results here: we tested a baseline of three different applications against our Homunculus version that was automatically generated. The baselines are hand-tuned, and you can see that in all of these cases we're actually getting a higher F1 score, and this is without any human intervention. The real secret behind this is that the baseline applications are done abstractly: they're just put into TensorFlow or PyTorch, whereas in Homunculus you're specifying the actual platform.
M
So, how do we take telemetry data and clean that data to feed it to Homunculus, so that we have this loop of the network essentially taking measurements from its own data plane, building progressively newer and better machine learning models, and then installing those back into the data plane, just keeping this loop going?
M
So we have a simple pipeline here, set up with a streaming database, doing basic cleaning, data extraction and repair, augmentation, transformation, and then some automatic labeling with different oracles. But really the takeaway here is completing this loop, which gives you this whole adaptive feedback loop within your network, allowing it to modify itself for the different ML applications that you're working with.
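[Editor's note: to illustrate the shape of that loop, here is a minimal Python sketch under the same assumed `switch`/`trainer` interfaces as the earlier control-plane example; the cleaning, augmentation, and oracle-labeling stages are deliberately simplistic placeholders, not the real pipeline.]

```python
# Illustrative feedback loop: telemetry -> clean/augment/label -> retrain ->
# install back into the data plane. All stage logic is a placeholder.
def clean(records):
    return [r for r in records if r]              # drop empty/broken samples

def augment(records):
    return records + [dict(r, replayed=True) for r in records]

def label_with_oracle(records):
    # Automatic labeling with a trivial "oracle": big flows are anomalies.
    return [(r, "anomaly" if r.get("bytes", 0) > 1_000_000 else "normal")
            for r in records]

def feedback_loop(switch, trainer, iterations=3):
    for _ in range(iterations):
        raw = switch.collect_telemetry()          # measure the data plane
        data = label_with_oracle(augment(clean(raw)))
        model = trainer.retrain(data)             # progressively better models
        switch.install_model_weights(model)       # close the loop
```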
M
So that's it for me. I have links here for the Taurus and Homunculus papers, and we actually have an FPGA testbed for Taurus if people want to try it out, and at SIGCOMM this August we're giving a full-day tutorial if people want to get their hands dirty. So I'm happy to take any questions.
C
Thank you very, very much. This is the type of research that this group is absolutely interested in, not only because it has this idea of computing in the network, but also because of the data-driven approaches and the idea that AI can be a tool in networking, not just some kind of magic that people think they can put everywhere. I don't see any questions.
D
Okay, this is Eve Schooler, and I guess this goes back to, I mean, what I heard you say was that the ML needs to be line rate, the feature extraction, and that seems to be sort of one of the challenges here: it's not just match-action as we know it for packet headers, but there's this whole other kind of algorithmic stuff that has to happen at very high speed.
D
What kinds of feature extraction? I mean, you said there were sort of three use cases that you applied this to, but I've been very curious for a long time: in this group we've talked about something called ubiquitous witness, which is really taking image data and doing feature extraction. Have you done anything along those lines, which would potentially have a larger overhead and kind of thwart you in this regard?
M
So we haven't done much with image data at the moment. In this case, feature extraction is more based around predefined headers; say, with our packet parsers we would be pulling out something like an IP address, or something along those lines. But yeah, we're looking to start dealing with some sort of image classification networks in the data plane; we're just not certain yet.
M
Actually, we're not certain what the best use case here would be to motivate, say, grabbing images from your packets, and then how that applies to the networking.
M
But yeah, it's definitely an interesting thing. There is somewhat of a downside in that the convolutional neural networks that are used for image classification tend to be very large, and we're still fairly resource-limited on the switch. Even though we can do some, you know, data plane machine learning, the networks do have to be smaller, on average, than the ResNets and those kinds of CIFAR-10 networks that you see in the typical ML competitions.
F
I am Hisham. So my question is: do you update the model inline? Do you update the model inline in the data plane itself, or is the model created in the control plane automatically using these tools, and does it also get updated by the control plane, not in the data plane? Right, right. Okay, that's it, yeah.
M
Yeah, so as far as the data plane is concerned, it's just a static model; the data plane is doing inference by applying it. And then the control plane is responsible for taking measurements in the network, refining models, and then, in its own judgment, when it's time, sending those models out to the data plane.
C
Okay, well, I guess we're over time. So thank you; we're going to see you at the interim. And Tushar,
C
please join our list and participate in our discussions, because your work is absolutely related to what we want this group to evolve into, not just the original, you know, using P4 to do match-action, but what else we can do with all of this once we have a framework. I would like to thank Lixia, who has been, well, for me an avatar, but for you guys a real person.
C
Thank you very much, Lixia, for having given your time for us; hopefully it was good. I would also like to thank Dirk and Dave for having allowed us to be hosted in their two hours. We reorganized this a little bit, a bit late, because we gave our own slot to somebody else who could not use their Thursday slot, and I would like to really thank ICNRG for having hosted us.
C
I think we are showing more and more that there is so much in common between the two communities, so I think it was good, and maybe we will have other similar co-located meetings. I was thinking that this afternoon there's a distributed networking meeting, which also has a lot of overlap with what we do, I think, in terms of the research that was presented today.
C
I wish us all to feel better, and for the other ones who are not sick: well, don't get sick. Please stay safe, everyone, have a good flight back, and hopefully we'll be able to see you in London. Thank you.