From YouTube: IETF117-ANRW-20230724-1630
Description
ANRW meeting session at IETF117
2023/07/24 1630
https://datatracker.ietf.org/meeting/117/proceedings/
C: Please come in and take your seats; we'll be starting the workshop in one minute.
C: Okay, great, let's get started. Hello everyone, welcome to the ACM/IRTF Applied Networking Research Workshop. I'm Francis Yan from Microsoft Research, and here with me is Maria Apostolaki from Princeton University. Thank you all for being here and for participating in our workshop. As you know, ANRW is more than a workshop: it provides a wonderful opportunity and a unique forum for everyone who is interested in applied networking research, from all over the world, to come together to share insights and experiences and exchange ideas. It's also a bridge between the IETF and academia.
C: So we hope that by attending today's workshop you will not only find it intellectually stimulating but also professionally rewarding, as you make connections with like-minded people. We have a full day of events, with many insightful research papers, a keynote, and a panel discussion. However, before we officially get started, we are required to present some IRTF policies.
C: So, first of all, since ANRW is not a research group within the IRTF, the intellectual property rights disclosure rules of the IRTF do not apply to ANRW, which means that you're not obligated to disclose intellectual property relating to the presentations or the contributions made to ANRW. The audio and video of this workshop will be recorded and posted online, and you agree to be recorded unless you indicate otherwise.
C
And
finally,
by
attending
this
Workshop,
you
agree
to
the
privacy
policy
of
ietf
and
you
need
to
adhere
to
the
code
of
conduct
okay.
So
let's
get
back
to
the
main
agenda
today.
First
of
all,
Maria
and
I
would
like
to
thank
all
our
PC
members.
So
we
have
22
people
on
the
PC
this
year
and
together
they
have
done
a
fantastic
job,
writing
64
reviews
for
21
papers
and
among
the
21
summations.
We
have
accepted
11
papers.
So
that's
about
50
acceptance
rate
where
right
on
target.
C
Thank
you
all
for
their
contribution
and
the
service
we'd
like
to
also
thank
our
support.
Team
Colin,
Amy,
Alexa
and
Greg,
especially
a
big
shout
out
to
Colin,
who
is
the
irtf
chair
and
is
chair
of
the
steering
committee
of
nrw.
We
couldn't
have
pulled
itself
without
your
help.
C
Start
working
one
second,
could
you
maybe
press
the
right
button.
C: All right, it's working again; let's go back to here. Okay, next we'd also like to thank our sponsors, Comcast and Netflix, for their generous financial support for the travel grants. Thank you. And here's the overview of today's agenda. We're in the first session, and we're going to start with the keynote, followed by research papers on IoT, programmable networks, and network measurement. In the afternoon, in the second session, we're going to have additional papers on network measurement, along with papers on security and privacy.
C
In
the
last
session
of
today,
there
will
be
papers
on
DNS
and
the
bgp.
So,
depending
on
your
interest,
you
could
select
to
attend
some
of
the
sessions
where
you're
encouraged
to
attend
all
sessions
and
with
more
details.
It's
our
pleasure
and
honor
to
have
our
guest
speaker,
Professor
Phil
Levis
from
Stanford
University.
He
was
also
my
PhD
co-advisor
back
there.
He
will
deliver
a
keynote.
It's
the
end
of
dram.
C: In the second session, we will have papers on quantifying the effects of TCP options, QUIC, and CDNs on throughput; on mapping the Ukrainian refugee crisis using Internet measurements; and on security and privacy research opportunities for IETF protocols. After that, we will host a panel discussion moderated by Maria, with Lixia Zhang from UCLA, Chris Wood from Cloudflare, and Jörg from TUM; the panel will be about what we want the Internet to look like in 20 years. In the last session, we have many papers on BGP and DNS: we're going to learn about the barriers to working with public RIR-level data and a call for collaboration; DNS integrations; repeatable name resolution with full dependency provenance; enabling multi-hop ISP and hypergiant collaboration; and, last one, practical anomaly detection in large BGP-related VPN networks.
C: We kindly ask you to join the queue by logging into Meetecho if you have a question for the presenter, whether you're in person or remote. We have this single shared queue for both remote and in-person participants, for fairness reasons; as networking people, we care a lot about fairness and queuing policies, right? So we have implemented...
C
This
fifo
queue
with
drop
tail
policy,
which
means,
if
you
are
at
the
end
of
the
queue-
and
we
run
out
of
time
for
that
presentation,
you
will
be
dropped
like
a
packet,
but
you
should
always,
you
know,
feel
free
to
follow
up
with
the
presenter.
After
the
presentation
and
the
meeting
links
to
each
session,
the
mid
Echo
links
can
be
found
on
the
ietf
agenda
or
the
specific
page
for
in
RW.
Okay,
now
before
a
hand
over
my
microphone
to
the
microphone
to
fill
levels.
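[Editor's note: a minimal Python sketch of the FIFO-with-drop-tail policy the chairs are joking about; the capacity parameter stands in for the session's remaining time and is purely illustrative.]

```python
from collections import deque

class DropTailQueue:
    """FIFO question queue with a drop-tail policy: arrivals that find
    the queue full are dropped, like packets at a full router buffer."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.questions = deque()

    def enqueue(self, who):
        if len(self.questions) >= self.capacity:
            return False  # tail-dropped: follow up with the presenter later
        self.questions.append(who)
        return True

    def dequeue(self):
        return self.questions.popleft() if self.questions else None
```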
C: Let me just give a brief introduction of him. Phil is a professor in the CS and EE departments at Stanford University and, as I said, he was my PhD co-advisor there, and it was a great pleasure working with him for years. Phil's research generally spans operating systems, networks, and software design, especially for embedded systems, and the results of his work have been running on hundreds of thousands of devices and have served as the basis for Internet standards.
E: All right, yeah. So Francis asked me for an exciting or interesting talk, so I tried to come up with an exciting or interesting title. Giving this talk to a group such as this: if there are folks among you who work in hardware, I would love to hear your thoughts; tell me that I'm wrong. Please also ask questions anytime.
E: I did work in security, TLS, congestion control with Francis, and video streaming with Puffer, with Francis. I actually haven't done much networking research in the past few years, so I don't have a lot of cool, interesting, cutting-edge networking stuff to talk about, so I apologize. But instead I want to talk about something which I have found really interesting, and which has been very confusing to me as somebody building software at the edge of the network, on end devices.
E: So here's the basic summary: our networks are going to get faster. Our processors are going to get faster. But we're not going to get more RAM. The cost per bit is flat; it's been flat for 10 years, if you haven't noticed, and it's not going to go down. The performance of RAM is almost there too: it's been flat in latency for a long time, and it's going to flatten in throughput, probably in the next generation.
E
Unless
there
are
some
really
huge
shifts-
and
it's
not
clear
where
they'll
happen,
we'll
see
what
this
means
is.
If
you
look
10
years
forward,
computers
are
going
to
be
really
different
and
so
will
their
applications
and
so
kind
of
the
tagline.
Is
it's
the
end
of
dram,
as
we
know
it.
E
There
we
go
yeah,
okay,
good,
so
here's
the
summary
for
the
rest
of
the
talk,
I'm
going
to
explain,
what's
happening
with
scaling,
why
is
it
that
Ram
isn't
getting
cheaper?
Why
is
it
not
going
to
get
cheaper,
then
I'm
going
to
talk
about
what's
happening
with
its
performance,
what's
happening
with
latency
and
throughput?
Why
isn't
Ram
latency
going
down?
E
In
fact,
why
is
it
going
up
if
you
have
an
L3
cache
Miss
it's
going
up
and
also
why
throughput
is
going
to
go
up
for
a
little
while,
but
not
much
longer,
unless
there's
some
really
big
shifts
where
it's
unclear
they'll
happen.
E
What
this
means
going
forward
is
there's
going
to
be
three
kinds
of
memory
in
our
computers
and
I'll
talk
about
what
those
are
and
what
their
properties
are
and
at
the
end,
I'll
talk
a
little
twist
sort
of
this
evolution
of
the
pcie
bus,
cxl
compute
express,
link
and
kind
of
how
that
will
change
things
in
a
bunch
of
ways.
E
So,
let's
start
at
the
beginning,
why
is
Ram
not
getting
cheaper?
So
here's
a
plot
of
you,
know,
versions
of
DDR.
Six
is
still
you
know
hypothetical.
It's
in
workergetic,
so
here's
ddr1
through
ddr5
and
what
you
can
see
is
you
know
over
the
past
22
years
the
throughput
has
gone
up
tremendously.
There's
some
latency
improvements
in
the
beginning,
but
for
the
past
three
generations
cost
has
been
flat
cost
per
bit.
E
If
you
look
at
sort
of
a
standard
size
dim
in
the
amount,
the
dim
costs
and
you
compute
cost
per
cost
per
gigabyte.
It's
been
two
to
three
dollars.
Now.
There's
huge
variations
in
this
in
time,
because
the
ram
Market
is
very
sensitive
supply
and
demand.
But
if
you
sort
of
average
it
out,
you
get
two
to
three
dollars.
So,
for
example,
a
few
months
ago,
even
ddr5
was
very
expensive
because
there
are
shortage
of
it.
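[Editor's note: a quick back-of-the-envelope check of the flat cost-per-gigabyte claim. The DIMM prices below are illustrative assumptions, not figures from the talk.]

```python
# Hypothetical street prices for 32 GB DIMMs across recent generations (USD).
dimm_prices = {"DDR3 32 GB": 70, "DDR4 32 GB": 75, "DDR5 32 GB": 90}

for name, price in dimm_prices.items():
    print(f"{name}: ${price / 32:.2f} per gigabyte")
# Every generation lands in the same ~$2-3/GB band the speaker describes.
```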
E: So why is this happening? I think the basic summary is: a DRAM bit is a transistor and a capacitor, and that's it. You can't make this circuit much simpler; it's about all you've got. And so, unlike with a processor, where there's lots of design flexibility, the cost of RAM is basically the cost of manufacturing transistors.
E
And
if
you
look
at
the
cost
of
manufacturing
transistors
somewhere
around
28
nanometers,
it
became
flat
the
cost
of
actually
manufacturing
a
gate
stopped
going
down.
So
we
continue
to
be
able
to
make
Gates
smaller
and
smaller
Gates.
You
know
we
can
continue
sort
of
Moore's
Law,
you
know,
Denard
Skilling
is
dead.
We
can
keep
on
going
to.
You
know
new,
smaller
and
smaller
nodes,
but
the
cost
per
transistor
of
manufacturing
is
not
going
down.
So
I
can
make
you
dram
that's
twice
as
dense.
E
It
just
costs
twice
as
much
the
same
cost
per
bit,
because
again,
a
dram
cell
is
just
a
transistor,
and
so
for
a
really
long
time.
You
know
all
of
computing
was
able
to
chase.
You
know
this
exponential
decrease
in
cost
right.
So
when
a
new
process
appeared,
we
would
move
to
the
new
process
because
gosh,
suddenly,
all
of
your
existing
things
cost
say
half
as
much
to
make
right
you
sort
of
reduce
the
size.
You
get
twice
those
transistors
per
unit
area.
Great
everything
is
twice
the
cost
half
the
amount
to
make.
E
Therefore,
you
move
to
the
new
process
great,
and
that
then
gives
you
those
savings
on
everything
that
you're
selling,
but
this
has
basically
ended,
and
you
know
you
talk
to
Architects
and
they
say:
oh
yeah
didn't
you
know,
and
it
turns
out
lots
of
people
didn't
know
and
we're
now
kind
of
fighting
this
fixed
cost
per
transistor.
E
And
so
here's
sort
of
if
we
sort
of
chart
the
size
of
process
nodes,
you
know
over
time
something
happened
right
at
28
nanometers.
That's
when
suddenly
this
scaling
stopped.
E
So
what
happened
so
28
nanometers
is
when
we
went
from
planar
transistors
to
finfets.
So
basically,
things
are
getting
so
small.
We
need
more
surface
area,
I'd
be
able
to
actually
control.
You
know
the
voltage
across
the
gate,
and
so
this
is
when
Gates
started
becoming
three-dimensional,
and
so
they
started
becoming
more
expensive
to
make.
So
these
next
nodes,
you
know
like
the
next
one's
the
tsmc,
is
putting
out
even
finfets.
Aren't
enough.
E
We
have
gate
all
around
where
suddenly
there's
multiple
fins
or
cylinders,
because
you
just
really
need
to
affect
the
surface
area,
to
volume
ratio
to
control
the
gate
so
making
these
transistors
is
getting
really
really
hard
harder
and
harder.
It's
not
just
simple
scaling.
You
need
new
processes
right
and
so
basically
above
28
nanometers,
we
used
planar
Gates,
now
we're
using
finfets
and
soon
gate
all
around
GAA
and
the
other
problem
is
that,
as
we
make
our
gates
smaller
and
smaller,
it
gets
harder
and
harder
to
actually
do
the
manufacturing.
E
So
this
is
totally
like
a
physical
manufacturing
engineering
problem.
So
if
you
look
at
sort
of
the
number
of
steps
it
takes
to
produce,
you
know
a
die
at
different
sizes.
You
realize
wait
like
just
going
as
we're
getting
to
smaller
and
smaller
nodes,
just
the
amount
of
cleaning
you
have
to
do.
The
number
of
logic.
E
Other
steps
and
processing
steps
you
have
to
do
is
just
going
up,
so
we
can
make
put
more
transistors
onto
a
diet,
but
gosh
we
have
to
now
suddenly
run
many
more
processes
in
order
to
actually
produce
them.
It's
not
just
turning
a
crank
and
scaling
the
process
down,
and
this
has
all
kinds
of
other
implications
as
things
get
smaller
and
smaller.
So
here's
sort
of
a
nice
chart
of
you
know
now
allowable
20,
nanometer
particles
per
milliliter
of
chemicals
and
in
2022
it's
going
under
one
right,
so
you
can
have
like
one.
E: And it's not just that. We make dies and chips through lithography: basically you put down a mask and you shine light, and then, where the light shone, you can strip that stuff off, and there are all kinds of steps; you do this with etching and metal, etc., etc.
E
So
if
we
go
to
sort
of
patterning
below
28,
nanometer
28
nanometers,
if
you
look
at
the
frequencies
of
light,
basically
there's
a
point
where
stock
being
able
to
use
visible
light
or
ultraviolet
light,
we're
now
at
the
point
where
we're
using
extreme
ultraviolet
light
so
10
to
100
nanometer
wavelengths
so
way
out
there
on
the
Spectrum,
and
so
one
of
the
problems
with
extreme
ultraviolet
light
is
that
everything
absorbs
it
so
the
system's
Master
operating
a
vacuum.
E
You
can't
have
any
lenses,
because
the
lenses
will
just
absorb
the
light,
so
everything
is
through
mirrors,
and
so
this
diagram
on
the
bottom
in
the
center
shows
you
kind
of
the
series
of
focusings
that
go
on
through
a
long
series
of
mirrors
within
an
extreme
ultraviolet
lithography
process,
and
that's
what
you
see
also
on
the
bottom
right:
that's
an
actual
shot
from
an
euv
lithography
device,
so
the
process
to
make
these
is
pretty
crazy.
And
let's
talk
a
little
bit
more
about
that.
E
Just
to
give
you
an
idea
like
what
it
even
takes
to
make
the
light
to
make
these
transistors
at
this
scale.
So
the
system
is
dropping
tiny
droplets
of
molten
tin,
which
then
get
zapped
with
one
ultraviolet
laser
which
causes
these
droplets
of
tin
to
expand
out
into
like
a
disc
which
then
they
can
get
is
out
by
another
laser,
it's
even
more
powerful,
which
then
gets
them
to
produce
13.5
nanometer
light
this
happening
50
000
times
a
second,
so
the
process
that
goes
even
to
make
the
light
for
this
kind
of
manufacturing.
E
Is
you
understand
why
it's
expensive
and
I
mean
you
should
if
you'd
have
an
opportunity,
you
should
go
read
about
how
this
works,
because
whenever
I,
do
it
just
boggles
the
mind
to
me
like
the
kind
of
engineering
that
goes
into
this?
Is
one
person
puts
it?
You
know
the
Precision
of
these
of
this
lithography
is
so
good
that
if
we
were
shining
a
laser
at
the
moon
like
we
could
hit
a
quarter
right
because
it
has
to
be
at
these
skills
for
20
years.
E
E
If
they
couldn't
do
it,
then
we
would
stop
being
able
to
make
chips
smaller,
so
they
made
it
happen,
but
so
the
brief
summary
of
all
this
and
what
the
crazy
engineering
that's
going
on
behind
lithography
today
is
we
can
continue
to
make
smaller
transistors,
but
the
per
transistor
manufacturing
cost
is
flat
because
of
the
degree
of
engineering
that's
required
to
do
so.
E
So
dram
costs
are
flat.
Why
aren't
CPU
costs
flat?
We
do
see
bigger,
CPUs,
faster
CPUs,
so
the
short
answer
is
the
per
core
cost
is
going
down
slightly,
so
here's
just
a
series
of
AMD
server
class
processors,
Rome,
Milan,
Genoa
and
the
Bergamo
cores
are
getting
faster.
You
have
better
accelerators
all
kinds
of
stuff
you
can
do
in
the
design.
E
It's
also
the
case.
The
design
is
a
larger
fraction
of
the
cost
of
CPUs.
A
lot
of
design
work
goes
in
there,
unlike
say
Ram,
which
is
mostly
manufacturing
and
very
thin
margins.
So
there's
stuff
you
can
do
with
CPUs
to
use
your
transistors
more
effectively
right
in
order
to
improve
performance.
Chiplet
designs,
for
example,
is
what
you
know.
E
Md
has
been
doing
for
a
while
makes
design
a
lot
less
expensive,
rather
than
designing
one
big
chip
for
Designing
little
chiplets
and
then
stitching
together
on
an
interconnection
fabric,
so
they're
coming
up
with
ways
to
make
the
whole
process
of
Designing
the
chip
cheaper,
which
then
means
that
gosh
we
can
continue
to
have
faster
chips.
Sort
of
right
you
can
see
sort
of
the
cost
of
the
processors
is
going
up.
E
So,
in
summary,
the
price
per
bit
of
dram-
it's
not
going
down
soon,
it's
basically
the
cost
of
manufacturing
transistors.
There
isn't
much
flexibility
here.
The
cost
of
producing
transistor
spending
is
going
up
because
it's
getting
harder.
E
So
that
means
if
we
want
new
Ram,
that
has
lots
more
bits,
we're
not
going
to
get
it
with
how
we
make
it
today
we're
going
to
need
new
materials.
You
could
do
it
through
some
like
new
fancy
material.
Who
knows
what
there's
nothing
on
the
roadmap
right.
So
even
if
something
appeared
tomorrow,
we're
not
going
to
get
it
for
10
years,
I
was
chatting
with
somebody
who's
talking
with
Micron
on
the
way
the
Micron
person
put.
It
was
yeah
we've
another
periodic
table.
E
We've
tried
everything
right,
there's
no
material
coming
down
the
pipe,
so
there's
Intel
optane
memory,
which,
unfortunately,
is
just
you
know
it's
pursued
for
eight
or
nine
years,
and
the
Micron
that
was
just
shut
down.
So
that
was
an
idea
of
hey.
Maybe
we
can
do
things
a
bit
differently
that
gave
you
a
2X
density.
Improvement
turned
out
that
wasn't
quite
enough
for
the
market.
You
know
maybe
it'll
return
as
pressures
get
harder.
It's
also
non-volatile,
there's
a
bunch
of
other
things,
but
maybe
optane
will
come
back
we'll
see.
E
So
there
are
maybe
some
long
shots.
So
Ram
basically
requires
one
lithographic
step.
You
know
per
sort
of
layer,
unlike
flash
where
you
can
kind
of
the
magic
of
flashes.
You
can
make
many
many
flash
bits
with
a
single
lithographic,
etching,
step
right
or
single
off
the
graphic
step,
basically
you're,
making
bits
down
in
this
in
the
z-axis
or
on
the
y-axis.
E
Maybe
somebody
will
come
up
with
some
fancy
way
of
making
dram
where
we
can,
with
a
single
lithographic
stuff,
make
multiple
layers
that
would
you
know
that
would
be
a
pretty
amazing
thing
and
whoever
does
that
and
patents.
It
will
make
bajillions.
So
people
are
trying
I,
don't
know
if
it'll
happen,
but
that
would
be
something
which
could
maybe
change
this
I
we'll
see.
E
So
one
thing
is
that
dram
is
flat
because
you
think
just
how
highly
engineered
it
is
it's,
this
very
simple
circuit,
let's
just
make
it
as
dense
as
we
can.
Processors
or
networks
still
have
some
Runway
so
generally
Nicks
are
a
few
nodes
behind
other
processes
and
there's
still
lots
of
room
for
acceleration
CPUs.
If
you
look
at
a
modern
CPU
like
a
sapphire
Rapids,
there's
a
ton
of
accelerators
in
there
and
I
think
we're
going
to
see
that
continue,
because
that's
how
you
get
efficiency
in
terms
of
compute
per
watt.
E: Next. All right, great. Again, if anyone has any questions or thinks I'm wrong, please tell me; you're welcome to come up. So that's cost per bit; that's capacity. Let's talk about latency and throughput.
E
So
again,
let's
go
back
to
this
table.
Here's
DDR
through
ddr6-
and
so
you
can
see
latency
is
flat
right,
so
I
mean
it's
going
down
a
little
bit.
You
know
79
74
72.
E
So
why
is
that
so?
First
of
all,
well,
the
actual
latency
of
the
dram
of
the
dim
itself
was
going
down
slightly
The
observed,
latency
of
a
processor
is
going
up,
so
here's,
for
example,
is
looking
at
you
know
a
series
of
four
generations
of
Intel
processors,
Skylake
Cascade
Lake
ice
Lake,
Sapphire,
Rapids,
The,
observed
latency
of
a
level
of
an
L3
cache
Miss
hitting
main
memory
is
going
up
from
87
to
142
nanoseconds.
E
So
the
reason
for
this
is
these
processors
are
getting
bigger
right
and
they
have
more
cores
and
they
have
larger
caches,
so
the
cache
Miss
rates
are
going
down.
So
the
overall
performance
is
going
up
right.
If
you
have
good
speculation
or
prefetching
or
whatever
it
is,
but
The
observed
performance
of
an
L3
cache
Miss
so
think,
like
random
lookups
through
a
hash
table,
the
latency
is
going
up.
Of
course,
you've
got
lots.
Of
course
your
throughput
still
goes
up.
E
You
know
in
that
case,
but
from
a
latency
standpoint
it's
going
on,
and
so,
if
you
look
at
like
what
a
sapphire
Rapids,
you
know
sort
of
just
came
out.
A
few
months
ago,
processor
looks
like
it
starts
to
kind
of
become
clear,
so
here's
the
sapphire
Rapids
die,
and
so
it
has
a
whole
bunch
of
CPU
titles
on
it.
It's
got
its
memory
control
tile.
Each
CPU
has
its
own
L1
and
L2.
Then
there's
the
shared
L3.
You
can
see
all
the
connections
on
it.
E
There's
the
Phi
for
talking
to
other
dies
for
you
know,
for
you
know,
sort
of
the
the
inter
processor
into
the
intercore
interconnect
PCI
Lanes
UPI.
You
know
all
kinds
of
stuff.
E
Of
course,
the
chip
is
four
of
these
dies,
all
stitched
together,
and
so
you
know,
if
you're
accessing
L3
well
a
an
L3
Miss
you're
going
to
have
to
go
across
this
entire
chip
to
see
if
it's
there
and
then,
if
not,
you
might
have
to
also
progress
depending
on
where
the
dram
is
hooked.
In
on
these
different
memory,
control
tiles
right,
you're
gonna
have
to
go
potentially
further.
So
there's
no
question
that
overall
performance
throughput
is
going
up,
but
there
is
this
throughput
latency
trade-off.
E
So
let's
talk
about
the
throughput
of
the
dimms
themselves,
so
this
is
kind
of
the
performance
you
can
get
from
sort
of
that
top
end.
You
know
speeds
of
DDR,
so
say
if
you
have
like
a
modern
ddr5,
you
can
get
something
like
57
gigabytes
per
second
out
of
a
dim
right.
So
that's
a
theoretical
maximum,
of
course,
in
practice
to
be
a
bit
lower,
so
that
turns
out
to
be
7.2
Giga
transfers
per
second.
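[Editor's note: the 57 GB/s figure follows directly from the transfer rate and the standard 64-bit DDR data bus; a quick check:]

```python
transfers_per_second = 7.2e9   # DDR5-7200: 7.2 gigatransfers per second
bus_width_bytes = 64 // 8      # DDR DIMM data bus: 64 bits = 8 bytes
peak = transfers_per_second * bus_width_bytes
print(peak / 1e9)              # 57.6 GB/s theoretical peak per DIMM
```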
E
So
this
gets
now
into
sort
of
how
these
signaling
on
these
dimms
actually
work.
So
it
turns
out
turns
out
the
dram
data
lines
are
single-ended
right.
So
it's
just
a
single
wire,
unlike
clock
lines,
which
are
differential,
there's
two
lines,
so
it
turns
out
in
differential
signaling.
We
really
know
what
the
limit
is,
which
is.
If
you
try
really
really
hard
and
you
put
a
lot
of
money
into
it,
you
can
do
200
gigabits
per
second
pretty
you
can
achieve
that.
E
100
is
pretty
straightforward
with
really
good
engineering,
50
yeah
you
can
just
do,
but
it's
really
tough
to
go
past
200
at
some
point
and
you're
talking
about
five
picoseconds
Gates
only
switch
so
fast
single
ended,
though
really
it
seems
like.
Maybe
the
Practical
limit
is
around
9.6.
The
reason
is
with
differential,
signaling
you're,
just
comparing
two
lines
right.
E
You
can
say
which
one
is
higher,
you
know,
is
it
zero
or
is
it
one
if
there's
some
interference
or
any
kind
of
noise,
or
some
kind
of
you
know
sort
of
swing
on
both
lines,
both
of
them
see
it
and
you're
fine
with
single-ended,
though
you
used
to
have
one
thing:
you're
comparing
to
a
reference,
and
it's
really
really
tough
to
keep
that
stable,
so
9.6
gigabits
per
second
for
a
single
data
line,
so
ddr6
is
trying
to
push
this
further
with
some
buffering
right.
E: If anything, we're going to see latency going up a bit, due to more complex caches. DDR is also reaching its signaling limit; the hope is DDR6 can do 12.8, but I hear people say they think 9.6 is sort of the practical limit to engineer this.
E
Maybe
you
can
go
to
12
pointed
with
dieting
the
buffering,
adding
some
latency
we'll
see.
So
there
is
one
escape
hatch
on
this,
which
is
we
could
shift
from
single
entity
differential
signaling
on
dram.
So
we
could
have
two
data
lines
and
just
do
the
compare.
This
would
require
a
completely
new
DRM
designs.
There
are
no.
If
you
look
at
jetek,
there's
no
current
plans
to
do
this.
Maybe
they
will
we'll
see.
E
There's
some
historical
concerns
about
this
of
like
whether
there's
you
know
intellectual
property
stuff,
it's
the
ietf
knows
how
to
navigate
well,
but
there's
also
concerns
about
maybe
going
to
differential
signalings,
we'll
see
we'll
see
what
happens.
But
if
that
happened,
oh
then,
maybe
we'd
get
some
more
throughput.
We're
not
going
to
get
better
latency,
but
we'll
get
better
throughput.
E: Even if DDR6 or 7 or 8 does go to differential signaling, this is going to be five years out at the very earliest, which means, at the earliest, eight years before you can buy it. And all of this really just boils down to the scale at which these things are operating, and really just the lower-level EE signaling and all that kind of stuff that's going on.
E: Well, we went to multi-core, right? We hit some physical limit, and then systems just moved in another direction. So what happens if DRAM's capacity, latency, and throughput are flat, but everything else, your network and your processor, continues to get faster? Just a little back-of-the-envelope calculation here: today it's pretty straightforward to get a 200-gigabit NIC, and 400 gigabits will be here soon. At 400 gigabits per second, a four-kilobyte packet is 80 nanoseconds.
E
So
it's
less
than
an
L3
cache
Miss
and
at
400
gigabits,
a
64
byte
packet
is
1.25
nanoseconds.
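[Editor's note: both figures are straight serialization-time arithmetic:]

```python
link_bps = 400e9                       # 400 Gb/s NIC
for size_bytes in (4000, 64):          # a "4 KB" packet and a minimum-size packet
    t_ns = size_bytes * 8 / link_bps * 1e9
    print(f"{size_bytes}-byte packet: {t_ns:.2f} ns on the wire")
# 4000 B -> 80.00 ns (less than an L3 miss); 64 B -> 1.28 ns (~the quoted 1.25)
```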
E
So
these
are
really
you
know
from
a
software
standpoint.
These
are
really
time
crazy
time
skills.
Furthermore,
if
you
look
at
a
modern
server
processor
and
how
much
aggregate
dram
bandwidth
that
has
400
gigabits
is
10
of
that.
E
So
that
means
that
if
I
wanted
to
Echo
just
simply
Echo
packets
right
so
write
the
packet
into
dram,
then
read
the
packet
out
of
a
dram.
That's
going
to
be
20
of
my
whole
system
memory,
bandwidth.
E
And
you
see
the
implications
of
this
in
some,
you
know
high
performance
systems
that
people
build.
So
this
is
a
nice
slide
deck
from
FreeBSD
conference,
2021
Netflix
on
how
they
stream
400
gigabits
per
second,
you
know
in
their
open,
connects
and
their
devices
and
sort
of
they
make
this
interesting
note
there's
this
Forex
memory
amplification
right.
So
they
have
the
dma
data
off
the
disk
into
memory.
E
Then
they
have
to
read
the
data
from
memory
into
the
CPU.
Then
they
have
to
write
the
encrypted
data
back
into
memory
and
then
dma
the
data
from
the
memory
back
to
the
Nic
and
that's
just
assuming,
there's
no
compression
or
other
processing
going
with
it.
This
is
like
a
4X
cost
on
the
memory
balance.
So
don't
think
the
two
I
said
before
are
20
thing,
for
you
know,
40
just
to
stream
stuff-
and
you
know
I
remember:
I
was
chatting
with
an
engineer.
Google
was
talking
about.
You
know
some
high
performance
stuff.
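[Editor's note: combining the talk's own numbers, the 4x amplification turns the earlier 10% figure into roughly 40% of system memory bandwidth:]

```python
nic_gbps = 400
dram_gbps = 4000   # aggregate DRAM bandwidth implied by "400 Gb/s is 10% of that"
passes = 4         # disk->mem DMA, mem->CPU read, CPU->mem write, mem->NIC DMA

print(100 * nic_gbps * passes / dram_gbps)  # 40.0 -> 40% of memory bandwidth
```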
E
So,
given
this
there's,
this
flip
side,
which
is,
if
you
know
cost
per
transistors,
is
flat,
and
we
can't
you
know
new
nodes
are
becoming
more
and
more
expensive.
The
way
that
CPUs
are
going
to
give
you
performance
the
way
they
do
today
even
is
through
accelerators
computational
accelerators.
So
you
look
at,
for
example,
Sapphire,
Rapids
and
there's
a
whole
bunch
of
new
accelerators
things
to
make
analytics
faster
things
to
make
memory
copies
faster
things
to
do
all
kinds
of
stuff,
with
matrices
right
for
preparation
for
machine
learning.
E
So
in
the
end
in
Silicon
compute
is
cheap.
Moving
data
is
expensive,
so
we
can
make
things
blazingly
fast,
but
then
the
challenge
is
you
know.
How
do
you
feed
it?
So
you
can,
for
example,
add
lots
more
Al
use
under
your
processor,
they're
cheap.
You
can
add
all
kinds
of
accelerators
allow
you
to
do
computations
really
fast
over
large
amounts
of
data,
but
at
some
point
you
can't
feed
the
accelerator
fast
enough.
E
I
mean
DDR
doesn't
have
enough
bandwidth
to
match
the
computational
speed
of
a
CPU
that
has
a
lot
of
accelerators
in
it,
and
in
fact
you
see
this
today.
If
you
look
at
a
GPU
right
or
a
TPU,
they
don't
have
DDR
right.
They
have
high
bandwidth,
memory,
hbm
and
so
what's.
Hybrid
memory
is
basically
DDR
with
a
lot
more
data
lines,
so
dr5
64
data
lines
hpm3,
which
is
like
on
the
Nvidia
h100,
has
a
1024
data
lines,
so
16
64-bit
wide
channels.
E: So one of the challenges, and again this gets back to the idea that this is about manufacturing, the physicality of these devices and how signals work: you can't run 1024 copper traces from your memory module on a printed circuit board, generally speaking. The finest practical traces you can make today are maybe 0.152 millimeters; if you have a thousand of those, that's something like 155 millimeters of just trace, without even the spacings between them. That's much bigger than your chip; you can't do it.
E
Instead,
the
way
you
integrate
an
hbm
with
a
processor
is
in
Silicon,
so
you
basically
make
effectively
like
a
PCB,
a
print
circuit
board
in
Silicon,
because
then
you
can
do
things
that
lithographic
scales,
something
called
an
interposer.
So
you
have
your
processor,
that's
built
with
its.
You
know
particular
node,
its
particular
manufacturing
process.
You
have
your
memory
module
made
by
say
Samsung
with
its
particular
manufacturing
process.
They
can
be
different,
different
nodes.
Doesn't
matter,
then
they
have.
E
You
know
pads,
and
then
you
join
them
onto
this
intro
pose
of
just
like
a
little
silicon
scale,
PCB
to
connect
them
right.
So
it
works.
It's
great,
it's
expensive
right,
so
we
now
have
these
two
parts.
We
have
this
third
part.
We
have
to
manufacture
them.
We
have
to,
you,
know,
actually
assemble
them,
and
you
know
it
turns
out.
You
can
have
lots
of
failures,
but
you
can
do
it.
There's
also
why
gpus
are
expensive.
E
It's
also
why,
for
example,
the
sapphire
Rapids
Max
that
has
integrated
hbm.
You
know
it's
an
expensive
high-end
processor,
so
in
fact
the
most
recent
Intel
processors,
the
they
have
a
model
of
the
sapphire
Rapids,
with
an
integrated
64
gigabyte,
HPM
module,
which
is
kind
of
interesting,
and
so
the
idea
is
hey.
If
you're
going
to
be
doing
something,
that's
actually
arithmetically,
really
intensive
over
large
blocks
of
data,
you
can
buy
an
Intel
processor
that
has
an
hbm
module
and
then
stream
out
of
that.
E: Where does HBM sit in this hierarchy? The short answer is: it kind of doesn't. It has higher latency than regular DRAM, but it also has much, much higher bandwidth than regular DRAM. So it's not in this strict hierarchy from small-and-fast to big-and-slow. It's also of limited size: 64 gigabytes, rather than, like, half a terabyte or a terabyte. So I think, going forward...
E
We're
gonna
see
like
this
pressure
on
dram
and
the
need
for
higher
bandwidth
to
be
able
to
process.
This
stuff
is
going
to
push
systems
towards
three
types
of
memory.
The
first
is,
you
know,
DDR
that
we
know
and
love
so
think
of
this
as
a
latency
optimized
memory.
So
this
is
the
memory
where
a
random
access
has
the
lowest
latency.
They
can
get
for
something
of
significant
size.
We're
not
talking
about
like
SRAM
caches
or
something
like
that,
so
think,
hundreds
of
gigabytes,
maybe
a
terabyte
of
ram
100,
nanosecond
latency.
E
We
also
have
capacity
memory,
so
this
basically
flash
right
so
today,
latency
is
in
the
100
like
under
100
microseconds
I
think
that
can
actually
come
down
a
lot.
A
lot
of
that
is
your
FTL
layer.
Maybe
10
microseconds
seems
feasible
if
we
push
hard
enough
down
within
the
tens
of
gigabytes
per
second
and
the
size
of
terabytes.
So
this
isn't
that
strict
and
the
hierarchy
before
these,
we
think
it
was
like
oh,
the
smaller
and
fast
and
the
the
bigger
and
the
slower.
E
But
then
the
Third
Kind
is
bamarins,
so
memory,
that's
optimized
for
throughput.
So
this
is
hbm
today,
so
think,
tens
of
gigabytes,
hundreds,
so
multiple
hundreds
of
nanoseconds
of
latency,
but
bandwidth
and
say
800
gigabytes
per
second
hpm3
module-
could
do
800
gigabytes
per
second,
so
eat
more
than
the
aggregate
memory
bandwidth
of
all
of
the
dimms
attached
to
your
processor.
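[Editor's note: that comparison checks out against the AMD Genoa figure quoted later in the talk:]

```python
hbm3_module_GBps = 800            # one HBM3 stack, per the talk
genoa_ddr_Tbps = 4.4              # Genoa aggregate DDR bandwidth (quoted below)
print(genoa_ddr_Tbps * 1000 / 8)  # 550 GB/s across all DDR channels combined
# A single HBM3 module out-runs the socket's entire DIMM complement.
```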
E
And
so
we
look
forward.
This
pressure
is
going
to
really
push
us
to
have
machines
that
have
this
mixture
of
memories,
latency
capacity
and
bandwidth.
Each
of
them
is
tied
to
a
different
compute
element,
for
example.
Capacity
elements
you
know
usually
are
tied
to
the
CPU,
that's
where
our
discs
are
attached,
but
perhaps
it's
attached
to
the
ipu,
the
smart
Nick.
You
know
the
dpu,
our
latency
memory.
Of
course
things
like
our
smart
necks
have
some,
but
it's
mostly
in
the
CPU.
That's
where
you
know
a
terabyte
of
ram
lives.
E
And
so
what
does
our
networking
stack?
Look
like
when
really
what's
going
to
happen
is
we're
going
to
load
our
memory
into
the
dram
on
a
smart
Nick
before
we
even
offload
it,
you
know
to
a
gpu's
high
bandwidth
memory.
Is
there
going
to
be
a
point
where
our
Knicks
actually
need
high
bandwidth
memory
in
order
to
be
able
to
get
stuff
back
and
forth
fast
enough
as
the
Nicks
get
faster
and
faster.
E
So
this
is
my
my
hypothesis
about
where
computers
are
going
and
I.
Think
there's
one
interesting
twist,
which
maybe
you
know
some
of
you
probably
heard
a
bunch
about
compute
Express
rank
links,
cxl,
and
so
so
it
turns
out
the
speeds
and
the
capacities
that
we're
talking
about.
You
know
the
the
Workhorse
of
in
the
back
plane.
For
most,
you
know,
sort
of
servers
and
desktops
and
stuff
to
and
slaptops
today
as
pcie
right,
pcie
Express.
E
So
one
of
the
problems
with
pcie
is
that
it's
high
latency.
So
basically
you
know,
depending
on
who
you
ask
and
speed
up
it's
the
minimal
latency
you
could
do
like
a
pcie
operation
is
around
800,
maybe
500.
You
know
nanoseconds
or
so
when
you
think
of
this,
if
you've
got
a
400,
gigabit
Nic,
that's
40,
kilobytes
of
data,
that's
a
lot,
and
so
you
could
think
if
you
have
like
some
eight
kilo
by
jumbo
frames.
That's
five
of
them.
It's
just
the
time
to
even
talk
to
the
Nick.
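[Editor's note: the 40 KB figure is the link rate multiplied by one PCIe round trip:]

```python
pcie_latency_s = 800e-9        # one PCIe operation, per the talk
link_bps = 400e9
bytes_in_flight = pcie_latency_s * link_bps / 8
print(bytes_in_flight)         # 40000 bytes = 40 KB, ~five 8 KB jumbo frames
```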
E: On Genoa and Sapphire Rapids, the 16 lanes you plug a NIC into give you 480 gigabits per second. There's some great recent work on the academic side looking at what the limits are and finding that 480 gigabits per second is really not quite enough to really drive a 400-gigabit NIC. So if you look at something like the Nvidia BlueField-3, it actually uses 32 Gen 5 lanes; that's a special Open Compute NIC interface.
E
To
do
that.
So
that's
fine,
but
we
can
do
it.
We've
got
the
pcie
bandwidth.
The
thing,
though,
is
that
pcie
is
just
a
data
bus.
So
basically
you
can
read
and
write
memory
across
the
bus,
but
the
two
sides
of
the
buses
are
independent
right.
So
there's
no
coherence
across
them.
You
read
something
it
might
change.
You'll
never
know
so.
This
means
in
practice
is
that
if
you
want
to
check
if
a
Nick
has
a
packet,
you
have
to
do
a
pcie
operation.
E: Another way I think of it: 800 nanoseconds is approximately 400 feet of cable in propagation delay; that's actually pretty big within the data center. So what's CXL? CXL is a replacement for PCIe: same physical layer, same signaling, same speeds, all that kind of stuff. It's great; you can plug a CXL card into your PCIe slot and it should work. One of the things it does is simplify all the protocols down, so that now, suddenly, the minimum is about 200 nanoseconds, which is good.
E
A
factor
of
four
is
nice,
but
the
sort
of
more
interesting
thing
and
I'll
talk
a
bit
about
the
implication
for
this
for
networking
and
network
cards.
Etc
is
that
cxl
supports
cash,
coherent
access
to
memory,
so
this
means
is
that
two
devices
like
your
CPU
and
the
Nic
connected
over
cxl,
can
have
a
cache
coherent
view
of
each
other's
memory,
really
in
particular
the
processor
you
know
kind
of
the
Nick,
and
so
what
that
means
is
that
when
the
processor
reads
something
from
the
Nic,
it
can
pull
that
value
into
its
cache.
E
And
then,
if
the
Nick
doesn't
change
it,
it
can
continue
to
safely.
Read
it
from
its
cache
and
you
don't
have
to
do
a
pcie
operation
or
a
cxl
operation
in
order
to
see
that
the
value
hasn't
changed
the
other
side.
Now
the
cost
of
this,
the
other
side
like
say
the
Nick,
when
it
wants
to
write
that
value,
it
has
to
invalidate
the
cache
line
on
the
processor,
and
this
takes
some
time.
E
So
you
can
imagine,
depending
on
your
expected,
read,
write
trade-offs.
You
can
figure
out
how
you
want
to
Cache
these,
so
just
give
an
example:
just
try
and
make
this
concrete
for
pcie,
so
I've
got
my
CPU
and
my
Nic,
and
then
Nick
has
a
variable
in
memory,
which
is
what
is
the
tail
of
the
descriptor
ring
containing
packets
right?
E
So
what's
the
last
packet
I've
written
and
as
it's
receiving
packets,
and
so
the
CPU
wants
to
see
here,
the
new
packets
it'll
go
and
it'll
issue
a
read
operation
over
the
bus
or
pcie,
saying
hey:
what's
the
value
of
the
descriptor
tail
variable
get
a
re-result
back?
E
Okay
great
then
we
actually
do
a
memory
read
that
gets
pulled
into
cache
awesome
next
time.
You
want
to
check
if
there's
a
packet,
we
have
to
do
the
exact
same
thing.
We
have
to
do
a
pcie
operation,
because
that
value
in
our
cache
could
have
changed
like
this,
not
in
any
way
tied
to
what
the
Nick
is
doing.
The
Nick
could
have
written
to
its
descriptor
tail
and
we
don't
know
so
we
have
to
read
again.
This
invalidates
the
value
in
the
cache.
We
can
then
check.
E: With cache coherence, though, this changes a little bit. The CPU can read the descriptor tail from the NIC, great, and load it into its cache. It's reading, within its address space, from NIC memory; it doesn't go into the CPU's memory; it's reading from NIC memory. Now the CPU cache references memory in the network interface card. Great; then it can just read from the cache. Now, if the NIC receives a packet and needs to update the descriptor tail, it'll send a cache invalidation to the CPU.
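[Editor's note: a toy Python cost model of the polling pattern just described. The per-operation timings come from the talk; the one-nanosecond cache-hit cost is an assumption for illustration.]

```python
PCIE_READ_NS = 800   # every poll over PCIe is a full bus round trip
CXL_MISS_NS = 200    # a CXL poll crosses the bus only after an invalidation
CACHE_HIT_NS = 1     # assumed cost of a poll served from the local cache

def poll_cost_ns(polls, packets, coherent):
    """Cost of `polls` reads of the NIC's descriptor tail, of which
    `packets` observe a new value (i.e., the NIC invalidated the line)."""
    if not coherent:                       # PCIe: pay the bus on every poll
        return polls * PCIE_READ_NS
    return packets * CXL_MISS_NS + (polls - packets) * CACHE_HIT_NS

print(poll_cost_ns(1000, 10, coherent=False))  # 800000 ns over PCIe
print(poll_cost_ns(1000, 10, coherent=True))   # 2990 ns with CXL coherence
```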
E: And one interesting aspect of this is that, besides things like NICs and GPUs and all kinds of peripherals, there's this idea that CXL and its lower latency mean that it can support memory devices, like a device that just has memory in it. The whole idea is: oh wait, if DDR bandwidth is a problem, maybe we can use our PCIe lanes; they're differentially signaled; they've got lots more bandwidth per pin. Awesome.
E
If
you
look
at
an
AMD
Genoa,
it
has
about
4.4
terabits
per
second
of
DDR
bandwidth
and
about
4.7
terabits
per
second
of
pcie
bounce.
So
maybe
we
get
more
bandwidth.
This
way.
E
There's
lots
of
Buzz
about
this
people
are
writing
all
kinds
of
papers.
Folks
at
Microsoft
Azure
are
exploring
it
Folks
at
Facebook,
exploring
it.
One
idea
is
like
hey:
let's
take
a
big
box
and
let's
put
tens
of
terabytes
of
memory
into
it
and
then
plug
it
into
all
these
computers
with
cxl,
and
suddenly
this
cash
coherent,
shared
memory,
pool
I'm
a
little
skeptical
that
this
will
work,
but
I
mean
I,
think
from
the
perspective
of
networks
and
how
systems
interact
with
our
network
cards.
E: ...CXL essentially allows this low-cost coordination between these independent compute devices. Remember: compute is easy; moving data is hard. And so we can have something like your NIC directly connected to your SSD, such that all the CPU does is look in the memory of all its peripherals, which are all cache-coherent, and you can actually take the CPU out of the data path, such that the CPU can have the NIC directly transfer things to a disk.
E
So
in
summary,
in
10
years,
computers
are
going
to
look
really
different.
The
you
know
the
dram
wall
that
we've
hit
the
scaling
wall
means
they're,
going
to
push
towards
these
three
kinds
of
memory.
Capacity,
latency
and
bandwidth,
I
I
think
the
cxl,
and
this
notion
of
actually
able
to
connect
them
through
a
cash
coherent
fabric
is
going
to
be
an
interesting
twist.
I'm,
not
sure
how
it's
going
to
play
out,
but
I'm
excited
to
see.
E
It
means
we're
going
to
start
building
these
much
higher
bandwidth
applications.
We
can
do
things
like
start
moving
these
large
language,
the
large-scale
machine,
learning
models,
I
think
it's
an
interesting
and
exciting
time.
E
So
in
summary,
processors
and
networks
can
get
faster
for
a
while,
it's
great
but
Ram
ism
cost
per
bits
flat.
It's
going
to
be
flat
for
a
while,
at
least
you
know,
my
guess
is
10
years.
Unless
you
know
some
fantastic
new
manufacturing
process
or
material
appears,
there's
nothing
on
the
horizon.
Ram
performance
is
also
flat,
and
so,
in
10
years,
where
computers
are
going
to
look
really
different,
the
applications
they
run
are
going
to
look
different
and
it's
going
to
have
really
big
implications
to
the
internet.
E: So it's the end of DRAM as we know it, and I feel fine. I hope the Internet feels fine. And, yeah, happy to take questions.
F: [question from the floor, inaudible]
E: Yeah, great point. So this is an interesting one. It turns out that power is actually a really big concern in DRAM. If you look at DDR5, it went from 1.2 volts to 1.1 volts, and they do all kinds of crazy voltage regulation where they're just trying to push the power lower and lower, because it's about charging those capacitors. So yeah, I think also on modern servers...
E
Ram
Power
is
significant,
I
mean
I,
think
you
can
feed
it
in
terms
of
like
TCO,
at
least
my
sense
is
that's
not
like
you
want
to
store
that
data,
but
certainly
in
terms
of
individual
DRM
elements
right
and
heat
dissipation
and
just
being
able
to
like
do.
I
just
have
to
put
more
and
more
dims,
but
I
can't
because
at
some
point
I
can't
run
the
traces
yeah.
That
is
definitely
so
I
guess
what
I
would
say
is
I
think
that
is
definitely
hard.
E
A
lot
of
the
design
of
dims
and
DDR
is
governed
by
that,
but
that
seems
like
a
problem
where
we
still
have.
We
still
have
you
know
leeway.
We
still
have
Runway
right.
People
can
engineer
that
there
are
other
things
where,
like
we
really
we've
got.
No
we've
got
no
plan,
B,
so
yeah
saying
a
person
in
the
back
yeah.
G
My
name
is
midi,
actually
I.
Two
questions
first
is
there
are
recent
and
it
seems
like
quite
intensive
attempts
to
mix
compute
and
memory
on
one
extreme.
It's
something
like
compute
and
memory
on
the
other.
Is
things
more
extreme
examples?
Probably
what
service
is
doing
is
a
lot.
A
lot
of
cores
a
very
fast
on
cheaper
and
cheap,
is
really
big
into
your
connect
and
some
amount
small
amount,
but
very
fast
memory
near
each
core.
So
how
does
it
fit
into
this
picture?
G
And
the
next
question
kind
of
coupled
question
is
that
it
seems
that
a
very
significant
part
of
computing
near
future
will
be
some
kind
of
machine
learning
and
machine
learning
has
very
specific
memory
access
patterns
and,
in
particular
like
inputs
and
parameters.
They
have
different
access
patterns
and
probably
there
are
some
ways
to
optimize
that
and
how
it's
going
to
influence
memory.
G
E: Let's go to the first one. So, right, I think actually an interesting example here is SambaNova: the great idea of the SambaNova processors is, hey, rather than have our memory here and our compute there, why don't we intermingle them: we're going to intermingle compute tiles and memory tiles, right?
E
So
it's
important
is
in
that
that
works
really
well
for
streaming,
computations
right
where
I'm
going
to
actually
progress
this
data
through
the
pipeline,
so
for
lots
of
like
machine
learning
and
data
analytics.
So
in
that
way,
yeah
I
think
we're
going
to
see
all
kinds
of
new
architectures.
They
don't
get
more
memory
that
way
right
so
I
think
in
terms
of
we're
thinking
about
like
optimizing
machine
learning,
I
mean
look
at
like
what
an
h100
can
do
or
the
TPU
can
do
like.
Yes,
we
can
feed
these
things.
E: There was another point about the memory which I was going to... sorry, remind me of the first part of your question. Right, so, with respect to things like smart memories, where I push my computation to the memory itself, where there's compute on the memory cells...
E: ...that gets tricky, because at some point computations need locality: I have to be computing locally. But look, memory actually isn't local: I stripe my 64-byte cache lines across my DIMM so I can get full bandwidth. So it starts to get a little weird, where it's like, wait: if I wanted to do a computation across, say, multiple cache lines, I either need to stripe my memory differently to support that, or I actually need to bring the data to the core, where the DIMMs connect. Yeah.
G: [inaudible follow-up]
E: Yeah, I think absolutely. I think the GPU and TPU folks, absolutely, are pushing that really hard in all kinds of ways, and you also see, for different kinds of machine learning, like graph neural networks, very different access patterns than, say, filters over images in convolutional networks. Yeah, people absolutely are; it's not just about optimizing the compute or designing the compute, it's also even designing your memory controllers and your memory layouts. Absolutely, I think we are seeing that. Thanks, yeah.
D
Hi
Jonathan
Holden
cloudflare,
not
a
hardware
person.
Is
there
ever
going
to
be
a
world
where
we
just
write
data
to
the
network
and
round
trip
it
just
to
store
it.
E
And
well
I
mean
I,
think
lots
of
data
centers
do
that
right
in
the
sense
of
like
with
the
decoupling
of
disk.
So
when
your
network
latency
is
100
microseconds
and
your
disk
latency
is
10
milliseconds
right
for
a
c
there's
no
reason
to
keep
this
is
the
whole
disaggregation
of
like
we
have
big
storage
units
and
then
computers
separated
from
that
again.
What
I'd
say,
though,
is
just
remember
like
compute
is
cheap,
so
storage
is
different.
For
that
reason,
but
compute
is
cheap.
Moving
data
is
what's
expensive
right,
that's
what
takes
power!
E
That's
what
takes
time,
and
so
there
are
two
basic
ways
you
make
competition
fast
number
one.
Is
you
parallelize
it
number
two.
Is
that
you
keep
the
data
local
like
that's?
That's
all
there
is,
and
so
the
idea
of
oh
I
send
stuff
out.
You
know
to
store
it
sure.
Are
you
talking
about
like
I,
just
make
it
Loop
through
the.
E
Well,
so,
let's
think
about
like
propagation
times
how
much
storage
you
get!
You
don't
get
a
lot
now
there
are
actually
people
who
think
this
way.
A
lot
of
super
Computing
people
think
this
way
when
they're
doing
like
super
high
data
rate,
you
know
think
many
terabytes
per
second.
They
actually
think
about
the
propagation
delay
along
their
fiber,
as
that
is
the
data
in
like
that
actually
counts.
E
D
I
E: I think... there's actually a really nice keynote by, gosh, I'm forgetting his name, from the University of California, Santa Cruz, who's a total storage wizard, and he talked about all the different media that are out there. It has this great thing with latency, capacity, cost, durability, etc., and it draws one of these multi-axis star charts.
E
He
says,
look
if
a
memory
ever
encompasses
another,
then
the
old
one
goes
away,
because
it's
just
better
in
every
way,
and
so
he
looks
at
stuff
like
DNA,
but
also
there's
some
stuff
that
came
out
of
Microsoft
on
like
glass,
etching,
right
etching
in
glass
and
the
Precision
of
that
which
tends
to
have
better
durability
than
DNA,
and
so
he
looks
at
all
the
different
possible
storage
things,
but
I
think
you
really
know
talking
about
storage
like
what's
after
Flash
right
flash
has
that
you
know
they're
continuing
to
do
more
and
more
layers
right.
E
That
has
some
legs
so
we'll
see
where
that
goes,
but
yeah
I,
I
can't
say
too
much
about
the
future
of
storage.
Ethan
Miller
is
the
person
Ethan
Miller
is
a
good
he's,
got
a
great
keynote.
Talking
about
storage
he'd,
be
a
good
thing
to
look
up.
Yeah.
E
J: Yeah, but I've got to write the code. So my question is that I wanted to push back on the very last sentence you said, which is "and I feel fine." You know, when Moore's Law started to die off (I guess it's still kicking a bit), it was like, okay, I know how to do things in parallel; that's a lot easier. But now I have to think about something more specific. I mean, I can understand accelerators, and that's easy. But now you're talking about, well, accelerators and smart memory and dumb memory, I mean...
E
Yeah
I
mean
I,
do
like
people
are
smart
right,
they'll
come
up
with
ways,
I
mean
so
one
thought
it's
not
necessarily
that
any
given
application
is
gonna
have
to
do
all
these
things,
because
that's
not
true
like
if
I'm
looking
up
things
in
a
key
Value
Store
like
I'm,
not
using
a
TPU
to
do
that
right.
E
You
know
it
might
be
that
the
composition
of
applications
then
does
this
a
lot
of
I
think
it's
gonna
be
we're
going
to
start
becoming
much
much
more
careful
about
our
data
structures
right,
like
maybe
we'll
go
back
to
the
1980s
or
like
we
really
tried
to
make
them
small
and
tight,
and
it's
not
just
Ram
is
free,
so
in
that
way,
like
I
feel
fine.
E
These
problems,
the
challenges
that
the
engineering
challenges
this
will
raise
for
us
are:
we've
lived
in
Worlds
like
that
before
and
we've
been
okay,
I
mean
it
means
that,
like
you
know
the
free
lunch
of
everything
getting
better,
all
the
time
is
going
away.
But,
like
oh
that's
you
know,
that's
engineering.
So
thanks.
C: Okay, let's thank our speaker, Phil Levis, again.
C
And
please
come
up
through
yeah.
Please
come
on
okay,
so
our
next
speaker
is
anat
Gremlin
bar
from
Tel,
Aviv,
University
and
he's
going
to
talk
about
it's
not
where
you
are
is
where
you
are
registered:
iot
location
impact
on
mud,
yeah
hi.
Thank
you
for
this
introduction
yeah.
Thank
you.
F
K
K: The title is "It's not where you are, it's where you're registered: IoT location impact on MUD," and this research was done in collaboration with Anat Bremler-Barr from Tel Aviv University, with David Hay from the Hebrew University, and with Sean Danino from Reichman University. This research was also supported by Cisco and the Israeli research authority. I would like to express my gratitude to the ANRW chairs and the committee for allowing us the opportunity to share our findings today.
K: [IP-based location] means the geolocation of the external IP of the device, and we found that IP-based location has an impact on the network behavior of the device. So we're talking about IP-based location, but actually what is even more impactful is the user-defined location, and we define the user-defined location as the location that the user chose during the registration process for a new account.
K
And
in
that
case
we
have
a
scenario
in
which
we
have
a
device
that
is
located
in
the
UK
it's
physically
located
in
the
UK.
But
the
user
chose
to
register
it
in
the
US.
You
can
see
that
there
are
three
options:
the
American
server,
the
European
server
and
the
Chinese
server,
and
can
I
use
the
laser
it.
Okay
yeah
and
there
are
three
options
and
the
user
chose
to
work
with
the
American
server.
K
K
K: We are finding that there is different network device behavior in different locations. The first, a defensive implication, is on the network security framework MUD, IETF RFC 8520. In that security framework, we define a network profile of the device, and if we would like to understand what the profile is, we should take into account all the factors that affect it. Another implication is on IoT identification.
K
If
we
will
learn
our
data
set,
we
learn
in
one
location
in
a
single
location,
and
then
we
will
try
to
identify
the
same
device
with
the
same
firmware
on
another
location.
It
just
won't
work,
it
might
be
potentially
with
errors
and
essentially
each
task
that
is
composed
of
learning
normal
device
behavior
and
then
extracting
rules
such
as
the
network,
security
framework
and
extracting
features
such
as
iot
identification
is
affected
by
this
finding.
So
these
two
just
two
like
trailers
for
the
for
the
rest.
F
K
K: ...presentation. Let me go briefly over the outline for today. I will show you that user-defined location is much more impactful than IP-based location, and I'll show what the implementation is: how user-defined location is implemented in the common case. I will cover background about MUD, the IETF security framework, then the implications of user-defined location on MUD, and then I'll show you a proposal that we suggested to improve the implementation using extension features of DNS.
K: You can see that in our data set, which is composed of dozens of devices in up to 10 locations for each device, ninety percent of the devices exhibited a difference in network behavior [across user-defined locations], but just 54 percent of the devices exhibited a change in network behavior when we changed the external IP of the device.
K: And the reason there is a difference in the domains is that the similarity measurement measures the difference in the set of domains that the device uses. Different domain names, as we saw through our research on our data set, lead to different features and servers. For example, we saw a camera that has a feature that was available only in China, only when we chose the Chinese server.
K: The facial recognition feature was enabled only when you choose the Chinese server; if you choose the European server or the American server, this feature is not available. But there are other ways to implement different IPs or different servers for the same domain, for example using IP-based location, and in the next slide I will show you how DNS can support IP-based location decisions. For now we understand that, in order to allow the user to decide what the location is and what the features are that they want...
K: ...the domain names must be different. And here you can see the IP-based location decision using DNS. In this case, you can see that the device is located in the UK but registered in the US. The device uses api.smartthings.com; this is the domain that it uses, and it asks the recursive resolver, which is located nearby the device, in the UK.
K: In that case, the recursive resolver sends this request, in its turn, to the authoritative name servers, and because the request came from the UK, from the UK resolver, the answered IP is from Ireland, which is close to the UK. That is how DNS can help to make IP-based location decisions. We call this kind of domain, domains with no location identifier within them, global domains, and in the next slide we can see regional domains. It's the same case: the user registered in the US.
K: The device is located in the UK, and the device initiates a DNS request to the us-east domain; it uses a US server. Despite the fact that the recursive resolver is in the UK, the authoritative name server knows to answer with an IP from the US. We call this kind of domain, domains with a location identifier within them, regional domains. That is how user-defined location is implemented, and this is how a user-defined location difference looks. On the right side...
K: ...you see the same example, in which the device is registered in the US: the device uses us-east.connect.smartthings.com and gets an IP in the US. On the left side of the slide, you can see that the device uses another domain, a different domain, eu-west.connect.smartthings.com, another domain with a location identifier within it.
K: Another thing that I would like you to keep in mind is that there are not just two available locations; there are many more. In the app you can choose among many; in this case we captured a device in up to 10 locations. This is the Yi camera. We created a similarity heat map for the similarity measurements that we made across up to 10 locations.
K: Each cell compares two different locations and, as you can see, despite the fact that you can choose different places, there is actually a region structure: different places are addressed to the same region; Russia, the United Kingdom, and Germany, for example. What I would like you to keep in mind is that there are more than just two options; I presented just two options in the previous slide to simplify it. And, as I presented, the common case is in the subdomain.
K: The differences between the domains are in the subdomains; only nine percent of the domains present the difference in the top-level domain (the top-level domain is the .com, the extension of the domain). Okay, so now we saw that there is a difference that is caused by the user's decision. Let's go over MUD, the MUD profile. MUD, the IETF standard RFC 8520, is essentially a network security framework that defines a MUD file, and a MUD file is essentially an access control list.
The
logic
behind
defining
a
mod,
a
mod
file
is
that
in
iot
devices
there
are,
there
is
a
specific
goal
for
each
device
think
about
a
light
bulb
or
a
or
a
water
sensor.
These
devices
have
a
specific
goal.
It
uses
set
a
small
set
of
domains,
so
it
makes
sense
or
reasonable
to
create
this
kind
of
file.
It
won't
work
for
a
computer,
for
example,
and
now
we
can
see
what
are
the
implications
on
the
mud
framework.
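For intuition, a MUD profile is just machine-readable JSON; the sketch below is a loose, illustrative rendering, not a validated RFC 8520 file, and the URL and domain names are assumptions.

```python
# Minimal sketch (assumption): the shape of a MUD-style profile. Field names
# only loosely follow RFC 8520's YANG model; do not treat this as a valid file.
import json

mud_profile = {
    "ietf-mud:mud": {
        "mud-version": 1,
        "mud-url": "https://example.com/smart-camera.json",  # hypothetical
        "is-supported": True,
        "systeminfo": "Example smart camera",
    },
    # One allow rule per domain the device is expected to contact; a device
    # registered in the US would list the us-east name, a UK one eu-west.
    "acls": [
        {"name": "allow-cloud", "dnsname": "api.smartthings.com"},
        {"name": "allow-regional", "dnsname": "us-east.connect.smartthings.com"},
    ],
}

print(json.dumps(mud_profile, indent=2))
```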
K
So if we create a MUD file for a device that is registered in the US, it will be composed of these two domains, with the US East domain, and in the UK it will contain an EU West domain instead. The implication is that the learning phase of the MUD file becomes much more complicated: we would need to capture the device in every single location that the device supports.
K
We would need to combine, maybe, all the rules into a single large MUD file, or we should use separate MUD files for each location. And what about explainability? How can a security administrator go over this MUD file and understand what is going on with this device, and how can the manufacturer or the security administrator even maintain this kind of per-location file? As I said, there are more locations than just two; you can see that a file like this can grow to cover many more. The next thing I would like to share is about DNS.
K
So there is an extension for DNS that is called ECS. ECS stands for EDNS Client Subnet, and actually the easiest way to understand it is through the figure. In the figure you can see that the device is located in the UK and it uses the domain api.smartthings.com, but in that case the recursive resolver is in the US.
K
It sometimes happens that the device uses a recursive resolver that is not nearby, for example when you use open resolvers. In that case, when the authoritative name server gets the request, it will answer with a US IP, because the recursive resolver is in the US. ECS solves this problem and gets an IP that is near the device itself: the resolver attaches the client's subnet to the query, so the authoritative server can answer based on the device's location rather than the resolver's.
K
And of course, we would use it just for the regional domains and not for the global domains, in order to preserve the behavior of the device. That is how MUD looks when we use this proposal with ECS: in that case, the learning phase of MUD becomes much easier.
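A minimal sketch of the ECS idea with dnspython is below; the resolver address and the UK client prefix are placeholders.

```python
# Minimal sketch (assumption): attach an EDNS Client Subnet option so the
# authoritative side can answer for the device's network, not the resolver's.
import dns.edns
import dns.message
import dns.query

QNAME = "api.smartthings.com"     # the global domain from the talk
RESOLVER = "8.8.8.8"              # any ECS-capable resolver (placeholder)
CLIENT_PREFIX = "81.2.69.0"       # assumed UK client subnet, /24

ecs = dns.edns.ECSOption(CLIENT_PREFIX, 24)
query = dns.message.make_query(QNAME, "A", use_edns=0, options=[ecs])
response = dns.query.udp(query, RESOLVER, timeout=2.0)

for rrset in response.answer:
    print(rrset)
```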
K
So we got through that very quickly; let's go to the summary. User-defined location has an impact on the IoT device's network behavior, and specifically on the domains, the domain set that the device uses. IP-based location also has an impact, but less than user-defined location. Datasets and security measurements should take the location into account; otherwise they just won't work.
K
We won't be able to defend the device or to identify it. And when we are talking about IoT identification, user-defined location can be implemented using the DNS extension instead of using different domains. That's all for now. We have some more resources: you can scan the QR code for material about IoT networking and the MUD RFC; we have tools to generate MUD files; anything about IoT networks and IoT network behavior is on our website. Thank you very much for listening.
L
M
K
So, a VPN can change the IP-based location. Actually, when we did this research, to quantify the IP-based location we used a VPN in order to simulate the device in different locations. But even when you change the IP-based location using a VPN, the user can still choose a specific location in the registration process, the user-defined location, and in that case the VPN just won't have any impact. The user-defined location is the deciding factor.
M
Kind of, but what if it was the user who set up this VPN in order to avoid selecting different servers? My question is, and I have some background related to newer technologies for IP-based geolocation, one of which is based on probing technologies, I don't know if you considered this kind of approach.
K
So, in our research we measure the IP-based location. If you fake it using a VPN or any other way, we will just take the IP that you faked with the VPN; we don't then try to detect that you're using a VPN.
K
Probing properties are something that we could look at, but actually what the manufacturer does here is allow the user to choose. They don't want to restrict the user; they want to allow the user to work with the device.
M
E
Hey, Phil Levis, Stanford. So, as you just mentioned, the companies want to allow someone to choose which server they use. Can you talk about the regulatory and legal implications of that? Because this actually seems in some ways counterintuitive. I mean, of course, VPNs can allow you to escape IP-based location, but at some point, you know, you turn on your facial recognition in the U.S., right? It's a little tricky, yeah.
K
So actually, I think I don't need to explain this myself. Can we zoom in on the slide? There is an option there. Okay, so what is written over there is that you can choose the European server when you want to work with countries that are covered by the GDPR, you can choose the American server when you don't want to work under the GDPR, and you can work with the Chinese server when you are in mainland China. So I think they are explicitly mentioning that you can choose.
E
K
C
Okay, let's thank our speaker again.
C
Roman, please come up. Our next speaker is Roman Beltiukov from UC Santa Barbara, and his talk will be PINOT, programmable infrastructure for networking.
N
Yes, thank you for the introduction. My name is Roman, and I'm going to present PINOT, a programmable infrastructure for networking. It's joint work with my colleagues from the University of California, Santa Barbara and NIKSUN, Inc. And I want to start on this slide from the debatable statement that one of the problems for network research in academia is representative infrastructure.
N
Pretty often researchers end up with what they currently have in the lab, like a couple of laptops, several routers, switches, etc., and they try to create something representative to collect data from. But the desired infrastructure is something like what you can see on the right side of the slide.
N
It should be more complex, more, let's use the word representative here, because it depends on the actual experiment that researchers want to conduct. Non-representative infrastructures lead to non-representative data, bad data, and bad data, especially if we are talking about machine learning solutions, leads to bad solutions that we cannot apply anywhere. If anyone is interested, we recently published a paper about bad data leading to bad solutions. So the question is what we can do about it. Fortunately, there are a number of existing platforms.
N
This is a non-exhaustive list of different platforms, like RIPE Atlas, COSMOS, Measurement Lab and others, and kudos to them for making research experiments possible. They are great, but there is a simple problem. Imagine that we want to conduct some experiment; I want to argue that some experiments are hard or even impossible to implement on these platforms.
N
Let's start with a pretty simple example: I want to measure quality of experience for YouTube. For this, I want to watch the YouTube stream, I want to collect network data like raw packets and, at the same time, I want to measure things like buffer health, how many times the video stalled, the quality of the video, the resolution, etc. Okay, that's pretty simple, but then it starts to get complicated. For example, I want to do this over a wireless network, and not many of such platforms allow you to do this over wireless networks. I want to do it in a live network with different users, where I have real traffic on the side. I want to do it over a long time period, not just once, but over several weeks or even months, probably. As well, I want to do it with a flexible and programmable client, to dynamically change videos, resolutions and what I'm measuring. And I also want to separate backbone problems from last-mile problems.
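A sketch of what one such client-side QoE experiment could look like is below; it is an illustration under stated assumptions, not the actual tooling, and get_player_stats() stands in for whatever player instrumentation (for example browser automation) is used.

```python
# Minimal sketch (assumption): capture raw packets with tcpdump while
# periodically recording player-side QoE labels for the same session.
import json
import subprocess
import time

def get_player_stats():
    # Hypothetical helper: in a real client this would read buffer health,
    # resolution and stall counts from the player, e.g. via browser automation.
    return {"buffer_s": 0.0, "resolution": "unknown", "stalls": 0}

capture = subprocess.Popen(
    ["tcpdump", "-i", "wlan0", "-w", "youtube-session.pcap"]
)
labels = []
try:
    for _ in range(60):               # one label per second, 60 s session
        labels.append({"t": time.time(), **get_player_stats()})
        time.sleep(1)
finally:
    capture.terminate()
    with open("labels.json", "w") as f:
        json.dump(labels, f)
```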
N
So now it becomes very complicated, and the question is what infrastructure we need for all of this. We decided that we would implement our own solution, and this solution I will present on the next slides: a programmable infrastructure at the University of California, Santa Barbara. I will describe on the remaining slides why we did it, how exactly we did it, and how to reproduce it, if anyone is interested, on their own campus infrastructure.
N
So what are the design principles for all of this? At first, we want to be able to measure things actively and passively at the same time; that's very important, and I'll describe the benefits of it on the next slide. We want the last-mile collection to carry real-world user traffic, and I want to say here that campus networks are pretty interesting for this, because campus networks allow you to strike a balance between an unrealistic lab scenario, where you have a couple of devices, or maybe even ten devices, and a production network that academia usually doesn't have access to, because it is a production network. So a campus network can be very interesting in this case; a campus network can mimic a typical enterprise network. For example, the University of California, Santa Barbara hosts around 25,000 students, not counting administrative staff, professors and everyone else, and the campus network, at least at the University of California, Santa Barbara, is subject to different things like normal provider network latency spikes, peak hours, and user and traffic overloads, especially during summer.
N
Geographically and logically, we wanted to support arbitrary experiments, like literally anything, and we want direct and fast access for our researchers, so they can iterate on their hypotheses fast and not wait in a queue or something like that. In addition, of course, we want to do everything ethically, so minimal disruptions, or no disruptions, to existing users, and preserving privacy. And we want to make everything fully reproducible: to use off-the-shelf components, cheap components if possible, and to open-source everything. So basically, these are the principles of our platform. A very brief slide regarding the overall architecture.
N
Basically, the deployment is separated into two parts: the campus part and the data center part. On the campus part, we deployed dozens of Raspberry Pis that use the UCSB Wi-Fi infrastructure, and I'll describe them a bit more on the next slide; we do active measurements from these devices. At the same time, at the data center of the University of California, Santa Barbara, there is the backbone, basically the traffic infrastructure, and on the border gateway, here at the top of the slide, we have live traffic mirroring to our servers, which I'll also describe later on.
N
On the next slide. So we basically have the active and passive infrastructures deployed together, measuring together what we want, and I want to start with the active measurements and how we implemented them. As I said, we took Raspberry Pi devices; these are single-board computers with Linux on top. Basically, we use Ubuntu optimized for the Raspberry Pi. We deployed sixty-something of them on the campus in different locations, and we will deploy 40 more this month, and no one can stop us. They are controlled from a central server via SaltStack.
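For a flavor of what that central control can look like, here is a minimal sketch using Salt's Python client; the minion naming scheme is an assumption.

```python
# Minimal sketch (assumption): driving the Raspberry Pi fleet from the Salt
# master, launching one command on every matching minion.
import salt.client

local = salt.client.LocalClient()

# Target all minions whose ID starts with "pi-" (hypothetical naming scheme).
results = local.cmd("pi-*", "cmd.run", ["ping -c 4 example.com"])
for minion, output in results.items():
    print(minion, output.splitlines()[-1])   # last line has the statistics
```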
N
Regarding the deployments: we deployed them over the whole campus, mostly in different dormitories, university centers, libraries, etc. We tried to cover almost everything, and more is planned. Now, regarding passive data collection and how we implemented it: there is the border gateway of the University of California, Santa Barbara, and we agreed with the IT department that we would have a live traffic mirror from this border gateway to our Intel Tofino switch; it's a programmable switch.
N
On this programmable switch, we run ONTAS; it's a P4 program for anonymization of the traffic. Basically, it anonymizes IP addresses, MAC addresses and everything else, to remove possible sources of identification of the user, and this also allows the ethics review committee to check that we are really doing this anonymization. Then the Tofino switch balances this traffic across our servers, where we basically just run tcpdump with some additional tooling to collect all this data and save it. More details are available on the website, down to the configurations, links, GitHub repositories, etc.
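To give a feel for the keyed, deterministic anonymization such a pipeline performs, here is a small Python sketch; ONTAS itself is a P4 program running in the switch data plane, so this is only an analogy.

```python
# Minimal sketch (assumption): keyed, deterministic IPv4 anonymization.
# Same input and key always map to the same pseudonym, so flows stay
# consistent within a capture without exposing real addresses.
import hashlib
import hmac
import ipaddress

SECRET_KEY = b"rotate-me-per-capture"   # hypothetical anonymization key

def anonymize_ipv4(addr):
    digest = hmac.new(SECRET_KEY, addr.encode(), hashlib.sha256).digest()
    return str(ipaddress.IPv4Address(int.from_bytes(digest[:4], "big")))

print(anonymize_ipv4("128.111.1.1"))
print(anonymize_ipv4("128.111.1.1"))    # identical pseudonym both times
```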
N
So why exactly active and passive measurement? Many platforms implement just passive measurement, and some platforms implement active measurements. Active measurements are important because we live in a world of encrypted traffic everywhere, and I love encrypted traffic, but we still need some labeled data, especially for machine learning algorithms, and for this we need experiments, because there we can control the labels. We also need passive measurements because we cannot recreate live network traffic without passively observing what's going on, without user traffic, basically. And the combination of active and passive measurements on our campus
N
allows us to do two very important things. The first one is a pretty unique capability: observation of a packet from multiple vantage points. We can look at the packet on the border gateway and we can look at the packet on the device itself, and we can find out whether there are problems on the backbone of the provider or problems on our campus. We can even get information from the Wi-Fi access points and maybe find the source of the problem there, on those access points.
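A toy version of that multi-vantage-point matching is sketched below, assuming two pcap files and scapy; the matching key and file names are illustrative.

```python
# Minimal sketch (assumption): match the same TCP segments in two captures
# (border gateway vs. device) to see where delay accumulates.
from scapy.all import IP, TCP, rdpcap

def index_segments(path):
    """Map (src, dst, sport, dport, seq) -> first timestamp seen."""
    idx = {}
    for pkt in rdpcap(path):
        if IP in pkt and TCP in pkt:
            key = (pkt[IP].src, pkt[IP].dst,
                   pkt[TCP].sport, pkt[TCP].dport, pkt[TCP].seq)
            idx.setdefault(key, float(pkt.time))
    return idx

gateway = index_segments("border-gateway.pcap")   # hypothetical captures
device = index_segments("device.pcap")

deltas = sorted(device[k] - gateway[k] for k in gateway.keys() & device.keys())
if deltas:
    print("median on-campus delay: {:.2f} ms".format(deltas[len(deltas) // 2] * 1000))
```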
N
And the second one: if we have some data that we initiated from the active measurements, from the active devices, and we found some patterns, we can use the passive observations to confirm these patterns, to check that these patterns really exist in the real-world user traffic. That is easier to find, because we already identified the patterns from the original active measurements.
N
Some examples of current experiments: as I mentioned in the beginning, video quality of experience experiments for different platforms, YouTube and others. We have tried, and are still trying, to do quality of experience measurements for video conferencing platforms, for Google Meet and for Zoom. We use this infrastructure for controlled speed tests, where we can control the time, whether it is peak time or a pretty calm time; the interface, wired versus wireless; the location, whether it's a busy location or a pretty empty one, etc.
N
We can use this infrastructure for application traffic collection for fingerprinting, and even for botnet imitation with different network attacks; the IT department was not happy about that one. What else? Yes, there is a slide with different limitations of this platform. The first and very important one is that the representativeness of campus networks is only theoretical, so it's debatable, and the only solution we know of for now is to measure, explore and confirm whether the data collected from this infrastructure is representative for the experiments we want to collect it for.
N
But at least we think, and claim, that this infrastructure is more interesting than a lab deployment. Ethics review is important and required, so if you want to do the same, please start it early. Most of the other problems are either administrative problems, where you have to arrange things with the university, or security problems, where you don't want your deployment to become a botnet without your knowledge. And so, finally, having created all of this,
N
we want to say that this infrastructure is pretty interesting and representative for network research in academia. It has live user traffic, and many universities have many users. So if you're trying to reproduce it, and we encourage you to reproduce it,
N
it would be quite useful for you for research. It's cheap; especially the Raspberry Pis are pretty cheap, and deploying everything is pretty simple; all hardware components are easy to buy, well, the servers can be expensive. On our website we have a separate page for reproducibility, where we have all the links to Amazon and everything, all configurations, all GitHub repos, down to the code for labeling our devices. And a pretty important part: we invite other researchers to participate, to submit your experiments; we want to collaborate with you and provide the platform for you as well. There is an email where you can mail us, or you can go to the website and find the same information there. And that's it from my side. Thank you so much for your attention, and I'm happy to answer your questions if you have any. Thank you.
L
Thank you. Greg Mirsky, Ericsson. A very interesting presentation, thank you. So, you mentioned that you use a combination of active and passive measurements. Can you clarify what active measurement method you use?
N
What exactly do you mean by method? We have Raspberry Pi devices, they have basic Linux on board, Ubuntu, and this allows us to do any measurements that you can do on an Ubuntu device, so from speed tests to implementing custom scripts in Python; you can run basically anything, okay?
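As an example of the kind of self-contained script such a node could run, here is a crude HTTP download throughput probe; the test URL is a placeholder.

```python
# Minimal sketch (assumption): a throughput probe a measurement node might run.
import time
import urllib.request

URL = "https://speed.example.com/100MB.bin"   # hypothetical test object

start = time.monotonic()
with urllib.request.urlopen(URL, timeout=30) as resp:
    size = len(resp.read())
elapsed = time.monotonic() - start

print("{:.1f} Mbit/s over {:.1f} s".format(size * 8 / elapsed / 1e6, elapsed))
```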
L
So basically, it's not that you're using, for example, the TWAMP or STAMP protocols, yeah.
L
Okay, but are you using any particular measurement protocol at this time?
N
No, no. Okay, we use a particular platform for submitting experiments, netUnicorn; the details are also available on the website, but that's the only restriction that we have. And most often we do experiments in Docker containers, which also creates some restrictions for researchers: you usually don't have access to raw devices and, for example, Wi-Fi statistics, but that's due to security restrictions.
L
Okay, so have you given a thought to using hybrid measurement methods, like in-situ OAM or the alternate-marking method?
C
We can only take one more question. Ryo? Yeah, hi.
O
Ryo Yanagida from the University of Glasgow. Thank you very much, it was very interesting. I think I see the rationale, like how difficult it is to do network research involving large infrastructure that provides a more real-life sort of scenario or situation; I get that bit. What I'd like some clarification on is the reproducibility: what do you really mean by reproducibility in this context? Because it could mean a wide range, a whole scale, of reproducibility.
O
N
Yes, that's not quite possible with a live network. Regarding reproducibility: here on this slide, reproducibility means reproducibility of the platform itself, not of the experiments. I mean, we have a web page on our website where we describe all our GitHub repos, how our websites are built, how the platform is built, what components are used, what the server specifications are, and everything. Regarding reproducibility of experiments,
N
it's a separate topic, but basically the Docker-based solutions plus the netUnicorn platform that we are using allow us to have reproducible code for experiments and, basically, reproducible pipelines. And it's not reproducibility in terms of having the same traffic each time you run the experiment, but at least it's reproducibility in the sense that you can expect the same behavior each time you run the experiment, that your experiment will have the same behavior.
C
So we'll have to take the other questions offline. Our next speaker is Tobias Fiebig from the Max Planck Institute for Informatics, and his presentation is Crisis, Ethics, Reliability and a measurement.network: Reflections on Active Network Measurements in Academia.
B
Okay, good morning academics, engineers and other people from the internet. I'm Tobias Fiebig. You might know me from other fun research, like the one about the airport, or university IT in the cloud and how Zoom now tells us what to do in our seminars, or that without feminist culture you won't have IT security. But today I'm talking about, well, a little bit of ethics, a little bit of reliability and crisis, and measurement.network. So, first, network measurements. Well, that's basically what we are doing. It's an important tool for academics to get their papers.
B
It's an even more important tool for practitioners to understand how things work on the internet. They come in active or passive form, and usually especially the active ones are somewhat difficult if we're measuring things like, for example, email. These things have gotten a little bit more complex. If you were measuring email in 1991, you had RFC 821 and 822, a little bit of DNS, like three, four, five RFCs, and if you were really into funny protocols, you could also get a couple more from X.400.
B
If you're doing this in 2022, you have around 500 mail RFCs, 300 DNS RFCs, HTTP for MTA-STS, which is far too many, and of course there's also all the stuff on TLS, and, well, IPv4 and IPv6. Welcome to the new world; this got a little bit more complex.
B
The other thing is, given this complexity, if you do these measurements you have to write reliable measurement software, and here is a little example from my own mail server. One morning I woke up and saw this: I basically saw that I had no SMTP anymore, that my MySQL daemon was handling around 400 queries per second, and my OpenSMTPD,
B
the mail server, was running at 100% CPU. The explanation was that I had used the MySQL backend with the default collation of MySQL, which is latin1_swedish_ci, and some nice person did some active probing involving UTF-8 characters in account passwords on my server. This led to a funny loop of the authentication module of OpenSMTPD breaking off with the warning you see on screen and retrying, which basically saturated OpenSMTPD. And if you write measurement software, it's really easy to find similar things.
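The class of bug described here is easy to reproduce; a minimal sketch of the usual fix, forcing utf8mb4 instead of the latin1 default on the client connection, is below (PyMySQL, with hypothetical table and credentials).

```python
# Minimal sketch (assumption): query the auth backend with an explicit
# utf8mb4 connection charset instead of MySQL's latin1_swedish_ci default,
# so UTF-8 probe input fails cleanly instead of looping the authenticator.
import pymysql

conn = pymysql.connect(
    host="localhost", user="mail", password="secret",
    database="mailauth", charset="utf8mb4",
)
with conn.cursor() as cur:
    cur.execute(
        "SELECT password_hash FROM users WHERE username = %s",
        ("bj\u00f8rn\u2603",),        # probe-style UTF-8 input
    )
    print(cur.fetchone())
conn.close()
```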
B
So, the internet is full of corner cases and we have to account for them. There are a lot of unwritten rules about how you handle protocols, and ideally you want to reuse, for your measurement tools, things that are already tested and deployed software in general. If you write software, you really benefit from being a good and experienced programmer, and, well, you don't want to break things that are not standards compliant. You have to do all the basic things like version control, tests, and proper development best practices.
B
In addition to that, you also have to run your measurements, and here we have a somewhat YOLO example of a measurement setup you might see, which involves some scanning, a recursive DNS server and an authoritative DNS server. And if we look at that, we see that there are funny things in there, because the person who set this up apparently forgot that DNS also comes over TCP, and there's a lot of things that can go wrong here.
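The forgotten-TCP case is exactly the kind of corner a scanner has to handle; a minimal dnspython sketch of the correct behavior, retrying over TCP when the UDP answer is truncated, looks like this.

```python
# Minimal sketch (assumption): DNS lookup that falls back to TCP when the
# UDP response comes back with the TC (truncated) bit set.
import dns.flags
import dns.message
import dns.query

def lookup(qname, server, rdtype="TXT"):
    query = dns.message.make_query(qname, rdtype)
    response = dns.query.udp(query, server, timeout=2.0, ignore_trunc=True)
    if response.flags & dns.flags.TC:          # truncated: retry over TCP
        response = dns.query.tcp(query, server, timeout=2.0)
    return response.answer

print(lookup("example.com", "9.9.9.9"))
```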
B
Then we also want to do ethical measurements, which means we have to consider all possible unintended harms, and especially with the internet being full of corner cases, there's often the case of: yeah, we know the internet is made from duct tape and bubble gum, and this specific thing would be an issue, but we kind of decided not to talk about it, because otherwise things break. Which is something I heard, for example, about colleagues' work on the aggregation of a /32 and putting it into the GRT.
B
We have to get ethics approval, usually from people on an IRB or ethics board who are academics with no clue about the internet. We have to do all the proper attribution things, so reverse DNS, URLs, who is running it, web servers on the scan IPs, etc. We have to do abuse handling, and we have to maintain a block list. So the PhD student we need is somebody who thoroughly understands the protocol stacks they are measuring, is versed in all aspects of IT operations, has programming experience and system operator experience, and, let's be honest, this is not a PhD student.
B
This is basically a whole IT department. And the thing is, PhD students tend to be people; this is sometimes a surprise for faculty, but PhD students are actual people, and they are facing this world of requirements within the reality of a PhD: four to eight years, which they have for their PhD, in which they have to do four top-tier papers,
B
try to do new research advancing the field, have to embed this in related work, and usually do this directly after their bachelor's or their master's.
B
If you do the math, four years and four papers, the first paper should be under submission after one year. This leaves roughly six months to basically get two or three decades of protocol development into the brain of a single person.
B
The other thing is, if you think about junior faculty, there is something very, very important called the tenure clock, and they tend to be running after that. So they are doing service because it helps tenure, they are trying to acquire grants because it helps tenure, they are trying to get publications because that helps tenure; well, you get the idea. Professors, on the other hand, and we are still lacking any kind of evidence that being a tenured professor actually changes the amount of time that is available to you,
B
usually have the same amount of time as an untenured person, amounting to too little. And then, of course, there is the thing that if you become a professor, you become a manager, and you basically get a bit removed from the technical world, so in the end you might not actually even be versed enough to work on these things. In an ideal world, of course, you would have some form of IT support, or former students handing things over.
B
The problem is that research programmer is not really a well-established position at most universities, and, well, university IT is often difficult, often riddled with middleboxes, funny FortiGates on campus. Usually you make a lot of acquaintances in your university's IT department if you run active measurements, because basically everyone has crashed their FortiGate once or twice. Also infrastructure: if a student leaves, it becomes undocumented, so the next students build their own.
B
Before closing this talk, a couple of Q&A items, so we get through the Q&A quicker. The first question, of course, is: why am I doing this alone? Well, it's a relatively simple reason: I have the experience with joint projects that it's better to have something running, and otherwise you end up in bikeshedding. So I want to build this and then hand it over to somebody who can run it.
B
There's, of course, the question of why not some university group should run this, and why I'm instead trying to build this and give it to a public body. Well, it should not be owned by an entity that publishes for a living. Currently, with me, that's of course the case, but I want to change this; researchers tend to be a bit paranoid about being scooped.
B
Isn't it easier to block us if we give this entity a fixed prefix? Well, that's kind of the point, because usually in active network measurements you end up with the problem that you get scanned from Amazon, you block that, and a week later the learning management system of the university is at that IP, which is kind of not nice. What if it doesn't work?
B
How will this be paid for? Well, for now it's supported by the Tobias personal bank account foundation for doing things I consider useful, a grant program, by the way, not accepting other applications, and additionally two operators indirectly sponsor upstream. I got a /22 of legacy IPv4 from the MPI for Informatics for the project, and as soon as things are up, I will also try to motivate more entities. And the last slide: where can we find this? Well, there already are a couple of resources, which are also in the GRT.
B
Basically, there are two PoPs already, Düsseldorf and Berlin, and there are two planned PoPs: something at the MPI for Informatics, to have a bit more infrastructure going, and something in Amsterdam; I need colocation, hardware and layer two between the PoPs. And on the web there is a website, which is currently a static file, but I'm working on something more interactive, and some existing services. And with that, thank you, and I'm happy to take any questions.
B
A
Hi, Colin Perkins. So, clearly you're not wrong. However, I wonder if the process of running this is easier than the process of running the measurement infrastructure:
A
the process of administering and managing the foundation, and managing how to get people involved, and so on. Someone has to do this, and someone has to coordinate with all the people who want to use the infrastructure and so on, and you're bringing in a whole other layer of management overheads.
B
So, I will try, and, well, the governance part will be harder than just building the infrastructure. But the problem with the infrastructure being built by everyone is that not everyone can do that and not everyone has the experience, so doing it in one spot, once, is probably easier than people reinventing the wheel everywhere, even though, for IT, reinventing the wheel is kind of standard practice, granted.
B
C
If there are no other questions, let's thank our speaker. Tobias, thank you.