From YouTube: IAB workshop on Environmental Impact of Internet Applications and Systems 3: Improvements
Workshop webpage: https://datatracker.ietf.org/group/eimpactws/about/
Papers in GitHub: https://github.com/intarchboard/e-impact-workshop-public
Session 1 (The Big Picture): https://youtu.be/90GxlL34rQ4
Session 2 (What Do We Know?): https://youtu.be/EaNgREHLXRg
Session 3 (Improvements): https://youtu.be/jjEZwuuChZc
Session 4 (Next Steps): https://youtu.be/Pc_XY5sDR58
A
Maybe it's time to get started. I see 30-plus people on board, and maybe a few more will show up, but it's already three o'clock, and we have a bunch of things on the agenda.
A
So this is the third session of the workshop, and we'll be talking about potential improvements. Eve Schooler will lead this session; I'm just going to cover the reminder about the basics of how we work, the ground rules. In case you're just tuning in for the first time for this session: welcome, and a reminder again that the session is recorded and the recordings will be published.
A
Your position papers are public already, and of course this is a professional meeting: we expect professional behavior, and no kind of harassment is accepted. A reminder again that there are lots of people with very different backgrounds, so do explain clearly what you mean, be polite, and learn from the others' viewpoints. I think in this sense in particular, since we will be talking about some of the technical things, do keep that in mind. With that, I think I will just hand it over to you, Eve.
B
Okay, let me advance to the next slide. One thing that was quite helpful yesterday was to reiterate what some of the goals were, so we've done that again today, and we look forward to you helping us refine our objectives and hopefully meet them. We are clearly wandering into territory where we are discussing potential solutions today, and also some of the feasibility behind them and their benefits. And can we quantify those things, so that we can arrive at answering the questions: are the benefits significant?
B
How do they make an impact? Can they, and how much of an impact do they make? For example, some of the points that Vesna made on Monday that were quite helpful were to consider the following. We spoke a lot yesterday about how energy usage has remained rather stable or static, but the directive coming from the UN IPCC and elsewhere is the urgency to reduce our usage of resources, electricity and, ultimately, our carbon footprint and emissions.
B
And so one of the ways to consider improvements is by how much: is it 10 percent less per year, or some other goal that we're trying to meet? Certainly there have been places like the WRI, the World Resources Institute, that have tried to quantify how much faster we need to accelerate our efforts to meet these goals, in terms of, say, the introduction of renewables, the move off of fossil fuels, and how quickly we need to move to the electrification of transportation. And so it would be good to have an ambition for our venues.
B
Our areas of impact. Another area that was suggested was to consider disaster scenarios, emergency situations, and extreme climate as baseline requirements. For example, in the United States, NOAA, the National Oceanic and Atmospheric Administration, our trusted weather and climate organization, has said that in the last decade the number of billion-dollar emergencies has doubled. And then other kinds of ways to talk about improvements, or caveats, are to beware of or avoid techno-optimism: the idea that everything's going to work out, that technology is our savior.
B
The efficiency paradox: the more we save, the more we use. And also power differentials. The question to you as an audience is: are there other impacts elsewhere? What kinds of trade-offs exist?
B
What kinds of incentives, which are certainly quite important, and who are we trying to incent? As well as security issues, which sometimes, more than sometimes, are at odds with our goals for efficiency. And so the scope today is: there's no need to focus only on the things that the IETF can do, even though we're sponsoring this workshop. It's not only the things that the IETF can do, but what we can do collectively.
B
We
are
so
I
would
Advance
the
slides
if
you
could
sorry-
and
we
have
five
terrific
talks
today-
we've
allotted
about
50
minutes
to
them,
so
for
all
of
you,
speakers
we're
trying
to
stay
within
about
10
minutes
for
the
talks,
we're
lucky
to
have
a
conversation
on
metrics,
of
course,
that
underpin
everything.
B
Alexander
Khan
will
be
speaking
to
that
we're
going
to
have
two
talks
on
General
thoughts
on
the
solutions
and
trade-offs,
Carlos
prignataro
and
Suresh
Krishnan
General
thoughts
on
Solutions
and
trade-offs
that
include
routing,
for
example,
Alvaro,
retana
and
Russ
white.
We'll
speak
to
that.
And
importantly,
you
know
the
data
formats
that
underpin
much
of
this
Brendan
Moran
and
Carson
Borman,
and
a
return
to
our
beloved
topic.
Multicast.
B
We
have
about
70
minutes
of
budget
or
discussion
and
if
it's
anything
like
the
last
few
days,
it's
been,
which
is
which
have
been
fantastic,
I
really
am
looking
forward
to
that,
and
please
continue
to
drop
your
comments
and
questions
as
well
into
the
chat
window,
and
we
will
try
to
service
some
of
those
service
and
service
some
of
those
as
well,
and
please
continue
to
either
jump
in
or
raise
your
hand
to
ask
questions
as
well
with
that
I
think
we
are
over
to
you.
Alex.
C
Yeah, so good morning, or good afternoon, everyone. So the first presentation here concerns metrics. This submission was based on the draft that you see referenced here, along with a bunch of co-authors whose names you also see there. So let me jump into it. Context: we don't need to talk much about that, I think, clearly.
C
This
is
what
the
workshop
is
all
about:
the
fact
of
basically
how
how
we,
how
the
ITF,
how
the
network
Community
can
contribute
towards
addressing
one
of
Mankind's
Grand
challenges,
which
is
basically
reducing
carbon
footprint
and
yeah
I.
Think
as
we're
all
aware,
is
networks
are
both
an
enabler
for
solutions
for
Solutions,
but
also
a
contributor
to
the
problem
itself,
and
there
are,
of
course
many
contributors
to
network
Energy
Efficiency
today,
many
of
which
go
perhaps
beyond
what
the
where
the
ietf
can
can
contribute
directly.
C
Even if it's just a smaller slice of the pie, everything counts. We saw yesterday Michael Welzl's diagram of the moon and the stars and the planets, and even if it's not a major planet, maybe a moon would already make an impact as well.
C
So of course networking can contribute. This is the subject of the discussion, or further discussion, that we will have here, but there are quite a few potential areas to look at. One area certainly concerns how you manage networks, how you deploy networks, how you optimize networks; networking standards do play a role in enabling those. And these are a number of things: basically anything from, well, you need to provision networks, therefore you tune and dimension them.
C
Ultimately, traffic-management types of functions: TE and other controllers have long been about optimizing various parameters, and in the past we parameterized things such as utilization, or cost, or service level objectives, and so forth. At the end of the day, energy usage is just another parameter that can be optimized that way. And there are many other options, such as: where do you place virtual networking functions? How do you plan routes, segments, paths, and so forth?
C
And, of course, for all of this: how do you moderate the trade-offs? Because while we want to reduce the carbon intensity, we of course still need to keep in mind that there are service levels that need to be delivered, utilization that needs to be maintained to make things economical, and so forth.
C
Well, beyond management, there are other aspects, for instance in control. Would it make an impact if you could select from greener path alternatives, as an example? Then there are network architecture issues; some of this actually came through in earlier talks too. Where would we cache, from a carbon standpoint? Where does it make the most sense? Here, obviously, there are again trade-offs involved.
C
How much do we spend transmitting data versus storing it elsewhere? And potentially even things such as protocol design itself: for instance, would it be helpful if we played with smoothing versus bursting of traffic, and so forth? But regardless of which measures are in the end selected, it all starts with visibility, and there's this famous saying by Peter Drucker:
C
"If you can't measure it, you can't manage it." And one might add: or you cannot assess how effective your solutions are, and you would have to devise solutions that rely on control groups. So accordingly, you do need visibility, and visibility starts with the right metrics. This is really the foundation for everything else, and it also happens to be an area that is very actionable and where the IETF may be able to make an impact.
C
So, concerning metrics: the question, then, is what metrics do we need to define, what metrics are needed? This is, of course, very much driven by the types of question that we want to answer. How do we assess the effectiveness of a solution? How do we compare between design alternatives? How can we optimize a network deployment? How do we know if one is better than the other, etc.? And so this is also about what the metrics should cover.
C
We have, of course, energy usage and efficiency, where the scope is the network itself. But then the question is, beyond usage and efficiency: what about the energy sources? Are they sustainable? This basically goes beyond the network in a narrow sense, if you will, addressing the entire deployment. And then you can go further still, for instance taking into account the manufacturing life cycle, the need for cooling, and so forth.
C
All of those things. So with the metrics we want to provide a holistic picture, one that can account for the whole picture at the end of the day, not just a part; that can help us address the questions that we want answered; and that can also help us enable the control loops of the controllers depicted here, again based on metrics. So, getting to the metrics.
C
When we look at the metrics, one question is how we can structure the metric space. The obvious way to start, of course, is with the device and equipment, but that by itself is probably not enough. We also want to know about flows, or service instances, and so forth.
C
We also want to assess things such as the carbon intensity of paths, and also talk about the network at large. And we want to address all of these along all three verticals, if you will: energy usage and efficiency (this is actually where the current focus of the draft is), but we don't want to forget about the other factors as well. So with this, let me turn to some of the metrics that we can identify. Just a disclaimer: it's not a comprehensive list.
C
At the device and equipment level (sorry, this is a little bit busy) there are a bunch of things. It basically starts with the standard items you would expect on data sheets and so forth: the device ratings, if you will. What are the power consumptions when idle, at various loads, at various configurations, and so forth? And then there is the current aspect of what is actually being used right now.
C
We want to be able to know the power actually drawn; we want to know this for different time intervals: since system start, for the past minute, and so forth. And in addition to these absolute measures, we want to also normalize, or derive, metrics so that we can assess the actual efficiency. We want to relate it: okay, this is how much power we consume, but how does it relate to the amount of traffic that we are actually passing, and so forth?
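The normalization just described, relating power draw to traffic carried, can be sketched roughly as follows. The function name and the figures are illustrative, not from the draft:

```python
# Sketch of a normalized energy-efficiency metric: relate absolute power
# draw to the traffic actually carried. Names and numbers are illustrative.

def efficiency_w_per_gbps(power_watts: float, traffic_gbps: float) -> float:
    """Power per unit of carried traffic; lower is more efficient."""
    if traffic_gbps <= 0:
        raise ValueError("no traffic carried; efficiency undefined")
    return power_watts / traffic_gbps

# A router drawing 400 W while forwarding 200 Gbps:
ratio = efficiency_w_per_gbps(400.0, 200.0)   # 2.0 W per Gbps
# Note 1 W/Gbps equals 1 nJ per bit, so this is also 2 nJ per bit.
```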
C
We want to manage the overall ICT environment, which might include the power sources and so forth, and among the things that can certainly be done is to maintain something like power-source sustainability ratings, which are either obtained from an energy provider or which might reflect the operator's mix of energy sources. Likewise, extending this beyond power-source sustainability ratings, there might also be device sustainability ratings that rate the device as a whole.
C
How eco-friendly is it? There are replacement and lifecycle considerations: metrics where you could, for instance, indicate how the energy debt incurred by the manufacturing of the device gets amortized over the equipment lifetime.
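The amortization idea just mentioned can be sketched as spreading a one-time embodied-energy cost over the service lifetime; all figures below are invented for illustration:

```python
# Illustrative sketch: amortize a device's manufacturing "energy debt"
# over its lifetime, yielding an equivalent continuous power draw that
# can be added to the operational power. Figures are made up.

def amortized_power_watts(embodied_energy_joules: float,
                          lifetime_seconds: float) -> float:
    """Spread one-time embodied energy evenly over the lifetime."""
    return embodied_energy_joules / lifetime_seconds

YEAR = 365 * 24 * 3600  # seconds in a (non-leap) year

# Suppose manufacturing cost 3.6 GJ and the device serves 5 years:
extra_w = amortized_power_watts(3.6e9, 5 * YEAR)   # about 22.8 W
total_accounted_w = 250.0 + extra_w                # operational + embodied share
```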
And, of course, in the interest of time, let me just move on. Because, as mentioned, it's also important that we move beyond the equipment itself, since this is where a lot of the networking functions and operational aspects are also to be found.
C
So one important aspect concerns flows: how can we relate carbon intensity to flows, or also to instances of services? Metrics of interest here are, for instance, the amortized energy that is consumed over the duration of a flow (basically the power budget, if you will, that could be assigned or associated with a given flow) and also, potentially, and this might be important for optimization, the incremental energy that would be consumed that would not have been consumed otherwise.
C
Of course, as we heard yesterday (this is an excerpt from the diagram in Dan Schien's talk), if we have a step function then this may be zero, so it may not be interesting; but in those cases it will, on the other hand, be very important to know when these steps occur.
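The two flow-level quantities just discussed can be sketched as follows. The step-function power profile and all values are hypothetical, stand-ins for the kind of curve shown in the talk:

```python
# Sketch of two flow-level quantities: energy amortized to a flow by
# traffic share, and the incremental energy a flow adds under a
# step-function power profile. All names and values are hypothetical.

def amortized_flow_energy(total_power_w: float, duration_s: float,
                          flow_bits: float, total_bits: float) -> float:
    """Attribute device energy over an interval to one flow by its share."""
    return total_power_w * duration_s * (flow_bits / total_bits)

def power_at_load(load_gbps: float, steps) -> float:
    """steps = [(load_threshold_gbps, watts), ...], ascending; the device
    draws the watts of the highest step reached."""
    power = steps[0][1]
    for threshold, watts in steps:
        if load_gbps >= threshold:
            power = watts
    return power

profile = [(0.0, 100.0), (40.0, 150.0), (80.0, 200.0)]
# A 5 Gbps flow on top of 70 Gbps background stays on the same step:
delta_same_step = power_at_load(75.0, profile) - power_at_load(70.0, profile)   # 0.0 W
# The same flow on top of 78 Gbps crosses a step boundary:
delta_crossing = power_at_load(83.0, profile) - power_at_load(78.0, profile)    # 50.0 W
```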
C
Well, then, beyond flows there are paths. Clearly, as we want to optimize paths, and perhaps optimize path selection and so forth, it may be interesting to have things such as path sustainability ratings. These might be functions of the ratings of the different hops that are being traversed: does the path include dirty devices, if you will, or is it composed of clean devices? And the function could be anything: it could be an average, it could be the sum, it could be the maximum.
C
And likewise, we may want to know the normalized power consumption across the path, to make paths more comparable. Then, finally, and of course this is the purpose of all of this, we want to reduce the total carbon footprint of the network as a whole, so we will also need to aggregate a lot of these metrics for the entire deployment. So, just before concluding, there are a few other considerations I want to mention.
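The path-level ideas above, a sustainability rating as some function of per-hop ratings, plus a normalized power cost across the path, might look roughly like this. The rating scale and all device figures are invented for illustration:

```python
# Sketch of path-level metrics built from per-hop values: a path
# sustainability rating combined by some function (average, minimum, ...),
# and a normalized power cost summed across the path. Figures are invented.

def path_rating(hop_ratings, how="avg"):
    """Combine per-hop sustainability ratings (say 0 = dirty .. 1 = clean)."""
    if how == "avg":
        return sum(hop_ratings) / len(hop_ratings)
    if how == "min":              # a single dirty device dominates the path
        return min(hop_ratings)
    raise ValueError(how)

def path_energy_per_bit(hop_w_per_gbps):
    """Sum per-hop normalized consumption; 1 W/Gbps equals 1 nJ per bit."""
    return sum(hop_w_per_gbps)

hops = [0.9, 0.4, 0.8]
avg_rating = path_rating(hops)             # about 0.7
worst_rating = path_rating(hops, "min")    # 0.4: the dirtiest hop
nj_per_bit = path_energy_per_bit([2.0, 1.5, 3.0])   # 6.5 nJ/bit end to end
```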
C
There are a couple of thoughts here, more discussion items really. Energy consumption may be easier to measure than actual carbon emissions, so that is certainly one thread of discussion. Likewise, if you want to obtain metrics across paths and for flows and so forth, this will take more than just instrumenting a device: it may involve in-band mechanisms, or what have you, more than just instrumenting a device agent, if you will. Another aspect concerns certification and compliance.
C
So,
basically,
if
we
do
use
these
metrics
to
optimize
carbon
density,
it
is
of
course
important
to
have
instrumentation
that
is
accurate.
Otherwise
it
may
be
counterproductive,
and
it
may
also
be
particularly
important
when
regulation
in
monetary
incentives
get
involved
and
if
you
say,
if
you
claim
I,
offer
a
Greener
service
and
maybe
charge
something
for
it,
just
as
an
example,
how
would
you
actually
know
that
this
is
true?
C
A third consideration: one of the questions is also how we can treat this not just as a problem for the operator, but ultimately also attribute the energy usage to users and confront users with the consequences of their choices regarding carbon footprint. Anyway, this concludes what I wanted to say. I want to mention again that I believe a lot of these metrics, and what's related to them, is quite actionable in the IETF.
C
This is, I believe, where we can make an impact, and once metrics are defined, there are several areas to look at next. This includes things such as YANG models and potential protocol extensions to support certain energy parameters; of course, solutions and use cases to drive each of those need to be defined. And finally, just to mention, as I said, this is based on a draft.
B
Thank you for that excellent summary and talk. I'm perusing the chat window; a lot of wonderful comments there. Actually, we have time for maybe one question.
D
So the question was really this: you are presenting a lot of different options here, or possibilities rather, and of course there are already standards out there, both when it comes to energy performance, especially in relation to networks, and also in relation to GHG emissions and so on, from bodies like ETSI, the European standardization body, and the ITU. So I was curious whether you have done any gap analysis or something similar in relation to your work, or if this is a later step.
C
Well,
I
think
Gap
analysis
needs
to
be
done
absolutely.
This
is
basically
seen
as
a
later
step.
Somebody
defining
the
metrics
already
coming
up
with
a
set
of
comprehensive
metrics
is
a
is
I,
think
the
first
step,
and
then
from
that.
Basically,
then,
the
question
will
be
the
next
thing:
what
yeah?
What
exactly?
What
are
the
gaps
and
what
of
the
metrics
and
how
should
they
be
pre-prioritized
right?
Because
there
are
a
lot
of
possibilities
for
metrics.
C
Some
of
them
may
be
more
useful
than
others,
or
some
of
them
maybe
well
or
there
may
be
also
certain
use
cases
that
may
want
to
be
prioritized.
That
requires
some
of
the
metrics,
but
not
others,
but
that
would
be
yeah
I,
think
that
would
be
the
outcome,
then,
of
yeah,
of
further
discussions
and
and
the
next
step.
E
So I just want to say: I think Alex and I and Carlos agree on a lot of the things. I looked at the papers as well, so we have very similar thoughts on a lot of things. I want to emphasize the things where we didn't really have the same scope in the papers.
E
So
so
one
thing
that's
like
I
think
like
pretty
much
everybody
in
here
knows
right,
like
with
the
scope
on
scope,
2
and
scope,
three
emissions,
and
so
one
thing
that
was
kind
of
like
you
know,
new
to
a
lot
of
us
is
that,
like
you
know,
scope
on
and
scope
2
are
mandatory
to
report
in
most
of
the
jurisdictions,
whether
it's
Europe
or
U.S,
or
like
even
like
some
place
in
Asia.
E
So
those
like
get
a
lot
of
priority
right
like
in
when,
like
things
are
getting
reported
like
you
know,
everybody
looks
at
scope
on
and
scope
2
and
how
to
like,
reduce
that,
and
you
know
how
to
get
to
Net
Zero
there,
but
score
three
is
like
kind
of
outside
the
control
of
the
arc
right,
so
the
whether
it's
usage
of
the
products
or,
like
you
know
the
energy
that's
consumed,
like
you
know
the
like
by
the
organization
and
how
it's
like
you
know
getting
produced
but
like
in
our
industry
right,
at
least
in
the
networking
industry,
the
the
scope
3
emissions
are
like
much
larger,
so
this
is
obviously
like
not
to
any
precise
scale,
but
kind
of
to
give
you
like
an
absolute
idea
of
like
this
thing.
E
The scope 3 emissions are much larger than scope 1 and scope 2 put together, so we need to focus a little bit on the scope 3 emissions, because our organizations themselves are focusing on scope 1 and scope 2, because of the regulatory and legal requirements to report them. And one thing I wanted to call out is that scope 3 for somebody is scope 2 and scope 1 for somebody else.
E
I think this is in the same direction as what Alex was talking about: when people are operating gear made by some network vendor, they have it in their reporting chain, as they need to report those things. But as the networking industry ourselves, we need to help the customers reduce their scope 1 and scope 2, which could be our scope 3 emissions.
E
And one thing that has changed over the years: we used to have, I would say, three things that we optimized for back in the day. We tried to maximize the throughput, minimize the latency, and increase the availability of the system itself. That's changed over the past decade or so: energy efficiency is another angle, and, not to take anything away from this, I think Russ's point that this is an NP-complete problem is really, really relevant here.
E
So what we're trying to do is make smart choices with energy efficiency in mind: we're trying to bound the usage of the other dimensions and find an energy-efficient frontier, for lack of a better term. And this also includes stuff like the circular economy: where does this come from?
E
Alex talked about metrics, but stuff happens much before that. Is there something we can do with modularity? Is there something we can do with packaging? What kinds of power supplies are sitting there? So a lot of these things are even pre-metric.
E
This
is
like
more
static
things
that
we
can
measure
through
the
supply
chain,
but
it's
also
something
we
kind
of
have
to
consider
in
the
overall
life
cycle
and
similarly,
like
you
know,
we
have
some
programs
at
least
like
in
Cisco.
We
have
some
programs
like
for,
like
you
know,
recycling
the
hardware
thing,
so
all
those
things
need
to
get
considered
when
we
do
the
sustainability
measurements,
so
we
see
kind
of,
like
you
know
three
faces
of
like
you
know
getting
through
this
right
like
so,
we
start
off
with
visibility.
E
I
think,
like
I,
think
most
of
us
I
would
say.
Probably
everybody
agrees
that,
like
you
know,
we
kind
of
need
to
get
visibility
into
the
system
then
move
on
to,
like
you
know
how
we
get
insights
from
it
and
how
we
actually
recommend
things
to
people
who
are
not
like
you
know
doing
this,
like
I,
said
Deja
pretty
much
right
to
like
improve
the
system.
And,
finally,
how
do
we
get
the
systems
to
kind
of
improve
themselves?
E
So
that's
kind
of
the
high
level
I
would
say
like
the
blocks
of
things,
so
it
doesn't
mean
that
really
these
things
need
to
happen
in
series
right,
like
you
know,
he
can
kind
of
start
like
you
know,
phase
two
when
phase
one
is
happening,
but
it's
kind
of
like
I
would
say
the
harder
problems
to
solve
are,
like
you
know,
coming
further
down
we're
kind
of
pushing
the
can
down
the
road.
E
So
as
like
you
know,
Alex
said
right
like
so
whether
it
has
repeated
Peter,
Drucker
or
like
Lord,
Kelvin
or
whatever
right.
You
cannot
improve
what
you
cannot
measure
so
I
think
the
we
have
a
long
history
of
work
in
the
field.
So
ITF
like,
like
idea,
has
done
quite
a
bit
of
work.
E
Like
you
know,
net
mod,
like
you
know,
gang
like
you,
know
a
whole
bunch
of
stuff
like
that's
having
an
idea
if
I
ITF
has
done
quite
a
bit
of
stuff
and
and
doing
stuff
in
nmrg,
IAB
has
done
like
a
lot
of
documents,
like
you
know,
providing
guidance
in
this
thing,
so
we've
been
pretty
successful
in
in
kind
of
standardizing
like
the
things
that
need
to
get
measured
and
how
we
measure
them,
and
so
we
do
this
for
management
like
Network
management,
for
Performance
Management
and
for
troubleshooting
right,
like
and
but
I
think.
E
We
also
need
to
start
doing
this
for
environmental
impact
and,
of
course,
it's
difficult
to
do.
But
the
problem
is
like
the
longer
we
delay
doing
this.
We're
gonna
like
leave
ourselves
open
to
a
lot
of
stuff
happening
outside
the
IDF
right,
which
is
kind
of
like
lets.
You
have
stuff
that's
potentially
redundant
so
like
different
vendors
are
going
to
do
different
things.
E
It's going to be proprietary to them, and sometimes people are going to do contradictory metrics as well, so you really don't have stuff that's well reconciled. This is a problem in itself, but for somebody who's using these things, if you have very different things from different vendors and you have a multi-vendor network, it becomes very difficult to have an overall view of the system.
E
So we really need to act quickly to do something that we can agree on and set industry-level standards on what is going to get measured and how. And terminology is really part of it, because people have different ways of measuring stuff, so we have to have precise definitions of what things are. The next step is to have something for the industry to use, and this is just a straw-man proposal:
E
that we have some kind of open-source implementation, for people to actually take these standardized metrics that we've built and collected and put them together in a way that people can actually see their environmental impact, whether it's energy efficiency, or somehow translating this into carbon emissions, or whatever angle you want to see it from, and then to visualize this.
E
So that's kind of the first step.
E
Of course, there is going to be some stuff that's common between different network vendors and users, but we also need a way to customize this, so people can actually add some kind of value on top of it, because if it's just going to be the lowest common denominator, it's probably not going to be enough for a lot of the users. And I think Eve brought this up, yesterday or on Monday: this is a multi-domain problem at some point, and so if you have similar ways of looking at stuff, at least there's hope that you can actually reconcile this across domains.
E
But without that, we probably don't even have the same language to speak across domains. Of course it's a very difficult problem, but at least we'd have a start in that direction. And the next step would be to provide some kind of advice: are there any operational changes you can make? Can you turn off specific routers at specific times?
E
Are there some transceivers that can get turned off? Is there some equipment that can be replaced when it's getting close to its design lifetime? Those kinds of things can all be recommendations coming from such software towards the people, so people can actually plan for this and make productive changes. And at the last step, we do this at a longer time scale.
E
So if it's going to be months or years, a human looking at it, providing solutions or looking at solutions and ordering stuff, is all going to work. But at some point a human doing this is not going to cut it, so if we want to do something on a smaller time scale, we kind of have to start building some amount of self-awareness into the network.
E
And I know the IETF has some work ongoing, but not really in this space; I think we can actually start looking at whether there is something we can do: probably make small changes, have a feedback loop to see how the changes affect the system, and keep repeating this in small increments. And one key thing we see is that this needs to be done in a declarative fashion.
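The small-increment feedback loop just described could look roughly like the sketch below. Everything here, the toy network model, the measurement functions, and the thresholds, is hypothetical, not a real controller API:

```python
# Hedged sketch of a closed control loop: try a small energy-saving
# change, keep it if the SLA still holds, otherwise roll it back.
# The "network" here is a toy stand-in, not a real controller API.

def control_step(state, measure, apply_change, revert_change):
    """One iteration of the loop; returns the accepted measurement."""
    before = measure(state)
    apply_change(state)
    after = measure(state)
    if after["latency_ms"] > state["sla_latency_ms"]:
        revert_change(state)       # change hurt the SLA: undo it
        return before
    return after                   # change kept: less energy, SLA still met

# Toy model: turning off a spare link saves power but adds a bit of latency.
def measure(s):
    return {"power_w": 100.0 * s["links_on"],
            "latency_ms": 10.0 + 2.0 * (4 - s["links_on"])}

def apply_change(s):
    s["links_on"] -= 1

def revert_change(s):
    s["links_on"] += 1

state = {"links_on": 4, "sla_latency_ms": 14.0}
m1 = control_step(state, measure, apply_change, revert_change)  # 3 links, kept
m2 = control_step(state, measure, apply_change, revert_change)  # 2 links, kept
m3 = control_step(state, measure, apply_change, revert_change)  # would break SLA: reverted
```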
E
I don't think people are going to start doing this by saying, "make my energy efficiency 93" or whatever; that's not really the goal for somebody. People are going to have to specify something at a higher level, and once we've seen all the collected metrics, we can have some kind of machine-learning algorithm go and look for opportunities to improve these things. And also look at something called scope 4.
E
Scope 4 is really avoided emissions; I think somebody talked about it yesterday: us meeting here online instead of a lot of travel, that's what it goes towards. So I think having some kind of automated system is going to help us look for opportunities in this space, to see what kinds of emissions can be avoided in the future. So I'm just going to my closing slide.
E
And looking at a vendor-agnostic standard is very, very important, because we see a lot of stuff coming out in the market that's really greenwashing: just coming up with stuff to say "oh, we are doing amazing things", but not really substantive. So we need to do something that's vendor-agnostic, and something that's substantial, to reduce the environmental impact.
E
Another
thing
we
can
do
I
think,
like
probably
like
Alvaro
and
Russell,
cover
it
somehow
in
their
thing,
like
kind
of
looking
at
their
paper,
so
like
we
kind
of
need
to
build
robustness
and
recoverability
into
the
protocols,
so,
instead
of
doing
unnecessary
redundancy,
so
a
lot
of
the
times
we
have
protocols
which
are
like
really
over
engineered
they're,
like
you
know,
having,
like
you,
know,
four
links
on
standby,
doing
nothing
just
in
case
things
go
wrong.
E
I
think,
like
you
know,
we
kind
of
need
to
avoid
stuff
like
that
and
build
robustness
and
recoverability
into
the
protocols
and
and
also
look
at
like
how
we
meet
SLS
in
the
more
energy
efficient
way.
So
we
don't
need
to
really
beat
the
SLS
all
the
time,
but
also
look
at
like
you
know
what
is
the
like.
I
would
say
the
least
thing
we
can
do
to
meet
the
slas
and
finally,
like
you
know,
kind
of
avoid
micro,
optimizations
and
consider
product
life
cycle.
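"Meeting the SLA rather than beating it" can be read as a small selection problem: among configurations that satisfy the SLA, pick the cheapest in energy. A toy sketch, with invented link options and figures:

```python
# Toy sketch: pick the lowest-power configuration that still meets the
# SLA, rather than the fastest one. All options and figures are invented.

def cheapest_config(options, sla_latency_ms):
    """options: list of (name, power_watts, latency_ms). Return the
    lowest-power option whose latency still meets the SLA."""
    feasible = [o for o in options if o[2] <= sla_latency_ms]
    if not feasible:
        raise ValueError("no configuration meets the SLA")
    return min(feasible, key=lambda o: o[1])

options = [
    ("400G", 30.0, 2.0),   # fastest, most power
    ("100G", 12.0, 5.0),
    ("25G",   5.0, 12.0),  # cheapest, slowest
]
# With a 10 ms SLA, the 25G option is infeasible, so 100G wins on power:
best = cheapest_config(options, sla_latency_ms=10.0)
```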
E
One of the things we kind of realize is that, let's say, a core router of today could become an edge router for tomorrow, if it's possible to have some of the functionality that's required for the edge router, which is not always there. What we tend to do, and at least most of us are engineers, is try to optimize everything very, very tightly to the use case, and we are proud of it, but I think at some point
E
we need to look at flexibility, so that things can actually last longer. In some cases that provides much more benefit than optimizing everything very, very tightly for what can be done, and I know
E
there are cases where this doesn't really work. For example, Karsten is doing a lot of work in IoT, with things that last a year on a double-A battery; those kinds of things don't work the same way. But at least think about it when we design protocols. So that's pretty much it for myself. Thank you.
B
Great, thank you, that was a terrific overview. I think in the interest of time we will move on, unless somebody has their hand up, and I'm not seeing it.
B
G
That's what I'm gonna do. Hopefully it continues to share correctly.
G
Right, so this is based on a draft. Alvaro and Manuel and I did a lot of work way back when about energy awareness, and we approached things from a lot of different angles, one of which was to just reduce the amount of energy that is required by routing protocols. But we kind of walked away from that a little bit because we weren't certain how much gain we would have there, so maybe keep that in the back of your head as another possible optimization in the future; it's not something I'm looking at right now. What we did say is that there are essentially three modes that we could think of
G
that would be helpful. The first would be to reduce links. For instance, I have two links right here; perhaps I could get rid of one of those two. I have two parallel paths here; perhaps I could get rid of one of those two. And the second is removing redundant equipment. So, for instance, if I could get rid of these two routers, or power them down for some period of time or something like that, that would also reduce cost, or reduce energy usage.
G
Another, which has been talked about in the chat, is that you can actually reduce the speed of these links. If this is 100 Gig, then you can potentially drop it: if you're using QAM, say four channels of QAM over an optical link or a wireless link, you might be able to reduce to 25 Gig or something like that and reduce the power usage. And all of these can be done in a time-variant way, and I think that's an important point.
G
But now let's look at some of the strictly routing-protocol things. I'm not going to talk again a lot about equipment, because equipment is problematic in that the more often you bring something down and pull it back up, the more often it's going to fail. So you just have to think about the trade-offs between equipment failure and how often you're bringing it up and down, just like a light bulb: in reality, the more often you turn it on and off, the quicker it fails.
G
I would argue that we don't know the answers to these questions right now, because we haven't done a lot of measurement in this area to understand, say, if I shut a router off 15 times in a day versus never, what is the uptime going to be? But thinking strictly from a control-plane perspective, just thinking about some of the impacts that we have: for instance, let's say the left is my bandwidth and the right is my energy usage; then I can say, okay.
G
Well, I think I did this one backwards, by the way; I think this is one four. But anyway, I could say: you know what, I could save a lot of energy by cutting this link out. But when I cut that link out, I'm driving traffic up through this upper path. This increases what we call stretch, which is simply the number of hops in the network itself. And the thing is that every hop, you clock off optics and into electronics, and off electronics onto optics.
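The stretch effect described above can be sketched numerically. This is a minimal illustration (the topology and hop counts are made up, not from the talk): compare the shortest-path hop count before and after a link is powered down for energy savings.

```python
from collections import deque

def hops(adj, src, dst):
    """Shortest-path hop count between src and dst via BFS."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return None  # unreachable

# Hypothetical topology: a short lower path A-B-D and a longer upper path A-C-E-D.
adj = {
    "A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "E"},
    "D": {"B", "E"}, "E": {"C", "D"},
}
before = hops(adj, "A", "D")           # 2 hops via B

# Power down link A-B for energy savings: traffic shifts to the upper path.
adj["A"].discard("B")
adj["B"].discard("A")
after = hops(adj, "A", "D")            # 3 hops via C and E

stretch = after / before               # 1.5x: each extra hop adds a SerDes step
```

Every unit of stretch above 1.0 is an extra optics-to-electronics-to-optics traversal, which is where the added delay and jitter come from.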
G
When you do these serialize and deserialize steps, you are adding delay to the network, and you're potentially adding jitter to the network. Because here, first of all, you have the simple fact that I'm switching from this path to this path, which is going to cause routing-protocol convergence, which causes jitter, what the application sees as jitter, as the traffic shifts from one path to the other path. The second thing is, as I push more traffic onto this path,
G
these queues: if I have very carefully tuned my quality of service, and I'm using traffic engineering or traffic steering to make sure that the network is optimal, so, for instance, let's say that I push all my video traffic this way and all my voice traffic this way, and then I kill this path for energy savings: I'm now mixing video and voice in the same queue structure, which is much more difficult to do, and lots of things like that.
G
So you decrease the aggregate bandwidth, you're increasing stretch, you're potentially increasing jitter, you can be increasing delay, and these can all have a negative impact on applications. So the whole point here is not that we shouldn't do any of this stuff. The point is, we need to think about it and figure out how to do this rationally, where it makes sense and where it might not make sense. It may be that in some cases shutting down a particular set of links might save us energy.
G
So I need to decide when I should take a link or a piece of equipment out of commission for energy savings, I need to know what that looks like when I bring it back up, and I need to know how I'm going to determine to bring it back up. And this is true even for things with short-term sleeps, like micro-sleeps and things like that, which we've talked about in the past to solve
G
some of these problems: to do micro-sleeps, to say, okay, I really don't think I have traffic for you for the next second, so let's shut down this link for the next second, come back up, and check with me in a second. These kinds of micro-sleep ideas have been around for a long time.
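A rough back-of-the-envelope sketch of why micro-sleeps are attractive; all the power figures and duty-cycle numbers below are illustrative assumptions, not measurements from the workshop.

```python
def duty_cycle_energy(p_active_w, p_sleep_w, awake_s, asleep_s, period_s=86400.0):
    """Average daily energy (joules) for a link that alternates between an
    active state and a micro-sleep state. All figures are illustrative."""
    cycle = awake_s + asleep_s
    avg_power = (p_active_w * awake_s + p_sleep_w * asleep_s) / cycle
    return avg_power * period_s

# Hypothetical line-card port: 5 W active, 0.5 W in micro-sleep.
always_on = duty_cycle_energy(5.0, 0.5, awake_s=1.0, asleep_s=0.0)

# Sleep for 0.9 s of every second, waking briefly to check for traffic
# (the "check with me in a second" idea above).
sleepy = duty_cycle_energy(5.0, 0.5, awake_s=0.1, asleep_s=0.9)

saving = 1 - sleepy / always_on        # roughly 81% under these assumptions
```

The catch, as the talk points out, is everything the control plane must do around such a sleep: remembering reachability, handling changes while asleep, and deciding when to wake up.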
G
So we need to think about it from a routing perspective, a control-plane perspective. How do I remember that link is there? How do I remember what's reachable via that link? How do I handle things that change in reachability while the link is asleep or down, or the device is down or asleep, and what do I do with those things? Now, I'll say that this part of things will probably be covered in the TVR working group.
G
That's coming up right now; in fact, I have a meeting this afternoon to talk about next steps in the TVR BoF. So that is something that we need to think about, and there's work kind of going on in that area. The next is convergence impact. So when I converge BGP, since BGP is the big kid on the block nowadays: first of all, I have path hunting, I have the potential of all sorts of things just not converging. Whether or not people believe this, the internet core never converges, never ever converges.
G
And so you need to think through how we make sure that we're not dropping packets and impacting application performance, or do we not care? In some cases there are applications that don't care about dropped packets, and that's cool, that's great, but we need to think about those things when you try to understand how to make the control plane aware of all that stuff and do the right thing. And finally, the impact on fast convergence.
G
So, you know, a lot of parallel links and redundancy are just there to converge more quickly in the case of failure. So if we put something to sleep, or we take it offline to save energy, how do we anticipate failure, or how do we deal with it? And what do we do about the fast-convergence situations?
G
So these are all things that just need to be thought about. Again, I'm not saying we shouldn't do any of this, just trying to point out, or trying to figure out, some of the places where work needs to be done, what we can do or what we might want to do to try to solve some of these things, and some of the work that's been done in the past.
G
So hopefully that is useful. I see Karla said that in data centers you always have an Alabama network. Well, that's true in some data centers and it's not true in other data centers; some data centers have gone, just for port-count problems, to an inbound network, so it's not consistent. Let's see, someone else said: yes, exactly, constraint-based optimization. And as I said earlier in the chat, one thing to remember is that optimizing for two metrics, like bandwidth and energy usage, is technically NP-complete.
G
You can do it, but you've got to merge the metrics somehow; you can't actually run shortest path first, or even BGP. Part of the reason BGP is unstable is because we try to optimize on multiple metrics, and when you do that, you end up in a non-atomic state where order of operations makes a difference, and things get hard.
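Merging two link metrics into one additive scalar, as suggested above, can be sketched like this. The weights and link figures are arbitrary illustrations (policy knobs, not anything from the talk); the point is only that once the metrics are collapsed, an ordinary SPF computation stays well-defined.

```python
import heapq

def composite_cost(bandwidth_gbps, power_w, w_bw=1.0, w_energy=0.1):
    """Collapse two link metrics into a single additive cost.
    The weights are arbitrary policy knobs, not standardized values."""
    return w_bw * (100.0 / bandwidth_gbps) + w_energy * power_w

def spf(links, src, dst):
    """Plain Dijkstra shortest-path-first over the merged scalar metric."""
    adj = {}
    for a, b, bw, pw in links:
        c = composite_cost(bw, pw)
        adj.setdefault(a, []).append((b, c))
        adj.setdefault(b, []).append((a, c))
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, c in adj.get(node, ()):
            nd = d + c
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return None

# Hypothetical links: (a, b, bandwidth in Gb/s, power draw in W)
links = [("A", "B", 100, 150), ("B", "D", 100, 150),
         ("A", "C", 25, 40), ("C", "D", 25, 40)]
best = spf(links, "A", "D")  # the lower-power 25G path wins under these weights
```

This sidesteps the multi-metric instability only by fixing the trade-off up front in the weights, which is exactly the design decision the speakers are debating.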
B
Right, and there's a very rich conversation going on in the text chat, so much so that I finally stopped looking at it, because I really wanted to listen to your talk. So if there are people who feel that their question has not been answered, please jump in.
B
Okay, that was fascinating, Russ, especially given the level of detail at which you understand the landscape within the IETF and what's actually going on with routing. That was really instructive, although I can't say that I understood all of the acronyms, but I'm going to drill into your paper in hopes of finding some of that.
B
And the next presentation should be on data formats, so I'm not sure, Brendan or Karsten, which one of you will... there you go. Thank you, Brendan, yeah.
H
I'd be happy to. It looks like I have a minor glitch with needing to have WebEx permissions to actually share, so I might have to do a quick rejoin here.
B
The
slides
to
see
the
screen,
we
see
the
it's,
not
in
presentation
mode,
it
just
is
yet,
but
we
do
see
there,
you
go
okay,.
J
Okay, thank you. So first, I'm a second-year PhD student in Belgium, so maybe my presentation will be a bit too research-oriented, but I hope that we will have some interesting conversation afterwards. In the position paper we just wanted to reconsider multicast, because we had problems, we moved away from it, and now we think that, given the energy impact we have, it may be interesting to reconsider multicast, at least for some applications in some networks.
J
I'm pretty sure you almost all know what multicast is, but to be sure that we are on the same page, a quick reminder: with multicast, you avoid sending the same data multiple times in the network. So, for example, here, imagine that this router wants to send the same data to routers one, two and four. With unicast,
J
you will send the data separately to each receiver, so you have the same data flowing multiple times in the network on some links, for example the first one and the second one. But with multicast, you only send the packet once into the network, and then the routers will duplicate this data to reach all the receivers in the network.
J
So there are currently some applications that use multicast, for example IPTV and stock exchanges, but we think it may be interesting to reconsider multicast for other applications, for example the conference we are having right now. It may be interesting to reconsider multicast if we could deploy it in the wide area network, for example.
J
So in the position paper, what we did was to practically show how multicast is more efficient than unicast.
J
So we made an emulation of the Geo network, which is composed of 22 routers and 36 links, and we just sent some dummy traffic from the source to an increasing number of receivers. For that, we compared unicast and multicast, and the multicast mechanism we used was the Bit Index Explicit Replication protocol, BIER in short, because we have an implementation of it which is open source. And for simplification,
J
we just ignored the communication setup: we assumed that every receiver had already done the multicast join in the multicast case, and had already set up the traffic in the unicast case.
J
So the first thing, and of course we knew it, is that multicast reduces the byte footprint, because we avoid sending the same packets multiple times in the network. When you really increase the number of receivers, we avoid sending the same packets multiple times on some links. So here we see that when we increase the number of receivers above seven, for example, we start having...
J
sorry, we will just send fewer bytes on the network compared to unicast, which grows more linearly, because when you add a new receiver, you will send this packet individually to the new receiver, as we saw before. Now, the second thing we analyzed was the number of CPU cycles on the source, because with multicast you only send the packet once, compared to, sorry...
J
With multicast, you send the packet only once, compared to unicast, where you must send an additional packet every time you have a new receiver. So here, with multicast, we have no increase when we change the number of receivers. And this will be even more important when you consider, for example, protected payloads, because with unicast you must encrypt the payload separately each time you want to send to a new receiver, but with multicast you can encrypt it only once and then send it to the network.
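The unicast-versus-multicast saving described above can be sketched by counting per-link transmissions on a toy distribution tree (the topology below is hypothetical, not the emulated Geo network).

```python
def unicast_link_sends(paths):
    """Total link transmissions when each receiver gets its own copy:
    the sum of all path lengths, so shared links carry duplicates."""
    return sum(len(p) for p in paths)

def multicast_link_sends(paths):
    """Total link transmissions when routers duplicate packets:
    each distinct link in the distribution tree carries the data once."""
    return len({link for p in paths for link in p})

# Hypothetical tree rooted at source S, with three receivers A, B, C.
# Each path is the list of links from the source to one receiver.
paths = [
    [("S", "R1"), ("R1", "A")],
    [("S", "R1"), ("R1", "B")],
    [("S", "R2"), ("R2", "C")],
]
uni = unicast_link_sends(paths)      # 6: the S-R1 link carries the data twice
multi = multicast_link_sends(paths)  # 5: every tree link carries it exactly once
```

As receivers sharing upstream links multiply, the unicast count keeps growing linearly while the multicast count is bounded by the number of tree links, which is the effect the byte-footprint graph shows.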
J
So the question that we have now is: multicast works well in theory, but why isn't it more deployed today?
J
And basically, when I was doing my research, the question was whether it was because of this paper, because what it showed, basically, is that with multicast we have issues everywhere, and it's really difficult to deploy. So in the position paper we reviewed three of these issues and tried to find some possible solutions. Of course, this is an open discussion, so you might not agree, and I would be happy to discuss it with you.
J
So, for example, the first issue was that IP multicast, which was the first major deployment of multicast, required state on the routers for each multicast group in the network. Some papers, for example the Dr. Multicast paper, showed that when you increase the number of groups too much, you might see packet losses, because the routers cannot handle all the state caused by the multicast groups. So the first work, which was standardized in 2017 by the IETF, was to provide a kind of stateless multicast
J
forwarding mechanism called BIER. So I talked a bit earlier about BIER, and it's not widely deployed, but theoretically it's really interesting, because you don't have this state on the routers: the state is now embedded inside the packet, so you can scale to an increasing number of multicast groups.
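BIER's "state in the packet" idea can be sketched like this. The bit layout and forwarding table below are simplified illustrations, not the RFC 8279 wire format: the set of receivers travels as a bitstring in the packet, and each router only keeps one forwarding bit mask per neighbor, independent of how many groups exist.

```python
def bier_forward(bitstring, fbm_table):
    """Simplified BIER-style forwarding decision: for each neighbor, AND the
    packet's receiver bitstring with that neighbor's forwarding bit mask
    (FBM); forward a copy only if the result is non-zero, carrying just the
    bits reachable through that neighbor."""
    copies = {}
    remaining = bitstring
    for neighbor, fbm in fbm_table.items():
        bits = remaining & fbm
        if bits:
            copies[neighbor] = bits
            remaining &= ~fbm  # never deliver to the same receiver twice
    return copies

# Receivers are bit positions 0..3. Hypothetical FBMs at one router:
# neighbor X reaches receivers 0 and 1; neighbor Y reaches receivers 2 and 3.
fbm_table = {"X": 0b0011, "Y": 0b1100}

# A packet addressed to receivers 0, 1 and 3 needs no per-group router state.
copies = bier_forward(0b1011, fbm_table)
```

However many multicast groups exist, the router's table above never grows; only the bitstring carried in each packet changes.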
J
The second issue, and I think it's not discussed very much in the community, is that 20 years ago, when you wanted to deploy a new protocol, it had to be in the kernel, because of the performance gap we had between user space and kernel space.
J
We had to deploy them in the kernel, but it was really a burden to implement new protocols in the kernel, and that's why we stayed with TCP for such a long time. But now, in 2022, this performance gap has decreased, and so it might be interesting to reconsider deploying new protocols, new multicast protocols, on top of IP multicast forwarding or BIER forwarding, for example. A simple example is the QUIC protocol, which is implemented in user space but is widely deployed, of course because of Google.
But
it's
widely
it's
been
widely
deployed
and
it
only
works
in
user
space.
So
now
we
can
build
these
new
protocols
and
not
have
the
burden
to
deploy
them
in
the
kernel,
so
they
are
also
more
easy
to
to
extend
the
final
issue.
We
briefly
discussed
in
the
paper
and
I'm,
not
an
expert
in
that
right
now.
So
it's
open
to
discussion
and
I
know
it's
a
tricky
one,
but
it's
almost
impossible
currently
to
deploy
multicast
in
the
wide
Arrow
Network
because
of
the
internment
policies.
J
What we thought in the paper was that maybe we can think about multicast overlay networks, for example using CDNs as multicast relays, because now these CDNs are almost everywhere in the world, so we can build a new multicast overlay network on top of them more easily compared to before. There are other issues that we briefly talked about in the paper, for example data encryption in dynamic groups. So imagine that you want to have protected traffic between the source and the receivers.
J
What happens when someone leaves the group? Because you have a shared key between the source and the receivers, and if someone leaves the group, you have to find a way to efficiently forward a new key to the receivers that remain in the group, without sending this key to the leaving receiver.
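One classic answer to the rekeying problem mentioned above is a logical key hierarchy (LKH), where members sit at the leaves of a key tree and only keys on the leaving member's root path change. The arithmetic below is the textbook message-count argument, not anything from the position paper.

```python
def flat_rekey_messages(n_members):
    """Naive rekey after one member leaves: the source must send the new
    group key individually to every remaining member."""
    return n_members - 1

def lkh_rekey_messages(n_members):
    """Logical Key Hierarchy with a binary key tree: only the keys on the
    leaving member's path to the root change, so roughly 2 * ceil(log2 n)
    rekey messages suffice."""
    depth = (n_members - 1).bit_length()  # ceil(log2 n) for n >= 2
    return 2 * depth

flat = flat_rekey_messages(1024)  # 1023 unicast rekey messages
lkh = lkh_rekey_messages(1024)    # about 20 messages for a binary tree
```

The gap widens quickly with group size, which is why group key management is usually considered tractable even though it remains, as the speaker says, an open engineering problem for dynamic groups.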
J
So there are some possibilities; some solutions were proposed, but I think it's still an open research problem. Also, when we want to send some multicast traffic, for example for software updates, which is an interesting idea, you want to be sure that every receiver correctly received the data and that there isn't any packet loss.
J
So we have what we call the ACK or NACK implosion in reliable multicast, because we need the receivers to say whether or not they received the data, and it can be a problem for the source to receive too many of these acknowledgments.
J
The third issue, and maybe it's one of the trickiest, is that in some situations you want to have a source authentication mechanism, to ensure that the correct sender sends the data to the receivers, and that you don't have someone in the group with access to the shared key, which can also be used to encrypt data, sending fake data to the network and flooding the receivers with this fake data, for example.
J
There is this trade-off between the simplicity of deployment and maintenance with unicast, versus the efficiency of multicast forwarding, and for the past decades we only wanted simplicity. That's why we removed multicast almost everywhere and deployed unicast solutions with CDNs. But now, in this desire for more energy-efficient networks, it's maybe time to reconsider it, and reconsider having efficient networks.
J
And if you don't think it's an issue, but I mean, all of us think it's maybe an issue, just recall that during the coronavirus crisis, the French government, and apparently also governments in the rest of Europe, asked Netflix to reduce the video quality of their series and films, just to release the pressure on the network.
J
So I don't say that we must do multicast for Netflix, but just to say that congestion in today's networks can also happen when you have a lot of people using the networks. And even without that, we know that multicast is a more energy-efficient mechanism. So if you want to reduce the energy footprint, I think it may be very important to reconsider it. So thank you.
B
Thank you so much, and thank you for getting us up to date on what has transpired in the multicast landscape, since it is something that has been very much discussed for at least a couple of decades or more within the IETF: discussed, advanced, progressed. So it's great to see that your thesis work is in this area. And I wondered, at the very beginning of this session... I don't know, Pascal, are you still here?
B
You had made a comment about protocol designs, to add to the list of protocols to evaluate. In the very first presentation there was discussion about, well, what should we be looking at. Pascal, I don't know if you want to expand on your comment here: "add to the list the use of broadcast in reactive protocols," yeah.
K
Yes, thank you, Eve. Yeah, there was the presentation on the first day of this workshop; one of the items was the work that the IETF has been doing over pretty much the last decade on energy savings in protocols, and probably the place where this happened the most is the IoT community, because a lot of those protocols were designed to operate on battery. And effectively,
K
we have already designed routing protocols which incorporate some routing stretch to save energy and state in the devices. For instance, interestingly enough, one of the design points for the RPL protocol was that the control traffic could never exceed the data traffic, and that's an interesting constraint if you look at it, but it was there, and we had to cope with it, and the design of the routing protocol was impacted by this.
K
Another aspect that I have in mind, that we took a lot of care about in the IoT groups, was the use of broadcast over radios, because it's been said a lot that the energy consumption does not depend much on traffic, it depends on the peak usage or peak capability. Well, that's mostly true on wires; on wireless,
K
it's a lot less true. And effectively we see different stages, and we measure that to the microcoulomb. Actually, we see that there is a very low-power state of the device where it can only be awakened, and there is a processor-on state of the device. Then, when the radio is on, there is a big peak of energy consumption; I'm talking about small IoT devices here. And then, when the device is transmitting or receiving data,
K
there is another big peak of energy consumption. So you really see in which stage of energy consumption you are, and we figured in particular that, to save energy, we had to avoid the use of broadcast as much as we could, and that's why we designed things like 6TiSCH to avoid those things. So that's what I had in mind, basically saying: hey, when we look at protocol designs, effectively, you can think of them with energy in mind or not.
K
We also did what Russ said, by the way, about having a composite metric. RPL works with what we call an objective function, which is a composite metric, and effectively we can have it incorporate power; it's just an option that you can use. So I'm fully in line with everything that Russ said, by the way. And yes, we have some experience at the IETF, and we would gladly share that, and possibly expand the use of our designs outside of the IoT community.
B
Okay. I was hoping that the discussions about broadcast and multicast were sort of complementary and, you know, shared some spirit of how we get to efficiency. So that was why I returned to your question. Okay, so it's...
K
It's slightly related. It's just that, basically, if you want to keep devices at low power, you cannot have them expect data at any point in time; they must be allowed to sleep, which means that while they sleep, they cannot expect data. So you have to design with that in mind. And that's kind of complementary with multicast, because if you still want to do multicast, that means that you have to have some idea of the schedule, the way you schedule your multicast, so the device can remain asleep.
B
Thank you. I saw that there was another question here about CDNs that may or may not have been answered.
B
Let's see, Jari: "it's a great observation that CDNs might help with the inter-domain issues we previously had, but isn't there also a different impact?"
B
"If we have a CDN node in a DSL ISP, for instance, we have limited savings potential." Maybe that got answered.
A
There was some discussion about business incentives for the CDNs to do this, right.
B
And so I think the follow-on question, where I was headed, was Dom's question: why would CDNs reduce their revenue from unicast?
B
G
They wouldn't necessarily; it would just depend on what the business model looks like. Whether the CDN can build a business model around it, to make money doing multicast, is going to depend on market forces more than it is the CDN's design, largely. I mean, if somebody were to come to Akamai, for instance, and say, we want to give you X amount of multicast traffic and we want you to distribute it for us in a multicast way, and, you know, we're going to pay...
G
H
It strikes me that there's another lever here, and what it looks like to me is that this is an opportunity for ISPs to improve their peering agreements. If you've got an ISP that's distributing roughly the same content at roughly the same time to a whole lot of users, then they've got an opportunity to pair with a CDN and improve their peering agreements as a result of the reduced traffic that they have upstream. So I'm not sure.
B
And I think, kind of back to Jari: while I was calling out your question, I was trying to find it in this wonderful back-and-forth conversation. It was, you know, what's the business reason? And one of the business reasons could be what we just heard about from Maurice, which is that there's energy efficiency to be had, and the more that becomes a differentiator... at least this is what occurred to me:
B
maybe the CDNs use that and market that, and that then allows them to adopt whatever technologies, whether it's multicast or something else, that allow them to claim, and not just claim but quantify, how much they're saving by using these other techniques. So that might be kind of an interesting thing that I anticipate developing.
F
As someone who's worked for Akamai forever, okay, I can say that I've never heard a philosophical objection to multicast inside the company. It's just that the arrangement of the way things work solves a lot of the business problems and the problems on the client. And, as I typed into the chat, a lot of our CDN contracts are based on things like gigabytes delivered to the client.
F
So if that could be accomplished with some blend of multicast and unicast, or something like that, I think that would be fine.
F
You know, a lot of the way that streaming played out makes it on-demand and unpredictable. If it was more like traditional television, where HBO or Netflix said, hey, the show starts at 9:00 PM in this time zone, it would be easier to get one's mind around, technically, trying to use things like multicast. There are also a lot of DRM and rights things that kick in, and the parts about encryption in the presentation were interesting to me.
B
Great, thank you for that. Any other questions here? I had forgotten that we had swapped the orders, and it was sort of feeling like we had opened things up to the floor. So let's return to the previous talk, and now that you have rejoined, let's see if we can get the slides.
B
H
B
H
So Suresh and one of the other speakers gave me an excellent lead-in here. There was some discussion about... you know, Suresh mentioned that Karsten has been working on IoT devices that last for a year on a battery, and I believe Pascal also mentioned working on constrained networks. I think that's a great lead-in, because ultimately what we're talking about here is a new constraint: we're talking about an energy constraint on our network, and that's just another kind of constrained network.
H
So the question that we have is essentially: how much can we steal from constrained-network technologies and apply to energy-constrained networks? With that in mind, we started with encoding.
H
So the first question is: does it matter? Is this worth doing at all? If the difference that we get out of encoding things differently is small, then maybe there's no point.
H
Well, it really depends what kind of data we're encoding; that turns out to be the most important part of this. Text encodes into text formats well, as expected, but everything else encodes poorly: binary data gets you a 33% inflation, integers are usually around 50%, floating point can range from not much to huge, and structures are also bad. And I think I just saw something in the chat about security and encodings.
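The 33% figure for binary data mentioned above is just the base64 expansion that a text format like JSON forces on byte strings, and it can be checked directly (the 240-byte payload below is an arbitrary stand-in for sensor data):

```python
import base64
import json

payload = bytes(range(240))  # 240 bytes of raw sensor-style binary data

# JSON has no binary type, so binary payloads are typically base64-encoded
# (as in JOSE); a binary format like CBOR carries them as byte strings
# with only a small length header.
as_text = base64.b64encode(payload).decode("ascii")
json_size = len(json.dumps({"data": as_text}))
raw_size = len(payload)

inflation = len(as_text) / raw_size - 1  # base64 alone adds one third
```

Every 3 input bytes become 4 output characters, before any JSON quoting or key overhead is even counted.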
H
Yes, personally I really dislike JOSE from a security perspective; encoding binary objects into JSON just seems risky to me in a secure format, but that's just me. So really, it turns out there is an impact, but we need to quantify exactly how big it is. What we did, to try to work out exactly what we're dealing with, was to take a series of SenML examples. Now, SenML is the Sensor Measurement Lists RFC, and the idea here is that we want to get some data that's representative of non-text things that are being transferred, and encode it in either JSON or CBOR. Can you still see my slides, or did they just disappear?
H
Right, we wanted something that was encoded in both JSON and CBOR, the idea here being that we can actually show the difference between a common text encoding and a common binary encoding. So SenML was a great option, because it actually has both of those already, and we can encode either way with relative ease. This let us produce this chart, which gives you an idea of roughly how much data is used by each of these examples.
H
Well, if the majority of the internet's traffic is taken up by video, this isn't going to move the needle, because really that comes down to video compression more than anything else. And what we're talking about here is, well, it is compression; an encoding difference can be a compression.
H
So then the next question is: how does this impact energy? If there's no real difference between energy for text encoding and energy for binary encoding, then there's not much point in continuing this. So let's have a look and see if we can quantify it a little bit.
H
Something related to this came up earlier on the list, talking about always-on capacity versus data-dependent capacity, and a lot of the discussion that's gone on so far has been centered around wired networks. But the question I would ask is: how many people are using exclusively wired networks in their homes? Individual energy use from network traffic is probably largely Wi-Fi, except in a business context, so that to me suggests that we should be looking at wireless technologies as well.
H
Certainly in the data center and in backhaul we need to consider the other side of things, the wired side, but the point here is that the last step is almost always wireless, and that's where this matters. Now, I'm not modeling Wi-Fi here; I'm modeling with LoRa, and the reason for choosing LoRa in this talk is that LoRa has some really convenient time-on-air calculations, which makes it very easy to estimate the energy use. So LoRa is a pretty good example.
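The convenient time-on-air calculation mentioned above is the standard Semtech SX127x formula (from application note AN1200.13). Here is a sketch of how a payload-size reduction translates into transmit energy; the radio settings and the 0.4 W transmit-power figure are assumptions for illustration, not values from the talk or paper.

```python
import math

def lora_airtime_s(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                   preamble=8, crc=True, implicit_header=False, ldro=False):
    """LoRa packet time-on-air in seconds, per the Semtech SX127x formula."""
    t_sym = (2 ** sf) / bw_hz
    de = 1 if ldro else 0
    ih = 1 if implicit_header else 0
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

def tx_energy_mj(payload_bytes, tx_power_w=0.4, **kw):
    """Transmit energy in millijoules; 0.4 W is a rough assumed draw for a
    +20 dBm LoRa radio including overheads, not a measured value."""
    return lora_airtime_s(payload_bytes, **kw) * tx_power_w * 1000.0

# A hypothetical 40-byte binary (CBOR-style) reading versus the same data
# as 60 bytes of JSON text:
e_binary = tx_energy_mj(40)
e_text = tx_energy_mj(60)
saving = 1 - e_binary / e_text
```

Under these assumptions the smaller encoding saves roughly a quarter of the transmit energy per message, which is the kind of effect behind the "often 30% or better" result quoted next.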
H
It's also a widely deployed IoT network, which gives us some good guesses on how things will work. So the results of this model are that we often get 30% or better energy savings
H
by doing this. And one of the interesting aspects of using a low-power WAN is that the send and receive energy is actually quite a large portion of the device's energy budget; that turns out to be on the order of millijoules, as you can see in the graph, quite regularly, and that's a substantial portion of the device's battery lifetime. We talked about this a little bit in the paper, but you can see that the consumption of the devices actually turns out to be measured in...
H
There are some interesting effects that we found, like that the overhead of setting up a packet actually substantially reduces the impact on short messages; larger messages are less affected, and this is mostly because the messages in LoRa are quantized to 127 bytes, right. So there are some substantial impacts that we get from the energy reduction: it means you can have smaller batteries, it means devices can have longer life, and that's a question of both cost and e-waste.
H
You can use smaller energy-harvesting systems, and the devices themselves can be lower cost, so this has a social justice and an environmental justice aspect to it as well: we, as the IETF, could potentially make devices cheaper by making their encodings smaller.
H
So we have a couple of choices in the IETF: JSON and CBOR seem very common for hierarchical data formats as of today.
H
People think that it's easier to debug JSON than it is to debug CBOR, and in my experience this is not directly true. There are many tools to help you debug CBOR. On top of that, we get a lot of discussions along the lines of "I don't need to install a tool to look at JSON". Well, you know, you can decode CBOR in a web browser, so that also is not as realistic a problem as it seems at first. But there are some unpleasant truths here.
H
These are decisions that we make as engineers, and they're actually tooling problems, not encoding problems. The vast majority of the data that we send isn't for us as people debugging the protocol; it's actually for a machine to interpret. So when we, as members of the IETF, make decisions to pick a format that's easy for us to debug, we are actually incurring a long-lasting cost for the entire world to bear, just to make our lives a little bit easier during debugging.
H
They're simple to parse, which means we have lower embodied energy in our devices due to lower code overhead and lower memory, and we have lower active energy: there's less compute overhead in order to actually interpret the data, there's less per-character work, for example escaping and delimiting, and we have less redundant conversion work. Converting something to base64 to transfer it over a network, only in order to convert it back to binary, just makes very little sense; a lot of decimal conversion as well. And then, because our data format is smaller, we spend less energy in transmit and receive, assuming it's a wireless network.
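The size point can be made concrete with a small stdlib-only sketch: the same tiny sensor reading encoded once as compact JSON text and once by a toy CBOR encoder (just enough of RFC 8949 for this example). The integer field numbering (1 = temperature, 2 = valid) is hypothetical, not from the talk:

```python
import json

def cbor_encode_small(obj):
    """Tiny illustrative CBOR encoder (RFC 8949) covering just enough for this
    example: booleans, unsigned ints < 24, and maps with < 24 pairs."""
    if isinstance(obj, bool):                       # must test bool before int
        return bytes([0xF5 if obj else 0xF4])       # simple values true / false
    if isinstance(obj, int) and 0 <= obj < 24:
        return bytes([obj])                         # major type 0: value in initial byte
    if isinstance(obj, dict) and len(obj) < 24:
        out = bytearray([0xA0 | len(obj)])          # major type 5: map header
        for k, v in obj.items():
            out += cbor_encode_small(k) + cbor_encode_small(v)
        return bytes(out)
    raise NotImplementedError("sketch only")

json_bytes = json.dumps({"temperature": 22, "valid": True},
                        separators=(",", ":")).encode()
cbor_bytes = cbor_encode_small({1: 22, 2: True})

print(len(json_bytes), len(cbor_bytes))   # 31 vs 5 bytes on the wire
```

With integer keys the whole reading fits in five bytes versus thirty-one for even the most compact JSON text; on a radio link that difference is paid in transmit energy on every message.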
H
There's lower interpretation complexity as well. If the complexity for decoding the format is lower, that means that the security posture of your system is also simpler, and we end up with more deterministic encodings, which is good from a security perspective as well, largely due to whitespace and escaping choices, but also due to a few other interesting properties that you'll find in JSON decoders.
H
First off, I think that we need to consider content and intended use for data representation formats; configuration formats, for example. Text formats make sense if you have primarily text content; then text formats are appropriate. But if you have primarily non-text content, we should be preferring binary formats. This isn't a game changer for e-impact; this is a small contribution, but it's also part of an ongoing trend where more and more formats are indeed moving to binary encoding, and there are examples of this.
H
For example, HTTP/2 is now supporting binary encodings as well.
H
So that's most of what I've got to say. I see there's a lot going on in the chat, which I obviously haven't followed, so if anyone would like to bring questions, I'd be happy to talk more about this.
B
Thank you very much for your presentation, very enlightening.
B
There was one of the questions that got raised, and again, you know, it's a little bit off topic, but, I mean, you're sort of speaking to methodology, not just the specifics of JSON versus CBOR. Alex asks the question: how about the energy impact of REST versus non-REST, and the need to continuously transfer all that state, traded off against the need to maintain state at the server, of course. And so I wonder; I know that you and Carsten also are involved in some of that effort.
H
Well, Carsten more so than I, so I would invite him to jump in on that one.
I
So if you want me to talk about REST, that's maybe a different discussion, but it's also an interesting one, because REST, of course, is something that came from the big web, which never has been particularly interested in conserving energy; it has been more interested in actually providing scalability, and it's one of the major scalability tools we have there. Now, the interesting thing about REST is that it's really something that describes the transfer layer, and you always can push things up into the application layer.
I
So it would be interesting to see if we have pockets where that actually doesn't work very well, or could be exploited to get better efficiency in the web. This has not been very popular, because it really makes load balancing very hard. But of course not all of our communication needs to be load balanced.
H
On Toerless's point about putting things the other way around, that we need to build the diagnostic framework out a lot more: my experience in working with CBOR has been that the diagnostic framework is actually quite a lot built out already. Just not being aware of the availability of those diagnostic tools may be the issue, and so maybe there's an awareness problem that we have. But I would still argue that within the IETF we have an obligation to consider the impact of the protocols and formats that we design.
B
And this also speaks to maybe what Vesna has been raising, which is: we already have Security Considerations sections in our drafts; what about sustainability? Is this kind of analysis something that should be required, not just in the latter stages of draft development but from the get-go?
H
Well, I think that's a great idea. I think discussing why encoding choices are made in protocol documents and in format specifications is a great plan, because if that reaches a stage at the IAB, or at the IESG rather, when the review is done on an RFC and it says "well, we chose a text format because it was easier to look at in Notepad", then the IESG has an opportunity to go back and say: "well, that's not really how we do things anymore."
B
And actually, you said something that really struck a nerve with me, which is: when we write down, you know, how we arrived at solutions, if we explain why we made the decisions we made. A lot of that, unless it's captured in an email archive, you know, doesn't necessarily make it into a spec.
B
So maybe some of this would be forced to be put into a spec; forced, as in, you know, encouraged, by having at least some section of the draft that encourages the authors to talk about efficiencies with a lens towards sustainability.
I
Yeah, so, sorry Gary, I see you have your hand raised, but let me just quickly answer to that: I think we need to stop just taking certain default choices for granted. So why are people using JSON for protocols? Why?
I
Why are you flying to this conference and not taking the train? Well, everybody is flying to the conference, so why should I not do that? And I think we need to take the small step of just thinking about considerations like that. And this is not just about JSON or CBOR; using HTTP/2 instead of HTTP/1.1 has exactly the same advantages.
A
If we had data on where in the internet, with what parts of tech or what applications, we might have the biggest issues with regard to bloated formats: do we actually know that, and if we do know that, could we quantify how much we would save, not as a percentage within this protocol, but sort of more on a global basis?
A
I understand that for a particular IoT deployment, for instance, the battery lifetime impact would be massive and very much needed, and that's a reason to do it. But on a global scale, maybe the web protocols; on the other hand, they're kind of moving a little bit towards binary with HTTP/2.
H
You get a one-third savings directly, because JOSE uses base64 encoding and CBOR uses a plain binary representation. So, in authentication or encryption formats that are based on JOSE, switching to CBOR saves you a third, and that's probably a minimum, because there are a few other inflated bits and pieces.
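The one-third figure follows directly from base64 mechanics: every 3 bytes of binary become 4 ASCII characters. A quick stdlib illustration; the 300-byte blob is arbitrary, a stand-in for a signature or key, not taken from any particular JOSE message:

```python
import base64
import os

payload = os.urandom(300)                   # stand-in for a binary signature/key
encoded = base64.urlsafe_b64encode(payload)

print(len(payload), len(encoded))           # 300 -> 400: 4 characters per 3 bytes
```

That 4/3 expansion is exactly what a plain binary representation avoids, before counting any of the other "inflated bits and pieces" mentioned here.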
H
So this is the point I was trying to make earlier, when I said that the vast majority of the traffic on the internet, as I understand it today, is video traffic, right; that's where the majority of it goes, and video traffic is already highly optimized, for, you know, a lot of reasons. But if you want another data point, there's a lot of email that goes around, and email is MIME-encoded, and MIME uses base64 whenever it encounters something that's not ASCII.
M
So, Rob Wilson here. I was going to ask a question; I don't know if it's for you, Brendan, or for Carsten. One of the differences between, say, using JSON and CBOR is really about how you encode the identifiers of what a particular field means. You have a choice of whether to use a string, which is always human-readable, or a numerical identifier that can be converted to a string in some manner by some lookup somewhere. How far are you proposing to push this? Are you proposing
M
We shouldn't use those string identifiers, we should use numerical identifiers? Or is that not something you looked at? Because I think that makes a difference to how easy it is to debug it, decode it: can I see the names, do I have to do some conversion, how easy is it to use those tools?
H
Yeah, so I thought about this a lot, and I think there is actually one missing tool, and what that tool would be is a web-based decoder for these things, where you can stick the CDDL, which defines your CBOR structure, into the decoder, along with data that you expect to match it. And I don't think that's a big step, because all of those tools, as far as I understand it, are available today, just not joined up together and available on the web. So this isn't a big stretch.
H
It's definitely doable, and the direction I would push it, since I come from the constrained-node world, is all numeric identifiers, because if you need to validate that structure, then you're going to need its data representation, or its data modeling language representation, anyway, so you might as well save the size.
M
So then my question would be: for those cases, if you've got to get the schema anyway to decode it, would you be better off using one of the even tighter binary encodings like GPB, where it has to have the schema to decode the data, and get it even tighter? I mean, so there's a trade-off here between, yeah, how tightly you can compress, and I like CBOR; it feels to me like it's a good compromise between "you can still decode it without requiring the schema" and yet it's still quite a bit tighter than, say, JSON is.
H
Yeah, it's an interesting question, and I don't know exactly how far to push that; that's something that I would encourage each, you know, RFC author to consider carefully, which direction they want to go with that. I can see an argument on extremely constrained nodes that it's useful to use that schema approach, but at the same time there is an element where maybe that's not the right thing in each application.
H
I would definitely take the CBOR approach rather than the extremely tight one, and, you know, partly that's just for the code reuse aspect of things. When you have devices, especially constrained devices, that use something to encode the data they send or decode the data they receive, then being able to share that code turns out to be really important, and I'm not sure how reusable that is once you get into schema-based parsers. I'm not sure, honestly; maybe it's a better choice. Maybe that's something that needs more investigation.
B
In the time remaining, I wanted to reiterate that we should open up the floor to all of the speakers and all the presentations that were made, but there were a couple of comments in the chat that I want to lead off with. And maybe, before moving off of the conversation about CBOR and JSON, to kind of point out Vesna's comment about social engineering. So this is sort of broader once again: is there something that a community can do, similar to the flying
B
shame that Carsten pointed out? Can we come up with some kind of, and I'm reading this, can we come up with some kind of social pressure for people, engineers being pressured against using the wasteful encodings, wasteful protocols, wasteful equipment, and towards the more sustainable option? So I wonder: are there folks out there who have an opinion about what is the social engineering that we can do to move these levers?
I
Well, I'm not sure I have an answer to that question, but I think it's related to one thing that came up in the chat, which really is: the default choice of JSON may not always be the best choice, and if we get there, then we already have one step. The next step might actually be to get a little bit more information out, so people are not asking something like what's in the chat, "should we use CBOR or CoAP?"; this is like asking whether you should use HTML or HTTP. And the third thing: we can make improvements in protocols without being entirely certain that the results will be significant.
I
This is again like flying to a conference: whether you are flying to a conference or not is not going to change the climate, but if we do it enough, it starts making a difference.
N
By the way, I also tried to provide an answer on the chat already. I'm already seeing, in industries that have constrained and unconstrained devices in their use cases, that they quickly recognize that when they start with the constrained protocols, they can reuse them in the unconstrained case, but obviously not vice versa; and that helps to, you know, eliminate a lot of further work: the reuse of the unconstrained protocols and duplicating the effort in the constrained world. So in these cases you don't need to do a lot of social engineering.
H
Toerless, I think that's a great observation, and I think that's something that we should really be considering from within the IETF, and I would very much encourage people to take that to any working groups they participate in: just bringing up the question of, you know, should we adopt constrained-node networking techniques in our protocols, since they are available, we know how to do them, and they mean that our protocols are more widely applicable.
B
Okay, I also wanted to return, even though Russ has sadly had to leave the meeting: Chris Adams had raised a very interesting question. Again, it's more general than Russ's talk, so I would like to put it out there to the broader community. Chris had asked, well, I kind of rearranged the wording, but it's basically: do any protocols include the ability to communicate how much longer they are able to sustain a given amount of throughput or latency? And so I wonder if folks might like to comment on that, because I thought it was a question that, again, if we're going to have some more interesting optimization behaviors, then the duration of certain metrics is quite interesting. And this is something that, of course, orchestrators and data centers, which are controlled environments, are able to do, because of them being omniscient and knowing all the nodes that are participating and their interactions.
E
It's something that can happen in real time, right; some of the constrained-node protocols can consider that in the metrics, but I don't think it's like a specific thing that's done. But I think Pascal is on the call, right; he can talk about, like, RPL or something, where this is kind of taken into account, but it's not something that's specifically signaled. Pascal, are you on there as well?
K
Some of them were effectively reused in RIFT, for instance, so this has already happened, and, as you said, the reasons are probably not power, although there is a big impact on performance and power in RIFT, because it reused some concepts from RPL. Now, I was more thinking about IPv6 Neighbor Discovery: there is a huge consumption of wireless resources related to the reactive procedures in traditional IPv6 Neighbor Discovery, and that's one big use case where we can look at what has been done for ND, making it proactive and avoiding broadcast. The use of broadcast due to ND in wireless is incredible; I mean, that's a lot more than people think. We've made a measure of how many broadcast packets were sent because of ND during a keynote by our CEO at Cisco Live, and it was 300 per second.
K
It was a big room, right. And most of that is effectively canceled, because we have proprietary code in our controllers and APs, but the protocol itself is incredibly chatty, and that's broadcast. And, like I said, and as someone has said, wireless is not wired: for wireless, the excess, everybody, is paid in cash; it's paid and double-paid.
K
It's paid once because you're using spectrum and energy to send every extra byte, and it's linear in the number of bytes. It's repaid because the more bytes you have in the air, the more chances you have of losing the frame, because every byte can be where you start, you know, losing the frame; and second, that's when other devices might have to wait, and so, basically, the load on your network, and the queues form, etc. And this is additional energy.
N
But maybe one other answer is, the way I understand the question, that in routing protocols we would have a way to carry schedule-based information: something that applies to certain time ranges or time points in the future. I don't think we have done that, but we had a BoF at the last IETF that would try to start investigating that, the Time-Variant Routing BoF, so maybe we are going to look into that problem.
B
Jan Lindblad, would you like to make your comment that you just typed into the chat? I think it's a really great question, well, perspective, yeah. Thank you.
O
We are monitoring networks and devices at a large scale, and we need to. But right now, I think that much of that monitoring is happening in rather archaic ways, even back to doing SNMP polls and other things that are, well, terribly inefficient, actually. And we are doing this every few minutes with a lot of devices, and this is consuming power that equals many nuclear power plants devoted to this all year round.
B
Right, and actually, to elaborate on that: I think that here we're asking to define metrics and to do more measurement to get more insights, and yet our measurement and monitoring infrastructure needs to sort of be self-reflective and be efficient itself. So I'll leave it at that. Alex, looks like you have your hand up. Yes.
C
I just wanted to respond to Jan. I do agree that the way it's done is often inefficient. However, that being said, there are alternatives, right; I mean, there is a lot of stuff, from streaming and subscribing to things, to having, basically, certain event-based types of mechanisms. So part of this is, again, not about whether the technology is there.
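To put rough numbers on the polling-versus-event-driven point, here is a back-of-envelope sketch; every constant in it is an assumption chosen purely for illustration, not a measured figure from the discussion:

```python
# Back-of-envelope: periodic polling vs. on-change push for one monitored value.
POLL_INTERVAL_S = 300        # "every few minutes", as in the discussion
REQUEST_BYTES = 90           # assumed on-the-wire size of one poll request
RESPONSE_BYTES = 120         # assumed size of one poll response
CHANGES_PER_DAY = 40         # assumed number of times the value actually changes
PUSH_BYTES = 120             # assumed size of one on-change notification

polls_per_day = 24 * 3600 // POLL_INTERVAL_S
polling_bytes = polls_per_day * (REQUEST_BYTES + RESPONSE_BYTES)
push_bytes = CHANGES_PER_DAY * PUSH_BYTES

# With these assumptions the event-driven approach moves about 12x fewer bytes,
# and the gap widens with every additional device being monitored.
print(polls_per_day, polling_bytes, push_bytes)
```

Standardized on-change mechanisms of the kind being described do exist, for example YANG-Push subscriptions (RFC 8641); the point of the sketch is only that polling cost scales with the poll rate, while push cost scales with how often the data actually changes.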
L
A question about some of that, because one of the things that has come up, it seems, is that networks are massively overprovisioned most of the time, and we're designing for the peaks rather than the average use cases. And there are other sectors that do have to manage for peaks, and also have averages which are much, much lower, and they've got different ways to, kind of, basically pay to make sure that there is capacity available and stuff like that; like, we can look at the energy sector to see how you have, like, peaker plants, which are only on a few times a year, or, like, batteries and things like this.
L
That was why I was asking: is it possible to have something where you are able to say how long something is on? Because I kind of was thinking that we have ideas like, say, happy eyeballs and things for finding multiple low-latency routes to kind of get to something. If you know that there's maybe a low-latency route that will last for a little bit of time, that can buy a little bit of time for you to kind of default lots and lots of capacity to being off, rather than being on by default.
L
So, rather than having to think about how you scale something down, you can flip it around to say: well, we've got a bit of a grace period to scale things up, as we know this thundering herd is coming through. Maybe there's a kind of flip in the way we think about this to make some of that possible, because that is a possible way that we might think about this, I suppose. That was what I wanted
B
to share, yeah. And I think Toerless's response also is very apropos, which is to have a look at some of the time-variant routing discussions, for exactly this reason: if you actually understand and know a priori, or can predict with high reliability, the behaviors of links, then how would our routing infrastructure change? I believe that Suresh was next.
E
Thanks, Eve. So I think one of the things I'm kind of learning from this workshop is that there's a lot of people with a lot of knowledge in, like, specific pockets, right; like, you know, where they know of, like, energy-efficient alternatives and so on. But I don't think we've done a good job, like, collecting these things.
E
So my higher-level point was that sometimes people make these choices for a given reason; like, you know, for example, like what Jan said, somebody might have a legitimate reason for, like, polling this 24/7 at, like, five-second intervals, or not, right. But I think we should do something to kind of, like, bring up these kinds of compromises: like, you know, what you get and what you don't get, like a binary encoding versus, like, a text encoding, and there's, like, you know, a bunch of things that came up, right.
E
This is not the primary problem to solve a lot of the time, so I think it's interesting to start putting these things together into some kind of document where we can actually talk about, "hey, consider this, have you thought about this?", right, and so on; like, for lack of a better term, for example, "sustainability considerations" or something like that. Like, to kind of start writing something up, but not necessarily, like, you know, pushing people to put it into a document, right, which is probably something we can look at later; but at least for people to know, hey, there are these choices, and for them to make the choice, not by default.
B
That's a great suggestion, and maybe it's a "best known methods" or "considerations" document.
C
I just wanted to add one more thing, basically, on the list of trade-offs, I guess, to consider, clearly, because prediction was mentioned. So, clearly, basically, if you have prediction, if you have ways to better predict what is coming, then of course you can optimize certain choices. But, of course, in order to do the prediction, that in itself is typically, or in many cases, will be based on polling more data and interpreting that data, and so clearly we have that other trade-off to contend with.
B
Well, we have five minutes left, and I'm wondering, maybe for some of you who haven't spoken up: here's a moment, maybe, to ask the question that you've been dying to ask, or for anyone else who has a final remark.
B
I really would like to thank everybody for the broad range of comments and questions and the thoughtfulness of the presentations. It really has been, it's almost overwhelming, the amount of information flowing from all corners of the IETF, so to speak, all the layers, all the constituents. It's really been quite helpful.
B
All right, Jari or Colin, any final closing remarks?
A
Yeah, I think Pauline wants to say something. I was trying to figure out what we actually went through and what the conclusions are, and I wrote down in my notes that, yeah: step one is awareness, that this matters; step two is visibility, that we actually can measure and have metrics and so forth.
A
Step three is improvements, which could be in many different places: implementations and technology, or even energy sources. And it was interesting that a lot of this actually doesn't have to be a compromise; it can be a win-win for everybody, in that you can get more battery lifetime and a faster network and less energy use. But you also have to worry about some cases where you do have trade-offs, and Russ talked about some of those, and that was very interesting. And we had a bunch of different directions.
A
There were things that we could do: automation that can optimize on small time scales, slowing down systems in various dynamic ways, using better formats or interaction patterns that are actually designed with energy consumption in mind as well. Yeah, lots of good stuff. I'm very pleased with this session; so many people from different angles were able to provide information for everybody else. Thank you.
B
We have one last session, on Monday, and it begins at the same time. Is it also a two-hour session? Sorry.
A
It is a 90-minute session. Okay.
B
And debate and discussion, of course, as always. And we will post again the comments in the chat, which are equally enticing and need further examination in some cases; there are some great pointers that have been included.
B
Okay, great. Well, with that, we'll give you a minute back, and thank you for showing up.
B
Yes, some of you are at the beginning of your weekend; others of us are at the beginning of our day.