From YouTube: IETF115-TVR-20221110-0930
Description
TVR meeting session at IETF115
2022/11/10 0930
https://datatracker.ietf.org/meeting/115/proceedings/
A: Good morning, welcome to the TVR BOF. I'm Lou Berger; this is Russ Wright; we're co-chairing, and we also have with us here someone who has volunteered to act as our secretary. Thank you. All the material is online, and if you're interested in the topic, please take a look at the material. We also have the standard web page available that summarizes the purpose of the BOF; we'll of course go over that here, and we have a couple of drafts, even though they're a little bit early. Next.
A: This is an IETF meeting. All of our meetings are governed by the Note Well, which has rules regarding our participation. Basically, everything you say here becomes part of our permanent record. If you're not familiar with the Note Well, please go to the IETF page, take a look, and familiarize yourself with our rules of participation. Next. We also have rules related to conduct: basically, we ask you to always treat each other with respect and be professional with each other.
A: Of course, it's okay to have a good technical argument, but please keep it at the technical and professional level; and of course we have a document governing that too, BCP 54. Next. For those in the room, we ask two things.
The first is: please scan the QR code and join the online tool, whether you use the phone or the computer. It's really important because of two things: it gives us our blue sheets.
A: It also allows you to participate in polling; we will conduct some polls later in the session, and we'll do that rather than raising hands or humming, so that we are equitable to the people who are remote and they can participate at the same level. For remote participants: thanks so much for joining, and please mute your mics if you're not speaking. The only additional piece of information on this slide, really, is: come join us in joint minute taking. We use HedgeDoc (Etherpad, whatever you want to call it). It's a collaborative tool where everyone can make sure that we're capturing the conversation and the discussion that happens. You don't have to capture the material on the slides; that's already on the slides, and we have YouTube available. But for the discussion, it's really important to help capture what's said. Going back to the scanning in for on-site participants, I forgot to mention the other bullet.
B: When you speak at the mic, please make sure you give your name before speaking, because the people who are taking notes, and the people who are remote, often have a hard time distinguishing voices in the current environment.
A: So what are we doing here? You're going to hear in a moment about the problem we're trying to solve, but from sort of the administrative side we're trying to answer a few questions.
A: First, what's that problem? Rick is going to talk about that, and then we're also going to follow up the problem statement with some use cases that really drive our requirements and help us better understand the problem we're solving.
A: Eventually, we have to figure out what new work is to be done and where the gaps are in the existing technologies; those two are sort of the main purposes of the discussion. But from a management standpoint, from the IESG, from the IAB, a very important question is: is there sufficient interest in working on this problem at the IETF?
A: So we have a posted agenda. The agenda is a little different today than it was yesterday or the day before; basically, we got a little more information and some additional contributions. That's always really appreciated, as a contribution-driven organization. One thing I want to say about the times here: these are approximate times. We have a good 25 to 30 minutes for discussion. If that discussion happens earlier in the session, that's okay; we don't have to be completely rigid to these times.
C: Morning, all. Oh, that's very loud. Good morning, all. So, I have a published problem statement which I'm not hugely proud of. It was pretty much a stream of consciousness done pretty quickly, and off the back of that I had very productive meetings with a number of people who picked up on it.

C: There was good correspondence on the mailing list, and I want to call out Adrian for taking me to one side yesterday and saying: let's drill this down to something a bit shorter and tighter. That really hasn't been updated yet, and it is kind of the content that I will come up with at the very end of this slide deck. But meanwhile, can we have the first slide, please.
C: So I'm going to jump straight into the meat of it. As I, and several other people I have chatted with, see it, there is the following problem. We understand that in routing you have nodes, you have links; they come, they go. You need a protocol to try to build some sort of topology so that you can build end-to-end paths across these networks.
C: What we are discussing here is the proposal that maybe there is another case here, which is that these links and these nodes may change in predictable or schedulable ways. So you can know, with a certain degree of confidence, in advance of a break or a restoration in a link adjacency or whatever, that it's going to happen. Therefore, as a routing protocol implementation or a protocol design, you don't have to react to it happening: "oh my God, my link's just gone".
C: You can say: oh, at 12 pm on Wednesday that link's going, so I can pre-compute an alternative, or I can keep a backup topology ready to apply, or I can do smart things. I'm not entirely sure, personally, what all of those smart things are, but I would see that as an interesting area to investigate, so that when that scheduled event occurs you are ready, and you can move swiftly on without losing end-to-end connectivity. Your traffic can flow with less disruption, etc. So, next slide, please.
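The pre-computation idea described here can be sketched in a few lines. This is only an illustrative toy under invented assumptions (the topology, the costs, and the scheduled loss of link B-D are all made up, and a real implementation would live inside a routing protocol, not a script):

```python
import heapq

def shortest_path(graph, src, dst):
    """Plain Dijkstra over a cost map {node: {neighbor: cost}}."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def without_link(graph, a, b):
    """Copy of the topology with the scheduled-down link removed."""
    g = {n: dict(nbrs) for n, nbrs in graph.items()}
    g.get(a, {}).pop(b, None)
    g.get(b, {}).pop(a, None)
    return g

# Invented topology: A-B-D is the cheap path, A-C-D the standby.
topo = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "D": 1},
    "C": {"A": 5, "D": 5},
    "D": {"B": 1, "C": 5},
}

# The schedule says link B-D goes down at 12 pm on Wednesday, so the
# fallback path can be computed now, at low priority, before it happens.
fallback = shortest_path(without_link(topo, "B", "D"), "A", "D")
print(fallback)  # ['A', 'C', 'D']
```

When the scheduled event fires, a node can swap to the pre-computed path rather than reconverging from scratch; that is the "move swiftly on" part of the argument.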
C: So I'll give you a second just to read through this a little bit. The obvious answer to that statement is that people come back and say: well, actually, routing protocols already handle convergence when we lose links and when nodes fail, and in general that convergence is fast enough; and fast reroute mechanisms exist, and they're good. We've worked on this for donkey's years within the IETF; we've got a good suite of protocols.
C: That will pretty much do this. But, as I've said before, routing does not currently handle the potential connectivity represented by nodes and links that are scheduled to turn up in the future, or are scheduled to disappear. So you know it's going to happen; it just hasn't happened yet. I personally see that as a critical difference. And that third point is basically an extension of that, which is that it's not only that we don't handle the fact that we know a node is going to appear or links are going to become available.
C: How, if we can say it's coming, can we also include how long it is going to be up for? And equally the converse: can you say that at this point in time something is going to happen, either an availability is going to arrive or a link is going to disappear, so service will not be available across that adjacency; how long is it going to be for; and will this repeat, will this recur?
C
How
long
does
this
whole
description
of
forthcoming
events
going
to
be
valid,
for
so
we're
sort
of
starting
to
talk
about
possible
to
describe
a
schedule,
and
that's
the
word
I'm
using
at
the
moment,
I'm
not
trying
to
impose
any
words
onto
any
any
future
solution
to
this
problem.
But
can
we
talk
about
schedules,
forthcoming
events
and
is
that
of
use
to
routing
protocols?
Do
we
see
advantage
of
doing
that?
C
The
follow-on
question
for
that
is
if
we
see
available
and
unavailable
as
one
of
the
things
you
would
describe
in
that
schedule,
could
we
say
well,
availability
could
just
be
seen
as
a
Boolean
metric
up
down.
Could
we
say
change
link
costs?
You
know
whatever
the
waiting
you're
using
to
build
your
your
shortest
path
across
your
available
graph.
C
C
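One way to read the up/down-versus-cost question is that an outage is just a metric override with an infinite value, so the Boolean view becomes a special case of the cost view. A minimal sketch, with invented router names, times, and costs (nothing here is a proposed encoding):

```python
# A schedule entry overrides a link's metric during [start, end).
# An outage is the special case of an infinite metric, so "up/down"
# and "change link costs" collapse into one mechanism.
UNAVAILABLE = float("inf")

schedule = {
    ("R1", "R2"): [
        (1200, 1800, UNAVAILABLE),  # scheduled outage
        (1800, 2400, 50),           # comes back degraded (higher cost)
    ],
}

def link_cost(link, t, default=10):
    """Cost of a link at time t, applying any matching schedule window."""
    for start, end, cost in schedule.get(link, []):
        if start <= t < end:
            return cost
    return default

print(link_cost(("R1", "R2"), 600))   # 10 (no window matches)
print(link_cost(("R1", "R2"), 1500))  # inf (link is down)
print(link_cost(("R1", "R2"), 2000))  # 50 (degraded window)
```

A shortest-path computation evaluated at time t would then simply consume `link_cost(link, t)` as its weighting.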
C: Next slide, please. Okay, so this is the full text, and I'm not going to go through it again; I've broken out the key points in the previous three slides. But it's there, so please grab it and read it through. My intention is to work with Adrian to reintegrate this text into the existing problem statement and to update it.
E: Hi, Kevin Fall. I did a little work in this area about 10 years ago. I just wanted to offer some comments and observations. So, interestingly, you know, we can trace the routing we do in sort of the internet and whatever to some work that had been done earlier.
E: You mentioned Dijkstra's algorithm, for example. So, in 1958 there's a publication on flows over time from Ford and Fulkerson, which is the same origin story that we have for, you know, regular kinds of routing protocols. So I think, when I first came across that text, it was interesting to me that it was really so old. But just some other comments: I would concur with you.
E: It's sort of a different problem space from a kind of graph-theoretic point of view, because you have a time dimension and all of that. Whether it's useful in this context, that's an important sort of debate to have. But I guess I would just say that there is a lot of academic work in this area, although it uses words that we don't tend to use here. And also, just one other little thing, a sort of meta-thought about routing.
E: You know, when I went through school and learned about routing, it was presented like: here's link state, here's whatever, distance vector, etc. But there's another whole world that looks at the same problems, which is the operations research community, and their research and its optimization problems; they're duals, they're really the same kinds of problems in many cases. So the other kinds of interesting work that relate to this, which probably are not in IETF scope but just to give you some thinking, are things like the dynamic transshipment problem.
E
Things
like
this
is
where
you
have.
You
know,
warehouses
and
highways,
and
the
highways
are
fast
or
slow
and
the
way
arrow
and
the
warehouses
have
so
much
capacity,
and
you
need
to
get
certain.
E
C: Two quick answers. First off, I think the presence of existing research in this area is a really good thing, because I always worry when I think of something and everyone says "that's unique, no one's ever thought of that"; it probably means it's really dumb. So the fact that other people have been spending years, decades, researching this stuff, I think that's really useful. And second of all, I'm going to come back to a bit of the technology piece and look at some of the gap analysis stuff in a later presentation.
C: We've scheduled it in fairly bite-sized chunks, and you're right: there's what we do in the internet area, particularly in pretty stable networks that we're all pretty familiar with; there's the whole MANET piece I want to talk about, where there's a different approach; and, you're right, there are the kind of generic travelling-salesman problems, the resource scheduling across routes that are not traffic-based; there's a whole wealth of stuff there. And having some kind of timetable... I mean, think about railway systems.
F: This is Alex Clemm, Futurewei. I have one question about the problem statement. So, I understand the basic premise of this, but I think the second part is missing, namely what you're actually planning to do with this information. So, assuming that you had this solved, how will you leverage it? How will this be better? What problems do you solve better by knowing the schedule in advance, versus having to reconverge, or what have you?
C: Fair criticism; you're right, first off, that the problem statement can be much better. But the quick answer is: I think there is an opportunity. If we know something is going to change in advance, then we don't waste time, and possibly precious resources, trying to work out why it failed and trying to bring it back, because you know it has gone; no need to try to restore that peer link, no need. So that's one opportunity: to not waste scarce resources. And the other opportunity is: if you know something is going to happen in advance...
C: You can do that calculation as a low priority, because you know how long you've got to do it. Particularly when we look at IoT, anything in space, you know, generally resource-constrained things, anything we can do to save on consumption helps. So if we know something is going to change, we can do the calculation now, rather than having to rush a reaction to something happening. We can be sensible about the resources we allot to making that change, because we knew it was about to happen.
C: Thanks, Tony; I'll try to answer some of those quite quickly. So I'll start with the second point, which is: if a schedule says a link is going to be up, do we always believe it? I think that would be a mistake. I think experience tells us that you always have to deal with exceptions; you always have to deal with unpredictable failures. We do that with all the routing protocols, across the entire spectrum from dynamic to very stable environments. So I'm not suggesting that you don't expect failures...
C: ...if you have a schedule; I think that's naive. The dynamic question: so, that's a scoping question. My gut tells me you try to start with the simple cases and then expand out to the more complex ones once you have a better handle on your approaches; but that doesn't mean you discount the more complex cases while you're looking at your simple solution. That is a very politician's answer to that question. Yeah.
A: I think you might have missed each other, because you have on your slide, I think, the answer to his question, which is that dynamic topology is in scope, and that's represented by that T2 and T4: at T2 you had a break along that path, and then at T4 it comes up with a different technology and goes somewhere else. Tony, you can jump back in and answer if I get that wrong, but I think that's what you meant by dynamic topology. So there's a change; not only is there a change in link status...
C: Yeah, no, no. If that is our definition of dynamic, then yes, absolutely. I come from a MANET background, where dynamic can mean almost chaotic, and those edge cases, I would suggest, still remain out of scope: when everything is moving at such speed, or your schedule is talking about nanoseconds, and the predictability of this stuff becomes very low probability, I think we disappear into a corner case we shouldn't go into. But yeah.
J: Thank you, yeah. I just wanted to relay a comment, really a comment from a colleague of mine, which I think echoes a comment that was made at the mic earlier, and also by you, Rick: that it's like bus scheduling or train scheduling, or, you know, using a maps app for driving directions, but it's also sufficiently different that it warrants a special look. And so, for the purposes of, like, BOF and working group formation...
J: I think it's worth recognizing those differences and having a dedicated space to concentrate efforts on that.
G: Julian Lucek. So this BOF is actually quite timely for me, because I came across a situation recently, as follows. You have a network in which there are a number of remote sites, and so, in the main part of the network...
G: Let's say you've got a router R1, which is connected to a remote site, and in the remote site you've got routers R2 and R3 facing the main network, but there's only one fiber connection into that site. So what you've got is a fiber switch in the site that connects the fiber link, normally, into R2 on that site. So the uplink is normally connected to R2, but if R2 should fail, there's some automation that makes the switch flick over to R3. Now, R2 and R3 are connected to each other, and there are some other reasons behind this, so R2...
G: Sorry, R3 knows as much about the topology as R2 does, even though it's not connected at the moment to the main site. And so, with current, you know, protocols and so on, it's frustrating that it takes, you know, a wait of a few seconds for R2 to come up, for the protocols to come up, etc. So this, you know, chimes in very well with that scenario, in fact. Thank you.
C: So that's, I think, yet another example of reducing the loss of data transmission capability when something has happened. If you can know in advance that it's going to happen, then you can say: well, as soon as that goes, I know what it was; it was expected; that's due to R3. R3 is a warm standby and ready for it, rather than coming up from cold.
L: Hi, Andrew from Liquid Intelligent Technologies. So, I live in a part of the world where power is not always the most stable. Luckily, in Kenya it's more stable than for my colleague from South Africa; I think he brought the load shedding with him when we arrived. But here's the thing: yes, I can say the thing's going to go down; we've got fast reroute, etc. However, if I get an outage and I know that this is power, or has a good probability of being power, based on the schedule...
L: Interestingly enough, I was just looking at the load-shedding schedule for my parents' house: three to five, seven o'clock to nine thirty, blah blah blah. How I'm going to react to that, based on whether this is power or something much more temporary, and on how long that power outage could potentially be, could be very different. And if, for example, I've got someone at the NOC in India or in the UK picking this up and thinking, well, it's just another outage, I'll quickly balance this over here, that's different to saying: I know this is power.
L
My
schedule
says
it's
going
to
be
out
for
eight
hours.
It's
going
to
hit
a
peak
time
over
here.
I
may
want
to
rebalance
this
traffic
in
a
very,
very
different
way,
and
that
makes
this
really
important
that
I
do
have
a
way
in
the
network
to
pick
these
things
up,
Etc
and
then
make
decisions
on
that
that
aren't
necessarily
related
to
some
guy's
impression
in
India
who
just
saw
that
the
router
went
down
you
know.
So
there
are
a
lot
of
possibilities
here
to
get
pretty
interesting
with
how
you
do
this.
L: The other thing is: if I've got algorithms that are learning how to balance traffic when something is down, I can tell that algorithm "this is because of a power outage, learn something different", versus "this thing has just rebooted itself and may come back at any instant". So there are a lot of possibilities I see here, and, yeah, I am extremely interested in this. Thanks.
C: Andrew, yeah. I think when it comes to what's in scope and what's out of scope, if people do think this is a problem we need to balance, I personally don't think we should get too caught up in how we create schedules. I think that's kind of beyond the scope of what's achievable in a reasonable working-group schedule.
C: There is some really cool stuff we can do once we have a schedule, and I'm going to get on to this later, but it's more about trying to keep that scope down. That doesn't mean other things can't be built that we consider to be beyond the scope of something like a routing working group; but, yeah, once you've got this data, you can work out how to get it around your network and get it into the system.
M: Dirk Trossen, Huawei. I asked the question on the chat, and I thought I might relay it here. It's called time-variant routing, but I feel there are different variances we may want to consider; it came out already before. It may well be energy; it may well be locality. You know, there's a finer time variance; the focus on the schedule is very limiting, if you will, and it turns into, you know, link-is-up-and-down schedules.
C: I take your point. I'm really trying to stop boiling oceans here, and I do wonder... I don't want to disappear down into a generic expression evaluation, condition causes effect; that's programming the network, and I think that's still very much the work of the IRTF. I think if we say: things change at a point in time, can we prepare ourselves in advance, that reduces our scope to something achievable, which will have noticeable benefit.
C: Yeah, yeah. So, a scheduled change in energy consumption: I think we've got use cases talking about that; in fact, I know we've got use cases coming that talk about that. It's not just on and off. It could be, we know that, I don't know, the atmospherics are such that our satellite links are really good until there's a rainstorm coming in at 3 p.m. this afternoon, you know.
M: You can do a lot with the schedule, I do agree. I mean, if you provide an API into the schedule building, that's how we solved some of the problems in the past. But obviously the conditions for the schedule can be exactly along the dimensions that I just mentioned: it could be driven by energy prices, in order to do carbon-aware routing; it could be based on expected movements. I think on the mailing list I sent around the paper we published 20-odd years ago on using contextual information.
C: Agreed, and I think the carve point here should be: let's say we have a schedule; let's see if we can define what that schedule should have inside it; and let's discuss the kinds... and I'm getting beyond the problem statement into what I think the scoping is, so actually I'm going to stop. Can we revisit this in my next slide, or my later slide deck?
N: So, when a node or a link changes, the network goes through a transient, and during that transient a lot of innocent traffic is disrupted, yeah, because of the formation of micro-loops. So, if we're going to do this, can we include in the base technology micro-loop prevention, in some way or other, to try to minimize this very disruptive effect? The technologies for doing it are well known.
C: Absolutely; I see no reason why that shouldn't be considered. That's just the sort of thing we can attempt to avoid. If we knew it was coming up, we might be able to use different techniques: when we have to make this change, because it is predicted, we can avoid that sort of micro-loop formation or whatever, because we can all (we being a set of routers or whatever) pre-agree an alternate topology to have prepped, without those micro-loops, and we can pre-agree to...
N: No, no, it's not quite as simple as that, right? Because you have to deal with the lack of synchronicity in the changing of the FIBs, which can take quite a long time. So actually it's not an instant event, changing the topology, but a topology-change process that you need to do, which takes you from the old topology to the new topology via a number of short-term virtual topologies, usually.
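The ordering problem described here (routers updating their FIBs at different times) can be made concrete with a tiny sketch. The router names, FIB contents, and the particular update order are all invented for illustration; real loop-free convergence mechanisms are far more involved:

```python
def forwards_ok(fibs, src, dst, max_hops=16):
    """Follow next-hops toward dst; False means a (micro-)loop or dead end."""
    node, hops = src, 0
    while node != dst:
        if node not in fibs or hops > max_hops:
            return False
        node, hops = fibs[node], hops + 1
    return True

# Invented scenario: destination D, link B-D is scheduled to go away.
# Old routes reach D via B; new routes send A via C, and B via A.
old_fib = {"A": "B", "B": "D", "C": "D"}
new_fib = {"A": "C", "B": "A", "C": "D"}

# If B installs its new FIB before A does, B forwards to A while A still
# forwards to B: a micro-loop during the transient.
mixed = {"A": "B", "B": "A", "C": "D"}
print(forwards_ok(mixed, "B", "D"))  # False

# Updating A first happens to be loop-free at every intermediate state.
step1 = {"A": "C", "B": "D", "C": "D"}
print(forwards_ok(step1, "A", "D"), forwards_ok(step1, "B", "D"))  # True True
```

Knowing the change in advance is what would give routers time to agree on such an ordering, i.e. the sequence of short-term intermediate topologies, before the event occurs.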
C: And that sounds absolutely like something that should be addressed as part of a TVR working group. I think that would absolutely be a charter item: to say, can we, in general terms, describe how one makes this change such that we avoid these things? And the reason I say "in general terms" is that I don't think it's right that a potential TVR working group should tell link-state routing what to change in OSPF. We should go to OSPF and say: these are the mechanisms that we talk about; can you integrate them?
N: The techniques were investigated quite some years ago now, in the routing area working group, so a lot of the techniques are already documented. Great, let's...
A: Perfect. Thank you all for a really good discussion; hopefully we have a good picture of what the problem is that we're trying to solve here. That said, we're going to look at some specific use cases, and I think they'll echo some of the conversation. Ed?
O: Okay. So, when Rick and I started talking about what time-variant routing would look like, obviously a problem statement was good, but everyone has in their mind a mental model, or a problem that they've encountered and that they're trying to solve, and trying to bring that down into a series of use cases was an interesting endeavor, because we could have two of them or we could have two thousand of them; and how do we scope and understand what differences are important to talk about?
O: So what I want to talk about for the next 10 or 15 minutes or so is an initial take on how we carve out use cases to identify what we think are the unique differences, and the expected benefits in those use cases, of something like TVR, time-variant information. Next slide. So, to do that, I wanted to just go over some background.
O: The approach that I took in laying some of these out, based on the conversations and the work on the mailing list, is to talk a little bit about some formatting and then go over what I call the three use cases, which are kind of categories; I'll name them here, and we'll go into them in detail in just a moment. The first is cases where nodes change their functionality to preserve local resources; local resources meaning resources local to that node.
O: Power is really expensive right now; data is really expensive right now. It's not a matter of node survival, but it's a matter of reacting, and working better, in an external environment. And then the last one, which is one that is probably very familiar when we talk about things that change over time, is what happens when nodes in our network are moving: when they're moving far away from each other, or when they're moving through a difficult environment, or when one just turns a little bit and can't talk to the other.
O: So, next slide. So, before we go into those three: how did we get to those three, and how are we going to talk about them? How did we get to them? Well, obviously, we wanted to constrain anything we talked about to the problem statement that Rick had just talked through, because this is the problem that we're trying to solve.
O: But then, when we talk about use cases, it is categories of these problems. So we say "use cases", but the three things that I just said are, I believe, categories of problems, or characteristics of problems. And the idea here is that, again, I don't want to worry too much about two or three or four instances of the same kind of problem and calling them different. Another way of saying it is that what we think we have here, from a use-case perspective...
O
These
three
are
such
that
a
solution
for
one
Exemplar
of
that
use
case
could
be
a
solution
for
any
other
Exemplar
within
that
use
case.
So
the
format
for
this
is
we
want
to
Define
what
they
are.
We
want
to
list
out
what
the
assumptions
are
and
assumptions
here,
because
the
use
cases
are
again
a
little
bit.
Abstract
are
what
information
do
we
think
would
need
to
be
present
in
order
to
yield
deterministic
benefits
from
having
a
predictable
schedule,
and
then
the
next
part
is
okay.
O: Well, assuming that we have data that we need to key off of, then what are those benefits? And when we list out those benefits (there will be a list, typically two or three or four or five), it's not that we think we will get all of them, all at once, all the time; maybe you get some combination of them, in different cases. And then, last, I'll end each use case with a quick exemplar, just to try to ground the thinking.
A: Next. And we probably will wait for discussion until all three presentations go. If you do have something that's just, like, a clarifying question, that's great; but for discussion, let's hold it until the end of the three use cases. Thank you.
O: I will happily defer discussions until I'm not on stage. So, use case number one: local resource preservation. I mean, this is the idea where we have nodes that just operate with limited resources, either because they're in a very difficult environment, or because there is some nature of the node itself where it is meant to be small: it is meant to not need a lot of power; it's meant to not have a lot of storage.
O: So, if we assume that we are a resource-constrained node, and if we assume that resource management is important and, in this category of use cases, sort of dominant over node function, then what we would say is: it's important in these cases to know that our resource expenditures are knowable; the amount of resources consumed by node functions can be known in advance, if I'm managing my node functions based on the amount of battery life left and I know what it takes to transmit a data volume.
O: We assume that the accumulation back of resources (again using the example of power: how much we expect to get back in battery charge from things like solar charging, and when that will get us back to good) should also be calculable. And then the last is: if we're making node-function decisions based on the resources at a node, there is an assumption that that set of decisions, and those rubrics or those metrics, aren't changing so rapidly that within any given period of time we're changing our internal cost functions; because if you're always looking at a different way of managing resources, every few minutes or every few hours throughout the day, it becomes more difficult to get deterministic benefits back from adhering to something like a schedule.
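A toy sketch of the kind of calculation these assumptions enable: if battery expenditures and solar accumulation are knowable in advance, a node can decide ahead of time which scheduled contacts it can afford to serve. All the numbers, the safety floor, and the greedy policy here are invented for illustration only:

```python
def plan_contacts(battery, contacts, floor=10.0, capacity=100.0):
    """Greedy sketch: serve a scheduled contact only if the battery would
    stay above a safety floor; solar recharge accrues before each contact.
    contacts: list of (recharge_before, transmit_cost) tuples."""
    served = []
    for i, (recharge, cost) in enumerate(contacts):
        battery = min(capacity, battery + recharge)
        if battery - cost >= floor:
            battery -= cost
            served.append(i)   # radio on for this contact window
        # else: skip the contact, keep the radio off, save the power
    return served, battery

served, remaining = plan_contacts(
    battery=40.0,
    contacts=[(5.0, 20.0), (2.0, 30.0), (25.0, 15.0)],
)
print(served, remaining)  # [0, 2] 37.0
```

The point is only that the whole plan can be computed once, in advance and at low priority, precisely because the expenditures and the recharge schedule are assumed known.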
O: You can get better thermal savings; you can get better storage management; and, really, honestly, just the data delivery (because that's what most of this is about) continues to be a little bit better. Because if you can manage your resources better, if you can stay active in the network better, if you're not wasting power by having a radio on when you don't think there's a likelihood of having a link, for example, then you're able, eventually, to get more data through your system. Next slide.
O: So, the graphic here is to come back and say... and these are taken from the use-case documents, which are published as well.
O: Then you could put some very different plots together, for nodes that maybe are distributed in a particular way, and you could see how that would change, underneath of that, the topology of the network at different times: time one, time two, and time three. So this is something that, while generic, I think we can all come up with, or understand how it would look, in actual deployments. From an assumptions perspective, we think, you know, these power...
O: ...graphs are knowable, both accumulation and expense; the radio management (that sort of horizontal line of how much or how little) isn't changing over time; and it looks at places here where the topology itself is predictable, schedulable, and likely also communicable across other nodes in the network.
O: All right, use case two, which is now not about resource management and preserving node function but instead about adapting to external conditions. And the idea behind this is that there is likely some cost, some external cost, associated with participating in a network, and a common example of that is use of infrastructure: use of power infrastructure, use of data infrastructure. Nodes that are wireless will, for example, use cellular infrastructure. I think all of us understand the idea of on-peak and off-peak usage times, and what happens with overage rates.
O: If you go over your on-peak minutes... you know, it came out in the news, what, three or four days ago, that Starlink has come back and said: if you go over a particular cap, then that will be problematic and you will have an overage; however, our off-peak time is, you know, 11 p.m. to 7 a.m. (your time, I guess). And so that means: great, if I have, you know, low-priority data, then that's when I should choose to send it.
O: Not all the costs are strictly financial, and the benefits here may be that we choose to rank, or associate cost functions with, the use of links in a different way, with the extra information. Next slide. So, again, assumptions and benefits. We would assume that if this were something that would benefit from TVR schedule-like information, we would say we can measure those external costs; we understand our data-rate usage or our energy usage.
O
The changes in those costs are either predictable or scheduled, or scheduled because they were predicted; and the cost differences persist for long enough, and optimizing them has a savings that is big enough, to justify the extra computation or work to figure out and make use of this additional knowledge. But of course, I think if you do that, then you can come back with some benefits and say: well, wait a minute, we can filter links.
O
This link was very low cost five minutes ago, but now we're in on-peak hours where we're about to go into overage, and now this link might have a different, higher cost, either a financial cost or a logical cost, and maybe we want to think about it differently because of that. We can also come back and say, in certain cases, maybe we want to plan and accumulate data before we use a link, based on its cost.
O
Maybe we wait on that for a variety of reasons, and again, all of this comes back to trying to get better data delivery and make better data delivery decisions. Next slide. Again, for this one, a contrived node one, node two, node three, where we come back and say: if we were to simply plot, on the vertical, cost high/low, and time T1, T2, T3 across the three-node network, those can change over time. In this case, maybe it's spatially distributed nodes with the cellular backhaul, and we look at how the topology does not change.
O
The links are still there, but we may come back with things like: well, node 1 to node 2 can be either low cost or high cost at different times. And even, perhaps, in this third case at the bottom, you may wind up saying: if I want it to be low from node 1 to node 2 and low from node 2 to node 3, and it's something that's reasonable in your network, you may actually hold on to it for a little while, waiting for that node 2 to node 3 cost to be lower. Next slide.
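The hold-until-both-hops-are-cheap idea in the three-node example above can be sketched as a lookup over a scheduled cost table. The cost values and slot numbering here are made up to mirror the contrived slide, not taken from any real schedule format.

```python
# Illustrative scheduled costs per link over three time slots (T1, T2, T3),
# mirroring the contrived node1/node2/node3 example. Values are invented.
cost = {
    ("n1", "n2"): {1: "low", 2: "high", 3: "low"},
    ("n2", "n3"): {1: "high", 2: "low", 3: "low"},
}

def first_slot_all_low(path_links, slots=(1, 2, 3)):
    """Earliest slot in which every link on the path is scheduled 'low';
    returning None means: hold the data, no all-low slot is in the schedule."""
    for t in slots:
        if all(cost[link][t] == "low" for link in path_links):
            return t
    return None
```

With this schedule, sending across both hops would wait until slot T3, the first time both n1-n2 and n2-n3 are simultaneously low cost.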
O
Then the last slide is sort of these mobile networks, and obviously LEO satcom is an example of that, but there are others. And there are really three cases of mobility, but again, under the idea that a solution for one might be a solution for many, we don't necessarily see them as three distinct use cases. The cases are: two nodes have moved far enough apart that they lose the ability to talk to each other; or two nodes have not moved too far apart, but together they are moving through an environment, or an environment is moving through them, such that they can't talk to each other; or one of them has pointed its antenna somewhere else, and they can no longer talk to each other. All of those are mobility-based or motion-based losses to a link. Next slide. So again, from an assumptions and benefits point of view:
O
If you understand the motion, if you know where things are going with some predictability (with LEO we tend to know where they are, but we maybe don't know what's powered on or where things are pointed), and/or if you understand the environment, which may be also impacting you, then that is knowledge you can exploit: to understand when an adjacency would go away; to understand when a link could come back, maybe even including that the resumption might be a different link than the one that went down, which would be a useful thing; to understand how your data rates would be adjusted; and generally, you can use all of this for better filtering of the links as well. Next slide. So again, in our network LEO constellation:
O
If we have a couple of spacecraft, and we assume that they have different ground coverage over time (for example, spacecraft one is over ground at time three, spacecraft two at time two, and so on), then we can sort of understand, as this train goes by over the ground station in minutes, that we need to understand handoff, and we need to understand that certain links are going up and down. As one spacecraft leaves the ground, you then need to go, for example, to an inter-satellite link to the spacecraft behind you, so that it can keep that traffic going back down to the ground. And that's something you do because you can predict, and you have a schedule, to deal with that level of dynamic topology.
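The handoff logic just described, downlink while in ground contact, otherwise relay over an inter-satellite link to whichever spacecraft in the train is in contact, can be sketched from a predicted contact schedule. The contact windows below are hypothetical; real ones would come from orbit propagation.

```python
# Hypothetical predicted ground-contact windows (start, end) in minutes,
# one per spacecraft in the train. Invented numbers for illustration.
contact = {
    "sc1": (0, 10),
    "sc2": (8, 18),
    "sc3": (16, 26),
}

def downlink_next_hop(me: str, t: float):
    """If this spacecraft can see the ground station now, downlink directly;
    otherwise hand off over an ISL to whichever spacecraft is in contact.
    None means: store the data and wait for the next scheduled contact."""
    start, end = contact[me]
    if start <= t < end:
        return "ground"
    for other, (os_, oe) in contact.items():
        if other != me and os_ <= t < oe:
            return other  # inter-satellite link toward the in-contact node
    return None
```

Because the schedule is known in advance, the handoff at the edge of each window can be prepared before the link actually drops, rather than reacted to.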
A
Thank you. Sorry, I turned the mic off. Kevin is going to present next, and he is remote. Kevin, you don't need to share your screen; we'll just pass you control, so just hold on one moment.
A
Kevin, can you say something to make sure we know you're here? Kevin, you need to unmute. We can see that you're muted.
P
Cool. So while we figure out the audio and some slides: my name's Dan, with Kevin at Airbus; I'm at Lancaster University. We are introducing a use case. Next slide, please. There have already been a couple of mentions of sort of emerging satellite constellations, you know, the space-based use case. It sounds extremely futuristic, but of course these networks are being put in space. There are several large constellations with tens, hundreds, thousands of nodes whizzing around the planet.
P
At the moment they are extremely small; they're increasing in size and number. They have multiple link types (next slide, please) that we can choose in order to set up the physical connectivity. So in some of the constellations there are high-bandwidth interfaces, on the order of gigabits per second, using free-space optics, operating actually at speeds that are significantly faster than traditional sort of earth-based, fiber-based systems.
P
There are also radio interfaces, and those RF interfaces are also evolving over time, up to sort of gigabits and tens of gigabits per second. In terms of building the space topology, we can continue through various space segments using inter-satellite links, ISLs, but we also have the opportunity, with some constellations, to bounce down to the planet using base stations, and in several major countries there are often multiple base stations that can be used. Next slide, please.
P
So what we need to kind of figure out is how we build this network topology, given that there are several space-based segments that we can use, as well as sort of dropping back down to terra firma.
P
We know that these links offer different characteristics, so there are bandwidths, latencies, potentially jitter as well. If you're a Starlink user, and I am, as well as a tester, you'll notice that we have a lot of bandwidth and availability, but actually there are queue issues, and latency and unpredictability of jitter can cause a major problem for certain low-latency applications. Next slide, please.
P
So we need to figure out in advance how we're going to build our physical topology, given that there are multiple links, bandwidth constraints, potential queue issues, and also the amount of power available on some of these satellite nodes, as well as, of course, the bandwidth that's going to be potentially available at a given point in time. Now, this is a network path computation problem that can be solved with a combination of traditional techniques; we've heard earlier sort of some linear solutions, like Dijkstra.
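One conventional way to apply Dijkstra to a topology whose links come and go on a schedule is to run it over a time-expanded graph, where each search state is a (node, time-slot) pair and waiting in place is itself an edge. This is a generic sketch of that idea, not a technique proposed by the speaker; the adjacency callback, slot granularity, and horizon are all illustrative assumptions.

```python
import heapq

def td_dijkstra(adj, src, dst, t0=0, horizon=16):
    """Dijkstra over a time-expanded graph: each search state is (node, slot).
    adj(u, t) yields (neighbor, delay) pairs for links scheduled up at slot t;
    waiting one slot in place is always allowed. Returns earliest arrival."""
    dist = {(src, t0): 0}
    pq = [(0, src, t0)]
    while pq:
        d, u, t = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get((u, t), float("inf")) or t >= horizon:
            continue
        for v, delay in [(u, 1)] + list(adj(u, t)):  # wait, or traverse a link
            nt, nd = t + delay, d + delay
            if nd < dist.get((v, nt), float("inf")):
                dist[(v, nt)] = nd
                heapq.heappush(pq, (nd, v, nt))
    return None

# Toy schedule: link a-b is only up from slot 2 onward; b-c is always up.
def sched(u, t):
    up = []
    if u == "a" and t >= 2:
        up.append(("b", 1))
    if u == "b":
        up.append(("c", 1))
    return up
```

With this toy schedule, traffic from a to c waits two slots for a-b to come up, then takes two one-slot hops, arriving at slot 4, which a snapshot Dijkstra at t=0 would simply report as unreachable.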
P
You could of course use a control center with humans, sort of biological path computation, to do some offline planning to try to optimize your network. What we're looking for (next slide, please) is potentially a method, a process, of building a network topology virtually, in advance, that can then be overlaid.
P
So we also need to consider that this isn't just a planning problem; it needs to react potentially to changing real-time conditions. But real time is relative: real time for a fixed network might be a few seconds or tens of seconds, when you start losing an adjacency to a particular sort of next-hop neighbor in your IGP. But in space, sort of a real-time change might be over several hours, as you're recharging a particular node in order to sort of gain power and then be able to light some new interface. There will also be connectivity potentially coming online, predicted, that is not space-based but is high altitude: so there are sort of UAVs, unmanned aerial vehicles, that potentially can augment your network. Next slide, please.
P
So we will need to build this kind of network graph, including some sort of online capability, with the ability to consider offline. And maybe the definition of online and offline needs to be clear here for people. Essentially, it's some kind of entity that is talking directly to the network; I suppose that's what I might consider online. Offline would be to pull out the traffic engineering database, run some evaluations, create a candidate topology; that might be done in an offline instance, and then I want to apply that at some point in the future. But something has to kind of merge these two databases at some point. And if we had Kevin online, he would probably talk about CONOPS, which is a form of sort of operational management that's done by a team of people, augmented by heuristics or sort of non-heuristic technology. So I suppose, for some of these use cases, the time horizon is very different.
P
You know, for a satellite network, we might be talking about minutes, hours, you know, days. During the chat, someone was talking about deep-space connectivity as well, for other use cases.
P
It might all be happening over several minutes or hours. And I know that, you know, we get excited, or I get excited, about space-based communication, but the same concepts and requirements also apply to things like rural internet, where essentially you've got microwave links, or indeed free-space optics, where you've only got these links available at particular times of day, because that's when you can get enough sunlight to power the interfaces as well. But they're highly predictable, because generally the sun will rise and set in the same location.
P
If it doesn't, then we're probably in trouble. Next slide, please. And then there's the network discovery: building the topology, analyzing the topology, making the predictions, building some future candidate topology. And I mentioned in the chat that this feels like, or sounds like, potentially a scheduled virtual network topology. It needs to apply to a variety of service types, your users that are actually attaching to the network, and those are listed there: sort of space-based, marine, high-altitude, unmanned aircraft. Next slide, please!
P
So what would we like from TVR? And of course, this is sort of walking into a toy shop and saying: well, Dan, you know, choose anything that you want. I'm going to try and grab everything I can, knowing that there is a kind of reality question that we need to apply here, and wanting to kind of minimize, potentially, the amount of state that gets created and potentially injected into a routing protocol that's going to be used in an online function. So bear with me.
P
Kevin and I have just put everything, potentially, that we would like here. But there has to be a method of exporting or retrieving information from the network discovery capability, and an understanding of what metrics we will need to consider: it's all of the standard sort of routing metrics that we might want, in terms of latency, bandwidth, etc., and costs. But cost might also have a physical element to it in this case, so there'll be a network cost and maybe a fiscal cost of powering a unit versus choosing a particular link.
P
We will need to consider the sort of resiliency here as well; there's a restoration consideration. Not only does the generation of a virtual network topology, or some candidate future topology, need to be considered, but it needs to potentially react to changing network conditions. In the event of some kind of catastrophic failure, such as the removal of a node or degradation of a link, we may need to switch over, and that could be, I think someone mentioned earlier, occlusion, which may be weather or some other sort of debris that's getting in the way.
P
Okay, I'm just looking at the time here as well, so maybe I'll stop, because there are going to be commonalities with some of these requirements across the other use cases. Are we taking questions now, or are we saving them for after?
A
I would prefer to save them, but if you have a clarifying question, please come on up, or come to the queue. And Eve, if you could get ready for the next one. Thank you. I think we're going to have some good discussion on the gaps and mechanisms, so I want to make sure to reserve some time for that. All right, Eve, okay.
Q
Okay, so I will extend this conversation by talking about something we've begun to call carbon-aware networking, which we believe solidly falls within this discussion around time-variant routing, as an example of an environmental-impact use case, and I will tie it to one of the use case categories that Ed so clearly talked about. Next slide.
Q
Oh, I see, okay, here we go; I do have control. All right, so the backdrop here is that, for anyone who's been following the UN Intergovernmental Panel on Climate Change, there's pretty dire urgency to try to meet these thresholds that are out there, after which, you know, we really can't even model what the consequences are going to be. And in the most recent reports, those recommendations have been called, you know, code-red moments; we're really at a tipping point. And so what can all of us be doing to be more considerate about climate change and greenhouse gas emissions? If you look at the information and communication technologies' contribution to those emissions, it's fairly sizable and growing. Some estimates are that the electricity usage by ICT is about two percent now, but by 2030 will be about six percent of all global electricity, which in turn has a carbon footprint.
Q
The network is at least as large as, and sometimes as large as one and a half times, the size of the data center in terms of its carbon footprint. There's also the recognition that there is an increasing tidal wave of data that gets generated by all the things that are connected to the network. And it's almost as though the only way we could get to a situation where we have carbon neutrality, or net-zero impact, is to not just consider energy efficiency of our products and components, but to really think about the coupling of renewables with all of our infrastructure. And additionally, when you think about what is so disruptive about this: if we're going to be moving to clean energy to address the issues of greenhouse gases... I'm hearing an echo; I don't know if others are hearing an echo.
Q
Okay. To move to the electrification of transportation, hopefully clean electrification of transportation, we're going to need about four times the amount of electricity that's currently being generated, and some estimates are that that's going to entail about 20 times as much infrastructure as is already out there. So we're at this very interesting moment.
Q
We know that there's going to be a long arc of rollout of this infrastructure. We may never get to a fully renewable or clean-energy-oriented grid, and even if we could, there is going to be this long arc of infrastructure rollout, taking even longer amounts of time than it takes to roll out telecom or internet infrastructure.
Q
And in that time, we want to be thinking about: well, how can we steer our usage of electricity to be the least impactful? There is this metric that is quite interesting, which is carbon intensity, which is effectively a measure of how green the electricity is. And we've been talking, Rick and I, and other colleagues at Yale University and in Oxford, about the what-ifs behind: could we add that to the performance metrics of networks, which are typically described in terms of their packet loss, their latency, and their jitter?
Q
And so, if you were to think about sort of sustainable networks, you'd want to be thinking about how you design them so that they consume less energy, and how you decarbonize the electricity that they do consume. And then there are other kinds of actual environmental impacts, which we're not going to get into today.
Q
Those are other concerns, around things like water and waste and chemistry. Now, we're inspired here because many of the hyperscalers are adopting techniques that they are referring to as carbon-aware or carbon-intelligent computing, where they are physically time- and space-shifting their workloads so that they place them in places where there is the least carbon intensity.
Q
And the attempt here, as stated earlier, is to maximize the usage of renewables like solar and wind, alongside other things that are considered clean energy when we look at them from a greenhouse gas emission or carbon standpoint. There's also another interesting phenomenon happening, if you look at places like California and Germany, parts of the globe where renewables are being integrated at an accelerated pace and deliver a high percentage of the electricity in the grid.
Q
Unfortunately, there's a mismatch between the supply and the demand for that electricity, and so sometimes there's excess renewable energy being generated that goes wasted as a consequence. And so what the data center community has begun to do is to use the data centers themselves as a virtual battery of sorts, so that in moments where there is an excess of renewables, time-elastic workloads are sort of on the ready to absorb that excess energy.
Q
So naturally, the question is: well, how does this relate back to what we could be doing with networking? We do believe that there are carbon-aware techniques that we should consider adopting. When we started to look at this problem, we naturally thought: oh well, clearly we want to route our data transmissions through parts of the network where there's the greatest carbon efficiency, so where the carbon intensity would be lowest. But there's also DTN-like time shifting that we could also leverage here.
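The first of the two ideas just mentioned, steering routes toward the lowest-carbon parts of the network, amounts to running a shortest-path computation with carbon intensity as the link weight. This is a generic sketch of that, not anything specified by the speakers; the topology, intensity values, and units are invented for illustration.

```python
import heapq

def greenest_path(links, intensity, src, dst):
    """Dijkstra where each hop is weighted by its current carbon intensity
    (here nominally gCO2/kWh): a sketch of carbon-aware route selection."""
    adj = {}
    for u, v in links:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    dist, pq = {src: 0}, [(0, src, [src])]
    while pq:
        d, u, path = heapq.heappop(pq)
        if u == dst:
            return d, path
        for v in adj.get(u, []):
            nd = d + intensity[tuple(sorted((u, v)))]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v, path + [v]))
    return None

# Hypothetical snapshot of per-link carbon intensities at the current time.
LINKS = [("a", "b"), ("b", "d"), ("a", "c"), ("c", "d")]
INTENSITY = {("a", "b"): 50, ("b", "d"): 400, ("a", "c"): 120, ("c", "d"): 90}
```

Since intensities change as the grid mix changes, the same computation would be rerun (or pre-computed against a schedule) as new intensity values arrive, and the DTN-like option is simply to hold traffic until the best path's total intensity drops below a threshold.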
Q
Several of us who are part of this conversation also come out of the deterministic networking community, and there is the ambition that, for this kind of traffic engineering, we'd like to also be able to guarantee that we stay below certain thresholds along pathways in the networks, and even go so far as to ponder: could we be reserving resources, whether those are battery-related resources or predicted availability of renewables, along those paths? In the same way that, for example, in the DetNet community, or at least in the time-sensitive networking community, we talk about reserving buffers along pathways so that our packets don't confront any kind of congestion along those paths. So that's the equivalence there. And then, finally, in order to do some of this, we really do need telemetry. And how do we manage that telemetry itself as a workload?
Q
So I was inspired by Ed's slides, and I took his example of sort of providing this overview. I believe that carbon-aware networking falls squarely in the operating-efficiency use case, and that we could comprehend carbon intensity as part of a cost function for links. The reality that we have batteries in many situations, not just virtual batteries but actual batteries, also means that there are shades of use case one, about resource preservation, that we could leverage, because it may be the case that you might want to opt for battery operation when the battery has a mix of electricity whose carbon intensity is less than the carbon intensity coming out of the wall socket. And there's something I won't really get into, because I really haven't thought about it deeply enough.
Q
But there is this possible shade of use case three, about mobile devices: that we can dispatch mobile distributed energy resources, whether those are energy generation or storage, to the places where they're needed. This is something that commonly occurs in the event of emergencies, for example. But here is how it relates back to TVR:
Q
We recognize that there are causes for the loss or reappearance of adjacent links that are related to external environmental factors, such as the sun is shining, or the wind is blowing, or these thresholds that we've talked about for carbon intensity have either been exceeded or we're staying beneath them. And clearly there are benefits, because the expected losses of links are not seen as errors but as optimizations, and the resumption of these links is not about rediscovery from scratch; they are waiting on standby.
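The distinction just drawn, a scheduled loss is an optimization while an unexpected loss is a failure, can be sketched as a classification step that consults the schedule before triggering rediscovery. The schedule structure and state names here are illustrative assumptions, not from any draft.

```python
# Sketch: classify a link-down event against the schedule, so an expected
# loss is treated as standby rather than as an error. Illustrative only.
def classify_link_down(link, now, schedule):
    """schedule maps link -> list of (down_start, up_again) windows."""
    for down_start, up_again in schedule.get(link, []):
        if down_start <= now < up_again:
            # Expected: hold state on standby and resume at up_again,
            # instead of tearing down and rediscovering from scratch.
            return ("scheduled-down", up_again)
    return ("failure", None)  # unexpected: fall back to reactive rerouting

# Hypothetical schedule: one link is planned to be down from t=10 to t=20.
SCHEDULE = {"link-1-2": [(10, 20)]}
```

A router using this would also know there is no point actively probing a scheduled-down link before its `up_again` time, which is the monitoring-waste point made elsewhere in the session.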
Q
This is, of course, going back to Ed's slide (unfortunately a slightly earlier version of his slide), but the point is really to ask the question whether this use case meets these assumptions, and I think to a large degree it does. We believe that there's measurability; that there's predictability and scheduling in when these cost changes occur; and that we can communicate that in advance. And here's something to consider: how often are these changes to the topology?
Q
That is, in part, a function of how often something like carbon intensity is shared by the utilities, and we can come back to that in a moment. But there's also the question about the cost of the savings: does it justify the effort, or the work that we're going to do, to enable this?
Q
One data point: during the recent summer in California, we had a spate of temperature and weather such that the utilities reached out to the populace and said, could you please not use your electricity. And someone turned around and did a model: given the number of electric vehicles that we have, and the battery availability, could we avert blackouts, and what would the cost of that be to the economy? Something like that might be a way to quantify whether what we're doing is beneficial enough. I think it's hard to specify completely what's enough, but we can also come back to that. In terms of the possible benefits, I think that it also matches up quite nicely. I don't believe that it has this property about first planning.
Q
One interesting facet, in terms of the environmental measures, is that we are going to face some regulatory pressure. So again, there are going to be some carbon taxes, and so some of this may be around: are there actual costs, financial costs, that can be associated with this problem, that would also be part of the link state, not just environmental ones?
Q
As I intimated, carbon intensity is not something that network operators own; it's really the utilities that own this. And there are issues about how granular the areas or regions are within which carbon intensity is derived and actually exported and shared; the frequency with which it's updated; and in what parts of the world it is actually being openly communicated beyond just the utilities.
Q
There's this subtle interplay between the batteries and the electricity generation that we'll have to deal with, and its intermittency. The justification-of-function question is something that needs deeper analysis. I think that is it.
Q
There are a few other resources if you'd like to read more about sort of the whole state of carbon-responsive computing and moves towards carbon-aware networking. And I think we can now open up the mic for all three presentations.
R
Thank you very much. I think this working group is interesting, and I'm willing to help in working in this group, but I still see that the solution, or the proposed solution, is not clear. From the beginning, or the first presentation, it considers only, or is saying, that it's varying the metrics, or changing the metrics, for a routing protocol.
R
It's not very clear. I think maybe I can propose that we say whether it is a change, or a scheduled change, of some kind of algorithm: are we changing the algorithm, or only the metrics? At least we should be clear on that. Also, we need to be clear on the conditions, or what we can call the initial conditions, for this kind of variation, and we should also consider the stability of the routing mechanism.
B
I think that, in the long term, what we'd like to do here is to define models and such. Of course, it's up to the working group to decide what the actual scope is, but the idea is to figure out what the models are and what needs to be done, and kind of push it back to the routing area, to the specific routing protocols, to figure out how to represent those things and how to deal with them.
S
I'm not sure that the IETF as a community is the best community to deal with all those, because we don't have the expertise in some of those dimensions. But maybe putting that into the IRTF, I think, would be a much better place to deal with that, until the problem has been defined at the technical level, where what we could go down and solve is much more specific.
A
This is Lou Berger, speaking as contributor. I completely agree; there are a lot of issues here, and one of the challenges for the routing area is going to be: what does the routing area do? And there was a side meeting from the transport area yesterday; the transport area is going to have a question of what they do, you know. So it may not just be one working group; it may be multiple, because, as you said, it's a really big problem.
P
Daniel King. So, with the greatest respect, I have to disagree with Dean. It wasn't actually my intention for coming up here, but I don't see this as an IRTF problem; I see this as an engineering problem, and I see this being applied to existing infrastructure that's out there being used commercially. So those are just my thoughts there. The point for coming to the mic was: thank you, Eve, for your presentation. I see some sort of commonality in terms of requirements for the area, in the use cases that you mentioned there.
P
We've got an internet draft; it's actually in, I think, the routing area. So I wonder if we can work offline to kind of sync some of our requirements and figure out the best way to publish that. I suppose some of that may depend on the conclusions that we reach at the end of the session today as well. Thank you.
T
Hi, Mallory Knodel, Center for Democracy and Technology. I just wanted to support this work, and thank you, Eve, for your presentation.
T
I did want to also point out that in the IRTF we have had at least one recent presentation, although we've had some in the past, from Barath Raghavan, who came to talk about ways to reduce the footprint. So I think some of the metrics that are given around, you know, how much electricity is needed, and so on, don't really address the fact that sometimes the source of that electricity is also potentially a problem. So, accounting for that, I think there just needs to be a general approach to reduction and minimization, because that's probably the best direction of travel.
U
So, in my mind: on the internet we run routing, and then we run transport on top, and transport basically operates on RTT timescales. If routing also starts to operate on RTT timescales, we actually don't really have a transport that we know of that will operate on top of that. So I wonder if we could limit this proposed work to the space where changes to the topology happen on, you know, timescales that are a factor X, for some number, of the, you know, 90th-percentile RTT, or something like that.
U
So, you know, there's a certain expectation that the topology isn't crazily changing all the time; it's generally stable, and maybe you can predict when a change will happen, and so when you have a path, you can run a TCP-style transport over it. I think that might sort of get us into a box where we might know what to do about standardization. Thank you.
A
Yeah, that's a helpful comment. Joel Halpern made a similar comment in the chat: you know, are we talking minutes, microseconds, nanoseconds? And, you know, the general discussion there is that we're talking longer periods, and scheduled items that are maybe minutes, hours, or longer, depending on where you are. So that's a helpful comment. With that, we're going to move on to Rick; he's actually going to be the last sort of formal presentation, and we've reserved an extra 10 minutes.
A
We had more when we started, but we've reserved the final 10 minutes for discussion, and it's probably best just to integrate it into his talk, into the later slides. Rick, over to you.
C
Thanks, Lou, for saying formal; I rarely do anything particularly formal. So, yeah, this is almost a continuation of where I started, and I hope it addresses the last point directly, and I think I'll try and make a comment to Dean as well while we go through this. Next slide, please. So this kind of goes back to where I started.
C
From my thinking and my discussions with various others about this: I believe there's a kind of a spectrum of routing environments, which goes from the very fixed, never-change (which doesn't actually exist) environments, where we're talking about ISP backbones, cloud service providers, that kind of stuff. It's big, it's stable, it's manageable, it's expected: when you put a link in, somebody has actually laid down something physical and plugged it into something else physical, and you expect it to still be there. And at the other end of the spectrum:
C
You've got swarms of UAVs buzzing around in an uncontrolled environment, in the middle of a thunderstorm, in a war zone. You know, that's the world's worst ad hoc case: everything is in complete chaos, a contested environment. But those aren't just two use cases; I believe there's a spectrum, and I just want to kind of look at the two extremes of that and introduce the idea that I think there's either a second dimension to this, or there's another environment, where I think TVR lives. So next slide, please!
C
It was there when you started, because that's how you configured it, or your autoconfiguration system kind of found it and found it was there. And link service is maintained by reactive rerouting, and the key point is: it's rerouting. Something fails, we'll go around the problem, and we will maintain those end-to-end paths, but we will react to that failure. And this ties back to what I was saying at the beginning. And the fourth point is: links are expected to come back, so there's active monitoring to say, is it back yet?
C
Is it back yet? Is it back yet? And yes, there are more advanced techniques around that, but fundamentally, when your link fails in these fixed environments, the expectation is that it should be there. So let's try and work out when it comes back, so that we can reactively reroute to it when we spot it come back, nice and promptly, and get back to that stable thing that the sysadmin wanted to happen. And it's kind of the mindset that has influenced the design of those protocols.
C
You know, we're talking BGP and OSPF and IS-IS, and I'll even point the finger at BFD here to say it's a great technique for getting prompt recovery, but you are reacting to a failure and trying to recover quickly, and BFD helps you do that. So the key issue, and I touched on this earlier but I'll repeat it, is that when you're reactively re-routing you will always have loss, even when you knew it was about to happen, because you are reacting to something happening and going: oh no, it's gone, right, oh, we'll
C
go this way, it's fine. You know, there is still that loss while you switch; the opportunity to do something seamless, because you knew it was coming, is missed. And the other side of it is that proactive monitoring piece: if you know it's not there, and it's not going to be there until tomorrow, why are you monitoring to see if it comes back yet? It's just a bit wasteful. And going back to some of the use cases that were aired, and that Eve and Kevin, it is Kevin,
C
isn't it? Yeah, Kevin presented, and Daniel presented some of those. Environmental resources are really important.
C
In some of those environments, the fact that we're even burning these resources is important on a larger, sort of global, scale. And I'm a computer scientist; I don't like wasting compute. You know, why have I got an inefficient algorithm doing something? Let's just be purist about this. So there are lots of reasons, from different aspects, why wasting resources is not a good thing. Let's go to the other end of the spectrum, so next slide.
C
Let's look at MANET, and this kind of comes back to Lars's point, where there are environments where everything is wild. So nodes move randomly within the topology, because they may be physically moving; they may be doing crazy things at the link layer, which causes all sorts of stuff to go on. Links frequently break; that's a general assumption in the world of mobile ad hoc networking: your links are not going to be stable. They are pretty much, as I say, considered ephemeral, effectively. So your network's service,
C
just like in the fixed environments, is maintained by reactive rerouting: I found a new opportunity, I've lost an opportunity, I'll quickly rebuild my topology. I may do it in very exciting and funky ways, because MANET tackles this environment in ways that may be considerably less efficient than link-state routing, but it will work in environments where link state just doesn't. But it's still fundamentally reactive rerouting: you are reacting to some break in your current assumption about your end-to-end path, and you are therefore desperately trying to work out
C
some alternate way to re-establish it. And links, as I said, are considered ephemeral, and therefore adjacencies are proactively discovered. So if you read through most of the MANET protocols, instead of assuming that the peer on the other end of that link is expected to be there, and trying to keep that establishment going, BFD-style, or however you want to do it, your TCP keepalives, etc., you'll see a lot of the MANET protocols will do things like active discovery:
C
oh, are there other new peers? Are the peers I was expecting still there? Have any peers disappeared since I last looked? There's a lot of helloing, a lot of two-hop neighbor discovery. It's all very interesting, but it's active: there's no idea of "stop looking, your radio is silent between now and Wednesday". And therefore you have the same problems, which is the reactive rerouting, and this is more important in MANET environments, where you are probably talking about constrained devices.
C
You've got that little loss because you've lost that link; quick rethink: oh, it's that way. There's a loss there. And to go back to the earlier point, you could accidentally form micro-loops, because several things could spot that link fading at the same time, which means that initial stab at "oh, it'll go this way" could well be wrong, and then you get corrective and loop-avoidance and loop back-off kinds of algorithms, which are exciting, but you haven't got a stable network service
C
while all that's going on. And again, you've got that proactive discovery, rather than proactive link re-establishment, which is wasting resources, and particularly in the MANET environments that's a problem. So you'll note that the key issues have kind of been copied and pasted between these two environments, because I think they are common problems to both of them.
C
So the third answer that came out on the list is, people said: oh well, DTN can solve this. If you've got link outages, we can do store and forward; it's great; we don't have to worry about it while we try and work out what the next thing is. If something goes off for two hours, DTN can save us. DTN is great. I am a chair of DTN, and I can tell you: DTN is great, but it's a transport protocol. It's for delivering data; it doesn't do routing.
C
In fact, routing is actually out of charter at the moment for various IETF... well, for scoping reasons, not political reasons. There are problems we still have to solve before we try and solve routing, and if TVR can come up with solutions to some of the routing problems, DTN will use them. DTN isn't an input to TVR; DTN, in fact, would like to see the outputs of something like TVR, so that kind of implies why I've started having these conversations at all.
C
However, there is some interesting research happening, not within the IETF, and experimentation, and actually, I think, fielding. I'm looking to see if anyone who knows this is willing to nod if any of this is live, particularly around what they call contact graph routing.
C
So that's an understanding that, at a certain point in time, two neighbors will see each other, and therefore we can start moving DTN bundles (they're sort of jumbo packets) between them. That's now been documented by CCSDS, which is the space governance community, and called SABR, because it's a great acronym: Schedule-Aware Bundle Routing. The reference is there for anyone who wants to look at it. What I find interesting is that other communities
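The contact graph routing idea described here, computing paths from a known contact plan rather than from live adjacencies, can be illustrated with a toy earliest-arrival search. This is a deliberately simplified sketch: real CGR/SABR, as specified by CCSDS, also models ranges, data rates, and contact volume limits, none of which appear below, and the contact-plan format is invented for this example.

```python
import heapq

def earliest_arrival(contacts, source, dest, t0):
    """Toy contact graph routing. Contacts are (frm, to, start, end) tuples.
    Returns the earliest time dest can be reached starting from source at t0,
    assuming a bundle can be stored at a node until its next contact opens."""
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for frm, to, start, end in contacts:
            if frm != node or end <= t:
                continue  # wrong node, or the contact window is already over
            arrive = max(t, start)  # store-and-forward: wait for the window to open
            if arrive < best.get(to, float("inf")):
                best[to] = arrive
                heapq.heappush(pq, (arrive, to))
    return None  # unreachable under this contact plan

plan = [("A", "B", 10, 20), ("B", "C", 30, 40), ("A", "C", 100, 110)]
print(earliest_arrival(plan, "A", "C", 0))  # 30: via B, waiting out the gaps
```

Note what makes this different from link-state SPF: the graph's "edges" are time windows, and waiting at a node (storage) is part of the path, which is exactly the schedule-awareness the speaker is pointing at.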
C
are aware that having a schedule, and trying to move things around based on the schedule, is a good idea. So again this is underlining, going all the way back to Kevin's point, that this isn't new, and I think that's a good thing, because it kind of underlines that I think we've got a problem here. So the key issues about DTN: it's not a routing protocol, and the second point is that it doesn't actually deal with IP, for those who are unaware of DTN.
C
So that brings me on to the potential work items, because if we're talking about forming a working group, we ought to have some kind of discussion about what we should do. I think we should have a formal use cases document; it makes sense. I am a huge fan of "putting fence posts around the scope", as I like to describe it, so you know quite clearly what's in and what's out, because it's written down somewhere, and because it allows people to have a common view about what we're trying to achieve here.
C
So a problem statement, use cases, that kind of thing. Just speaking as a chair, it's really helpful within a working group to be able to say "great conversation, not in scope", or "let's get something written to describe this bit, because it most definitely is in scope". So I think a use cases document is useful. I think we should look at a general-purpose solution for time variance, and coordinate with other working groups who have the expertise on particular routing protocols.
C
I don't think it is correct or effective for a TVR working group to say "we will write a BGP extension of some sort"; I think that's completely incorrect. I think the TVR working group should look at understanding time variance and its impact on routing, and there are already discussions going on in the chat about: what is a schedule? What should go in a schedule? Is a schedule a good concept? How do schedules get merged between nodes?
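The chat questions above, what a schedule is and how two nodes' schedules get merged, could be explored with something as small as the sketch below. It is purely illustrative: the interval representation and the "merge as intersection" semantics are assumptions for this example, not anything the group has decided.

```python
def merge_schedules(a, b):
    """Merge two availability schedules by intersection: a link is usable
    only when both endpoints say they are available.
    Each schedule is a sorted list of (start, end) half-open intervals."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        start = max(a[i][0], b[j][0])
        end = min(a[i][1], b[j][1])
        if start < end:
            out.append((start, end))  # the two windows overlap
        # advance whichever interval finishes first
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

node_a = [(0, 10), (20, 30)]
node_b = [(5, 25)]
print(merge_schedules(node_a, node_b))  # [(5, 10), (20, 25)]
```

Even this toy raises the questions from the chat: intersection is only one possible merge semantic, and a real information model would also have to decide units, recurrence, and confidence.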
C
Are we talking about link stuff, or are we going to talk at the end-to-end service level? You know, there are some interesting topics here. Once we've got that, we can go out to the routing groups next (PIM, if we want to touch multicast, you know, some of the MANET groups, whoever) and say: have you considered time variance as part of your protocol stack? If you have not, here are some information models, and some functional operations around creation, ingestion, introspection.
C
Three: let's get the routing working, because everything else will build from that, and therefore I think the following things get chucked out. That doesn't mean they're not really cool things to look at; I just think they're part two or three or four. Store and forward: DTN's doing great stuff at the transport layer; let's not go there. That doesn't mean talking about buffers, and understanding the transition when something scheduled happens, isn't in scope.
C
But let's not talk about "oh, we could hold data for a very long time and then move it"; please leave that to transport and DTN at the moment. I'm happy to have the discussions, but I don't think anything to do with making transport protocols schedule-aware belongs here; it should definitely be a part two, in my opinion. I don't want to start touching the transport stuff, tuning TCP's behavior when you know something's going to fail in 20 seconds. Please don't go there.
C
I mean, great fun, but we won't achieve anything. You know, it's a hard, hard topic, and I think we need the groundwork before we can sensibly start addressing that. I don't think it's worth getting into, and I've said this earlier, how we create the schedules, how we do the prediction. I think we should start at: let's pretend we have a prediction; let's pretend we have some kind of schedule.
E
Hi, Kevin Fall. So, after Lars's comment, it struck me a little bit that, given what's happened in this kind of area in the past: when DTN work was a research group topic, we intentionally kept the acronym ambiguous in its definition, so that the D could be delay, disruption, whatever we wanted to have some research on would sort of be in scope. And I...
E
What is this covering? So, just to throw out a couple: you know, there were the use cases; we had space and so on; there have been underwater networks, where the propagation delay is, you know, five orders of magnitude worse than it is with RF propagation. So that's going to affect... you know, if you had routing updates, it would affect them.
E
Would that be the type of thing that would be in or out of scope? Because that's a world where you could have an end-to-end connection from the sender to the ultimate receiver, but the properties of the network underneath are pretty different from what we would normally deal with. So I guess... I'm not saying that's the one to worry about, but there are a few other features. When you say, you know, RTT,
E
that needs to be decomposed a little bit, to mean: well, really, what is that? And then, if you're going to separate out the store-and-forward layer, which is okay, I suppose, it would be interesting to know what the communication, the API if you will, is between those, because scheduling (you know, back to the dynamic transshipment and all that stuff) is very interested in knowing what paths, what links,
E
what places there are, what storage facilities there are along the network. And one other one that I don't think we heard about is properties of nodes along the way. At some point we had discussed some nodes being more reliable, or more secure, or located in certain geographies we trust or don't, and so on, and that would affect the routing decision as well. Maybe that's out of scope, but a lot of this richness has been discussed over the years, and it's what you might want to use in your routing decision.
A
I think we're not going to have time to go into all the questions, but I think we should capture them and say: these are items that we need to address as part of the working group, or in scoping the working group. So thank you for that. I think I got some good notes on that; please check the notepad to make sure we got it correctly. Brian.
D
Hello. One other thing that hadn't been discussed so far, and is more of a gap analysis than a use case, is on the management side. There are things like NETCONF that have a candidate datastore versus a running datastore. So there is a variation that's allowed, but you can't schedule it, and you only get one of them. And so, when the concept of multi-tenancy management comes into play, it really throws a wrench into things, because you don't want to have to have a separate management organization for every single time.
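The gap Brian points at, that NETCONF gives you one candidate and one running datastore with no way to say when a configuration variation should apply, could hypothetically be addressed by a time-indexed set of configurations. The sketch below is purely illustrative and is not an existing NETCONF feature; the function, data layout, and time units are all invented for this example.

```python
def active_config(scheduled_configs, default, now):
    """Pick the configuration whose time window covers 'now'.
    scheduled_configs: list of (start, end, config) entries; falls back to
    'default' (the ordinary running config) outside all windows."""
    for start, end, config in scheduled_configs:
        if start <= now < end:
            return config
    return default

# Hypothetical scheduled variations of one leaf (times in arbitrary units):
schedules = [
    (800, 1800, {"link-x": "enabled"}),    # daytime window
    (1800, 2400, {"link-x": "disabled"}),  # evening window
]
print(active_config(schedules, {"link-x": "disabled"}, 900))  # {'link-x': 'enabled'}
print(active_config(schedules, {"link-x": "disabled"}, 300))  # {'link-x': 'disabled'}
```

The sketch makes the gap concrete: today the operator would have to push each variation as a fresh commit at the right moment, rather than handing the device the whole time-indexed set once.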
A
Yeah
good
comment
in
one
of
the
earlier
working
groups.
M
A
Week
someone
pointed
out
that
there
was
a
a
working
group
that
spent
a
lot
of
time
talking
about
how
to
document
schedules.
So
you
know
there
are
good
building
blocks
in
other
working
groups
that
we
definitely
can
can
Leverage
next
doorless.
I
Yeah, thank you very much. So one of the questions I'm trying to work out in my mind is: how do we get from here to some running code that does something useful, right? And unfortunately I've seen very little in the presentations that would give me a really good hint for that. If the way to get there is really mostly building a system out of mostly existing components in the IETF, plugging them together,
I
The
magical,
sdn
controller
that
solves
99,
a
good
amount
of
data
models
and
then
some
app
use
of
the
routing
system
that
that
would
be
something
that
would
be
really
good
to
show
in
some
examples
that
that's
the
goal
and
then
I
mean
we've
got
the
example
working
groups
that
have
done
this
kind
of
trying
to
use
some
build
something
useful
with
minimum
additional
work.
Combining
what
we've
done
in
the
iitf,
my
own
animal
working
group,
probably
is
something
like
that
right.
So
99
of
was
just
plugging
together.
I
Things
still
a
lot
of
work,
so
I'm
not
sure
if
that
would
then
go
into
routing
or
Ops
like
they
put
enema
into
Ops
right.
If
this
is
really
a
little
bit
more
fishing
and
we
don't
really
know
if
we
can
come
up
with
a
such
a
big,
you
know
system
design
or
something
that
really
we
are
very
certain
will
give
value.
Then
now
we
also
have
fairly
good
experience.
I
I
think
by
now,
with
the
special
interest
group
model
that
we're
using
in
so
far,
primarily
in
Ops
with
Mobs
and
iot
Ops
right
so,
but
that
puts
you
into
more
constraint
about
what
your
supposed
and
able
to
write
in
rfcs,
so
just
as
process
recommendations
of
where
you
might
want
to
look
into
going.
The
next
steps
making
proposals
toward
some
form
of
a
working
group.
A
Yeah
thanks
for
that
Toro
us,
certainly
after
this
meeting
we're
going
to
take
all
the
feedback
work
with
the
ad
the
isg
and
needed
the
IAB
on.
What's
next,
what
you
see
here
on
the
screen
actually
came
with
input
from
Alvaro
of
make
sure
that
what
we
come
up
with
is
attainable
and
actionable
right.
You
know
we
we
can
go,
deliver
something
and
it's
a
probably
a
a
Yang
model
or
an
information
model,
and
we
get
when
we
succeed
at
this.
We
can
talk
about
what
happens
after
that
yeah.
I
So,
just
as
a
personal
contributor,
I
like
the
components
right
but
I'd,
be
a
lot
more
happy.
If
is
there
some
place?
That
says,
and
here
is
how
these
components
can
be
mixed
together
to
build
a
system
that
actually
delivers
value,
because
just
starting
out
throwing
components,
you
know
against
the
walls.
I
see
that.
V
So I like this; this is really cool stuff, and we should definitely do this. It has a lot of interesting different use cases, and, yeah, I mean, we should go ahead. The only caveat that I would put on this, or a sort of little worry, is that there may be some sort of chicken-and-egg problems hiding here, and I actually cut in on the line before:
V
The
statement
was
made
about
schedule,
creation
being
out
of
scope
and
that's
a
cute
thing,
and
indeed
like
the
like.
You
have
this
local
and
Global
Optima
and
you
deciding
what
you're
gonna
do
with
your
node
may
not
actually
like
your
local.
Optimization
may
not
be
what
you
actually
want
for
a
lot
of
these
cases,
particularly
when
it
comes
to
things
like
energy
and
energy
is
also
sort
of
an
interesting,
complicated
case,
because
you
have
so
many
like
it's
it
I
suppose
noted
it's
pretty
broad.
V
So
you
have,
you
know
different
aspects
to
worry
about,
but
also
that
you
have
different
techniques.
You
have,
for
instance,
implementation
techniques
where
you
can
put
nodes
into
a
relatively
low
power.
Sleep
states
where
they
can
still
act
as
part
of
the
network.
You
don't
have
to
tell
anybody
that
you're
sleeping,
but
but
you
can
still
save
a
huge
amount
of
energy
or
you
can
be
sort
of
totally
off
and
and
not
not
respond
to
anything.
And
then
you
have
to
tell
the
rest
of
the
network.
V
So
I
think
some
of
the
use
cases
are
sort
of
really
straightforward.
That,
like
a
satellite,
is
forced
to
go
on
a
rotation
around
Earth
and
you
exactly
know,
what's
going
to
happen,
some
of
the
other
stuff
where
you
start
to
compute
that
well,
you
know,
there's
these
criteria
and
these
cost
and
and
so
on,
and
they
interact
in
interesting
ways
across
parts
of
the
network.
Then
that
gets
more
complicated
but
yeah.
So
my
advice
is
to
stay
as
focused
as
possible.
W
Hi
Adrian
Farrell,
thanks
to
the
proponents
and
the
chairs
for
really
putting
this
together.
I
want
to
agree
and
disagree
with
Lars
I
want
to
agree
that
there
is
part
of
this
problem.
Space
is
the
time
variant,
provision
and
awareness
of
service
points
and
direction
of
Transport
paths
and
I
want
to
disagree
because
I
think
this
is
about
packet,
routing
and
reaching
inside
the
network.
W
And
you
know
the
the
transport
is
an
overlay
over
pocket
routing
and
we
need
to
look
at
both
of
these
problems
or
maybe
all
three
of
these
problems
in
our
analysis
and
work
out
which
ones
we
are
solving
and
then
work
out
where
to
solve
them
and
not
just
use
the
existence
of
one
problem
to
say.
Well,
you
can't
talk
about
the
other
problem.
S
They
are
more
Dynamic,
so
just
to
clarify
my
previous
Point
time.
Variant.
Routing
is
an
interesting
problem,
but,
let's
not
it
become
kitchen
sink
for
everything
else.
100
degree.
That's
one
thing:
let's
define
very
clear
problem
that
you
want
to
solve
and
that
we
can
solve
for
mesh
networks.
X
Hi, Colin Perkins, speaking with no hats. Let's be clear: this is an area where, obviously, there's been a lot of previous work and a lot of previous research. I agree that it perhaps seems reasonable to try and explore whether we can do a focused standards activity here.
X
It's
time
to
maybe
narrow
this
work
down
and
see,
what's
feasible,
I
think
I'd,
Echo
I
think
it
was.
The
comment
allows
me
made
think
about
the
amount
of
time
variance
and
the
amount
of
Link
latency
you
have
in
the
network
and
draw
bounds
around
what
you're
going
to
consider
because
it
makes
the
scope
fusible.
X
Some of these have radically different properties and need, you know, radically different amounts of domain expertise. So, before picking a particular scope, I'd urge the group to make sure it's got people with the relevant domain expertise, just in case you accidentally missed something.
A
A pretty important point about making sure we have the right people. So, we have six minutes left. The plan is: we're going to do three polls (they're already in the chat, if you want to see what's coming), and then we're going to turn it over to ask the AD and our IAB advisor if they have anything to add, and then we'll probably close out. No, we definitely will close out; sorry, I should say. So I'm going to get the first poll going.
A
We're
getting
I
think
a
lot
of
positive
response,
certainly
more
than
half
the
room
has
said.
Yes,
a
few
have
said:
I'll
take
the
lower
the
hand
as
no
but
I
think.
That's
that's
pretty
pretty
clear
we're
going
to
move
on
to
the
next
question.
A
So again we have a good percentage of people responding; that's always a good thing. But this time we have more of a split, though still the majority are saying yes. We're going to move on to the last question.
A
So I think here, too, it's very similar, although I'm actually a little confused by the results, because on the previous one it felt like there was more opposition, and on this one we have more people saying they're willing to show up and do work. So I guess this is the most important item. So with that... I see an IESG member getting up at the mic; I'm going to ask: is he coming as IESG, or... okay?
L
I
think
we
just
need
to
be
to
remember
that
you
know.
Obviously
these
aren't
votes.
You
know
they
are
an
education
and
I
think
my
reading
of
this-
and
this
is
from
what
I'm
seeing
here
is
that
people
are
saying:
maybe
it
shouldn't
be
in
routing
for
the
lost
one,
but
at
the
same
time,
if
it
is
we're
still
prepared
to
work
on
it,
that
would
be
my
how
I
would
read
that
yeah.
So
that's
just
from
you
know
from
an
ad
perspective,
how
we're
evaluating
this?
That's,
how
I
would
read
that.
A
Reasonable
because,
for
example,
I
think
the
answer
is,
we
should
have
multiple
working
groups
because
I
think
there's
multiple
problems,
so
I
could
see
if
someone
thought
it
was
either
or
that
so
with
that
I
with
that
I'd
actually
like
task,
Alvaro
and
I,
think
Valerie
I
think
you're
our
IAB
advisor.
If
either
of
you
would
like
to
say
anything,
and
if
you
don't
that's
okay,
too,.
K
Sure
unlover,
the
Thunder
Road
Andy.
First
of
all,
thank
you.
Everyone
for
coming
today,
I
think
134
people
still
at
the
end
as
a
lot
of
people.
It
confirms
one
of
the
reasons
why
we
wanted
to
do
this.
Buff
is
their
interests
in
the
problem
and,
of
course,
the
poll
have
demonstrated.
I
K
There
is
interest
people
think
there
is
a
problem
to
solve.
People
are
willing
to
to
contribute.
Thank
you
to
the
chairs,
of
course,
for
running
this
to
the
proponents.
K
Everyone
I'm
gonna,
just
say
the
the
obvious
that
everyone
has
been
saying.
We
need
to
scope
down
the
problem.
This
was
a
non-working
group
forming
buff,
which
is
precisely
to
gather
use
cases
to
gather
interest.
The
area
of
potential
application
of
TBR,
as
everyone
has
said,
already,
is
very
very
wide.
We
need
to
narrow
down.
We
need
to
scope
whatever
we're
going
to
do
or
can
do
in
in
the
routing
area
or
potentially
elsewhere
to
things
that
we
can
deliver
that
we
can
work
on
they're
going
to
be
clear.
K
So
that's
what
I
would
like
to
see
being
discussed
in
the
main
list
going
forward.
What
are
the
I?
Don't
know
one
two,
three,
whatever
deliverables
that
we
can
go
and
try
to
move
forward
and
as
we
see
that
discussion,
we're
going
to
evaluate
that
and
figure
out
what
what
the
next
steps
are
going
to
to
be
after
the
discussion.
So
again.
Thank
you
so
much
thank.
T
Mallory Knodel, Center for Democracy and Technology. I think Russ also was here from the IAB, and so it's not only me. But, I guess, since I'm not a domain expert, as I sort of look at the BoF...
T
Some
of
the
questions,
I
recall
from
the
things
we
evaluate
are
gonna
I'm
going
to
focus
on.
Where
else
is
this
work
happening,
I
I
saw
earlier
in
some
of
the
presentations.
Some
of
this
is
itu
related.
T
All
of
that
is
going
to
be
really
helpful
information
for
for
me
as
I
go
through
that
questionnaire.
So
anyone
who
wants
to
yeah
I,
just
I
would
say
I
will
appreciate
anyone
who
can
respond
for
to
some
of
my
questions.
Maybe
I
can
talk
to
you
all
as
the
delegates
about
about
those
sorts
of
things
that
I
don't
quite
understand,
but
that's
sort
of
the
nature
of
the
review
that
we
do.
But
it's
great
to
see
all
the
interests.
I
think
that's
a
strong
sign
thanks.
A
I'm
we're
happy
to
talk
with
you
offline
and
anyone
to
follow
up
I
put
in
the
chat
the
link
to
the
group
area.
You
can
find
there
the
archive
as
well
as
how
to
join
the
list.
Please
join
start
contributing
it'd
be
really
appreciate
to
continue
the
good
energy
that
was
here
today
on
the
list
and
we'll
work
with
the
isg
and
the
IAB
on
on
next
steps,
and
thank
you
all
for
I.
Think
a
very
successful
session
of
TBR
hopefully
see
you
in
Yokohama.