From YouTube: IETF112-INTAREA-20211109-1200
Description
INTAREA meeting session at IETF112
2021/11/09 1200
https://datatracker.ietf.org/meeting/112/proceedings/
A: So we are chairing here, Wassim and myself. Hi everyone, let's start with the note well. Please remember that by participating in the IETF meeting you agree to follow the IETF process and policies. If your contribution is covered by patent applications, then please disclose it or notify the chairs about it.
A: Please extend respect and courtesy to colleagues at all times, and remember that the discussions are impersonal and for technical reasons.
A: Our goal here is to devise solutions for the global internet, and that's the main goal of the discussion: not to attack or criticize personal contributions, but to advance the technology together. And please make sure that you are prepared for the discussion by reading the mailing list and the slides beforehand.
B: I can offer the same service as last time. I don't type quickly, so I will not take notes live right now, but I can prepare minutes afterwards from the recordings, if that's okay for you guys.
A: Lovely, thank you very much. Thanks for that, Luigi. And everyone else, please feel free to also contribute to the notes and help Luigi with that.
A: All right, so we have quite a packed agenda. We're going to start with a quick update on the working group items, moving then to Seth's presentations on the lowest address and the formerly reserved address space.
A: All right, so just a quick update on the working group status. As you probably saw on the mailing list, the GUE (Generic UDP Encapsulation) draft was declared dead in October.
A: Okay, so that's pretty much it, so maybe we can move to the next presenter.
A: Yes, we're having an issue with the screen. Is it possible for you to share your own screen?
D: Can you see this? Yes? No? We can... okay, well, I'll present from here. Okay, thank you. Thank you very much, and good morning to everyone, and good other times of the day to everyone, depending on your time zone and part of the world.
D: I'm Seth Schoen. I'm here presenting on the IPv4 unicast extensions project, and we have submitted two drafts which are up for discussion today. We've also uploaded two more drafts in the same series after the deadline for this INTAREA meeting, so I'll just preview those other two very quickly, but the first two, on the lowest address and on 240, are up for discussion in this meeting.
D: And I'd like to note that the largest change that we're going to talk about here, 240, has been default behavior. That is, the behavior that we specify has been the default in many widely used operating systems since a similar change was proposed at the IETF in 2008. It's actually quite possible that many or most of you are using devices to view this presentation and to interact with this meeting that already implement the behavior that we specify.
D: Now, subsequently the IETF community created a standard specifying broadcast at the highest address of each subnet, and everyone has agreed in all documentation for a long time that the highest address is the consensus broadcast address, and that the lowest address is only meant for backwards compatibility with the 4.2BSD behavior. BSD itself changed over to using the highest address for broadcast in 4.3BSD just three years later, in 1986, and so various RFCs say we want to maintain backwards compatibility with the historic behavior.
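[Editor's illustration, not part of the talk: the lowest/highest-address distinction Seth describes can be seen with Python's standard `ipaddress` module, using a hypothetical example subnet. The lowest address is the network address, and the highest is the consensus broadcast address since 4.3BSD.]

```python
import ipaddress

# Hypothetical example subnet, chosen only for illustration.
net = ipaddress.ip_network("192.0.2.0/24")

lowest = net.network_address    # historically a 4.2BSD-style broadcast
highest = net.broadcast_address # consensus broadcast since 4.3BSD (1986)

print(lowest)   # 192.0.2.0
print(highest)  # 192.0.2.255
# The lowest-address draft proposes allowing 192.0.2.0 to be used as an
# ordinary unicast host address rather than keeping it reserved.
```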
D: Our second draft for discussion today is our unicast 240 draft. This refers to the former Class E, back when we spoke of classful addressing, or experimental range: 240/4, from 240 up to 255 as the first octet. This is over 268 million addresses, more than six percent of all of the IPv4 address space.
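[Editor's illustration, not part of the talk: the arithmetic behind those figures can be checked with Python's standard `ipaddress` module.]

```python
import ipaddress

former_class_e = ipaddress.ip_network("240.0.0.0/4")

# A /4 leaves 32 - 4 = 28 host bits, i.e. 2**28 addresses.
num = former_class_e.num_addresses
share = num / 2**32  # fraction of the whole IPv4 address space

print(num)             # 268435456 -> "over 268 million addresses"
print(f"{share:.2%}")  # 6.25%     -> "more than six percent"
```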
D: Now, these addresses were reserved for future use in the 1980s. There were many reasons to think that this would be useful, that there would be additional addressing modes other than unicast. For example, anycast was proposed to use a dedicated address space, there have been IPv6 transition mechanisms that have been proposed to use different address space, and so on.
D: This also works so well that we've seen documentation showing that some cloud vendors are unofficially using this address space as private address space, because they know that the particular systems that are going to interact with each other are running operating systems that support it. Just to preview the drafts that we uploaded after the deadline: we have a draft on the 0/8 network, which was reserved for an ICMP-based autoconfiguration protocol for hosts back in 1981.
D: Nonetheless, that space has remained reserved even though it's not being used for anything, and modern, non-ICMP IPv4 autoconfiguration using DHCP uses only one address rather than 16 million addresses. The behavior that we document in that draft is also current default behavior in Linux. And we've uploaded a draft on unreserving most of the 127 IPv4 loopback network.
D: These changes would then roll out gradually in ordinary software updates. I keep coming back to the proverb that says that the best time to plant a tree is 30 years ago and the second best time is now, by which I mean that we will appreciate, in the future, having the opportunity to assess the compatibility and the usability of the large number of addresses freed up by these changes. But if we don't begin the process of making these changes, we won't have that choice in the future.
F: On the presentation, I think it would be very helpful to add information about use cases where the current, non-endorsed mechanism of people privately using these addresses isn't good enough, and what would be enabled by doing this endorsement.
F: It feels a little bit like fracking, right? I mean, we've been doing IPv6 for a very, very long time, and it would be really good to see why this investment here from the IETF side, to actually support that and get involved, and all these counteractive changes, would be highly beneficial. I'm not saying that I'm completely opposed to it. I think this discussion has been had several times over the past 10 years, but I'd certainly love to see that case made.
D: So I think the fact that there is private use of some of these addresses, as we've seen, is a good indication, as I mentioned, that the addresses can be unreserved and are in fact usable, and that it's a reality that's achievable.
D: In other words, the attitude of implementers towards making the changes in advance of the official internet standard varies a lot. Some are quite happy to, and some are quite reluctant to. I think the main use case is numbering hosts in unicast, and we continue to see enormous demand for IPv4 addresses for numbering hosts in unicast. And I see a lot of people in the chat mentioning IPv6, which is certainly a very fundamental thing, right.
D: We've seen in research on IPv6 deployment that many people don't foresee getting rid of dual stack and getting rid of coexistence as something that will happen in the foreseeable future. And I say that not to discourage or criticize IPv6 adoption, but to say that IPv4 is still the majority of the internet, and it's still something that enormous numbers of end users continue to use and continue to view as a requirement.
A: Jim is probably having issues with his mic. Maybe we can move to Ted for now.
G: Yeah, so let's see if this works. So the issue here, I think, is that you've pretty clearly explained how to solve this problem, and that's great, and I think your explanation is fine. But what you haven't explained is why to solve this problem, and what I mean by that is two things. One, as an example: you've mentioned that the 240/whatever-it-is is being used in private addressing, and so there's a policy question there.
G: If we were to make it easy to use the 240/4 address space, what would the allocation strategy be for that? Would it remain private use? Would it be public use? We don't have an answer to that. Secondly, there's the question of whether this is a good idea, which is actually independent of whether the IETF should be promoting it. So essentially you're asking the IETF to kind of rubber-stamp...
G: ...this idea of prolonging the life of IPv4 through these mechanisms. And prolonging the life of IPv4 through these mechanisms might well be a good thing to do, but that doesn't mean that it's consistent with the IETF's mission. So the question is not "should you do it", because clearly, as you said, people are doing it; the question is "should the IETF do it", and that's what I don't think you've answered here. You haven't really made a case for why the IETF should do this.
G: What you've made a case for is why users of IPv4, in certain circumstances, should do this. That's not to say that you're wrong. I'm just saying that if you wanted to proceed with this, you would really need to make that case. I don't think that this stands alone without you making that case.
D: So, to your observation that this might be a good thing but might not be a good thing for the IETF: that may be so, but if so, there's no other entity that has the IETF's role and stature to make recommendations that would be widely or nearly universally followed with regard to the maintenance of protocols like IPv4, and that is a reason that we think that the IETF's statement on this is important. We've talked to implementers, and we've said to implementers, "this is a good idea". Some implementers have said...
A: Yeah, time is very, very low, so I think we get the discussion, and definitely this is something that we will need to take to the list. So, if you don't mind, Seth, let's just hear Bob's point and maybe one quick reply from Ted, but please take this discussion to the list.
E: Hi, should I go ahead? (Yes, go ahead.) Well, sorry, yeah, just a couple of quick things. I manage a couple of IPv4 networks, and operationally, for me, it would just create a mess if I tried to do any of this, because it's a mixture of hosts and routers from all different vendors, some of which are well known and some of which aren't. And if I start using these addresses and anything stops working...
E: ...I have sort of an operational support nightmare. I just don't see it; this would only be cost for me. And I have no shortage of addresses on any of these networks, because I use private v4 addresses, so there's no address shortage. So I don't really understand who this benefits; there clearly isn't... You make a claim that this generates a lot of money. Well, I don't see there's any money here.
G: Yeah, just very quickly, I just wanted to reiterate that essentially what Seth is asking for is for the IETF to express a policy, and the reason he's asking the IETF to do this is because, as he said, the IETF is a very influential body in expressing policies of this type. And that's exactly why the IETF needs to decide whether the IETF wants to promote this policy.
H: Hope you can see this. Thanks. An update on the internet addressing problem statement and gap analysis drafts. We had two slots; I'm combining them into a single presentation and splitting the presentation into two parts. First I'll give a recap and updates on the drafts themselves, and then feedback from the related side meeting, which took place yesterday after the IETF sessions; I know that quite a number of you attended the side meeting. So, on the problem statement.
H: Just to recap, we have a number of example scenarios where internet addressing may pose a potential issue, a hindrance, for internet service provisioning. We have a number of example categories, such as constrained devices, dynamically changing topologies, traffic steering, and others that you can find in the problem statement draft. We then also identify the issues that internet addressing may exhibit in those scenarios, with respect to efficiency, effectiveness, complexity, and others.
H: The gap analysis first investigates the properties of internet addressing, such as the fixed address length, the ambiguous address semantics, and the limited semantic support, and then describes and investigates a number of extensions that patch those addressing properties and thereby address some of the challenges that we outlined in the problem statement draft. The extensions themselves we position in our draft as explicit proof of potential gaps that have been identified by the community with respect to the identified properties of internet addressing.
H: We also identified that the gaps filled by the extensions are filled according to various methodologies, which we describe in the gap analysis draft.
H: We then investigate the gaps that are left by the extensions and some of the issues that are introduced by the extensions themselves, again with respect to complexity, efficiency, and extensibility, but also the increased complexity and fragility in scenarios where multiple extensions may coexist. That's presented in the last section of the gap analysis draft.
H: So what are the updates we made to those two drafts? The problem statement is now at version 02. We changed the scenario descriptions so that they focus more on the problems of internet addressing, and some wording changes were introduced. We also simplified the problem statement section to be more explicit and to more clearly conclude towards the gap analysis that is done in the second document. So these are mainly editorial changes that we introduced in that draft.
H: In the gap analysis itself, we updated the mapping of extensions to properties, as we also mentioned at the last IETF meeting, and also simplified the conclusion section to make it more straightforward and clear, again hopefully increasing the understandability of the draft. That is the version 01 that we uploaded.
H: The key part that we worked on, however, was to get feedback from the community, and we organized the side meeting yesterday evening, our time, with the purpose of jumpstarting a wider discussion that can be carried over to the mailing list. We've seen, and there was an explanation yesterday that Luigi gave in the introduction, the problem of running the IETF online that we see in some of the discussions:
H: a decrease in the actual engagement and discussion that happens purely over the mailing list, given that the direct social contact during IETF meetings is missing. So we tried to really run this almost as an experiment, if you will, to discuss drafts beyond the actual list and to try to jumpstart the discussion with insights from panelists and the community at large that we invited to join the side meeting.
H: We think that the experiment was, to quite an extent, successful. We had a huge amount of exchange: 142 messages in the chat were counted, which can probably give us weeks or even months of email exchange that we can generate out of this, which is our intention. We're still going through the messages so that we can deflect the discussion back onto the list.
H: It may not scale for every -00 draft, so I'm not entirely sure it's a model you can copy for every new piece of work you want to propose, but we tried, and we believe it was a very good attempt. A bit on the data: as I said, it happened yesterday at 1800 UTC.
H: We had a maximum attendance of 61 on Webex. We also had a YouTube live stream (you can also watch the recording now) with three live stream viewers, so 64 in total. We invited a number of panelists: we contacted Dino, Robert, Michael, and Dirk. Lauren Basilius unfortunately couldn't make it.
H: I removed him from the list, but we had exchanges with him as well on the various aspects, to bring their perspectives into the discussion. After a 50-minute introduction, we had an open discussion planned for 45 minutes. We ran over by 20 minutes; we kept the discussion going because we felt it was a very good discussion, and we didn't want to take the steam out of it too early. So we had more than an hour of open discussion in total.
H: The material is uploaded to GitHub. From some of the panelists we had material that we were showing, which is available, and also our chat and participant list. All of that you can get at the GitHub under that link, and the actual video, as shown here, on YouTube, if you want to watch the side meeting.
H: So what were the key insights that we found from this side meeting? We had lots of discussion and viewpoints, so the topic seems to be of interest. The attendance number is not bad: it's slightly more than half of the people currently in this working group meeting, so that's a good number. Our intention, and the takeaway, is to funnel that discussion into the way forward, so utilize the material...
H: ...the discussions behind the discussion points, the chat evidence, and the statements that we took as notes, and create discussion threads from this on the actual mailing list. So it served that purpose. As I said, it also showed that the topic itself seems to be of interest, given the rather lively discussion we had. The very key point that was made very early on, and there was a lot of discussion around this key point, is that there seems to be a larger architectural discussion looming.
H: This is maybe not just about addressing; revisiting addressing may just be one outcome of that. We will continue to drive the addressing discussions with the current drafts, but we will also attempt to capture these larger architectural points in possible future material. That was a very key observation we made from the discussions, starting from the point that was initially made by Dirk Kutscher in his material.
H: It was, to an extent, quite daring but quite interesting that even the OSI model was brought up, as well as variable-length addressing, which seemed to have had concepts that made a lot of sense in preventing some of the issues that we have observed in our drafts, and that people also generally see today in internet addressing. I think for us the questions, which could feed very well into the gap analysis, really are:
H: What are these past concepts that we may want to look at again? Can we tease them out? What are their impacts, or what could their impacts be if we were to think about some of these past concepts in this new context of revisiting internet addressing? Ultimately, we believe it can enrich the gap analysis draft beyond the currently listed extensions to IP. So that's something we took away as a key insight as well. Another thread, quite clearly: the aim is not the replacement of IP.
H: That's not the aim; we believe we state that in the problem statement draft already. Rather, it is to evolve IP in the light of existing deployment, so this is not about ripping gear out, throwing it away, and putting new gear in. There was quite a clear discussion around this particular point in the chat as well. That's the fourth key insight that we took away from this.
H: As I mentioned, the chat exchange was quite intensive, which was good, and that was part of the community engagement. We tried to group the exchanges we saw a little bit this morning when going through them in preparation for these talks; these slides, as you can imagine, really came together afterwards, based on the meeting material. On addresses and identifiers:
H: just putting two statements up here: one around apps having less to do with the network and having less knowledge about addresses, from Dino, and the idea that URLs help you to find services — what you want, not where to get it — so you have to redefine what "where" really means. That was a common denominator, and there were quite a number of other statements around addresses and identifiers, which was very rich. There was also quite a bit of discussion on privacy.
H: We put these two against each other; they were literally connected in the chat: the ephemeral nature of EIDs, with Kutscher's response to the extent that you may not actually require them, as well as Robert's point that, while security may be concerned with nobody taking control of a UA, you can't actually really hide that the UA is in the sky over you. So these kinds of privacy aspects were discussed in the chat exchanges.
H: These are only two examples. Security was another one, with Alicia making the point quite clearly that one of the number-one challenges in the internet today is probably security rather than speed, right, apart from the fact that, unfortunately, packets flow to the six big boys, as Dino pointed out. So we are connecting from the security aspect back, to an extent, to the privacy aspect.
H: Again, more material can be found in the message log on the security aspect. There was also another thread of discussion around future use cases, particularly what features we really would want from the network, as Dino pointed out, and Kutscher's point on CDNs and hyperscalers as relevant use cases that we maybe could serve better than we currently do.
H: Overall, as I said, the chat messages are available; these are just pieces we pulled out this morning after a first skim from our side. It's all there to view in the GitHub, in all the detail that you can have from the chat messages. We will go through them more thoroughly after this working group meeting to deflect more discussion — not necessarily only on those four threads I already mentioned, but maybe also on others we identify — onto the INT area list in relation to the addressing discussion.
H: So what's the takeaway we got from all of this? Well, the volume of discussion was very positive: lots of chat messages and a very lively discussion. And when we happily ran over, people didn't drop off; I think they took the overrunning in the stride of the discussion, and we continued for about 20 minutes.
H: We found there's enough content of discussion to create follow-on threads on the lists, so that was one purpose of the meeting we had. We will go through the meeting material to create those threads and get them onto the list in order to continue the discussion. We also identified a rich set of contributors having views on addressing and the larger issues, particularly the potential architectural issues, and we're looking into adding contributors as co-authors.
H: So I think a number of people should expect emails and outreach to them, to hopefully help us with the discussion as we go forward, and then to reflect the community input at large in the revised drafts after this IETF. We may also look into adding statements similar to the ones I mentioned before as contributions to the revised drafts, in order to seed material into the actual drafts.
H: Most importantly, contributors are very welcome to join this effort. So please, if you found the discussion interesting, or if you looked through the recording because you couldn't make it, and you feel that you have something to contribute, please reach out to us. We're very happy to increase the set of contributors to this discussion in order to push this forward.
H: Just to reflect on the Hot RFC: you may have seen the video that my colleague made on internet addressing. Is it worth thinking about? Our takeaway from the side meeting is: yes, absolutely, there seems to be interest in doing that, so we take that as a positive to move forward. If you have any questions and comments, I'm happy to take them. Thank you.
A: Right, do we have any questions, especially on the drafts, or any discussion?
A: Otherwise, maybe we can take the feedback to the mailing list, to make it part of the INT area, and then move to the next presentation.
A: Okay, so next in line is Haoyu.
I: Okay, so hi. Thank you for joining this session. I'm going to talk about the short hierarchical IP address at edge networks. This work is based on a paper we published last year, but we tailored it for the edge network and also for interoperation with IPv6.
I: So here's the key motivation for this work. For most of the edge networks, we're talking about IoT networks, and we find that when IoT entities communicate, they are very sensitive to overhead and energy. This is mainly because they involve short message exchanges, and many devices are battery powered, use wireless channels, and have very low storage and computing power.
I: On the other hand, the IPv6 overhead is very large, mostly due to the address part. Since all these IoT devices in the same edge network pretty much share the same IPv6 prefix, most communication could happen just between adjacent and related entities.
I: So that's why, if we observe the addresses they are using: the complete IPv6 address actually contains several parts. The first part is a common IPv6 subnet prefix that is shared, and below it is what we consider the entity ID. But if we can further partition the network into multiple hierarchical levels, then we can see that the edge network address actually contains multiple sections, and only the last part can be considered the entity ID.
I: This way we can save a lot of communication overhead. The left side shows the proposal, concerning only the addressing part of the packet: instead of using the full complete IPv6 addresses, we just use a variable-length source address and destination address. To support that, we added two extra fields, the source address length and the destination address length, to indicate the length of the addresses. As for the network, we have multiple levels of network within the subnet, and there are some special routers.
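[Editor's illustration, not part of the talk: a minimal sketch of what such a variable-length address header could look like on the wire. The one-byte length fields and the encoding are my own assumptions for illustration, not the format from the paper.]

```python
import struct

def pack_ship_addresses(src: bytes, dst: bytes) -> bytes:
    """Prefix variable-length source/destination addresses with
    one-byte length fields (assumed encoding, for illustration)."""
    assert len(src) <= 255 and len(dst) <= 255
    return struct.pack("BB", len(src), len(dst)) + src + dst

def unpack_ship_addresses(buf: bytes):
    """Recover the two addresses using the leading length fields."""
    src_len, dst_len = struct.unpack_from("BB", buf)
    src = buf[2:2 + src_len]
    dst = buf[2 + src_len:2 + src_len + dst_len]
    return src, dst

# A 1-byte source (node ID within its own level) and a 3-byte
# destination (prefix + node ID) instead of two 16-byte IPv6 addresses.
wire = pack_ship_addresses(b"\x0a", b"\xaa\x01\x0b")
src, dst = unpack_ship_addresses(wire)
print(len(wire))  # 6 bytes of addressing instead of 32
```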
I: We call them level gateway routers (LGRs); they sit at each level boundary for the ingress and egress communication. In each level network we also have other, normal intra-level routers, called ILRs, which do all the routing within that level network.
I: So here is a more tangible example. This subnetwork is just an IPv6 subnet which is given a 96-bit prefix, which means this subnetwork owns a 32-bit address space.
I: Within it we can directly allocate end-node entities, which would use a 32-bit node ID, but we can also further partition this space into further subnetworks with a shorter network ID. For example, on the left side you can see we partition this network into two more levels: the first level has a 16-bit prefix and the next level has a further 8-bit prefix. On the right side we have another sub-level network, which uses a 24-bit prefix.
I: So within this network you can see, for example, that nodes X and Y are both located in the same level-two network.
I: If two nodes from different subnets want to communicate, some address operations are involved at the gateway routers. For example, at the bottom right: if node X wants to talk to a node in another network, we first compare the source and destination addresses, which have different lengths. The destination address is longer, which means that the packet should go up to the upper-level network, so it will be forwarded to the gateway router B first.
I: At this gateway router, the router will augment the source address with the prefix stored in it, which is AAA in this case, and now the source and destination addresses are of the same length, which means that they are in the same level of network. The packet will be forwarded at this level, and eventually it will reach the gateway router of the other network, LGR E.
I: At this point, the other thing done by the gateway router is to prune the prefix off the destination address, because it's no longer needed. After the destination address pruning, the packet will enter this network and eventually be forwarded to its destination.
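[Editor's illustration, not part of the talk: the augment-and-prune steps just described can be sketched as follows. This is a simplified model using byte-string addresses and assumed per-gateway prefixes; the actual scheme operates on bit-level prefixes.]

```python
def augment(addr: bytes, gateway_prefix: bytes) -> bytes:
    """Egress LGR: prepend this level's prefix so the packet can
    travel in the upper-level network (e.g. 'AAA' in the example)."""
    return gateway_prefix + addr

def prune(addr: bytes, gateway_prefix: bytes) -> bytes:
    """Ingress LGR: strip the prefix once the packet enters the
    destination's level, where it is no longer needed."""
    assert addr.startswith(gateway_prefix)
    return addr[len(gateway_prefix):]

src, dst = b"\x0a", b"\xbb\x0b"   # X's short address; longer destination
# Destination is longer than source, so the packet goes up one level:
src = augment(src, b"\xaa")       # now both addresses are 2 bytes long
assert len(src) == len(dst)       # same level -> forward at this level
# At the far gateway (the LGR of the destination network):
dst = prune(dst, b"\xbb")         # back to the 1-byte node ID
print(src.hex(), dst.hex())
```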
I: The operation at the internet-facing gateway router is also very similar. If the packet needs to enter the internet, we attach the source prefix; if the packet comes in and goes into the edge network, we prune the destination prefix. Another mode that we support is a NAT gateway at this point, so we can allocate one or more public IP addresses to each edge network.
I: So, the benefits of the scheme: first, it is totally interoperable with the current internet, and it has significant header overhead savings, from 60 up to 70 percent. Building on this, we can further compress other IPv6 header fields; by doing that, we can save even more. And it actually simplifies both the control plane and the data plane, because we enforce a strict hierarchical network architecture at the edge.
I: So we can have better address aggregation and a simpler router design with smaller forwarding tables. We have done a P4-based prototype and evaluation, and it shows the scheme is very promising. Another key benefit is that it's incrementally deployable, because it's totally transparent to the external internet: you can basically implement this in each edge network individually, and it can still communicate with any entities on the internet.
I: Another key issue: we want to compare this with some other header compression schemes proposed by the 6LoWPAN and LPWAN working groups. We call our scheme SHIP. SHIP is hierarchical and extends from edge to core.
I: Also, our scheme is applicable to all kinds of networks, whereas the other proposals may be limited to some specific network environments, and SHIP is applicable to arbitrary network topologies.
I: Unlike the other header compression schemes, which may be applicable only on point-to-point channels and which need to decompress packets before routing them (because otherwise routers won't recognize what they are), our scheme only concerns IP addresses. It is therefore orthogonal to the other compression schemes.
I: If you really want to use those schemes, you can still apply them, just with some new benefits introduced by the shorter addresses. Also, SHIP is stateless, in the sense that it doesn't need to maintain any dynamic or static context between the peers for communication.
I: Also, we allow communication between any internet-addressable nodes, which means nodes within the networks can directly talk to each other; they don't need to go through some central point as a proxy. So this is pretty much what I have today, and we welcome collaboration and any future work suggestions. Also, we want suggestions for finding the best working group to adopt this work.
A: Thank you very much. Stuart?
J: That was what the title said, but it didn't actually have the comparisons; it was just a list of claims about SHIP. So I think, if you want to make a compelling case for this, you need to make that comparison chart that shows how this new idea compares with the thing that is already widely deployed and successful.
I
Yes, thank you for the suggestion. In the next revision I plan to add this comparison. Thank you.
A
Okay, thanks dave.
K
I put this into the chat, but in terms of your last bullet, which I think was something like "find the best place to adopt the work" or some phrasing like that, I just wanted to say that I think the best place for this discussion is in the 6lo working group, which does more than just the header compression scheme that you're talking about. They do it over any particular link type, and so there are multiple different link types there, not just 802.15.
K
That was on your slide, and so I think that's probably where the right expertise is, because that's the group that reviews, you know, compression, eliding fields, various formats, short addresses; all that kind of stuff is in scope for that. So my recommendation is that this proposal just gets dispatched over to the 6lo working group to evaluate.
G
I was wondering whether the author has looked at Thread, because I think they do something fairly similar in terms of the way they do routing, and it would be worthwhile to make sure that there isn't some, you know, reinvention of the wheel going on here. Of course, Thread is not an IETF protocol, so there's that whole issue, but at the very least I think it'd be worth investigating whether there is overlap there.
A
All right, stuart, do you have a question, or was that from the previous?
L
The traditional internet architecture lacks validation of a packet's source address. A sender can forge the source address when sending packets, which is known as source address spoofing. With source address spoofing, attackers can carry out various attacks, such as reflective DDoS, so source address validation (SAV) is necessary. MANRS, the Mutually Agreed Norms for Routing Security, is calling on network operators to implement SAV.
L
P1 is the source address prefix of router 3; P1' is the spoofed P1 from router 2; P1'' is the spoofed P1 from routers in AS3. For intra-AS source address spoofing from an internal router, router 1 and router 4 should drop the packets with P1' from router 2, while accepting the packets with P1 from router 3.
L
ACL-based SAV configures matching rules to specify which source prefixes are acceptable, but it requires manual configuration to update. Strict uRPF takes the source address as a destination address to look up the FIB, and requires that the forwarding interface in the FIB matches the incoming interface of the packet. For inter-AS SAV, EFP-uRPF is recommended to be deployed at customer interfaces.
L
It
maintains
a
rpf
list
at
each
customer
interface,
while
loose
urpf
is
recommended
to
be
deployed
at
provider
and
peer
interfaces.
It
only
requires
the
source
address
appears
in
the
field.
However,
existing
intro
and,
inter
es
save
mechanisms
have
inherent
false,
positive
or
false
negative
problems.
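The strict and loose uRPF checks described above can be sketched as follows. This is a minimal illustration, not router code; the toy FIB, prefixes, and interface names are made up for the example.

```python
import ipaddress

# Toy FIB: prefix -> forwarding interface (illustrative values only).
FIB = {
    ipaddress.ip_network("203.0.113.0/24"): "eth0",
    ipaddress.ip_network("198.51.100.0/24"): "eth1",
}

def fib_lookup(src_ip):
    """Longest-prefix match of the source address against the FIB."""
    addr = ipaddress.ip_address(src_ip)
    matches = [p for p in FIB if addr in p]
    return max(matches, key=lambda p: p.prefixlen, default=None)

def strict_urpf(src_ip, in_iface):
    # Accept only if the source prefix exists in the FIB AND the FIB's
    # forwarding interface equals the packet's incoming interface.
    prefix = fib_lookup(src_ip)
    return prefix is not None and FIB[prefix] == in_iface

def loose_urpf(src_ip, in_iface):
    # Accept if the source prefix merely exists somewhere in the FIB,
    # regardless of which interface the packet arrived on.
    return fib_lookup(src_ip) is not None
```

This makes the false-positive/false-negative trade-off concrete: strict uRPF drops legitimate packets on asymmetric paths, while loose uRPF accepts spoofed packets from any routable prefix.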
L
AS3 is a peer, and AS1 and AS2 are its customers. When AS4 runs EFP-uRPF at customer interfaces, the SAV rule is: packets with source addresses belonging to AS4's customer cone can arrive from every customer, so the ASes in AS4's customer cone can spoof each other. When AS4 runs loose uRPF at provider and peer interfaces, the rule is: packets with any source addresses existing in the FIB can arrive from every provider or peer.
L
An ideal SAV mechanism should guarantee accuracy, because false positives cause traffic disruption, while false negatives give attackers the freedom to forge source addresses. However, existing SAV mechanisms cannot guarantee accuracy: intra-AS mechanisms have false-positive problems and inter-AS mechanisms have false-negative problems.
L
The root cause of their inaccuracy is that they are achieved based on local FIB or RIB information, which may not match the real data forwarding path from other sources. In order to avoid false positives and reduce false negatives as much as possible, SAV should follow the real data forwarding path. To this end, a path probing method can be taken: the source router sends probing packets carrying source information, then each intermediate router can generate SAV rules based on the source information and the incoming interface.
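The path-probing idea above can be sketched as a small rule table built from probe observations. This is only an illustration of the concept; the function names, the (prefix, interface) message format, and the default-accept policy for unknown prefixes are assumptions, not taken from any draft.

```python
# Sketch: an intermediate router learns SAV rules from probe packets that
# carry the origin's source prefix, binding prefix -> incoming interface(s).

def build_sav_table(probes):
    """probes: iterable of (source_prefix, incoming_interface) observations
    taken from probing packets that followed the real forwarding path."""
    table = {}
    for prefix, iface in probes:
        table.setdefault(prefix, set()).add(iface)
    return table

def sav_check(table, src_prefix, in_iface):
    # Accept packets whose source prefix was probed over this interface.
    # Unknown prefixes fall back to a permissive default here, to avoid
    # false positives for sources that never sent a probe.
    if src_prefix not in table:
        return True
    return in_iface in table[src_prefix]
```

Because the rules are derived from packets that traversed the actual data path, a prefix arriving on the wrong interface (a spoof) fails the check, while legitimate asymmetric paths that were probed still pass.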
N
Working-group-wise, I think for this the opsec, operational security, working group is possibly the best one. I know that jen is among the participants of this meeting, so jen, as you are the co-chair of opsec, if I'm not mistaken, you may want to say a few words on this.
F
Hi, yeah. Oh no, wait a second, that's the wrong slides, oops.
F
All right, so a mouthful of a title. I wanted to present this; I wrote this draft on behalf of a design from a team of colleagues, because it is, you know, a really great evolution of BIER-TE, in my opinion. Why bring it to intarea? Well, I think what I wanted to explain here is what I think is a great example of a story on how more intelligent variable-length addressing helps to solve
F
You
know,
problems
that
we're
currently
working
on
in
a
better
fashion
and
yeah.
So
I
I
I
hope
it
area
is
not
for
unicast
only.
Obviously
the
target
group
for
this
would
be
beer,
but
I
wanted
to
you
know,
show
it
from
the
addressing
perspective.
So,
what's
wrong
with
brte?
That's
a
great
question
to
ask
when
you
have
a
draft
in
isg
review,
but
it
is.
F
You
know,
in
my
opinion,
the
best
multicast
forwarding
solution
for
the
constraints
it
was
defined
against,
and
that
is
that
we
wanted
to
add
path,
steering
to
the
beer
multicast
solution,
which
is
stateless,
and
we
wanted
to
keep
pretty
much
everything
in
the
forwarding
plane
that
we
defined
with
beer
with
brte
and
so
try
to
make
only
the
minimum
necessary
changes
in
the
forwarding
plane
to
enable
the
the
traffic
steering.
F
But this comes at, you know, quite a good amount of undesirable limitations and complexities, which I saw especially working through the BIER-TE draft. And that is that, by representing the forwarding through a flat, fixed-size bit string of, you know, pre-configurable length, you're getting yourself into a lot of limitations, and the number of bits is also split between the receiver nodes and the topology nodes, so to speak.
F
So, without going into a lot of details about BIER-TE that many people here may not know because they haven't looked at it, what we've pretty much done is, we have, you know, started with a fixed forwarding-plane design and ended up with a lot of additional controller and operational complexity and less traffic efficiency. But we got the forwarding-plane simplicity of what we started with in BIER. So now let's see what we can do better,
F
if we, you know, think that we can build a more flexible forwarding-plane functionality. And the goal, and that's kind of the first part of the name, is to really come up ultimately with a solution that supports multicast in, for example, service provider or industrial large-scale networks in a simpler fashion than current IP multicast, with all the good traffic engineering that we want, the traffic steering, and then of course there are correlated things, you know, QoS and others.
F
So it's a controller-based design, and I'm going to skip the details here. So here is basically the addressing structure that we're calling the recursive bit string structure, and what it really is is a representation of the desired delivery tree.
F
For every node you have a bit string; the bit string itself is just the sequence of adjacencies, meaning the neighbors that that particular router has, one bit for each, and the bit string in the packet has the bits set for the neighbors that packets need to be sent to. And then, for each of the bits that are going to an adjacent router, there is another recursive unit that starts with a bit string and then again, of course, continues with recursive units
F
for that neighbor's neighbors, and so on. So the fields that we need for that are, obviously, some starting fields that tell the total length of the address structure, and then the length of all the recursive units after the bit string; and of course each recursive unit itself has again the same structural elements.
F
So here, without trying to walk through it in full and glorious detail, is the example that is also in the draft. It shows how you're starting out with a packet originated by client one, which has such a recursive bit string address, and when you then see what happens on router B, when it's forwarded to router R, you already see that you're forwarding only the extracted part of the recursive bit string address, which is the recursive bit string address for B, and B of course does the same thing,
F
sorry, for R, and R does the same thing, forwarding to S and E. So the further the packet progresses through the delivery tree, the smaller the address becomes. So what type of simplification and performance enhancement do we get through all of this? First of all, we forgo the whole forwarding aspect that we needed for loop prevention, because by the address becoming shorter and shorter we have an equivalent of clearing bits to avoid those loops.
F
There is no need to split up the whole topology into subsets of the topology to fit all the adjacencies and end nodes into a fixed-size bit string of N bits, where, you know, the whole topology would maybe need 20 times N. And that also means for BIER-TE that the need to optimize and minimize the number of bits to represent the topology goes away. So there are a lot of things like, you know, LAN bits and point-to-point bits.
F
Those are all operational semantics that an operator or a controller could invent to optimize this, and this all has to be documented; it is documented in BIER-TE. And so, especially for sparse distribution, for trees with a limited number of receivers, this is very easy: you can always create just a single packet to deliver to any subset of receivers, which is never possible in, you know, a large-scale BIER or BIER-TE network.
F
Maybe I'll have more time in the BIER working group, but it's the same type of representation that we have done as a type of, you know, informal normative description of how the forwarding would work. But the forwarding-plane complexity here, and that's obviously now the interesting part to compare for all the benefits that we get: the basic bit string replication is exactly the same as BIER-TE, with just, you know, a much simpler subset of the different types of adjacencies required.
F
So the main added work is really that, for each of the replicating adjacencies, you need to have the calculations to find the offset and the length of the recursive unit in the address, extract that, and make that become the new address.
F
So, of course, what needs to be done next is a stochastic analysis and comparison of the efficiencies, the number of copies, header size and so on, of this solution compared to BIER and BIER-TE; there's obviously wide space to explore based on the interesting use cases. The draft, like the original BIER and BIER-TE architecture, doesn't discuss packet encoding. It could equally use the existing packet encoding, which obviously would be a waste because it's a fixed length, and then that needs to be indicated, but otherwise it would be perfectly fine.
K
Just to check my understanding, I think I followed your presentation. Please confirm that your primary use case is where the addresses that you're talking about would be constructed by a router and used to encapsulate multicast addresses over the top, because the question I was going to ask is: how do you learn what address you should send to? And I'm guessing that's because you're using routing protocols to have all the knowledge necessary to construct that. Did I follow it right, or am I off?
F
So that's why a simplification would be to extend BIER all the way to the hosts, which isn't architecturally a problem, but which is a problem of: is that something you can scale today? And with this solution, I think it's a lot easier to scale this for applications than it is with the existing BIER-TE.
A
Thanks. Are there any other questions?
H
There are many non-IP networks nowadays, because IP is heavy for energy-sensitive devices. In these networks, they actually need application-layer gateways to translate their connectivity into IP and then connect to the servers in the IP networks, and also they cannot communicate between the different non-IP technologies,
H
Even
they
are
in
the
same
80
network,
and
there
are
three
main
issues
for
this:
it's
not
supports
the
end-to-end
security,
ip
layer,
security
also,
the
tos
and
those
non-ip
terminals
are
invisible
to
the
to
the
pip
networks
and
the
servers
cannot
see
them
either.
The
servers
only
see
the
delegate
from
the
gateway
and
the
dynamic
drawing
and
leaving
for
those
devices
to
net
ip
network
is
complicated.
H
Okay, so let me continue; I'm at the bottom of this page. IPv6 is not suitable because of the long addresses and the header, forty bytes in total, not including the option and extension headers, which consume more energy and time both on terminals and in network transmission. 6lo and 6LoWPAN have actually done a great job compressing the header, including the address, to save energy for network transmission, but it actually makes the terminals' burden even heavier.
H
Okay, I'll continue. So what is this new proposal, the native minimal protocol? It's a very simple protocol with relatively short addresses that we particularly designed for edge networks with resource-constrained devices.
H
This is the data packet header. It is designed with a bitmap mechanism: for now we have the six most frequently used fields encoded in one bitmap byte, and more fields can be supported by extending the bitmap. For now we have only destination, source, next header, payload length, checksum, and a DNS indicator. This is an example.
H
It's
typically,
the
header
has
only
five
bytes
and
it
can
be
even
shorter
to
four
bytes
without
the
source
address,
because
the
gateway
already
know
who
sends
this
this
package
to
it-
and
this
is
the
address
management
functions,
design
all
nodes
in
the
same
network
use
the
same
addresses
8,
bytes,
18,
16
bytes
is
configured
on
the
gateway
and-
and
we
have
the
address
allocation
through
the
address
request
and
address
assignment
message.
H
This is, for now, the only service we provide, because most of the terminals may request DNS. How do we do it in the minimal protocol? The NMP terminals send a DNS request packet to the gateway; when the gateway receives the packet, it directly translates the network-layer information of the DNS request and sends a regular DNS request packet to the DNS server, which was configured earlier on the gateway, and then gets the DNS information back.
H
Some considerations for security: we actually have a shorter checksum, cut from 16 bits to 8 bits; it costs fewer bits and less computation.
H
And IANA considerations: we need a new EtherType, and we need registration for the NMP control message types and the bitmap table. Okay, that's it. This is the first time we show this, and we think it's very useful. So I would like to hear responses: how to improve it, whether this is useful, and whether we need to make it even simpler, which we think is almost impossible.
K
Sure, I'll just make the same comment that I made on a previous presentation here, which is that I think this one is best reviewed in the 6lo working group. For example, I think the claim on slide two or so, that 6lo makes the terminals' burden even heavier, is something that they should address, whether that's correct or not. I don't know, but if it is correct, that's something the 6lo working group should take as feedback. But either way,
K
the people who build those are in 6lo, so that's where I would review this work, and I encourage you to go and present this there; I think it'd be a great discussion. So thank you. All right.
A
Great, thank you, shane. So we are going to move to roger.
O
Okay, thanks, juan carlos. So my name is roger marks, and I want to talk about an IEEE 802 standardization project called 802.1CQ, on address assignment. This has been discussed in the IETF-IEEE 802 coordination activity, and recently I was asked to bring this presentation to update intarea.
O
So this is a project that's been around for a few years, but it got a slow start, and now it's starting to move more quickly, with some drafts. It's about multicast and local address assignment of IEEE 802 addresses, or what you might call MAC addresses. It's done in the 802.1 working group, in the TSN (time-sensitive networking) task group. The project charter for this activity specifies that it's on local addresses and that it should support both peer-to-peer address claiming and address server capabilities.
O
And the intention is to consider the fact that global addresses could eventually be exhausted in the 802 space, and that there should be a way to promote the use of the local address space and provide locally unique addresses.
O
An additional burden placed on this project is to deal with multicast addresses, and there it's a little different from what people usually expect, because many people are familiar with the idea that a multicast address is assigned to a protocol, but in this project the multicast addresses need to be assigned to end stations.
O
This is driven partly by the idea that in some TSN networks, streams are addressed to multicast addresses, and those are assigned by the sender, or the talker. So each stream uses a different set of addresses, and its destinations are just decided by the sender, and it needs a pool of multicast addresses that it can use for that purpose. Currently there's an IEEE standard, 1722, that provides a peer-to-peer way for devices to get these multicast addresses, but this would provide new capability.
O
So it's important to understand that half of the 802 addresses are global, and those are by rule required to be unique among all devices. The intention is to have that uniqueness last over a period of 100 years, and we're well into the hundred years and there are still addresses left, but we are worried about them being exhausted, and these are generally burned in by the factory.
O
And so these are flat addresses; their only real purpose, and the only content in the address, is its uniqueness, along with an indication of whether it's global or local and whether it's unicast or multicast. But half of the 802 addresses are local, and it's possible to assign those dynamically.
O
And there are many, many addresses here, because the sizes of the global space and the local space are equal. So you can assign these addresses very liberally, but also, because you're doing it dynamically, you can assign them thoughtfully to have addressing power, where there's content in the address rather than just uniqueness. So what happens in 802.1CQ is called the Block Address Registration and Claiming protocol, BARC, and BARC assigns MAC addresses in blocks, where an address block is a set of local addresses that's equally unicast and multicast.
O
Each is a contiguous sub-block, and these are all distinct, disjoint address blocks, and there are two types. There can be registrable address blocks that are handed out by a registrar, which is a kind of server that holds inventories of address blocks. And then there are also claimable addresses, or CAs, that are in claimable address blocks, or CABs, that are each identified by a CABA, which is a claimable address block address, and the CABA is a MAC address.
O
It's a multicast address that's used as an identifier, and there are a number of temporary unicast addresses that are used for initial discovery by a claimant that doesn't have a burned-in unicast address from the factory, say. So this figure tries to show a little sketch of these addresses. This is showing 12 nibbles, which is an 802 48-bit MAC address, and up in the header you can see there's this bit M, which is the well-known multicast bit, one or zero depending on whether it's multicast or unicast. And then, in purple,
O
the second bit here in the first nibble is the indicator of a local address. So in BARC you always use one, one, one for these three bits, and the M is unicast or multicast. But we also structure in BARC the first part of the first nibble, and I want to call your attention to these two green bits, J and K, because those are the ones that tell you the size of the address block, so it tells you how many addresses you get in your address block.
O
The block has 16 to the JK claimable addresses, or 16 to the JK registrable addresses, depending upon which one you use. So here's an example of what the claimable address blocks look like, and it shows the sizes 0, 1, 2 and 3. The size-zero block has this header at the top in the first three nibbles, and then it leaves you nine more nibbles to assign an address, and because there's an asterisk here,
O
that means you have a unicast address and a multicast address, so your block is one unicast and one multicast address. But if you go to the larger sizes, you can have, for example, three nibbles' worth of address space, and so now you're giving out an address block that has 4096 unicast addresses and an equal number of multicast addresses in this CAB, that's the address block; and then this CABA is the identifier of the address block and is a multicast address itself.
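The "16 to the JK" block-size arithmetic just described can be sketched as follows. The interpretation of J and K as a two-bit size index s = 0..3, with 16**s unicast plus 16**s multicast addresses per block, is inferred from the talk; the standard's exact layout may differ.

```python
# Sketch of the BARC block-size arithmetic: the two green bits J and K
# form a size index, and a block of size s spans s free nibbles, i.e.
# 16**s unicast addresses plus an equal number of multicast addresses.

def block_size(j, k):
    """j, k: the two size bits (0 or 1) from the address header."""
    s = (j << 1) | k            # JK bits -> size index 0..3
    unicast = 16 ** s           # one address per combination of s nibbles
    multicast = unicast         # the block is equally unicast/multicast
    return unicast, multicast

# e.g. size 3 (J=1, K=1) -> 4096 unicast and 4096 multicast addresses
```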
O
And so, the way you do claiming: here's an example where you have five devices that each hold a CABA address block, one, two, three, four, five, and here's a new one that wants to claim a block. So it picks CABA one, and it sends a multicast message addressed to the multicast address CABA one.
O
Well, it turns out that, because it's a multicast address, the only device that's listening for that address is the one that holds the same claim, and so four of these devices don't even listen to the message. But the one that holds exactly that same block that's being claimed hears the message, and it responds with a unicast message to the claimant, saying no, you can't have that block, I've already got it. So it tries again with a different address, six.
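The claim exchange above can be sketched as a toy simulation: the claimant multicasts to a candidate CABA, only a holder of exactly that block is listening and objects by unicast, and silence means the claim succeeds. This is purely illustrative; the real protocol's message names, retries, and timers are not modeled.

```python
# Toy model of BARC claiming. holders is the set of CABA ids already
# claimed somewhere on the LAN; only a holder of the claimed CABA "hears"
# the multicast and NACKs.

def claim(candidates, holders):
    """Try candidate CABAs in order. Returns (claimed_caba, attempts);
    claimed_caba is None if every candidate was already taken."""
    attempts = 0
    for caba in candidates:
        attempts += 1
        if caba in holders:      # holder hears its own CABA, replies no
            continue             # claim rejected, try another block
        holders.add(caba)        # nobody objected: the claim succeeds
        return caba, attempts
    return None, attempts
```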
O
This time nobody responds, because nobody hears that message, and then it decides to claim CABA six. Also, the registration process is such that the claimant doesn't have to be aware that there's a registrar available; when it begins the claim, it sends this claim message, but if there's a registrar, it can respond and provide an address.
O
So in this case the claimant sends a discover message, and let's say there are two registrars out there that each respond with an offer to the claimant by unicast. Then the claimant picks the one that it likes, sends a unicast message to that registrar, and it reserves that address. One of the things that you can do with this kind of structure, because it's in blocks, is a kind of hierarchical structure, and there are some techniques that are available within the standard.
O
This structure is my example; it's not standardized, but it's to show that you can have a kind of semantic structure, so an address can have a field that tells you how the address is formatted, and then fields within it.
O
So what you end up with is a general address assignment method that can eliminate the need for global addresses and reduce their consumption. It still maintains uniqueness within the LAN. It's completely backward compatible with the existing 802 addressing and bridging, because for a device that just sees it as a flat address, it works the same as a global address.
O
It could be used to address privacy concerns, because the addresses are dynamically assigned and they don't persist. And you can use some semantic structuring: because you can grant a device a whole block of addresses, it can use the free fields to do things like flow identification, or to identify streams, within the core address block that it's been given.
O
You could scale this up and use it in hyperscale-class networks, and it's an alternative to completely random assignment. For example, in the wireless case, you have a dynamic assignment possibility that can give you address privacy, and meanwhile we're protecting in the protocol against address duplication, and you can code the address blocks. It's also possible to use this to consider how you do 48-bit bridged LANs carrying 64-bit addresses.
O
What needs to be explored is the implications on IP, and you could think about ARP and how ARP applies in this case. And the intention is for the next draft of this standard to be circulated to the int area; hopefully that will happen, maybe within the next month or two, and there'll be a new draft.
N
Thank you for coming. I'm quite happy to see that IEEE engineers are coming to present their work in intarea; it's very useful to get this collaboration between IETF and IEEE. So thank you for coming.
F
Thanks, eric. Yeah, I mean, this is interesting, thank you. We obviously have a lot of experience with things like that from multicast at layer 3 as well. So I think one of the big issues with this is how, you know, you ensure that the applications are going to deal with the unexpected loss of an assigned block. If you had a network partitioning, and then, you know, the partitions merge, you have duplicate allocation of the same block, which you would need to signal up to the applications.
F
So, you know, you need a decision mechanism for which of the two allocators loses the block. And so, yeah, I very much appreciate these fully distributed algorithms, but be sure that you really walk through all the problems.
F
Yeah, but again, I think the problem is, you know, understanding the implications going up to the application level. I think that's what you always need to worry about.
F
Will you be able to have the enforcement, the verification, the validation tools, so that applications really behave benignly under this circumstance? Because for something that is rare, especially if you go into industrial or any other type of, you know, highly resilient application environments, you need to think about test suites or other mechanisms.
A
All right, so thank you very much, roger. Then I guess we expect to see some feedback from the IEEE and get the draft circulated in intarea.
O
Yeah, hopefully within a month or two we'll have it, and we're already preparing a draft statement to pass it over.
A
Perfect, thank you very much. So for those interested, I guess we will see a link on the mailing list about this. So we're at the end, and thanks everyone for attending online. I hope you have a good end of day and good follow-up meetings. So goodbye.