From YouTube: IETF112-BIER-20211111-1430
Description
BIER meeting session at IETF112
2021/11/11 1430
https://datatracker.ietf.org/meeting/112/proceedings/
A: Alvaro, I just replied to your email. We could talk about the list again in the room today if you want, but I think it's pretty much all there.
A: Okay, Tony's back. Still looking for Hooman; he's our first presenter after our status. Hi, hi Greg, can you hear me? Oh yeah, I hear you. Excellent, thank you very much. All right, Tony, do you see any other reason why we shouldn't get rolling? Hi everyone; all right, everybody.
What I've got is a question on process, actually. If you can give me some feedback, Alvaro, on taking notes: I've got the notepad thing up; is the intent that people put their own stuff in there?
C: No. So, ideally we would do the minutes the same way as before, right? You would have a minute taker. The advantage of the shared minutes, or the shared place, is that then I can go there, for example, and add the question that I asked, or clarify something, or spell my name correctly, or whatever it is. So there's the opportunity for other people to help, but ideally there should be some responsibility for that.
D: Something to throw in that I saw with the other working group: they actually also take the chat and paste it after the minutes into this notepad.
E: You guys are the toughest. No one here wants to be famous, people? No.
A: Oh man, all right, I'll take the minutes... can you do it? All right, thank you, Tony. At least now it's up in that doc; now we'll know where they are. Excellent. All right, so welcome to BIER; welcome to, well, whatever time zone you're in, and pretend that you're not there.
A: If you're in the wrong room, stick around, we have fun here. The Note Well: note taken, nods in the room. I wish I could see you all. I assume you've read this; fair assumption, thanks. The minutes pad is collaborative, so that's the URL right there. I've got it up in a tab, and I recommend everyone do the same, so that when you have questions you can either put them in there or make sure that they were entered correctly.
A: If you didn't enter them yourself, you know, as Alvaro mentioned: correct names, spelling, what have you. But that's a great way to ensure we have everything in one place from everyone who contributed, so take that down, bring it up, open a tab, and get ready to have some fun. What's the status with Jabber, then, do we have...? I remember seeing yesterday there was some sort of Jabber button somewhere here, I thought. Okay.
C: I was just saying there's also a new tool that the IETF is using; that is also, at the very end... I guess it's the new IM thing. If you guys know where the Jabber logs are: all of this goes into the log as well.
A: I don't think that sunk in, but it's worth a try. And here's our agenda. We made a lot of progress last meeting, in terms of: we have a bunch of stuff with shepherds, we have a bunch of stuff that needs an adoption call, and we had some drafts actually make some progress.
A: To be completely frank, it's been summer, and I've been gone, and I've been as offline as I possibly can. So some things progressed, some things didn't. Now it's cold and wet and I'm stuck inside, and I think we'll be making some progress after this. All right, anything missing from this agenda?
C: Hey, thanks. I just wanted to say I don't think there's anything missing in the agenda, but what caught my attention, and this is really for the whole working group, not just for you, Greg, or Tony: we're not going to have time, right? We have 40 minutes of "if time permits", unless I shut up right now and everyone goes very fast.
C: You know, it seems that maybe we can have an interim at some point.
C: So what I want to say is that it's up to all of us, the authors and the chairs, and the people who are not authors but who can review documents, to make these things progress. Because there's no way that in 10 minutes we're going to be able to get an update, dig into the issues, and resolve them here; it's worse than a normal meeting in that respect. So we want to, or we need to, I think, as a working group, be a little bit more active: for all of us to, again, sponsor or push discussions, and the draft authors and you guys, the chairs, to make sure that these discussions happen. And, as I said, even the people who are not authors: we need you to review documents, so that they are well understood and well reviewed by the time they get out of the working group.
A: This is our longer gap anyway, so it makes sense. I think, as I've done in the past in groups when we start getting wedged, and I think we did it here at BIER as well: mid-to-late January seems like a good target. We get everyone to agree before they disappear for the holidays.
B: I wanted to ask, in the course of discussing progress, about path MTU discovery in BIER. To progress path MTU discovery, we need to finish the ping document, or at least they need to come together.
A: Yeah, that's kind of not working; okay, it needs another round or two. All right, well, that's my intent after this meeting: going through all the docs and seeing where they are. Sandy put together a fantastic list for us last time; we moved some of that along, and we'll just go from there. We'll probably get confirmation from the entire group on the list as to where these docs are, and we'll start moving this stuff along.
B: I greatly appreciate Hooman volunteering to shepherd the path MTU discovery draft, but yeah, we'll need to progress the ping too. Excellent.
A: Thanks. Okay, Hooman, were you waiting for me to authorize you to do something?
A: So this is actually a pretty cool tool. You hit this, and everything that's been pre-loaded... that's why I asked to get them all up ahead of time, so we can just pop through. So I'll hit this, I'll select your BIER mLDP... I'm going to go back over... no? Yeah, mLDP, excellent. All right, so that is on top of the queue on this doc, boom, and share, and I'll step through them for you. What did it do, it stopped sharing? Okay, I'll try it again.
G: So I think a couple of IETFs ago I went through this. I think we asked for last call because we hadn't gotten any comments or any questions on it; we were hoping that the last call would start generating some comments and questions on the draft. I personally didn't see the last call email going around, so I thought maybe it's good to give everybody another...
G: ...you know, refresher on what we are trying to do in this draft. So, if you could go to the next slide, please. This draft is really v2 of the PIM signaling over BIER: we are trying to achieve the same thing with mLDP over BIER. The entire point of this draft is that if you do have a customer, or an operator, that wants to go from one of these legacy multicast protocols to BIER, they probably want to upgrade parts of their network one at a time.
G: Usually they might start from the core, or whichever part of the network it is, and this draft will make it easier for them to upgrade a part of the network, the mLDP network, to BIER, without really changing the software on the routers that are doing mLDP, or really doing any excessive configuration on any of those routers. You just grab the portion of the network that you want to upgrade to BIER.
G: You upgrade it to BIER because of the hardware support for BIER, and any portion of the network whose hardware, or software for that matter, doesn't support BIER forwarding can stay on mLDP, without any excessive software upgrades or any change in the topology of the network itself. Next slide, please.
G: Even I forgot the terminology, since it's been so long: it's an egress BIER edge router, so that becomes...
G: So as the signaling comes in from downstream, the ingress BIER edge router, if you will, builds the tree, figures out who the peers are, and builds the BIER neighborship. And actually, when the packets come in, the mLDP packets, they come in to the edge router and we swap the label with the mLDP label.
G: We push on the BIER header and we go downstream to all those other BIER edge routers that were actually signaling their interest in that source of the multicast, and that's basically how the data path starts working. When we get to the downstream BIER edge router, we pop the BIER header; there is a proto field, the BIER proto, which is MPLS, and based on that we realize that there is a label that we need to swap with the downstream mLDP domain.
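The data path just described can be sketched as rough Python. All names here are illustrative assumptions, not from the draft, and the per-BFER replication is deliberately simplified compared to real BIER neighbor-based replication; it only shows the label-swap-plus-BIER-push at the ingress edge and the pop-plus-swap at the egress edge.

```python
def bfir_forward(bitstring, bfer_mldp_labels):
    """Ingress BIER edge router (BFIR), simplified: for each BFER that
    signalled interest (its bit is set), swap to that BFER's mLDP label
    and push a BIER header whose proto field says MPLS."""
    copies = []
    for bfer_id, label in sorted(bfer_mldp_labels.items()):
        if bitstring & (1 << (bfer_id - 1)):
            bier_hdr = {"proto": "MPLS", "bitstring": 1 << (bfer_id - 1)}
            copies.append((bier_hdr, label))  # BIER push + mLDP label swap
    return copies

def bfer_receive(bier_hdr, mldp_label, downstream_swap):
    """Egress BIER edge router (BFER): pop the BIER header; proto MPLS
    tells us an mLDP label follows, which is swapped for the label of
    the downstream mLDP domain."""
    assert bier_hdr["proto"] == "MPLS"
    return downstream_swap[mldp_label]
```

With only BFER 1's bit set, a single copy is produced, and the egress swap maps the carried label into the downstream domain.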
G: So this was a good refresher for myself too, actually. So basically, again, we're looking for any comments, any suggestions. If anybody in the working group feels that we need to beef up the draft, by all means come to the mailing list and let us know; if not, maybe we should do a last call, and maybe that will trigger some extra discussion.
I: This is Loa. One thing is, you could send an email to the MPLS working group and ask for comments also. Yeah, I had read the draft; there are almost no changes that are significant for mLDP, but it might trigger some responses at least. And the chairs for BIER...
I: Oh, when you start the working group last call, you just send a mail to the MPLS working group telling us that you have started the working group last call. Yeah.
A: Yeah, exactly; that's what I was saying. We do the last call in our group, on our list, and comments should start there. But, and it depends on the integration, oftentimes it then goes over to the chairs of the other group for approval, as you know, before you take it up to publication.
C: So you can just, you know, CC them also on the last call. Also, can you please copy BESS, just in case there's someone there that wants to take a look at this as well. I have one question.
C: Yeah, the other thing that I noticed is that there are no normative references in the draft. So please check that, because there should be at least some.
A: Next we have the PCE-based BIER; we see two presenters. Is either one of you in the room?
J: I'm Juan Ali from China Telecom; I'm going to introduce our updates. Some comments were made at the previous meeting, including on object formats, and it was suggested to break the PCEP objects down more. In this update the overall process has not changed; our main change from the previous version is to move some duplicate fields into TLVs.
J: We defined four new TLVs, namely the multicast source address TLV, the multicast group address TLV, the VPN information TLV, and the BIER information TLV. Among these, the VPN information TLV contains the RD and the forwarding label; the label length can be set to zero. The BIER information TLV contains router identification information in the BIER domain or subdomain, specifically the subdomain ID, BFR-ID, and BSL.
D: Okay, no queue. Then a comment as a participant: I read the draft with great interest; all cool. I think the title should be, more precisely...
D: ..."PCE for IP multicast BIER overlay" or something, because it only does traditional IP multicast on top of BIER, but I mean, that's cool. I have a specific question where I stumbled in the draft: you have this forwarding indication object, and you're passing around the BIER bit mask, and that's all cool, but there is a B bit which you can set to indicate non-BIER. So that confused me: why would you pass around the BIER bit mask if it's not BIER?
K: It's used for the non-BIER scenario: for the non-BIER nodes along the way, I think we should define another, similar object to indicate how to reach the receiver.
A: Just a general comment: I'm happy to see this. As one of the initial architects of BIER, even before the BoF, we were seeing all these scenarios, and this is one that came up.
A: Basically, you know, the signaling over the top. And I guess the question I have going forward on this is: do we want to make this the definitive document for all PCE work regarding BIER, or just this particular scenario, and then pick things up as they come down the road? Just a general thought; I'm not looking for an answer, just something to think about. We'll take that to the list as well as we move this thing along.
M: Okay, so this is about some thoughts on how to do slicing and traffic differentiation in BIER. I'm presenting on behalf of the co-authors here, Jeffrey and Tony. Next slide, please... am I sharing it myself, or are you running it?
M: Oh, okay, all right. So it's the arrow on the screen, not the arrow on the keyboard. Okay.
M: Okay, so some slicing background, from the IETF network slices framework spec: an IETF network slice is basically a logical network topology, and traffic associated with this slice is identified with a slice identifier in the packet itself. Then there is an extension to that in the "bestbar" draft, which introduces a concept called slice aggregates; a slice aggregate comprises one or more IETF network slice traffic streams. Now, notice that here...
M: ...a slice aggregate can be any of the following: an entire slice, a set of entire slices that share the same logical topology, and/or just some flows in a particular slice; it's very flexible. And a slice aggregate will get a per-hop forwarding behavior specific to that slice aggregate, which includes the next hop used to reach the destination, and queuing treatments, and other things.
M: And now some related BIER background. In BIER you have one or more BIER subdomains that map to a topology; that's an N:1 mapping. Then each subdomain will have a corresponding BIRT, the BIER routing table used for BIER forwarding; that's a one-to-one mapping. The BIRT is calculated according to the topology that the subdomain is in.
M: Then, when you consider the BitString length, and when you need to use multiple sets, because the number of BFERs is greater than your BitString length, you need multiple sets, and each BIFT will correspond to the tuple (subdomain, BitString length, set ID). Now, each BIFT is identified by an opaque 20-bit number called the BIFT-id in a BIER packet, which could be an MPLS label or just any other opaque number.
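The BIFT selection just described can be sketched roughly as follows; the class and method names are illustrative assumptions, not from RFC 8296. The point is only the two mappings: each BIFT corresponds to a (subdomain, BitString length, set ID) tuple, and a packet selects a BIFT via an opaque 20-bit BIFT-id.

```python
MAX_BIFT_ID = 2**20 - 1  # the BIFT-id is an opaque 20-bit number

class BiftTable:
    """Toy model: BIFT-id <-> (subdomain, BSL, set ID) lookups on a BFR."""
    def __init__(self):
        self._by_id = {}
        self._by_tuple = {}

    def add(self, bift_id, subdomain, bsl, set_id):
        assert 0 <= bift_id <= MAX_BIFT_ID
        key = (subdomain, bsl, set_id)
        self._by_id[bift_id] = key
        self._by_tuple[key] = bift_id

    def lookup(self, bift_id):
        # what a BFR does on receipt: the opaque id picks the BIFT
        return self._by_id[bift_id]
```

Because the id is opaque, it can equally be an MPLS label or a plain number, as the presenter notes.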
M: So BIER forwarding is based on BIFTs, which are derived from the BIRT; so you can say that BIER forwarding is based on the BIRT.
M: So the BIRT determines the next hop, and then, beyond that, you can use the traffic class bits in the MPLS label entry, or the DSCP in the BIER header, to determine other behaviors, like queuing.
M: So now, with that background, how do we do slicing with BIER? If a slice aggregate corresponds to a single slice, or to a set of slices that share the same logical topology, then there are two ways to do it. One is that you map that SA to a subdomain; but the subdomain ID is only eight bits...
M: ...so that means you can only do 256 SAs. Now, if you want to do more than that, you can actually map the SA, the slice aggregate, to a BIRT directly; the BIRT will then map to the tuple (subdomain, SA) instead of just mapping to a subdomain.
M: When you map the SA to a BIRT directly, that means that in theory you can support 2^20 SAs, as long as your device can hold that many entries. And mapping the SA directly to a BIRT, instead of to a subdomain, is desirable even when you have fewer than 256 SAs, because when you have fewer subdomains you don't need as much subdomain-related provisioning.
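The two mapping options above can be summarized in a small sketch; the option names and helper functions are assumptions for illustration, not terminology from the draft. Option 1 (SA per subdomain) is capped by the 8-bit subdomain ID; option 2 keys the BIRT on the (subdomain, SA) tuple, so the practical ceiling becomes the 20-bit BIFT-id space.

```python
SUBDOMAIN_ID_BITS = 8   # subdomain ID width
BIFT_ID_BITS = 20       # opaque BIFT-id width

def max_slice_aggregates(option):
    """Upper bound on distinct slice aggregates (SAs) per option."""
    if option == "sa_per_subdomain":   # option 1: map SA to a subdomain
        return 2 ** SUBDOMAIN_ID_BITS  # 256
    if option == "sa_per_birt":        # option 2: map SA to a BIRT directly
        return 2 ** BIFT_ID_BITS       # up to 2^20, in theory
    raise ValueError(option)

def birt_key(subdomain, sa, option):
    """Option 2 keys the BIRT on (subdomain, SA) rather than subdomain alone."""
    return (subdomain, sa) if option == "sa_per_birt" else (subdomain,)
```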
M: Now, the slice aggregate selector: oh, I forgot to mention that it is defined in the "bestbar" spec; it's basically an identifier in the packet that identifies the slice aggregate. In this case, the slice selector will map to one of the BIFT-ids.
M: In that case, the slice aggregate selector won't map to some traffic class or DSCP bits.
M: But if that's not enough, then a BIER extension header will need to be used: the proto field in the BIER header can indicate that a BIER extension header follows, and one of the BIER extension headers will carry the slice selector to identify the slice aggregate.
M: Now, the BIER extension header has never been discussed here in the BIER working group itself; that concept was introduced in the INT area, in "generic delivery functions". I'll talk about it briefly here. It's still a developing idea, for the purpose of doing generic functions at different layers: for example fragmentation and reassembly, security features like ESP, in-situ OAM, and the new use case here, traffic differentiation.
M: We also try to align the MPLS and BIER extension headers with the IPv6 extension header structure, because the idea is that we want to do this across different layers, so it's better to align those extension headers as much as possible.
M: So that draft has some text about both the MPLS and the BIER extension headers, but it's just a container draft; the specific MPLS extension header and BIER extension header will be discussed in BIER. We have not officially brought it here yet, but this is like a teaser: we will be starting to talk about the BIER extension header.
M: So, yeah, next steps. We introduced this concept here, and in the draft we provided the signaling for either IS-IS or OSPF; obviously, we will need to specify all those details for the other signaling protocols. And, more importantly, we want to start talking about BIER extension headers in the BIER working group, both for this purpose and for other purposes.
N: So one of the cool things we could be doing better in BIER: for example, the per-flow stateless QoS, which I was presenting in TSN and, sorry, the DetNet working group, where I was actually also explicitly referring to the problem that we need extension headers that support these mechanisms better. And it would be great if we wouldn't have to reinvent the wheel between MPLS and IP, and obviously also BIER, for these types of QoS extension headers.
N: And BIER specifically, of course, even with BIER-TE, is the technology to allow stateless multicast replication to go along with, you know, stateless QoS that would be driven by an extension header.
M: Yeah, indeed, that's the intention of the generic delivery functions draft: better QoS treatment across different layers. That's one of the use cases here.
N: Yes, and I think what we're missing, and I think that's for us to go on Friday and see what happens with the hop-by-hop header for IPv6, is how we can, you know, not only get buy-in from the MPLS design team, but come up with a proposal that would also be accepted by 6MAN, or whatever the v6 side is.
P: Great. Jeffrey, I remember there's a document in your working group about the Flex-Algo for BIER. What is the relationship between these two documents?
M: I guess I will need to read that document first; I did not see the connection there. But now that you bring it up, I will go read that draft again and see if there is any connection.
G: Yeah, along the same lines, this IGP and Flex-Algo stuff for BIER: I thought we were trying to solve the same type of issue with that, in that you create a Flex-Algo, a slice if you will, blue, red, whatever, and each subdomain is bound to that slice.
G: Now, honestly, I haven't read this IETF network slices stuff, and I'm not even sure how they are trying to identify a slice; it might make sense in IPv6, but in the MPLS world... I need to go read the draft. But yeah, just for the working group: the IGP and Flex-Algo stuff, I think that was the idea behind it too.
M: Flex-Algo may be used as the control plane signaling approach to define the topology. I think right now this draft mainly talks about how, in the data plane, we can have slices. But I will, yeah, go back to that Flex-Algo draft, and the "bestbar" stuff, and see the connections.
O: Okay, I think it's an interesting topic, and I read both the Flex-Algo BIER draft and this draft. I think the Flex-Algo is aligned with the subdomain, so one Flex-Algo is equal to one subdomain, and this structure takes effect within the subdomain; so I don't think the two are in conflict.
A: Okay, next up is Toerless. You've got the next two presos, Toerless, so I'll hopefully be able to grab those as well.
N: Right, so, reclaiming time for the working group: there is no -11, unfortunately. We got the complete IESG review at the end of August: one DISCUSS, which is easily cleared (there was a typo in one field of code, thanks Benjamin), and a long list of good comments from eight reviewers. And then, unfortunately, an unexpectedly busy September and October, with conflicts like me being on NomCom and, you know, see next slot. So I'm working through the -11 as we're speaking, and I hope to have it out soon.
N: I really want to take the opportunity of the comments to, you know, also improve the document. There was, for example, this thing from Benjamin, that he was always confused, when we're talking about "BIER", about...
N: ...what was meant: is it the overall architecture, inclusive of BIER-TE, or just the, you know, non-BIER-TE part, as he proposed I prefix it. That reminded me of writing text where I had to write "IP unicast" to make it clearer what I meant when saying "IP" as opposed to "IP multicast".
N: So, you know, "non-TE BIER". Yeah, so that was the only kind of thing I wanted to mention from that. All right, so it should proceed easily, because it's all comments and the DISCUSS is simple. Okay, next presentation: stop sharing slides and give me one more.
N: Okay, right. So this title is a mouthful, the kind of document where you write everything into the title: Carrier Grade Minimalist Multicast using BIER with Recursive BitString Structure addresses. This is actually based on work done by a team of colleagues in Beijing, and you'll see immediately why they wanted me to write up the draft for this, because it's really kind of BIER-TE next generation. Which, of course, leaves the question...
N: ...you know, what the heck is wrong with BIER-TE? And BIER-TE is, in my opinion, the best forwarding solution for the target constraints; and the target constraints were that we obviously wanted to have steering for BIER, and we didn't want to change the forwarding plane at all.
N: So we ended up, for some particular options in BIER-TE, having some very small changes, like, you know, the do-not-clear bit or so, but in the end it is pretty much the BIER forwarding plane, and that was the goal. And that, unfortunately, as we've seen working through all the comments and concerns about scalability in the working group, does leave us with a lot of complexity in BIER-TE.
N: The fundamental reason is that we have flat BitStrings in the packets, which can address only, let's say, a total of 256 bits; part of that is for, you know, BFERs, and part of that is for adjacencies, like links in the topology to get across. And splitting up the topology so that you fit the bits into this is a problem in BIER-TE: it makes the controller complex, and makes it really important to save on the number of bits that you are using. The same issue exists to some extent in BIER, but it's less severe; so, especially for BIER-TE, that BitString is really something that makes the controller plane difficult.
N: With all these options that we have, like, you know, ring bits and point-to-point bits: you don't see it in the forwarding plane, because that's simple, but the controller plane is difficult. And then you may also end up requiring a larger number of copies, because you need to split the topology across more BitStrings than you would actually need, because you can't fit them all into a single BitString.
N: So, ultimately, what we have is operational complexity, you know, both for the humans and for the controller software, and less traffic efficiency, in exchange for forwarding-plane simplicity. Which brings us back to what is actually, you know, the vision that we wanted to get to with BIER, in my opinion. And so this is why we gave it, you know, the visionary name, for the service providers, for the carriers, right?
N: We want to have a really minimal multicast solution that really becomes simpler. A controller does help to simplify a lot of things, for example when we also manage to eliminate the duplication between BIER and the flow overlay, by extending BIER, or the control plane, all the way into the senders and receivers, so that they don't have to bother about IP multicast at all, and all the overlay signaling is handled on the controller plane.
N: I think we can then also afford some more intelligent forwarding in the forwarding plane, and this is basically what this draft is primarily about (that was just the vision part). And that is, effectively: why the heck can't we make the BitString a structured data structure, in which we effectively replicate the distribution tree that we want the packet to take, right?
N: So if you want the packet to be replicated from R to one, two, three, up to N, and then from one to, you know, again some number of BFRs: well, we can basically encode that in a structure. And that's shown here: you have a fixed field in the beginning, and you have the BitString for the local node that is processing the packet.
N: And then you have a sequence of, you know, what we call recursive units, one for each BFR that you are sending the packet to. And, of course, once that packet is forwarded to one of those BFRs...
N: ...you are going to rewrite the address: you're pretty much extracting the recursive unit for that neighbor from the address and making it become the new address, and that's what you're sending the neighbor. So what the neighbor then gets is effectively its recursive unit, which again has the BitString of where it should replicate the packets to, and then the recursive units for the, you know, further hops. And so, yes, there is some overhead for the structuring here, right.
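The rewrite just described can be modeled as a toy sketch. The tuple representation and function name here are illustrative assumptions; the draft's actual wire encoding (length and offset fields) is not modeled. An address is the local node's BitString plus one recursive unit per set bit; forwarding extracts a neighbor's unit and makes it the whole address that neighbor sees.

```python
def rbs_forward(address, neighbors):
    """address = (bitstring, units): units[k] is the recursive unit for
    the k-th set bit in bitstring, counted from the least significant
    set bit. neighbors maps bit position -> neighbor. Returns a list of
    (neighbor, new_address) copies; each new_address is the extracted
    recursive unit, which becomes the full address for that neighbor."""
    bitstring, units = address
    copies, k, bit = [], 0, 0
    while bitstring >> bit:
        if (bitstring >> bit) & 1:
            copies.append((neighbors[bit], units[k]))  # rewrite: unit -> address
            k += 1
        bit += 1
    return copies
```

Note how loop prevention falls out for free, as the presenter says: a neighbor only ever sees its own subtree, never the bits already consumed upstream.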
N: We basically need to include some type of addressing field that tells us how long the recursive units are, but that is pretty much the only structural overhead; the rest is just BitStrings. So, instead of having a humongous BitString to represent the whole topology, we only need to include the BitStrings for the nodes that the packet actually traverses. And here it is in more detail to walk through; there is one bug in the picture, which I've fixed in the draft.
N: So the draft has exactly the same example, and shows that in all the glorious detail. When the packet ultimately gets to a client, of course, for the clients you don't need another recursive unit; so there is a new flag in the BIFT, which basically says whether a particular bit is to a BFR or is, you know, a punt up and send to a receiver, and that is then ultimately the only complexity to work through. So the draft has, of course... oh, wait a second.
N: This is not the slide yet. So let me first summarize here on the slide the simplifications and performance enhancements that we're getting. Obviously, we don't need loop prevention through the clearing of bits, because by just extracting the recursive unit for the next hop we're kind of achieving the same goal; so the loop prevention is for free.
N: We don't need to split up the topology into different SIs or multiple SD BitStrings, so the amount of state that we need to maintain on every BFR is minimal: it's just, you know, one BIFT, sized exactly to the number of bits representing the links to neighbors, or, if we want to have an overlay, of course, tunnels to remote neighbors. There is no need for all the complicated...
N: ...bit semantics to optimize and limit the number of bits we use across the whole topology. And this, of course, is especially great for sparse distributions, meaning sparse multicast trees, and I think we know that most multicast trees are sparse: the network is large, and the receivers may be anywhere (which is why, you know, we have problems with BIER-TE), but there will be few of them, so they'll very likely fit into a single...
N: ...packet header of maybe up to 512 bits. And then, if the total number of receivers really gets larger, the controller can very easily split up the whole distribution tree into subsets of receivers and build the distribution tree for each, because, you know, there is no further carving up of the bits into different SIs and SDs.
N: So all the complexities that we hated when working through BIER-TE are gone with that, and now we have the cost of doing business, of course, in the forwarding plane. So this is the pseudocode. There is a little bit of, you know, calculation in the beginning for the packet, but when you look at the right-hand side, the replication, then of course the only big complexity is, once you have calculated the offset and length of the recursive unit for each packet copy...
N: ...you need to do the rewrite of the packet header: extract that address field and make it become the new destination address field. So that is exactly, I think, for forwarding planes, the interesting question: you know, is it already now, or at which point in time, and in what type of routers, would that be sufficiently easily feasible to be productizable? Other thoughts...
N: So this is, of course, a naturally variable-length address structure, versus the naturally fixed size in BIER and BIER-TE. But, sorry, I'll shut up... okay, so, yeah. I was talking about the forwarding complexity. I think one of the interesting things to do, and it would of course be great for researchers to also jump into this, is to really do, you know, a simulation comparison of the number of BitStrings...
N: ...we need, the size of the BitStrings, and so on, between BIER, BIER-TE, and this RBS solution. The packet encoding, the packet header, is of course not the purpose of this draft; like all our documents, it could of course use RFC 8296 and be considered yet another BIFT mode, right? BIER-TE also includes the notion that the BIFTs would need to be set to a mode, for BIER-TE forwarding, so it could be RBS or whatever we want to call it. But of course that would be wasting bandwidth, because we would then use the maximum BitString size. So I think that's an interesting question, but of course using 8296 would be the fastest way to adopt something like this. And I think that's it; thank you very much.
N: Right, and I think the answer... so, first of all, we haven't, you know, decided on what the units of allocation in the BitStrings are; there's a reference encoding option and, of course, depending on the packet header that we're choosing, we would specify that. Here it's assuming, you know, the BIFTs to be in units of bits, the whole forwarding engine in units of bits; that would allow you to calculate it. And then the answer still depends on...
N
are you going to go across 20 hops to the receivers, or through 10 hops, right? So I would say, roughly calculated, if you have 2000 receivers, you could end up with as little as maybe double that as the number of total bits, 4000 bits. And then you break it up into the number of packets you need to make each of the subtrees sufficiently small for whatever your maximum size is, right? But exactly, that goes to the, you know...
M
Okay, so basically at least the number of bits is the number of receivers, and then...
N
Yeah, you need more bits, but you don't have the problem from BIER-TE that you need to do a lot of design up front, right? On every BFR you simply have a BIFT: you have 100 interfaces, you have 100 bits; you have 10 interfaces, you have 10 bits. That is plain and simple, and then the controller ultimately just needs to figure out into how many packets to split it up to send it, right?
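The "one bit per local interface, no up-front design" point above can be sketched as a trivial per-BFR table build (names are illustrative assumptions):

```python
def build_bift(interfaces):
    """One bit position per local adjacency: 100 interfaces -> 100 bits.
    No network-wide bit assignment is needed; each BFR numbers its own."""
    return {bit: ifname for bit, ifname in enumerate(interfaces)}

bift = build_bift([f"eth{i}" for i in range(100)])
print(len(bift))   # 100
```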
K
Oh yes, hey Toerless. I saw a similar session has been presented in another working group, I think the Internet area
N
working group. So, regarding some logistics: did you get any feedback from that group regarding your session? — Yeah, I'm trying to remember; this week was so busy, and I would say I have to go back to the notes to remember. There was some feedback, there were some questions, but unfortunately, yeah, I forgot. Where was I, the presenter...
N
The point was: in the INT area there is a lot of discussion about variable-length addressing and what its benefit is, and everybody is only talking about unicast. So I was specifically going there to say that we actually have the best track record in the IETF of what we can do more and better with addressing, even when it was fixed-size, right? Remember all the things we did in IPv6 with addressing. And, you know, BIER of course being a cool new semantic, and this one being a proposal
N
that expands BIER in a way that we can get more out of variable addresses. That was the purpose of presenting to the INT area. Okay, thank you.
N
So basically, on every hop you're extracting a recursive unit, right? So every bit string in the structure is only examined by one node, and once it's examined and acted upon, it's not seen in the address anymore, because you're removing all of that before forwarding the packet to the next node.
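The per-hop behavior described above can be sketched as follows. The in-memory layout is an assumption for illustration, not the draft's wire format: each recursive unit is a list of `(local_bit, child_unit)` pairs, where `local_bit` selects an adjacency in this node's BIFT and `child_unit` is the unit the neighbor will process next:

```python
def rbs_forward(unit, bift):
    """Return the packet copies this node emits as (next_hop, remaining_unit).

    Each bit in `unit` is examined exactly once, here; the consumed level is
    stripped before the copy is handed to the next node, which sees only its
    own child unit as the new destination address."""
    copies = []
    for local_bit, child_unit in unit:
        next_hop = bift[local_bit]             # adjacency for this local bit
        copies.append((next_hop, child_unit))  # child becomes the new address
    return copies

# Usage: replicate to R1 directly, and to R2 which then forwards on its bit 2.
bift = {0: "R1", 1: "R2", 2: "R3"}
unit = [(0, []), (1, [(2, [])])]
print(rbs_forward(unit, bift))   # [('R1', []), ('R2', [(2, [])])]
```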
D
Tony? Yeah, coming from my side: so first, Toerless, your draft names start to sound more and more like folded proteins in biology. You know, you better watch it.
D
Yeah, well, that's of course a joke. My question would be: if we even want to look at adoption of this stuff, is it actually covered in the charter by the BIER-TE moniker, or is it something else completely?
D
I mean, I'm sympathetic to mucking around with this big mess if it makes sense; we just need some kind of license, you know, to stretch this story farther. I have no answer, it's just something that I put into the room. Okay, yeah.
N
On steroids, so that it fits. Because, I mean, it's still, you know, all the adjacencies, all the basic BIFT machinery; what the BIFT does is a simple subset. It's just that the address itself is structured differently, to remove the limitations. So I would certainly make the very strong point that architecturally it's very much derived from BIER-TE, and learned from the limitations we're getting by just using the same address structure that we did for BIER.
N
Right, and I think also, you know, working through BIER-TE, which will be finished, yes, very soon, we also identified the concerns with scale and topology engineering, which the draft now largely documents. So I think that should also serve as evidence that something that gets rid of that problem would be very useful.
A
I'm still waiting. Was the embedded slide not working for you?
A
You should see the slide... maybe no. If you go up in the right, below your name, there's the audio, the hand, the note, the screen, the camera, and the mic. Second over from the left is "share pre-loaded slides", that's the dock. If you click that you can select your deck and show it directly without sharing your screen.
A
And there it is. Which one? There's the Chen BIER-TE LAN one, the E-P-F-R...
Q
Is that it? Hello everyone, yeah. Today I'm going to talk about the BIER-TE for broadcast link. Next page.
A
If C is capable of getting it there: you build your tree with your PCE, however you've done it, you build your mask, and if you included G in this, there's been a mistake; there's no need, necessarily. Is there? Toerless, do you want to quickly jump in? I mean, I can imagine there's a scenario where this problem exists, I just don't see where it is here, that's all. This is more of a problem not with the mechanism; it's with a PCE that made a mask that creates this condition.
N
So, I mean, duplicate packets would only happen when you put more than one adjacency across the same link, so the controller always has the task to avoid that. Yeah, here it's all unique.
Q
We will assign two bit positions: one for the connected adjacency from the real node to the pseudo node, the other for the forward-connected adjacency from the pseudo node to the real node. For the connected adjacency from a real node to the pseudo node, the real node will act as a proxy of the pseudo node. So this is similar to OSPF and IS-IS.
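The two-bits-per-attachment assignment described above can be sketched as follows. The allocation scheme and names are assumptions for illustration; the pseudo node itself is a modeling construct, as in OSPF and IS-IS:

```python
def assign_lan_bits(real_nodes, first_bit=0):
    """Two bit positions per real node on the LAN: one for its adjacency
    toward the pseudo node, one for the pseudo node's adjacency back."""
    bits, b = {}, first_bit
    for n in real_nodes:
        bits[(n, "pseudo")] = b        # real node -> pseudo node
        bits[("pseudo", n)] = b + 1    # pseudo node -> real node
        b += 2
    return bits

bits = assign_lan_bits(["B", "C", "D"])
print(len(bits))   # 6 bit positions for a 3-node LAN
```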
N
In the interest of saving time here, just to cut to the chase: is the proposal to have something like the BIER packets be layer 2 multicast across that LAN?
Q
Yeah, this draft introduces a proxy node, so the real node will connect to the pseudo node. So...
N
This
is:
are
you
going
to
have
packets
being
sent
as
layer
2
multicast
so
that
they
can
reach
multiple
bfrs
on
the
same
lan?
No?
No!
No!
No!
No!
No!
No!
So
this
one,
just
the.
Q
That's one implementation: node C will have a main forwarding table, which is normal except that it contains the connected adjacency from the real node to the pseudo node. And then this node also has a forwarding table for the pseudo node. That forwarding table for the pseudo node contains the forward-connected adjacencies to the pseudo node's next hops. So that's one implementation. Next page.
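The "one implementation" above can be sketched as a two-table lookup on the real node. Table contents and names are illustrative assumptions, not the draft's format: C holds its normal BIFT plus a second BIFT on behalf of the pseudo node, and resolves a bit pointing at the pseudo node through that second table locally (C acting as the pseudo node's proxy):

```python
main_bift = {0: "pseudo"}        # C's bit 0: adjacency toward the pseudo node
pseudo_bift = {1: "D", 2: "E"}   # pseudo node's bits toward other LAN members

def forward(packet_bits):
    """Resolve the packet's set bits through C's two tables."""
    hops = []
    for bit in packet_bits & set(main_bift):
        if main_bift[bit] == "pseudo":
            # Proxy step: resolve the remaining bits in the pseudo node's BIFT
            # on the same box, without an extra hop on the wire.
            hops += [pseudo_bift[b] for b in packet_bits & set(pseudo_bift)]
        else:
            hops.append(main_bift[bit])
    return sorted(hops)

print(forward({0, 1, 2}))   # ['D', 'E']
```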
N
I have to look more at the examples. There obviously is the option in BIER-TE for an adjacency, or for a bit, to have a list of adjacencies to which to replicate, so I'll have to really sit down and compare whether what you're proposing cannot equally be done with that existing option.
Q
So
right
now
I'm
going
to
talk
about
beer,
equal
production
next
page.
Q
So
in
appear,
protection
improves
page
half
of
the
case
that
node
backup
user
node
protect
primax
node,
which
you
assume
that
those
nodes
are
sent
there
be
a
package
to
the
same
as
c
receivers.
Q
Bicarbonate
nodes
per
protects
primary
nodes,
but
these
two
nodes
will
send
they'll
be
attached
to
different
ce's,
so
this
two
case
is
indicated
by
flag
so
as
flag
as
equals
to
one
means
they
will
send
it
to
the
same
receiver,
which
is
a
case
coupled
in
the
previous
version.
Q
So, the solution for case B works this way. The backup egress node protecting the primary egress node is configured with a flag that indicates whether these two nodes send the BIER payload to the same receiver CEs or to different CEs; flag equal to zero means they send the BIER payload to different receiver CEs. The IGP will distribute this information in the network.

C will extend its forwarding table to contain a backup forwarding entry for egress node F, which considers the flag S. The backup egress node E will create a multicast forwarding table for egress node F, which contains the forwarding behaviors of F. In addition, backup egress node E will extend its normal forwarding table to contain a backup entry for F, with a pointer pointing to the multicast forwarding table for F, when the flag equals zero. So at this point, the network is ready to provide protection for egress.
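The flag-dependent state described above can be sketched as follows. All names are hypothetical illustrations of the idea, not the draft's encoding: the backup egress E installs different state for primary egress F depending on the IGP-distributed flag S, with S=0 (different CEs) pointing the backup entry at a per-F multicast forwarding table that mirrors F's behavior:

```python
def build_backup_entry(flag_s, mcast_table_for_f):
    """State the backup egress E installs to protect primary egress F."""
    if flag_s == 1:
        # Same receiver CEs: E's own forwarding state already reaches them.
        return {"action": "deliver_locally"}
    # Different receiver CEs: replicate via the table kept on behalf of F.
    return {"action": "use_table", "table": mcast_table_for_f}

print(build_backup_entry(1, None))            # {'action': 'deliver_locally'}
print(build_backup_entry(0, "mcast-fib-F"))   # uses the per-F table
```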
M
So once you do case B, why do you still need case A?
Q
Yeah, that's a good question. Originally we had case A, which covered most of the cases, or the general cases. I think some people raised the question of the case where the two nodes send their BIER payload to different CEs, so we needed to cover that case too. So that's the background.
R
I have one more answer: case A is simpler. The solution for case A is simpler and more robust. Therefore, that's also a reason to differentiate between case A and case B.
A
We're out of time; thank you for extending the session. Thank you, thank you. Thanks for your patience, and we'll be following up on the list with, well, quite a bit. You guys have a great rest of your week.