From YouTube: IETF108-BIER-20200729-1100
Description
BIER meeting session at IETF108
2020/07/29 1100
https://datatracker.ietf.org/meeting/108/proceedings/
A
Okay! Well, we're at the hour, time to go. So we need a jabber scribe, we need a minute taker, and we need everyone to Note Well. Raise your hand... noted. All right.
A
All right, the audio seems to be kind of intermittent. All right, I think we've got that covered. So here's the agenda we currently have; if anything's missing, shout now. We do have kind of a tight timeline, especially with all the tech bits to work out to make sure this thing goes, but we can make room for something missing and we can change things, so speak up.
A
I guess it's fine; it's hard. I can't even look at people in the room and see them all, look at their keyboards and avoid eye contact. Okay, let's talk about some status here. A lot of stuff is backed up, you know, for all the wrong reasons, but here we are where we are. We've got, in fact, Tony:
A
If you can jump in on some of these too, because we kind of shared some of this. We got consensus on the BIER BAR draft; we're just waiting for the shepherd write-up, and I think we need some reviews. Holy crap, I think we need some reviews from other working groups as well.
A
The TE arch: we actually got a shepherd now, and I think it's ready to go to last call. We definitely have some issues around the name, and I don't know how much friction that's going to cause. But, Greg speaking as a personal contributor: I didn't think we'd ever had acronym police in the IETF before, but maybe that's a new invention; we'll see how that flies. Jump in if you've got new info about this, or comments, as we go along the way. BGP-LS:
A
I guess there are some issues. We do have a shepherd now, so it should be moving. I think it also needs to be reviewed across working groups, so once we clear here we'll bump it. Same with PIM signaling; I think it also needs clearing from PIM, as does MLD. So we have these all ready to queue, and we've got shepherds for all four of them, which is great, I think.
A
If we go through the working group shepherd write-ups kind of in parallel, we can jump to the other working groups and get their clearance before we go to the queue. Anyone agree? Disagree? New info on this? Revs? Concerns?
A
Wow, this is tough, because I don't know if, like, my audio dropped, or whether you all just think this is great. All right. These are actually ready for last call. We don't have shepherds for them yet. I do have a couple of names that had come up before; we actually got more volunteers than needed for the previous list, so I'm going to dig from that, reach out to you, and assign these, and then, you know, see how we roll from there. Questions, concerns, issues? Yeah, Stig.
A
I don't know what that means: grab the mic. What do you mean, there are two? Yeah, there's a close and an open, a green check and a red X, right.
A
So if somebody jumps into the queue... well, right now there are two people on the mic, Stig and Tom Hill, right, and Zhang is queued up. So I killed Stig's mic and Tom Hill's mic, and I pushed the okay. Oh.
A
Sandy, it's interesting: so they speak and then they vanish from it. Are you clearing them off of there? I don't know. Well, let's roll. Any more comments on these three docs and their status?
A
You know, if there's motivation for this, you know, jump back in; Rex is still there, yeah. This one kind of disappeared, which kind of surprised me too, so if there's motivation to bring this back, these are still rolling. PHP expired, which was kind of... I guess it's still sitting in the same queue with the HTML one, so if that rolls, this might resurrect. The problem statement is expired, but, you know, it's out there. I don't know if we need to resurrect it; if there are more things to bring up, fine, but it may just rot and disappear.
A
I don't think there's a need to take that on to an actual published doc, but it's a working doc for the group. So if new problems arise, we'll resurrect it; otherwise it'll die in its case. Use cases is different: you know, we're keeping that one evolving. I think at some point we'll ship it, but I don't think we're there yet. Anyone think this is actually cooked?
A
All right. Yeah, I think we're learning as we go, especially as we're at the cusp of seeing live deployments. My expectation is we'll see some deployment docs come from here as we learn, so this may stand, or it may, you know, spawn off some new stuff; we'll see what we learn.
A
Oh great; oh, look at that, it came back. All right, an adoption call: these four have actually, you know, been ready for adoption for a bit. I thought it was kind of fun to do it this week and kind of pretend this is IETF week and we have stuff to discuss along the way. Plus...
B
Perfect, so next slide, please.
B
So this is going to be a short update. We did get some of the comments... well, not some of the comments, a large number of comments, from Stig and Mankamana. We went through the documentation; most of them were housekeeping, and we did correct all those housekeeping items.
B
I did see an email from Stig again yesterday, which I did reply to. All those comments that you had in the emails are addressed. I did add the IANA section in the new version, which I will upload, and I'm hoping that it should be ready and there should be no more concerns.
B
So I don't know whether you have... I know most of the comments are coming from Stig. I don't know whether you have read version nine of the draft; if so, I'm hoping that it's ready to go.
A
Wow, okay, so ready to go. This is, you know, actually encouraging too. We've got overlap; you guys know that, right? Most of you go to the PIM group as well, but still, some people may not get there. So actually, with version nine: Stig, if you can clear, you know, the notes that you're concerned about, then we'll go to, you know, finish the call and take this to the PIM group before we go to the queue. Anything else? Concerns?
B
Thank you, and I'll give five minutes back to the group, because the BIER signaling, the MLD signaling: you guys did bring it into the adoption call, so thank you very much for that. So I have nothing on that one.
A
All right. Well, the responses have been good too, Hooman, so this is going to roll pretty quickly. MLDP: so I guess, does this go off to PIM for approval too, or do we have a catch-all multicast group, or does this go up to MPLS?
A
Is there... oh yeah, I keep forgetting that part. There we go.
N
Hey, if I'm not mistaken, MLDP might originally have been MPLS as well.
A
Yeah, that's what I was thinking: it was maybe an MPLS group that was waiting. Maybe... I mean, sending that to a couple of groups is not going to be a problem. I mean, ultimately it goes to the IESG and the whole IETF reads it, but just having some focused feedback is going to be important. So yeah, MPLS and PIM. Anyone else got ideas? Let's just wait, maybe get AD feedback on that. That should be it. Yeah. Okay, sounds good to me.
F
You do have two mics: one very handsome, and one just a headset. So we did update this draft. It is progressing well; it's about a year and a half in, and we've made some good progress. We incorporated some of your comments, Greg, as of this week, actually; thank you for making some comments. And Gyan has joined the draft; he's made some good comments and he's helping us to progress it. So, next slide.
F
So Stig and others made some comments some time ago to remove the unhelpful scenarios section, kind of like a use cases section. So we got rid of those and replaced them with transport-independent and native IPv6 models, which I'll share in a minute.
F
We also did something new, hopefully just to make it clearer, and this will be something that everybody needs to review: we divided the requirements into mandatory and optional.
F
We removed some commentary. Down in the appendix we have summarized the various solutions, and we've removed some of the commentary from the solutions, in particular BIER over Ethernet, per some of the comments from you, Greg, this week.
F
And we made it better understood that it's being integrated into an extension header. Next slide. Stop me anytime if you have any questions. The purpose, just as a reminder, is to specify the requirements for transporting packets with BIER headers in an IPv6 environment.
F
So one thing that we did do, with the approaches, as I mentioned: we divided them up into the transport-independent models and the native IPv6 model. What fits into the transport-independent model, we feel, is these two drafts. As Tony has mentioned many times, this is a case where we want to leave BIER at layer 2.5, and so in these cases, in this model, there's an IPv6 tunnel; if you want to call it a tunnel, we can call it something else.
F
And then the next slide goes over the native IPv6 model approaches, and there are a couple of drafts there that fit that type of approach, where the BIER header, as I mentioned, is integrated into an extension header, and processing of the BIER header, the bit string, is implemented as part of the extension header processing. So we kind of categorized these different approaches into different models: both good models, both very appropriate, but just different. Next slide.
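The split between the two models can be pictured as header stacks. A minimal sketch, illustrative only; the layer names below are placeholders for this discussion, not wire formats from any draft:

```python
# Illustrative only: the two ways discussed for carrying BIER over IPv6,
# shown as ordered header stacks from outermost to innermost.

def transport_independent_stack():
    # BIER stays a layer-2.5 shim; an outer IPv6 "tunnel" header carries it.
    return ["IPv6 (outer tunnel)", "BIER header", "payload"]

def native_ipv6_stack():
    # The BIER bit string rides inside an IPv6 extension header and is
    # processed as part of extension-header processing.
    return ["IPv6", "extension header carrying BIER bit string", "payload"]
```

The only difference between the models is where the BIER bit string lives: under an outer IPv6 header in the transport-independent model, inside an IPv6 extension header in the native model.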
F
Another thing that we've done, as I mentioned, is we've taken the requirements and put them into buckets, since there are a lot of requirements. In the course of discussion we thought, okay, let's put many of these into a list of mandatory requirements, and then others as optional, and that's what we've done. Many of the mandatory ones are: any solution needs to be layer-2 agnostic; it needs to support the BIER architecture; it needs to conform to the existing IPv6 spec, RFC 8200; support deployment of non-BFR routers; and support inter-AS multicast deployments. Not required...
F
It may not be, but it seems like, for future deployments of this solution, you would want that solution to be inter-AS.
A
Sure, but we don't even have an inter-BIER-domain solution. So going between two BIER domains, between two ASes: they don't share a bit set at all; there's just an ABR function between the two. There's nothing in the encap encoding that's actually inter-domain. Okay, so, you know, as a mandatory requirement... we can't even define those requirements, so it seems kind of a stretch to call that mandatory.
A
I think it'd be great if there were some mechanisms in the solution that facilitated that, simplified it in some way; that's great. But again, you know, like I said, inter-BIER requires an ABR function as it is: you leave it completely, decap, re-encode, re-encap, and come back out the other side. There's nothing in the transport that I've seen that facilitates that yet. But if you've got something, great; let's just keep it off the mandatory list until we have some consensus on that. If others agree with that, then we will.
A
Words like this make it tough. Just going to the next line: "support simple encapsulation". You know, what's "simple"? I don't know what would define simple versus not, because, honestly, what I'd say a simple encap is: take the existing payload and put an encap on it.
A
So if you put that as a requirement, as mandatory, you kind of rule one of those out already. From my seat, not as chair of the group, just from my individual contributor position sitting right here: "support deployment security". Well, security is kind of an umbrella for all of the IETF; everything comes through and gets reviewed for that. I don't think a particular... yeah, I guess.
F
Maybe to not put requirements on certain implementations, perhaps; I don't know.
A
On the draft, under the optional list: I'm not opposed to, you know, growing that list as the draft matures. But I agree, if just one of them stands there, it sounds like the rest don't matter; so if we're going to have an optional list of support, we should probably flesh it out a bit more.
A
Sure. The question wasn't "why is MVPN optional?" The question is: if we're going to list MVPN, why don't we have EVPN and the other solutions with BIER listed there as well? So yeah, I understand.
K
I have some comments as a member of the working group. I have read the document, and I think it's great to have discussions about the mandatory or optional requirements here, but I think at least the classification is valid and reasonable, and it is a good start for having the discussions on the mailing list about which exact item should be mandatory or optional. I think all of those detailed discussions could start based on this classification.
I
Another comment, about "support BIER architecture": my understanding is that the existing BIER architecture is mainly based on layer 2.5, as the doc mentioned, and now, does the layer-3-based approach consider extending the architecture, or...
A
It's a good call, yeah. That'll be presented; it's on the agenda here at the last slot, so we'll get a chance to dive in.
F
Okay, so again, you know, our goal is to help steer the working group. If it's helped, great; we as authors feel maybe you should start considering some of these solutions as something that you may want to adopt. If it's not, then, you know, what's lacking? There have been a few comments made about the ordering of the mandatory and optional; we'll address those. If there are any others, then please let us know.
A
There's a question on the list about fragmentation, and the concern that it won't operate as the draft actually explains: the fact that once you've chopped up the BIER packet itself, it's not BIER anymore. It's fragmented v6 until the end of that segment, that fragmentation path.
A
So, you know, I think that needs to be addressed as well. I just read it this morning; I think Sandy sent it out on the list. Yeah.
F
So we do have a section on fragmentation, so I'll take a look and see what Sandy's saying, and we'll address it.
Q
I just wanted to note: I think that OAM is always a must.
H
All right, so let me sum that up; please make sure you take minutes. I'm happy about the mandatory and optional split, and as to the view graphs:
H
I like the differentiation you made between the 2.5 and the 3. And in the 2.5 case, this tunnel, if you want, is not a P2MP tunnel: BIER in 2.5 is not taking advantage of any kind of underlying multicast capabilities, because otherwise the Ethernet encaps would also use L2 multicast, which, very consciously, we don't. Okay.
H
So please make sure that you kill that in the draft, because it's confusing. Now, the optional stuff. I'm happy that that stuff is making it to the optional list, right, but I think we have a problem here with the charter. The optional stuff is suggesting that we should basically be building an L3 tunneling technology that is multicast capable, all right? Now, what does tunneling mean, right, and why is the 2.5 stuff not tunneling? What is the difference between encapsulation and tunneling?
H
You've fudged into the draft right now that the tunneling, that the charter, has something to do with tunneling. There is no word "tunneling" in the charter whatsoever. Okay, so you're asking to basically build a completely different layer with this optional stuff. It's fine, I mean, if that's what we agree on and we start to work on all that stuff, fine; but just be aware that you are stretching the charter and building basically a completely new L3 tunneling technology, for which we already have a lot of solutions. As to the native model:
H
The OAM is also part of that, right? So now we have BIER OAM; it's 2.5 OAM. If you start to build the v6 OAM into the whole thing, then you're basically generating a new tunneling technology with another OAM, or, whatever, you pick up the OAM of v6. There is no need, in my opinion, and we definitely don't have the charter for it; but if we decide to stretch the charter there, that's a charter discussion, right? On to the native model:
H
One little observation that struck me when I read the stuff, and I observed the SPRING discussions about the security implications when you start to abuse v6 addresses as something else and put all these options on, right? Now, it says that for deployment, for securing the native model, you need to have the BIER packet in a trusted v6 domain, whatever that is, right? Because the v6 list did not even agree on what a trusted domain is.
H
At the same time, you try to push in the requirements that it's supposed to support inter-AS multicast deployment. Now, if you are talking about a closed v6 domain to even propose a security model, and at the same time you want to make this thing support inter-AS deployment, those two things seem to collide head-on for me. And that's pretty much the end of my comments; I hope the notetakers caught some of this stuff.
H
So I think this mandatory and optional split has to be further clarified, and we have to talk about whether we really want to build an L3 tunneling solution, which the optional stuff implies, whereas the mandatory section starts to look better and better. And the security stuff on the native model concerns me to no end. I would like to have a very extensive discussion about the whole thing, unless we kill the inter-AS multicast deployment requirement, which is fine for me as well. Okay, I'm done, thanks.
N
The mic is yours. Thanks, Tony. Was that as an individual contributor, or was that a chair comment? (That was a chair comment; we have to, somewhat, right, control the charter.) Yeah, I'm somewhat confused about...
N
...you know, this classification, with the different types of, you know, ways that you describe tunnel versus encapsulation. So would there be a better way to formulate that in writing, so that it can be discussed? Because...
H
It's very, very simple, in my opinion: just replace "BIER" with "MPLS" wherever you have it, and think about what would happen if you started to build a v6-native solution for MPLS, and think about why MPLS has no fragmentation, no encryption, and so on. You can just replace BIER with MPLS and go through the thinking; and, you know, MPLS is a very well understood and deployed technology, and if you try to build a v6-native solution, just think what would happen if you start to insist on IPsec, fragmentation, and so on.
N
Well, I think it would definitely be good if we take it back to the list, to be able to give it more thought than is possible in this limited time here, because I currently can't get myself to think that the proposal is stretching the charter in the way it does things. So, given that you're making that claim, I think it's important that we get more eyes onto your comments.
K
I think the summary of the two models is clear and easy to understand, and it can classify all the existing BIERv6 solutions, actually, I think. And I also think both of the solutions, or methods, should be accepted by the working group, for example the transport-independent or the native BIERv6. From the charter, or the RFC of the BIER architecture, I cannot see why either should be excluded in this working group, and I think they are all reasonable from my side. So...
A
There's nothing about the BIER forwarding that is v6, and even though you put the bits in an extension header, it's not v6: as soon as you get to that edge where you have to process the BIER bits, v6 is gone; you're doing BIER at that point. And so it's not clear to me how that v6 header gets reconstructed if you're doing non-BFR midpoints. So again, there's probably a solution; we'll talk about it at the end. But ultimately I'm trying to understand what value is being added here.
A
What additional functionality are we getting that we don't currently have with the myriad of tools already available to us across the IETF? That's the challenge I keep trying to put forward: understanding what unique problem you're trying to solve. Because I feel like every time there's a descriptor and we pin it down, trying to understand it, the goalposts move and we go back and chase our tail again.
A
So we need to focus the conversation and finish these things, so we understand them, and not just keep trying to bring it back around. So hopefully some of these conversations on email will clarify things; maybe today's presentation will, you know, jar some brain cells loose for all of us.
H
Yeah, so that's a different way to structure the discussion: what problems are you solving, and what benefit are you bringing, right? I was being steered more by the technology differentiation in the charter. I'll take it upon myself to start this thread on the list. Okay, the mic is yours.
N
Yeah, with respect to the points from Greg: I thought, in the earlier discussion where I was sending replies, I was pointing to, you know, one of the benefits being the way that the framing is handled; and, you know, that very often, mapping into a framing lower than IPv6 encapsulation creates a lot of additional deployment challenges and implementation challenges.
N
I mean, it may not be ideal for all the cases, but it's a very good baseline, right? And it also maps into the ability to then use, you know, IPv6 filtering and IPv6 diagnostics as the lowest bottom framing, and so on. So I think some of the questions, Greg, that you were asking were already raised, or answered, in the email threads before. So maybe let's see if...
N
...we can see what we can do in the document that is missing from those explanations.
Q
Please. Okay, so we've been talking about how to practically and usefully apply BFD to a BIER network, and now we have two RFCs. RFC 8562 is:
Q
Demand mode with a head end, which logically could be the BFIR, that allows tails, or BFERs, to detect the failure of the multicast distribution tree.
Q
So active tails defines three modes. In the first mode, in addition to periodically sending BFD control packets directly, the head end, or BFIR (I'll use these terms interchangeably), sends Poll messages. Poll messages are the same BFD control messages, with the Poll bit set, and the BFD base specification, RFC 5880, defines the poll sequence for negotiation between...
Q
...entities, between BFD systems, of certain changes, configuration changes, or it could be an update on a status change. So the head end sends the Poll message and expects the remote system to respond, either with a Poll, if it doesn't agree, or with the Final message. And RFC 8563, active tails, defines two modes: multicast polling and composite polling. Composite polling is a combination of Poll messages being sent over the multicast distribution tree and then sent individually, as unicast Poll messages, to each particular tail, each BFER.
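The poll sequence Greg describes can be sketched roughly as follows; this is a simplified illustration of the Poll/Final handshake, not the RFC 5880 wire format, and the names are placeholders:

```python
# Rough sketch of one BFD poll exchange between a head end and a tail.
# Real BFD control packets carry many more fields (RFC 5880); only the
# Poll (P) and Final (F) bits matter for this illustration.

class BfdMsg:
    def __init__(self, poll=False, final=False):
        self.poll = poll    # P bit: sender requests an explicit response
        self.final = final  # F bit: sender is answering a received Poll

def tail_handle(msg):
    """A tail answers a Poll with the Final bit set; it could instead send
    its own Poll if it disagreed with a proposed change. Non-Poll packets
    need no explicit reply here."""
    if msg.poll:
        return BfdMsg(final=True)
    return None

def head_end_poll():
    """Head end initiates a poll sequence and checks for the Final reply."""
    reply = tail_handle(BfdMsg(poll=True))
    return reply is not None and reply.final
```

The poll sequence terminates as soon as the Final comes back; the head end's detection then hinges on whether that reply ever arrives.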
Q
But the detection time in this case will be the interval between Polls. On to the next slide...
Q
Please. So we can see here how it works.
Q
I reused another diagram, so excuse the labeling. P1 in this case broadcasts the BFD control messages, and 4, 5, and 6 can detect a failure, but P1 would not know. So for P1 to learn about the perception that 4, 5, and 6 have of the multicast distribution tree, it has to send the Poll messages. If it sends them over the same multicast tree, that's multicast polling; and if the multicast tree doesn't work, the timer on the Poll message will expire, and it will send a set number of Poll messages before concluding that something doesn't work with the distribution tree.
Q
If 4, 5, and 6 receive the Poll message, they will respond to the Poll with the Final bit set, individually. So P1 not only can learn of the tails; at the same time it can learn which tail missed responding, so it can assume that there is a failure. And again, the detection time will be defined by the interval between the Poll messages.
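The head-end bookkeeping described here, learning the tails and spotting the one that failed to answer a multicast Poll, amounts to a set difference. A minimal sketch; the tail names are made up for illustration:

```python
# Head end sends a Poll over the multicast tree, collects the individual
# Final replies, and flags any expected tail that stayed silent.

def missing_tails(expected, responded):
    """Return the set of tails that did not answer the last poll round."""
    return set(expected) - set(responded)
```

For example, if P1 expects tails P4, P5, and P6 but only P4 and P6 answer with Final, P1 can assume a failure somewhere on the branch toward P5.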
Q
So this just illustrates how it works. For example, if we have a failure on the segment between P1 and P2, it will affect 4 and 5, so multicast polling would not reach them. If we use composite polling, which is a combination of multicast polling and unicast polling, and the unicast Poll is arranged not to use the multicast distribution tree in the underlay, then the detection will be faster.
A question here: you said the tree, the underlay... I mean, the terminology is getting mixed up for me, because I don't think of BIER as being a tree at all. We've got a replicating fabric, and I define an ad hoc tree, forwarding based on the header of the packet itself. So ultimately that underlay is taking the shortest path based on unicast forwarding, or unicast routing, effectively, right, and it builds that. So if I'm...
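Tony's point, that the "tree" is built ad hoc from the packet header itself, is easiest to see in the RFC 8279 forwarding procedure: each BFR walks the bit string, replicates toward the neighbor serving each set bit, and masks off the bits each copy will cover. A simplified sketch; the Bit Index Forwarding Table (BIFT) below is invented for illustration:

```python
# Simplified BIER forwarding per RFC 8279: no per-flow tree state, just a
# bit string in the packet plus each hop's Bit Index Forwarding Table.

def bier_forward(bitstring, bift):
    """bift maps bit position -> (neighbor, forwarding bit mask F-BM).
    Returns the list of (neighbor, copy_bitstring) replications."""
    copies = []
    remaining = bitstring
    while remaining:
        bit = remaining & -remaining              # lowest set bit
        pos = bit.bit_length() - 1
        neighbor, fbm = bift[pos]
        copies.append((neighbor, remaining & fbm))  # copy covers F-BM bits
        remaining &= ~fbm                         # those BFERs are handled
    return copies
```

With a toy BIFT where bits 0 and 1 are reached via neighbor B (F-BM 0b011) and bit 2 via neighbor C (F-BM 0b100), a packet with bit string 0b101 yields one copy to B carrying 0b001 and one copy to C carrying 0b100.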
Q
There is a challenge, yes, I agree. Again, for example, for composite polling to be useful...
Q
...there needs to be certain support to make the unicast Poll get sent on a path disjoint from the multicast tree that we're monitoring. Oh yeah, well, it might, yeah. I agree. That's...
Q
I understand, yeah, I agree. Again, you know... well, the first requirement of OAM, especially active OAM, is that our probes, our test messages, are in-band with the data flow; but at the same time, especially here in this example, we want to have guaranteed disjoint paths.
Q
Okay, yeah, but again, my understanding is that the network we are sending the BIER over might not necessarily be best-route; it could be, for example, path-engineered.
H
Yeah, right. So I agree with Greg on this one. I recommend you flesh that one out, and think also through, for example, an architecture where there is no IGP at all, but all the BIER tables are controller-programmed. So I think you may need an option where the unicast Poll goes either in-band or out-of-band. That's it from my side; Jeffrey's on the mic.
I
All these Polls and answers, all those mechanisms: do they not apply to, say, an RSVP-TE P2MP tunnel the same way? It seems to me that the BIER here is really just a means for you to get those messages through; it does not need BIER to be involved at all. And also, what do you do after you detect the failures, things like that? That does not seem to be BIER-specific either.
Q
Okay, I agree: it's not BIER-specific; it's specific to a multicast distribution tree. But let's proceed, because I think up to this point we've already presented and discussed this. So now we're getting to another mode, which is just barely mentioned in RFC 8563, and that's head notification without polling: unsolicited, event-triggered.
Q
Okay, thank you. Okay. So, as noted, there are three modes mentioned in active tails BFD for multipoint networks, and the third one is not technically detailed and discussed, but I think it's the most interesting one for BIER, because it doesn't use the Poll messages.
Q
So instead, the BFER, which is a tail in the distribution tree, sends its own Poll message to the head end when it detects the failure of the multicast distribution tree, informing the BFIR of the failure. That significantly improves the defect detection time.
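The unsolicited, event-triggered mode can be sketched as a tail-side detection counter: the tail declares the tree down after missing a detection-multiplier's worth of consecutive control packets from the head end, then sends its own notification upstream instead of waiting to be polled. A rough sketch with assumed names:

```python
# Tail-side sketch of unsolicited head notification: regular multipoint
# BFD detection at the tail, then an immediate notification to the head
# end rather than waiting for a Poll from it.

class Tail:
    def __init__(self, detect_mult=3):
        self.detect_mult = detect_mult  # consecutive misses before declaring failure
        self.missed = 0
        self.notified = False

    def interval_elapsed(self, packet_arrived):
        """Called once per expected control-packet interval; returns True
        once the tail has notified the head end of a failure."""
        if packet_arrived:
            self.missed = 0
        else:
            self.missed += 1
            if self.missed >= self.detect_mult and not self.notified:
                self.notified = True   # here the tail would send its Poll to the BFIR
        return self.notified
```

Detection latency is now bounded by the tail's receive interval times the multiplier, not by the head end's polling interval.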
Q
So what happens then is, again, up to the protection scheme that is used in this particular BIER domain.
Q
It could be that the BFIR changes something in how it...
Q
Right, so this diagram just indicates how it works. When a failure on the network, on the distribution tree, occurs, there is no need for P1 to poll the tails.
Q
P4 and P5 will detect the failure, and they detect it in terms of regular BFD, so many consecutive missed control packets, and will then generate unicast Poll messages addressed to P1; and everything works, because we can map...
Q
What happens after that is outside of BFD: whatever protection mechanism is used, recovery or restoration, for the control plane or otherwise, is not part of BFD. The BFD part is only the failure detection.
Q
So discussion is welcome, and I'd appreciate a working group adoption call; looking forward to your comments and questions.
H
Is that it? You just started a hum; so what are we humming? Look at that.
A
All right: who's read the draft?
A
Adoption... all right. We need activity on the list, then, Greg, to move this. But people haven't even read this thing yet, and I think there are some outstanding questions anyway. It's work we should definitely be monitoring within the group; we'll see if we can make progress, and we'll call for an adoption later if it gets some traction. I think this work progresses regardless. So, anyone else want to chime in? Listening for hums.
A
I think... I mean, I'm not going to suggest throwing out the complexity of waiting for a second path to return; I mean, that's fine if someone wants to deploy something at that scale. Yeah, I think it'd be great to describe the use case in which there isn't a unique path, where this is just a replicating fabric: I mean, how do I do that detection without having to build a parallel network?
I
So, Tony, you mentioned earlier that these are sent as BIER OAM packets. I still don't know why they need to be BIER packets. It seems to me that, for whatever you need to do here for your purpose, you could just send them as regular BFD packets, like, however you do it with Ethernet or with MPLS...
I
see
that
the
some
of
the
the
the
the
messages
are
addressed
to
the
the
the
dfir
ip
addresses.
Q
No, no, no, no; all right, read the draft, yeah. They don't have to be addressed that way: for multipoint networks, it uses not the asynchronous mode but the demand mode, and the demand mode is explicitly unidirectional.
Q
So you don't have to address it to a particular IP address.
D
Hi, I hope you can hear me now. (I can; great.) Okay, so I'm just going to introduce myself. I'm Daniel, a PhD student at the Chair of Communication Networks in Tübingen, and Professor Menth requested this slot so that I can present some research insights on BIER and P4. There is a video version of this presentation with more details; if you're interested in it, I can send you the link. In the video I prepared, yeah, a quick introduction to IP multicast and BIER.
D
I don't think that this is necessary right now, so I think we can skip the first few slides. Yeah, exactly, yeah, skip. Okay, which one? Skip two, three; the next one should be interesting, I think. Yeah, right here, exactly. Okay, yeah.
D
This is the reference architecture; others have different properties, but now I'm going to talk about the most important things. So the... oh no, that's fine, the slide was fine. Oh sorry, no problem. The pipeline is separated into an ingress and an egress pipeline, and both consist of parsers, deparsers, and a match-action pipeline.
D
The parsers are programmable, so you can deploy custom headers with P4, which is great for implementing novel protocols; and the programmable match-action pipeline is just some kind of control flow, with if-statements and different match-action tables, to execute packet-specific actions, like finding the next hop and so on. Another important part is packet recirculation, which is actually not P4-specific, but it is important for our implementation, so I'm going to explain it in detail. So now, next slide, please. Yeah, thank you.
D
If you want to recirculate a packet in P4, you have to send it to a specific switch-internal recirculation port. After the packet has been processed by the ingress and egress pipelines, the packet is then not sent through a physical port but to this switch-internal port, which then processes the packet as if it had been received regularly, so from the beginning of the pipeline.
D
The problem is that this port has only limited capacity, and if too many packets are recirculated, or too often, this port is overloaded and the throughput may decrease. You can increase this, we call it recirculation capacity, with additional physical ports in loopback mode. So when a packet should be recirculated, you send it either to the switch-internal recirculation port or to one of the loopback ports.
D
We implemented some sort of round-robin approach to distribute the load over these ports and use this in our BIER implementation. Okay, next slide, please. Yeah. What we basically did was to implement the BIER header in P4. The basic idea is that from the BIER header of a packet, and some paths, maybe from the IGP or a controller, the switch determines all of the next hops, creates the right number of clones, and then forwards the packet and the clones to all of the next hops.
D
First, when a BIER packet is received by the switch, it processes the packet in the ingress pipeline, which basically finds the first next hop and calls a clone operation. However, the cloning is performed only after the ingress pipeline. So after the ingress pipeline, one packet instance is forwarded to the determined next hop, and the clone is then processed again in the egress pipeline, with some changes to the header and so on, and then sent to a recirculation port so that it can be processed again. Next slide, please.
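The clone-and-recirculate loop Daniel describes can be sketched in a few lines. The following is an illustrative Python simulation, not the actual P4 program: the BIFT contents, the helper names and the example topology are assumptions made for illustration.

```python
def lowest_set_bit(bs: int) -> int:
    """Index (0-based) of the lowest set bit in the bit string."""
    return (bs & -bs).bit_length() - 1

def bier_forward(bitstring: int, bift: dict) -> list:
    """Simulate iterative BIER replication via recirculation.

    bift maps a bit position to (neighbor, forwarding bit mask).
    Each loop iteration models one pass through the pipeline:
    one copy is sent out to a next hop, the remainder is the
    'recirculated' clone that goes through the pipeline again.
    """
    sent = []
    while bitstring:
        pos = lowest_set_bit(bitstring)
        neighbor, fbm = bift[pos]
        # Copy sent out: keep only the bits reachable via this neighbor.
        sent.append((neighbor, bitstring & fbm))
        # 'Recirculated' copy: clear the bits already covered.
        bitstring &= ~fbm
    return sent

# Hypothetical BIFT: bits 0 and 1 reachable via neighbor A, bit 2 via B.
bift = {0: ("A", 0b011), 1: ("A", 0b011), 2: ("B", 0b100)}
print(bier_forward(0b111, bift))  # [('A', 3), ('B', 4)]
```

Note that this mirrors the standard BIER forwarding rule: the copy carries `BitString & F-BM`, and the remaining copy carries `BitString & ~F-BM`, so no bit is ever served twice.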
A
Okay, so you're finding a first set bit and then walking the header from there; each replication is another set bit, yeah. How are you clearing those bits if they're all parallelized at this point? Right, you have to know which ones have been forwarded and cleared. I guess you're only clearing the bits on that packet, so in this blue path, the bit clearing has taken place in this BIER process here.
D
Yeah, so this is really P4-specific. It has some special properties about cloning and metadata, and if you want to know the real technical details, which are maybe a lot, Stefan can tell you about it, but.
D
We store the F-BM from the BIFT in a metadata field, and then in the egress... so in the ingress, basically, we change the packet header to the bit string that is sent to the first next hop, and in a metadata field, which is some kind of variable, we store the information about the remaining bits and set it back into the header of the packet clone in the egress pipeline. Okay, okay.
A
So this is a combination of P4 and, of course, the architecture that you're putting this onto, which doesn't have any replication mechanisms in the fabric; it's just a research pipe. Yeah, right, okay. I'm just wondering, I mean, I'm just trying to get my head around what's possible here. I mean, assuming we had some more robust hardware that had replication in the fabric, this pipeline wouldn't be a limitation of P4; those are just architectural limitations, yeah.
D
Yeah, so actually there are some limitations of P4 that require some workarounds.
D
H
Yeah, the whole thing also depends on how many packet descriptors your silicon can carry, and from where on you have to recirculate; it's a little bit silicon-specific. So assuming something like every copy is a recirculation is the simplest model, yeah. Toerless was on the mic; Daniel was on the mic.
N
In sequence, please. Toerless? Yeah, no, and I'm thinking that maybe we get through the slides first before, you know, getting to these things about the hardware, because I think there would be more to be said about it. Daniel.
D
D
As I said, well, this is basically what we just talked about. This is an example where a BIER packet should be sent to three neighbors, three next hops. When it arrives at the switch, one packet instance is forwarded to the first neighbor and a packet copy is recirculated; in the second pipeline iteration it's the same, one instance is forwarded to the next hop and one instance is recirculated, and in the last iteration the last next hop receives the packet and processing stops. Here, two things are important.
D
A
D
There is a way to do it in parallel; however, then no dynamic configuration is possible, and you would again have some state dependent on the subscribers. I mean, there is a lot to discuss about it; we had some different approaches to implement it, with different advantages and disadvantages, but this is the simplest one. Okay, thanks. Yeah, as I said before, this packet recirculation may have a negative impact on the throughput, so we thought about measuring it, and we built a little setup.
D
Yeah, with the Tofino as the centerpiece. The Tofino is a P4-programmable high-performance switch ASIC. It is used in the Edgecore Wedge 100BF-32X, which is then in fact a P4-programmable 100G switch with 32 100G ports, and we deployed our program on it.
D
We have a traffic generator with up to 100 gigabit per second that acts as an IP multicast source. It sends the traffic to the Tofino, which encapsulates the packets and then sends them to a number of next hops, which are bmv2 P4 software switches.
D
But since we want to measure the throughput in hardware, we send only n minus 1 packets to the bmv2s, and the last next hop is always the traffic generator itself, so that we can measure the throughput at the traffic generator in the end. Yeah, the Tofino is connected to a controller which configures the whole setup, and if you go to the next slide, I can show you what the encap was.
A
Sorry, did you actually implement the BIER Ethertype for the encap?
D
Not quite; we did the encap, but it was more like: if a multicast packet arrived, we put a BIER header on it.
D
Right, so just prototyping for the setup. But doing it would not be that hard, actually.
D
Well, yeah, but I mean, in P4 you can match on anything, you can build your own headers. Or what do you mean exactly?
A
C
Oh, I think it's... do you hear me? Yeah, I hear it now. Yep, yeah. Did you also measure the latency incurred with every recirculation? Did you also measure the latency addition?
D
These are details Stefan has to talk about. I think we did it, but yeah, he is in the queue, so he can tell you about it.
S
Okay, so basically, it roughly depends on the complexity of the data plane, but you can say one pipeline iteration requires about 300 to 400 nanoseconds, and that's the added latency that you get for each recirculation. Wow.
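Taking Stefan's 300 to 400 ns per pipeline pass together with the one-copy-per-iteration model described earlier, the added latency of the last copy grows linearly with the number of next hops. This is my own back-of-the-envelope reading, and the 350 ns midpoint is an assumption:

```python
def added_latency_ns(next_hops: int, per_iteration_ns: float = 350.0) -> float:
    """Latency added to the last replicated copy: one extra pipeline
    pass per recirculation, and n next hops need n - 1 recirculations."""
    return max(0, next_hops - 1) * per_iteration_ns

print(added_latency_ns(4))  # 1050.0 ns added for the fourth copy
```

So even for a few dozen next hops the added latency stays in the low microseconds, which matches the "wow, that's fast" reaction on the mic.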
S
H
D
Yeah, so basically these are some measurements we did. The traffic generator sends at 100 gigabit per second, and on the x-axis we have different numbers of recirculation ports. In the first one we have only the switch-internal one, and we see that this is enough to forward the traffic to two next hops at line rate, and if the number of next hops increases.
D
There is one switch-internal recirculation port, and if you need more to prevent packet loss, you need physical ports in loopback mode. When we talk about recirculation ports, basically we mean that we always use the switch-internal one, and if this is not sufficient, we deploy additional physical ports in loopback mode. So in this figure, in the left column, where the number of recirculation ports is one, we use only the virtual switch-internal one, and in the column in the middle.
H
Yes, okay, yeah. So that was my observation about the architecture: basically, how many packet descriptors with the metadata can you carry through your silicon, right? There are, of course, other chips that make different trade-offs, and the more BIER-centric the chip is, the less it will be doing recirculation and the more replication of just packet descriptors it would allow. But, you know, nothing's for free, especially when you bake sand.
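The measurement just discussed can be modeled with a simple capacity calculation, under the every-copy-is-a-recirculation assumption from earlier in the discussion. This is my reading of the setup, not a formula from the talk: with n next hops, n - 1 copies of the full ingress rate pass through recirculation ports, so the single switch-internal port covers two next hops at line rate, and each further next hop needs roughly one more loopback port of the same speed.

```python
import math

def loopback_ports_needed(next_hops: int, ingress_gbps: float = 100.0,
                          port_gbps: float = 100.0) -> int:
    """Extra physical loopback ports (beyond the one internal
    recirculation port) to sustain line rate, assuming each of the
    n - 1 recirculated copies carries the full ingress rate."""
    recirc_load = max(0, next_hops - 1) * ingress_gbps
    ports_total = math.ceil(recirc_load / port_gbps)
    return max(0, ports_total - 1)  # the internal port covers one share

print(loopback_ports_needed(2))  # 0: the internal port suffices
print(loopback_ports_needed(4))  # 2 extra loopback ports
```

This also makes the trade-off visible that Toerless raises: every loopback port spent on recirculation capacity is a front-panel port lost to actual traffic.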
D
Yes, so it is an actual problem of the features provided by P4, sorry. We had an approach where we could implement BIER... we think that we could implement it at line rate for any number of neighbors and whatever, but it would require exponential state, depending on the number of next hops, which is, well, not that good.
H
Yes, it's a question of which features.
A
D
H
Yeah, I think this is a very valuable discussion, since, you know, BIER is moving more and more to deployment and implementation. That will give, you know, silicon designers and people deploying and judging different solutions kind of a handle: you know what you have to pay for what kind of BIER quality. So I.
J
H
G
Can you hear me? Yes, yes. Okay, I have a question about the bit string size, because every ASIC has a restriction on how large the headers it can process may be, and the bit string could be huge; the bit string could be half a kilobit, for example, easily, because there could be 500 routers in the network. For that reason, did you have any test, for the Tofino, of how long the header can be, how long a bit string is supported by this particular ASIC?
D
I think in the prototype, so this would also be a question for Stefan; in the prototype I think we used 128 bits for the bit string in the BIER header, with the additional fields and so on, yeah. I think this has to be answered by Stefan, and we have to consider that we signed an NDA with Barefoot.
D
So I'm not sure about what details we can talk about, but from what I got... so Barefoot is the manufacturer of the Tofino, and from what I got, they are trying to remove the NDA so we can talk about the more interesting stuff.
H
Okay, thanks. And Toerless, mic is yours.
N
Right, so I think we're at the end of the main section; afterwards comes the BIER FRR part. So I think one of the interesting follow-up points would be to recognize that there is this basic challenge: I think the current hardware is only able to have pre-ordained replication masks to actually do native replication in the fabric, as opposed to what BIER would require, which is to calculate the replication mask, or else lose throughput through the recirculation.
N
So maybe one thing would be to figure out a proposed P4 extension abstraction, go to the P4 forum and say: here is basically, you know, a programmable replication mask abstraction, so that future fabrics could implement something so that we could really calculate the replication bit mask and not have to use recirculation. Right, so that was my thinking on the topic.
N
Load the silicon, yeah. No, I was actually just thinking about it: you have the P4 program, and instead of doing, on every recirculation, a calculation of here's.
N
D
B
S
D
All right, yeah. So in this prototype, in addition to regular BIER forwarding, we wanted to do some resilience stuff. The BIER architecture mentions some kind of fast reroute based on loop-free alternates; in previous research we have found that loop-free alternates have, well, some downsides.
D
They cannot protect against all single link failures, and they may cause some micro-loops, so we thought about a very simple protection scheme for our implementation, which we call tunnel-based BIER fast reroute. If you go to the next slide (sorry that the slides are split like this), yeah. Well, the basic idea for us was that we tunnel the packet through the routing underlay, because we thought maybe some kind of unicast mechanism is deployed, or sorry.
D
The forwarding entries of the routing underlay are computed faster, so the packet is tunneled around the failure. But more interesting, I guess, if you go to the next slide, is how fast we can restore connectivity in the event of a failure, switching from a primary path to a backup path. We built a similar hardware setup as before, where the Tofino sends the packets on a primary path; then we switch this path off and measure at the destination how much time it takes until the packets arrive again.
D
H
Q
Okay, Greg, mic is yours. Yes, so it's very interesting, thank you. What triggers the switchover?
D
Q
Okay, so basically you don't have an OAM protocol detecting the failure; you trigger something internally, just to measure.
D
F
D
We use, to detect it, yeah, port up/down events, you can say, and then we switch to the backup.
Q
Okay, so effectively you look only at your local port; you don't look at a BIER-domain failure.
D
No, it's local fast reroute. So I mean, you could do it with some kind of liveness detection to remote points in a domain, where maybe some non-BIER devices are in between, and the mechanism on the data plane would basically be the same. I see. Okay, thank you. Thank you.
D
Well, what we see on the left is the reconfiguration if no fast reroute mechanisms are deployed: then we require around 166 milliseconds. Remember at this point that the controller is directly connected to the Tofino; if this were not the case, the restoration time would have been significantly higher. Basically, on the right side we see the restoration time for the fast reroute mechanism, which is about half a millisecond, so the switchover is pretty fast.
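The tunnel-based protection logic described above can be sketched as a small state machine. This is a hedged reading of the talk, not the actual P4 tables: the entry layout, the class and method names, and the example addresses are all assumptions. The key point it illustrates is that a local port-down event flips the entry from the native BIER next hop to a unicast tunnel through the routing underlay, with no controller round-trip.

```python
class BierFrr:
    """Per-next-hop protection state: forward natively on the primary
    egress port, or tunnel through the routing underlay after a local
    port-down event."""
    def __init__(self):
        # next hop -> {primary port, underlay tunnel endpoint, port state}
        self.entries = {}

    def add(self, nh, port, tunnel_dst):
        self.entries[nh] = {"port": port, "tunnel": tunnel_dst, "up": True}

    def port_down(self, port):
        # Local detection: mark every entry using this port as failed.
        for e in self.entries.values():
            if e["port"] == port:
                e["up"] = False

    def egress(self, nh):
        e = self.entries[nh]
        return ("native", e["port"]) if e["up"] else ("tunnel", e["tunnel"])

frr = BierFrr()
frr.add("B", port=5, tunnel_dst="10.0.0.2")
print(frr.egress("B"))   # ('native', 5)
frr.port_down(5)
print(frr.egress("B"))   # ('tunnel', '10.0.0.2')
```

Because the switchover is a pure data-plane/local-control decision, it lands in the sub-millisecond range measured above, versus the ~166 ms controller-driven reconfiguration.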
D
A
D
D
E
A
Back to the replication issue. I like Toerless's idea, and I was kind of thinking along the same lines: what feedback can we make to the P4 forum to understand, you know, what the gaps are? But what's happening in just IP multicast? I mean, this is again, at the core, a replication problem in the hardware that takes up those ports.
A
Are there specific P4 replication semantics that are used for multicast that we're not using for BIER, or is it something we can use? But with Toerless's idea, if BIER can facilitate that replication better in the hardware, because we can pre-compute, that would be, you know, a fascinating point of progress as well.
S
Yes, I can answer the question directly. So there is a native multicast feature, or let's say in the P4 language there's multicast defined, and the Tofino has a native multicast feature, but you have to define multicast groups a priori, or let's say you have to define them via the control plane.
S
So if our distribution tree changes, we would need to change the multicast group definition for the BIER header, and of course we would have, first, dynamic state, and exponential state if we would do it without the dynamics, and that's just not feasible for this solution.
N
So I think this is fairly common to all ASIC-based forwarding, right? I think P4 simply adopted what they've seen in, you know, enterprise and data center switches. So you basically get a lookup table of however many entries you can have, a hundred, a thousand, ten thousand; each entry has the replication set, and then basically your whole forwarding table is, you know: look up (S,G) or (*,G), replicate to entry number one, number 100, or so. So that's basically the state that you need to establish.
N
In our case, you know, it would be 2 to the power of n possible output interfaces; that's the exponential stuff. So this is the old hardware for IP multicast, which is not ideal for BIER. And that's basically the discussion to have with P4: how, at least in the language abstraction, we could overcome that, have a native BIER semantic, and then see what switch vendors can make out of that.
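The exponential-state point can be made concrete with a little arithmetic. With pre-provisioned multicast groups, covering every possible BIER replication set over n output interfaces means one group per non-empty subset. The interface counts below are arbitrary examples, not numbers from the session:

```python
def static_group_entries(interfaces: int) -> int:
    """Pre-provisioned multicast groups needed to cover every possible
    replication set over n output interfaces: one per non-empty subset."""
    return 2 ** interfaces - 1

print(static_group_entries(8))   # 255: still manageable
print(static_group_entries(32))  # 4294967295: clearly infeasible
```

This is why the discussion converges on a programmable, per-packet replication mask abstraction rather than control-plane-installed group tables.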
D
All right, yeah. So you're addressing a very important issue with that. I mean, in P4 there has been a lot of research in the past years; in the last year a lot of work has been published, so we hope that the vendors, or the manufacturers, are listening to the community, or to the researchers, or to the industry, whatever, about missing features, and I mean, it would be great if something like that could be implemented.
H
Yeah, great proposal. I think it's an excellent idea to bring it to the P4 forum and push it there. Looking forward to more presentations.
H
A
So you said you had a demo video too. I don't think we're going to have time, but let's see, maybe we can throw it in at the end; otherwise put it on the list and we can all watch it there.
D
Yeah, so this is not a particular demo; it's a video of this presentation with a little bit more detail, but yeah, I will post the link on your list so that you can watch it again if you want to. And actually, a paper about this BIER implementation is currently under review for a TNSM special issue, and maybe this is also interesting for someone, because there are more results and more details about the implementation.
A
Excellent, thanks for this; this is great, yeah. Thanks. Questions? We're getting a little wedged; we've got three presentations left, so I think we should move along, yeah. Thank you, thank you very much; this is great.
A
Next up, we have Xiao Min.
H
Yeah, and your slides also went away.
J
N
H
T
Okay, hello everyone, it's Xiao Min here. This presentation is on the exploration of encapsulation for IOAM data; this is the 00 version of the draft. Next slide, please. This draft provides an overview of the encapsulation for IOAM. Two options of encapsulation are described and compared. Option 1 is BIER header plus BIER OAM header plus IOAM header plus payload; option 2 is BIER header plus IOAM header plus payload.
T
Next slide, please. This draft outlines the requirements on IOAM over BIER. Some multicast flows are sensitive to loss, delay and other factors, such as live video and real-time meetings, and the operator wants to know real-time statistics for these flows. In-situ OAM is then an applicable OAM tool to fulfill the requirements; it provides a way to achieve on-path telemetry information collection. Next slide, please.
T
T
It contains the IOAM type, IOAM header length, a reserved field, and IOAM data as defined in the IOAM data working group draft; then below the IOAM header is the payload of the data packet. Option 1 requests to assign a new BIER OAM message type for IOAM, but doesn't request to assign a new protocol type, because the OAM message length can be used to decide the boundary between the IOAM header and the payload. Next slide, please.
T
Here is the encapsulation format of option 2. The outermost header is the BIER header defined in RFC 8296; then immediately following the BIER header is the IOAM header. It contains the IOAM type, IOAM header length, a reserved field, a next protocol field, and IOAM data as defined in the IOAM data draft. Within them, the next protocol field follows the definition of the BIER Proto field, including a new value, TBD, indicating that the next header is an IOAM header; then below the IOAM header is the payload of the data packet. Option 2 requests to assign a new BIER protocol type for IOAM. Next slide, please.
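To illustrate the layering of option 2, here is a simplified byte-level sketch. It is an assumption-heavy illustration: the IOAM shim field sizes are placeholders, and the Proto code point for IOAM is still TBD in the 00 draft, so a dummy constant is used.

```python
import struct

BIER_PROTO_IOAM = 0x3F  # placeholder only: the real code point is TBD

def ioam_shim(ioam_type: int, ioam_hdr_len: int, next_proto: int,
              ioam_data: bytes) -> bytes:
    """Illustrative IOAM header for option 2: type, header length,
    reserved, next protocol, then IOAM data (field sizes assumed)."""
    return struct.pack("!BBBB", ioam_type, ioam_hdr_len, 0, next_proto) + ioam_data

def option2_packet(bier_header: bytes, ioam_data: bytes, payload: bytes) -> bytes:
    """Option 2 layering: BIER header | IOAM header | payload.
    The BIER header's Proto field would carry the new IOAM value."""
    shim = ioam_shim(ioam_type=0, ioam_hdr_len=1 + len(ioam_data) // 4,
                     next_proto=4,  # e.g. 4 = IPv4 payload, per RFC 8296
                     ioam_data=ioam_data)
    return bier_header + shim + payload

pkt = option2_packet(b"\x00" * 12, b"\xaa" * 4, b"payload")
print(len(pkt))  # 12 + 8 + 7 = 27
```

The sketch also shows why option 2 is self-delimiting: the shim carries its own next-protocol field, so no extra OAM message wrapper is needed.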
T
We think option 1 and option 2 are both feasible for IOAM over BIER encapsulation, and option 2 is selected in this document. The first reason is that option 2 is more concise than option 1: option 2 doesn't need to encapsulate a BIER OAM header, but option 1 does. The second reason is that option 2 is a relatively common method for IOAM over foo, such as IOAM over GRE and IOAM over NSH.
T
H
No one on the mic, so then two questions from my side; well, one observation. So I think we have six bits for the BIER protocol, so just as a participant I see no problem to add, you know, a different protocol. My only question would be: we have the two bits, right, the marking bits. Will you also assign different semantics to those bits based on the protocol, or do the bits remain the same in terms of semantics? No new semantics.
H
It remains, right. So I mean, it's an optimization, but I see nothing wrong with it. Thanks. Thank you.
A
So let's get that thread on the list; let's see if we get some traction there. I think, you know, even for my own better understanding of IOAM, I'm trying to get my head around what we're adding here, but it makes a lot of sense. Anyone else, comments, questions? All right, thank you very much.
A
O
Okay, okay. But yes, it seems like there is not enough time to present it, but I will try. Okay, let me start. Hello everyone, this is the BIER source protection draft, co-authored with colleagues from China Mobile. Next, please.
O
We know that BIER can use the IGP FRR function to achieve fast convergence. That is, if the path between the BFIR and the BFER fails, the BFIR can still send packets to the BFER as long as there is another working path. This document is talking about the case where there is no other path, so the failing BFIR cannot send the packet to the BFER.
O
Let's see the background of this draft. As we all know, a multicast source connects to two routers to avoid a single node failure. In this figure, there are two ingress routers connected to a source in the BIER domain; the two ingress routers are called BFIRs. This document tries to discuss how to reduce the packet loss during the BFIR switchover.
O
J
O
The full standby mode detail has not been finished in this draft, but today let's discuss it as we develop the overall draft. Three standby modes have been introduced: cold, warm, and hot standby. For cold standby mode, the BFER selects a BFIR, for example BFIR1, as the UMH and signals to it to get the multicast flow. Once the BFER finds that BFIR1 is down, the BFER signals to BFIR2 to get the multicast flow. For warm standby mode.
O
The BFER signals to both BFIR1 and BFIR2. In case BFIR1 is the selected BFIR, BFIR1 forwards the flow to the BFER; the backup BFIR, for example BFIR2, must not forward the flow to the BFER until the selected BFIR is down. For hot standby mode, the BFER signals to both BFIRs, and both BFIRs are sending the flow to the BFER.
O
A
Just a question: how deep do they go to detect the duplication?
H
Yeah, so I mean, the problem is very real and this is useful, but mostly people who run this kind of live-live, they want a completely disjoint topology. So how do we make sure that there isn't a BFR, something in the middle, which is in both paths? At that point it basically doesn't matter which BFIR you select; if this thing fails, you lose both.
E
A
Right, I mean, is it something we need to put in there? So I was thinking, you know, clearly the question I had was about the detection. If we're doing it at the BIER layer, that's quick, right? Or do we have to decap the packet and go into the data somewhere, look at, you know, a sequence number or something like that? That's clean. So then each BFER in this live-live scenario detects its primary.
A
I mean, then all those live-live mechanisms will be there, like the hysteresis and a queue. All right, thanks.
E
O
Yeah, okay. So this is a comparison among the three standby functions. For the BFIR in cold standby mode, it just does regular operation, forwarding the flow according to the request from the BFER. In warm standby mode, the BFIR should take the role of selected BFIR, or backup BFIR, or neither of them; the backup BFIR must not forward the flow to the BFER. When the selected BFIR fails, or the path between the selected BFIR and the BFER fails, the backup BFIR must start forwarding the flow to the BFER. In hot standby mode.
O
The BFIR need not know the roles of selected or backup BFIR; the BFIRs just forward the flow according to the requests received from the BFER. For the BFER in cold standby mode, the BFER must select a BFIR as the selected BFIR and signal to it; when the selected BFIR fails, or the path between the selected BFIR and the BFER fails, the BFER must signal to the backup BFIR to get the flow. In warm standby mode, the BFER does not select between the selected BFIR and the backup BFIR.
O
The BFER just signals to both of them. In hot standby mode, the BFER signals to both of the BFIRs to get the flow, and the BFER must discard the duplicate flow from the backup BFIR. When the selected BFIR fails, or the path between the selected BFIR and the BFER fails, the BFER must switch its forwarding plane to forward the flow that comes from the backup BFIR. For the BFR, the intermediate node: the BFR knows nothing about the role of the BFIRs and just does the BIER packet forwarding. Only in hot standby mode.
O
Now, let's see what will happen when the BFIR switches over. In cold standby mode, the BFER detects the failure of the selected BFIR and then signals to the backup BFIR; there is packet loss during the signaling, until the BFER receives the flow from the backup BFIR. In warm standby mode, the backup BFIR detects the failure of the selected BFIR, and there is packet loss during the BFIR switchover. In hot standby mode, the BFER detects the failure of the selected BFIR because there are two flows arriving at the BFER.
O
Next, failure detection. This includes node failure detection and path failure detection. For node failure, the IGP may be used, but we all know it's not quick, so we can do the detection through a peer liveness mechanism: the backup BFIR can monitor the selected BFIR, and the BFER can also monitor the selected BFIR.
O
A
O
A
We're really tight, really tight, and I hate to cut him off. Okay, all right, so.
H
So those are the mechanics, but I think what is very important: first, what's the definition of a flow, and second, we have to look at the impact on the architecture, because it looks like we start to distribute per-flow state over the whole BIER fabric, which is probably not a particularly good architecture. Okay, from my side, I think those things need to be discussed out.
A
Okay, so let's see who's read this draft.
J
A
I did not read it in detail; it's in my queue to read completely, but I went through and skimmed it. So I would say, you know, live-live is definitely something, like protection with duplicate sources, whatever; live-live, I mean, these are real things with multipoint transports out there today.
A
As well with BIER, it's going to be, I think, needed. Whether it's all fleshed out here or not is not really the point, but this is work I think should be moving forward. Let's do a quick hum fast: who thinks this should be.
J
A
U
So a little bit of history: the native IP solution of BIER, the BIERv6 solution, was proposed, sorry, at IETF 102, and there's this split of the requirements document, which is now in a stable status as a working group document. So today, in this session, we'll focus on updating you on the improvements we have made from the 06 to the 08 version. So, next slide. For the 08 version, we inherit the Next Header value, which is defined and allocated by IANA.