From YouTube: IETF109-BMWG-20201119-0500
Description: BMWG meeting session at IETF 109, 2020/11/19 05:00
https://datatracker.ietf.org/meeting/109/proceedings/
A: Okay, everybody, let's get started. I'm Al Morton, BMWG co-chair for this session. My co-chair, Sarah Banks, hasn't joined us yet, but I hope she will. It's a very early Thursday morning in U.S. Eastern time, zero dark zero, but in our working area of time it's 12 noon on Thursday, so we're going to start the meeting now for IETF 109. If you're not subscribed to the BMWG list and would like to be.
A: So, most important is our Note Well, and if you've been attending sessions so far this week, you've seen this a few times, but it's a special part of the top here that I've started. I want to remind everybody that we work as individuals, and we try to be nice to each other, and 99.9 percent of the time that has always happened in BMWG, in my experience. It's also a reminder of the IETF policies in effect on various topics, such as patents and codes of conduct.
A: This slide here is meant to point you in the right direction if you have IPR associated with your contributions or your participation in this session. A contribution is defined as almost anything active, like a statement at the microphone or typing a message into the chat box.
A: All right, so here's our agenda. As you can see, so far I'm all alone, but what I have done is post the agenda that I've been mailing out into the CodiMD note-taking tool, so note taking, and really just decision capturing, should be very easy tonight. So let me quickly ask for some help from one or two note takers who can do a little note taking for us here. I suppose everybody knows how to unmute their mic: under the message with your name at the top left of your browser there's an unmute button, and that's where you can identify yourself as someone who can really help me out tonight.
C: So I can't hear anything anymore, but I could take notes.

C: Okay, I've got some weird feedback if I unmute or if I move to the mic. Yeah, I can hear now, okay, I'm fine, never mind.
A: All right, good, thanks very much, Bill. Jabber is something we'll all be able to see in the chat panel, so I'll ask your help to monitor Jabber with me. I've already mentioned the IPR, and the notorious blue sheets that we always used to use at the face-to-face meetings are now collected automatically, so we don't have that to worry about. Okay, so, tonight's agenda (and I'll ask for any bashing at the end): we'll do the working group status, then the three working group drafts, on EVPN, next-generation firewall benchmarking, and back-to-back frame benchmarks, and then on to the proposals.
A: Those are all three drafts; I think I meant to delete the draft file names for all of those, but they're in the agenda and they're in the CodiMD notes. And then we've got some testing at the end, associated with the draft that updates the benchmarking methodology, using the open-source tool siitperf. So, any bashes to the agenda?
A: All right, seeing none, I think we'll move ahead. I'd like to make sure everybody knows, let's see, I'm just going down the list of names here: we've got 21 people, and the BMWG area director advisor, Warren Kumari. So when Warren's with us, he will be able to comment on our work as we go. Vladimir, I see you in the queue here. Let's see, what do I do...
A: All right, I can go ahead. Screen, yeah. Vladimir, we haven't been able to hear you yet. It looks like you've got audio available, but we can't hear you speaking, Vladimir. I'm sorry, who was that?
A: I've got to tell you, I'm a little reluctant to close the session down now that we're rolling here. So if individuals want to drop and rejoin, and then test the chat out that way, that's fine. I don't know whether that will do any good, but in any case we're going to forge ahead. So feel free to jump in on the microphone, assuming your microphone works, with an immediate request for the floor, or join the microphone queue.
A: And so then we've got a quick working group status. As we know, the EVPN draft came back to the working group, and we still need editorial review, mostly on chapters three and four, so it would be good to get some volunteerism to help us take care of that. The back-to-back frame draft has gone through area director review; the comments from Warren have been addressed, and a new version of the draft is available, Warren, so you can check that out. The next working group draft that we've got active is the next-gen firewall benchmarking, and we'll take a look at some possible actions on that tonight.

A: And of course there's a good reason to finish up these working group drafts, because the proposals keep coming, and shall we make way for new work? I think we should; most of these proposals are very familiar to us now. So we're a little bit behind on the milestones; in fact, we've got some other ones coming up which are also in pretty good jeopardy here, but we'll deal with the milestones at some future date. Let's just try to get the August 2020 ones out of the way as soon as we can.
A: All right. So then, basically, I've already described the EVPN status in some detail. There was a new draft, 06, put up in August. It corrects a lot of small errors in the later sections, but I think those sections still need a read-through. So would anybody care to volunteer, basically just to do an editorial pass over those two sections and help Sudhin out with the wording?
A: "Happy to do that." Excellent, thanks so much for your volunteerism! Okay, that's what we needed: somebody to take a look at those sections and then send a mail to the list. Mark it up however you can do it most efficiently, and we'll get that moving forward. Thanks very much. Okay.
A: So then we're on to our next topic, which is next-generation firewall benchmarking, and I've got the slides right here; this should fix that, right. And who's going to present the firewall slides tonight?

G: I am, and Bala will help me on some of the more technical questions, if there are any.
G: All right. So the current draft, as posted, I guess about a couple of weeks back, is version five. I sent a message to the list outlining the changes and the updates, but I'll just go over them here.
G: We added next-generation IDS/IPS requirements to the abstract, or rather the fact that it covers that, so that will mean we'll need to change the title anyway, from "next-generation firewall" to "network security devices" or "next-generation network security device"; I'll talk to you over email about that. We changed the scope to add the validation of security effectiveness configurations.
G: That's now in Appendix A. We changed our requirements in 4.2, the configuration set. We added a feature table that directly addresses the IDS/IPS area; that's now Table 2. We added a description of security features to Table 3. And then, since next-generation IPSes and IDSes tend to operate in bump-in-the-wire mode, we added text indicating that ACL configuration is not recommended for these devices. We also added a new subsection, 4.2.1, security effectiveness configuration; this section now describes how to select CVEs. Next slide.
G: On the traffic mix: in 7.1.3.4, test results validation criteria, we removed item (c). I neglected to add more detail on this, and I can send you some updated slides for the archive, but essentially we removed item (c), which covered latency. The reason we did that was that it proved to be a very, very complex test that we didn't feel had adequate validation to remain in there, and we didn't want to leave something in there that we couldn't get behind, as far as having tested it, so we decided to remove it. We also created a contributors section, which is Section 10, and an acknowledgements section, Section 11. And in Appendix A we removed the NetSecOPEN traffic mix descriptors and added a test methodology security effectiveness evaluation criteria section; that is the new Appendix A. Next slide.
G: So, the next steps. Basically, we're going to need to submit a version six, but version six will not have any significant changes in it, unless the working group asks us to make some changes and we're in agreement, of course. Version six will contain grammar and punctuation changes (the following two bullets are mistakes that shouldn't be in there) and additions to the acknowledgements section. And we would like version six to be submitted for working group last call. I'm a little fuzzy as far as what the final steps are: I know the area director needs to review it, and I know the IESG needs to review it, but I'm not too sure how to get there from where we are right now. Okay, Al, can you outline for me what we need to do?
A: Of course, Brian. This is you and your team's first pass through the process, and we've had this draft in the working group for a while, over a year; it's been a working group item, a chartered item, for over a year, and we've had really good interactions and good reviews recently. It sounds to me as though the authors have kind of reached the point of diminishing returns here, and they might benefit from any last comments that we can precipitate by performing the working group last call function now.

A: In the Benchmarking Methodology Working Group, we very often have more than one working group last call, especially if we get a lot of comments during one; we might have a noisy one and then a quiet one. Or, if everybody's satisfied with the draft, and they've read it and they say so during working group last call, then it may just be one. We'll see. So let me just, because we've got a tool for this now, let me ask a question of the group.
G: How do we respond to that?
A: Well, in the place where you've been looking at the participants list, there should be a poll that comes up, and you get to click the first of two buttons: either raise your hand if you've read it (that's a yes), or do not raise your hand if you haven't read it. So far I see six, seven people raising their hands, and one person indicating that they're not going to raise their hand, which is eight people out of 22; the other, I'm not doing the math, the other 14 are not going to play along. But look: seven people have read the draft, and if there are no objections to starting a working group last call, then I think we can go ahead. So I'll ask the question then: are there any objections to starting a working group last call? Oh, and I also see that Tim Winters has joined the queue here, so go ahead, Tim, enable your mic.
H: Hey Al and Brian, it's Tim. Brian, I know you said in the draft that you removed the NetSecOPEN traffic mix. Did you just replace it with regular background traffic? I haven't had a chance to look; I'll look at this as part of the working group last call, but I thought, since I had you here, I could ask.
G: We replaced 7.1 with more extensive guidance on what to do. Essentially, we haven't been able to address the differences between the test tools to within what we consider acceptable limits, and rather than allow that to delay the draft, we stepped away a little bit from the NetSecOPEN-developed traffic mix, because we didn't feel that we could defend it adequately. Now, we're still working on it; we're still hoping. Hope springs eternal, right?
H: Yeah, okay, that makes sense. I think this document's ready for working group last call, so I'll review that as part of the process.
C: Okay, that works. So, I've read halfway through the document, and I do have some grammatical notes. I could wait until you commit version six and review it again, or I could send you what I have and you can use that; I don't know.
G: By all means, send us what you have. If it's redundant, that's perfectly fine as far as I'm concerned.
A: All right, so we'll record a working group last call for next-generation firewall. Brian, one thing I remember you mentioned was, I think, that you wanted to change the file name, yeah?

G: If possible, we'd like to change it to refer to what's in the title here: benchmarking methodology for network security device, or devices.
A: Yeah, usually we don't change the file name, even if the title of the draft changes; it's just easier to keep the history that way. A lot of drafts, in fact, end up having kind of funny names because of that, but this one's pretty close, I think you'll agree. All you have to do is open it up and we'll all see the new title.

G: Okay, that works for me.

A: Sure, I hope it works for you. Yeah, good, thanks. It's not a deal breaker, right?
A: I didn't think so. All right. And I think we'll probably have a working group last call as long as a month on this, because we've got the Thanksgiving holiday coming up in the United States, and basically a lot of people are trying to finish a lot of stuff up at the end of the year. So I think a month on this is probably a reasonable amount of time.
G: Once I get the changes from Bill, I'll roll those into the document, then Bala and I will do a quick review, and Bala will submit it as the XML document. So, depending on how long Bill takes; well, if Bill is able to get me the changes this week, we'll roll out the updated document at the beginning of next week.
A: Very good. Well, thanks to you, to Bala, and basically all our friends in NetSecOPEN for working on this. I think you've gotten some good reviews before; we may still get some more. So, great, thank you.

G: We look forward to it, we really do.

A: Excellent progress, thanks very much. All right, thanks.
A: So the next item on the agenda is one that I've already talked about quickly: the back-to-back frame draft. It's an update to RFC 2544, and, as I said, Warren Kumari's area director review has been dealt with in the 03 version. Any comments on that?
I: I'm typing my response, but apparently I've forgotten how to type. Nope. I hope to push the go button on that in the next couple of days; please poke me again if I forget. This week has been, understandably...
A
Yes,
understandably
strange
for
all
of
us
and
and
busy
very
good,
all
right
thanks
warren
okay.
So
so
now
we're
off
to
our
proposal.
Topics
and
vladimir
veselev
is
is
gonna
just
say
a
few
words
about
the
the
yang
data
model.
I've
got
in
fact.
I've
got
some
notes
here
that
I
can
that
I
can
bring
up
that
were
part
of
the
agenda.
A
Oh,
but
that's
right,
he
was
having
trouble
with
audio
yeah,
all
right,
so
I'll
I'll.
Try
to
take
care
of
this
form
vladimir
updated.
The
draft
on
september
9th.
A
You
know
adding
a
mechanism
for
the
synchronization
between
the
generation
systems
and,
like
you
know,
he
had
another
a
crack
at
doing
some
work
during
the
hackathon.
I
was
hoping
we
would
hear
a
little
bit
about
that,
but
perhaps
he'll
send
mail
to
the
list
about
that
topic.
A
So
so
we've
moved
through
the
last
two
items
very
quickly,
very
good,
okay.
So
the
next
item
on
the
agenda
is
the
methodology.
Oh,
I
I
yeah
I'd
like
to
I'd
like
to
ask:
let's
see
here.
Is
this
thing
still
all
right?
I
can
end
this
session
then
so
right,
okay,
as
has
has
anybody,
read
the.
F: Yes, I had some problems with the audio, but now I'm back. If you want a short update on the draft: it is a very minimal change, actually. We just added some synchronization which allows the generation to be started at a certain moment in time, by adding a timestamp, an epoch date leaf, that defines the moment when the generation of traffic begins.
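The synchronized-start idea just described can be sketched in a few lines. This is purely an illustration (the draft defines the mechanism as a YANG leaf; the function and parameter names below are hypothetical, not from the draft): every traffic generator is configured with the same epoch timestamp and blocks until that instant before transmitting, so multiple generators begin together.

```python
import time

def wait_until_epoch(start_epoch: float) -> float:
    """Block until the given Unix-epoch timestamp, then return the
    overshoot in seconds past the target.  A generator configured with
    the same timestamp as its peers would call this right before it
    starts transmitting, so all systems begin at (nearly) the same
    instant."""
    delay = start_epoch - time.time()
    if delay > 0:
        time.sleep(delay)
    return time.time() - start_epoch

# Example: schedule a synchronized start 0.2 s from now.
overshoot = wait_until_epoch(time.time() + 0.2)
```

The residual offset between systems is then bounded by their clock synchronization (e.g. NTP) plus the small scheduling overshoot returned above.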
A: Okay, okay, good. And I think you're getting some good interactions with Raphael's draft, the automation of VNF benchmarking, that we're going to hear about next, so that's good too.
A
I
I
think
that
you
know
we'd
like
to
get
some
more
readership
on
both
these
drafts
and
yeah,
because
it
looks
to
me
like
nobody's,
read
it
so,
at
least
according
to
the
poll
I
just
ran
so
well:
let's,
let's
try
to
change
that
folks
and
and
help
help
vladimir
out
with
some
some
reviews.
A
A
A
So
the
next,
the
next
item
on
the
agenda
is
the
methodology
for.
A
But
one
one
one
other
thing:
first
here
I
thought
I
saw
sarah
for
a
moment
and
now
she's
gone
again.
A
I
guess
she's
having
trouble
connecting
all
right,
we'll
we'll
we'll
keep
on
rolling
here
so
the
next.
The
next
draft
is
methodology
for
vnf
benchmarking,
automation
and.
A: Okay, Raphael, are you with us? Okay. Sarah just sent me a quick email; she says she's trying to get Meetecho working. I hope she can contact someone else who can help. In the meantime,
A
We'll
just
take
a
quick
look
here
in
in
version
5,
which
were
the
major
technical
changes.
Basically,
all
of
these,
in
fact
raphael
mentioned
in
a
in
a
message
to
the
list:
a
fairly
extensive
set
of
changes
which
almost
amount
to
a
refactoring
of
this
draft-
and
I
think
that's
good
and,
as
I
said,
there's
been,
there's
been
some
good
interactions
on
the
list
between
raphael
and
and
and
vladimir,
who
have
a
a
sort
of
a
modeling
of
the
test,
generator
control
in
common.
A
So
let's,
let's
hope
that
raphael
joins
us
later,
but
otherwise,
I
think
we'll.
Otherwise.
I
think
we'll
move
on
to
the
next
topic
for.
A
Now,
okay,
so
the
next.
The
next
draft
is
5g
transport
network
benchmarking
and
that
will
hopefully
be
presented
by
luis
m
conference.
Oh
that's
great.
Louise.
B: Okay. Just a refresher on the motivation of, well, sorry, I'm hearing a bit of echo, so, well, I will try anyway. Just refreshing the motivation and the scope of the draft: essentially, the motivation is that 5G networks are being gradually deployed by operators, so what would be nice is to have a way of benchmarking solutions in a manner that lets us somehow get a better idea. There will be pieces that are probably already there, but others that are not, so the main intention would be to get this inventory of what is there and what is needed, and, finally, to provide guidelines on 5G transport benchmarking, just to have a basis of comparison, a kind of reference for this. Next slide, please.
B
So
I
will
go
now
through
the
updates
from
the
zero
zero
version.
The
zero
one
version
was
not
presented
by
lack
of
time.
I
think
that
it
was,
I
don't
remember
well,
it
was
in
in
montreal.
I
guess
so
in
this
version
zero
zero.
Two.
I
will
provide
the
updates
from
from
zero
zero.
B: So, essentially, we have added some discussion on KPIs for assessment of the technologies, and we have covered both control and management plane KPIs and data plane KPIs. In the case of the control and management plane KPIs, essentially we leverage the idea that is being developed in the IETF of the IETF network slice controller; that would be the one in charge of creating, let's say, the IETF network slices. So we take as a reference RFC 8456 on SDN controller benchmarking.
B: Apart from that, and because there are some novel attributes being considered for 5G, other potential KPIs to cover could be things like the capability of isolation between the transport technologies, and some other kinds of attributes that could be required for 5G. Here, essentially, what I have in mind is what is being done in GSMA with the Generic Slice Template (GST), which essentially lists a number of attributes and SLOs.
B: Further updates are, well, essentially to start looking at the potential topologies for 5G transport. Here I borrowed a figure from one of the drafts in this working group that I think summarizes very well what kind of topologies, or what kind of architecture, we could consider here: for instance, at the radio site, considering the fronthaul network and the midhaul network. These have particularities in terms of strict requirements, like latency and bandwidth, so it could be interesting to consider them; and, finally, to leverage the whole idea of network slicing and, in this respect, to link with the work that is being done in this working group.
B: I would also like to mention some other activities that could be related to this one, or complementing it. In all this activity around 5G, there are some other SDOs working from different angles, so probably this could be a piece complementing all of them; or, seen the other way around, we will have other pieces complementing this work.
B: I would like to mention two of them here. There is a new work item in ETSI TC INT, which is a committee for interoperability testing, and this work item is dedicated to end-to-end testing and validation of vertical applications over 5G networks, and even beyond, but for now the focus is only 5G. So they are looking essentially at the KPIs that could be observed by the vertical industries and, I mean, the users of 5G networks. There is also an ongoing activity for defining tests for the fronthaul, midhaul, and backhaul networks, so that could be something to look at as well, to see how we could complement all that work. Next slide, please.
B
Okay,
so
this
is
the
final
one,
so
as
next
steps,
essentially
what
we
identify
for
sure
to
collect
feedback
and
comments
from
the
working
group
to
see
if
there
is
interested
people
willing
to
participate
and
and
provide
input
and
views.
This
would
be
perfect
also
to
keep
working
on
on
the
draft.
B
So
we
acknowledge
that
this
is
not
yet
material
at
all,
so
it
requires
much
more
much
more
work
and
essentially,
while
identifying
the
requirements
and
characteristics
of
fiji
and
all
what
we
have
mentioned
before
about
attributes
as
a
laws
and
so
and
essentially
to
work
in
the
different
dimensions
right
now:
the
control,
plane,
data,
plane
management,
plane
and
even
architecture
to
identify
what
could
be,
let's
say,
the
the
lines
of
the
final
lines
work
and
for
sure
to
prepare
a
new
version
for
next
itf
and
and
to
provide
corresponding
updates
as
as
today,
and
that's
all
from
my
side.
A
Oh,
it's
a
lot
easier
for
folks
to
hear
me
if
I
unmute
we,
we
can
I'd
like
to
ask
the
group
if
there's
any
questions
or
comments
on
on
your
latest
draft.
C
Oh
yeah,
so
there's
from
the
first
is,
is
the
ietf
the
right
place
for
this
work,
or
should
it
be
rather
be
standardized
in
three
gpp
or
etsi
and
a
car
student
says
I
am
active
in
oran,
wg
dyne,
which
works
on
something
potentially
similar
and
then
the
second
question
is
since
this
working
group
is
about
benchmarking,
what
do
you
plan
to
define
the
draft
to
support
brand
benchmarking?
This
would
require
advanced
simulators.
Testing
with
real
user
equipment
would
not
work
well
for
benchmarking.
I
think
that's
two
questions.
B: It's true that there are some blurry frontiers in some of the aspects, and we have mentioned O-RAN, so there could be some colliding work, but clearly the idea here would be not to duplicate the effort. So whatever could be done in O-RAN, for instance, for fronthaul, could probably just be mentioned here as a reference, without going further; or even complemented, if they do not consider certain technologies, because for now they are not considering all the different kinds of technologies that exist in the IETF. So, essentially, the approach here would be complementary, not duplication.
B
I
I
think
here
essentially
what
we
we
could
do
is
to
to
define
the
the
benchmarking
methodology,
and
probably
we
could
refer
to
that
point,
but
I
think
that
we
will
not
enter
in
further
details
in
the
respect
in,
in
the
sense
that
maybe
we
can
highlight
the
difficulties
or
or
the
problems
that
real,
I
mean
particular
implementations
could
bring
into
the
topic,
but
anything
else.
B
So
we
will,
let's
say,
look
at
the
problem
from
the
transport,
analytic,
itf
technology
perspective,
and
so,
if
we
let's
say
identify
slos,
we
are
individual
slos
of
of
the
idf
technologies,
not
from
the
devices
that
I
mean
the
radio
part,
for
instance.
So
this
is
my
my.
A: Please, Bill: who was it who asked the question? And can we maybe get an answer from them on which O-RAN group is potentially doing overlapping work? I think they're divided up into about eight different groups.
B: The group is Working Group 9, which is for transport; essentially looking at transport.

A: Okay, okay. So would they overlap both these areas, Luis, the control and management and the data plane, or...?
A: Let me ask this question that came to mind. You have another draft, I believe, that's proposed in the TEAS working group. Is it still at the proposal stage, or how does it look on the path toward adoption?
B
Well,
let
me
summarize
a
little
bit
what
is
being
done
in
this.
So
in
this
there
is
a
specific
design
time,
sorry,
design,
team,
working
on
network
slides
and
the
concept
of
neighborhood
slides
how
to
land
this
concept
to
to
itf
technologies.
So
there
are
a
number
of
documents
running
in
parallel.
By
now,
the
the
closest
to
the
adoption
would
be
a
one
dealing
with
definition
of
what
is
an
atf
water
slice
and
another
one
describing
the
framework.
So
these
are
the
ones
close
to
to
to
be
adopted.
B
Not
yet
adopted
are
mean,
are
outcomes
of
the
design
team,
but
not
yet
adopted
by
the
working
group.
There
are
some
some
discussions
just
yet
in
topics
like
isolation,
for
instance.
So
apart
from
from
that,
there
are
other
bunch
of
drafts,
but
individual
drops
by
now.
So
this
is
more
or
less
the
status,
but
there
is
some
activity
that
is
somehow
running
so
here.
B
The
or
this
other
activity
would
somehow
complement
from
the
perspective
of
how
to
benchmark
the
how
this
concept
of
itf
networks
lies
could
be
later
on,
mapped
to
the
different
technologies
like,
for
instance,
semi-reality
version,
6
or
I
don't
know,
or
flexible
ethernet
or
whatever.
A: It sounds to me as though there's going to be some time between now and when the actual protocols are decided for the control plane and management plane in the IETF. So we could sort of take a generalized approach, like we did with the SDN controller benchmarking at first, and try to work it that way. But, you know, that's up to you; I think it's a bit more work to try to generalize things completely, and...
A: ...we should ask the Working Group 9 people what they have in mind, so that we clearly avoid the overlap if we do anything in that space.
B
Okay,
yep
totally
totally
agree
just
for
qualifying
the
this
working
group,
nine
is,
is
looking
in
this
front
hall
nicole.
There
could
be
other
areas
that
are
not
covering
that
yet
relevant
for
for
us,
this
is
for
sure
to
er
I
mean.
Probably
we
don't
have
the
answer
yet,
so
we
will
see
along
the
time
to
what
extent
there
are
gaps
in
that
are
covered
by
them
and
gaps
that
are
not
yet
covered
by
iitf,
because
probably
other
gaps
have
been
already
covered.
C: The commenter also responded: "Thank you for your response, Luis. With regard to test tools, I suggest comparing with the next-generation firewall document; we also added new test methodology which requires test tool support. I would suggest you work with test tool vendors to ensure that the methodology you plan to define is implementable, and it is in good IETF sense to first conduct a full proof of concept before an RFC is approved."
C: "I am quite worried that testing with single UEs does not do justice to the problem, and that the document might become a dead horse if there is no way to implement UE benchmarking across RAN and 5G core in reality." Yeah, and so, Al, what are you recommending?
A
Let's
you
know,
let's
follow
up
on
some
of
these
topics,
with
respect
to
the
the
activity,
the
activity
and
other
working
groups
and
and
whether
luis
wants
to
try
to
take
on
a
sort
of
a
generalized
thing
for
the
control
and
management
plan
and
kpis
absent
of
ietf
network
slice,
another
topic
decision
on
the
protocols
there
and
then
for
for
data
plane.
A
Kpis,
you
know
investigating
that
further
with
the
oran
working
group,
nine
and
avoiding
the
overlap
and-
and
unlike
our
last
commenter
suggested,
you
know
getting
getting
somebody
from
the
test
equipment.
Vendors
on
the
author
list
would
be
good.
A
All
right,
any
anything
else
on
this
on
this.
A
Okay,
well,
well,
thanks
very
much
for
updating
it
and
presenting
it
again,
luis
I,
you
know
this
can
potentially
be
a
a
very
interesting
topic
and
and
we'll
see
we'll
see
where
it
goes.
A
So
our
next
draft
up
is
considerations
for
benchmarking,
network
performance
and
containerized
infrastructures,
and
I've
got
the
slides
right
here
and
kj
sun
I
think
is
going
to
present
for
us
is
that
right,
kj.
J: Okay, so, hello everyone. Today I will present our draft, Considerations for Benchmarking Network Performance in Containerized Infrastructure. We last presented at IETF 106; at that time the draft version was 02, and now it is 05, so I will briefly describe some updates and our work in the hackathon. Next slide, please. So, this draft describes the differences, and additional considerations, for benchmarking containerized infrastructure compared with VM-based infrastructure.
J: So, from 02 to 03 we just added a little description in chapter 3 about huge pages and NUMA; you can check it in our draft. Next slide, please. Yes, and then from version 03 to 04:
J
So
at
the
time,
so
we
add
some
benchmarking
experience
in
the
chapter
6,
which
that
we
was
working
on
the
I
hp,
hackathon
106,
and
so
in
that
benchmarking
experience,
we
used
implement
the
environment
using
the
user
space
networking
model,
and
then
we
tried
to
verify
cpu
allocation
of
native
quality
cpu
scheduler.
J
Additionally, we also tried to figure out NUMA affinity, so we assigned the CPUs of the network interface and the container either to the same NUMA zone or to different NUMA zones, and depending on that we wanted to measure the network performance. For the traffic generator, we used T-Rex on the external side, and we used the IMIX traffic pattern for measuring. Next slide, please.
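The NUMA placement check described here can be sketched in a few lines. This is an illustrative sketch, not from the draft: on Linux a NIC's NUMA node is exposed in sysfs, and the decision logic itself is kept as pure functions. The interface name `eth0` in the comment and the helper names are hypothetical.

```python
# Illustrative sketch (not the draft's procedure): deciding whether a NIC
# and a container's pinned CPU share a NUMA zone, using the values Linux
# exposes in sysfs. The parsing and comparison are pure functions so the
# placement logic is easy to test without real hardware.

def parse_numa_node(sysfs_text: str) -> int:
    """Parse the contents of a sysfs numa_node file ('-1' means no NUMA info)."""
    return int(sysfs_text.strip())

def same_numa_zone(nic_node: int, cpu_node: int) -> bool:
    """True when the NIC and the pinned CPU are in one NUMA zone.

    -1 (a NUMA-unaware platform) is treated as 'same zone', since there
    is no cross-socket penalty to avoid in that case.
    """
    if nic_node == -1 or cpu_node == -1:
        return True
    return nic_node == cpu_node

def nic_numa_node(ifname: str) -> int:
    # e.g. /sys/class/net/eth0/device/numa_node on a physical NIC
    with open(f"/sys/class/net/{ifname}/device/numa_node") as f:
        return parse_numa_node(f.read())
```

In a benchmark run, comparing `nic_numa_node(...)` against the node of the CPUs pinned to the container distinguishes the "same NUMA zone" and "different NUMA zone" scenarios measured here.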
J
Unfortunately, at that hackathon we could not complete the implementation and testing, because there were some errors. Our main error was that the routing table didn't work when we sent packets using T-Rex, and we figured out that it came from the packet forwarding rules.
J
At first there were only the default virtual routing and forwarding (VRF) functions in the VPP switch. As the last figure shows, we used two VRFs over the default switch to connect to each network port of the pod, but at that time VRF1 and VRF2 could not route the packets to the pod.
J
We solved that problem by assigning one interface directly to the default routing and forwarding function, while the other connects through VRF1 to the pod. Next slide, please.
J
This is our test result. As you can see, there is some performance difference between the VPP switch and the pod, and the same NUMA affinity increased the network throughput by about 50 percent. In the switch-only model the packet just goes through the VPP switch and back directly to T-Rex, without routing to the pod. You can see the metrics and results for that. Next slide.
J
The current version is 05. In the update from 04 to 05, we added one more chapter of benchmarking experience, from the test we implemented at this IETF 109 hackathon.
J
In this experiment we used a different user-space networking model, device pass-through with SR-IOV and DPDK, and we wanted to verify the impact of huge pages on network performance. We had already written in our document that in containerized infrastructure we can set up only a limited, small amount of huge pages, so we wanted to see whether that impacts performance or not. Next slide, please.
J
We uploaded the infrastructure settings and manuals to GitHub, and the left figure is our data path. In that figure the traffic generator sends packets to the Kubernetes worker node; the network interface card has SR-IOV enabled, so it forwards the packets bypassing the kernel space, going directly to the network port of the containers. The physical hardware specs are similar.
J
They are similar to what we used in the last hackathon. We used Kubernetes with one master node and one worker node, we used the Multus CNI, CMK for CPU pinning, and the SR-IOV device plugin with the DPDK functions. Okay, next slide, please.
J
For testing, we assigned four gigabytes of memory to each container, with two scenarios of huge page settings.
J
One is a two-megabyte huge page size with 2048 pages, and the other is a one-gigabyte huge page size with four huge pages. For the traffic pattern we used T-Rex. First we varied the Ethernet frame size from 64 bytes to 1.5 kilobytes, and then we tried IMIX traffic, but we did not succeed with that yet, so we are going to test it later. Okay, next slide, please.
J
This is our test result. When you look at the result, the huge page size does not affect the performance itself, we think, because the Ethernet frame size is limited to just 1.5 kilobytes. So even when the huge page size is only two megabytes,
J
it is enough to store and process the packets. But this result is just for networking, which means the container only has some simple functions to receive a packet and forward it to another network interface, so we need more complexity to get an exact performance measurement.
J
Troubleshooting and issues: for testing we had several problems. First is an out-of-memory error, where sometimes the pod accesses non-allocated memory; for that, reconfiguring and reallocating the process was required. Also, when benchmarking different huge page sizes, only one huge page size can be tested at a time.
J
When we change from the two-megabyte huge page size to the one-gigabyte huge page size, we need different GRUB settings, plugin settings, and Kubernetes settings, and this has to be repeated many times. It takes a lot of time to change the configuration, and it also carries a high risk of error.
J
The last one, as I said before, is that we are just using simple forwarding functions and measuring that performance.
J
We need to consider the impact of the application process, and we also have to consider the trade-offs between network performance and resource utilization. We will try to test that later. Okay, next slide, please.
J
Okay, this is our last slide. For updating the draft to version 06, we will include our results and troubleshooting from the last hackathon and update some up-to-date network technologies. We are also considering expanding to other benchmarking scenarios, for example an east-west traffic benchmarking scenario, and we also plan to join the next hackathon at IETF 110.
A
Thank you, KJ. I see we have Benson in the queue. Please go ahead, Benson, with your questions on the draft.
K
I guess I had a question, maybe more related to your presentation. A lot of it seems quite specific to the hardware, because you talk about NUMA allocation; I'm wondering how portable your methodology would be.
K
There's a lot of different hardware, and while NUMA effects are important, is there a way to write things so that measurements can be done for different architectures, where NUMA effects also differ?
J
Yes. For testing in our environment, one of the hard things was that even though it is virtualized infrastructure, depending on the hardware and the network technology there are hardware dependencies, such as NUMA technology. For example, we used CMK for CPU pinning; it supported our hardware, but in the case of other hardware it may not be supported very well.
J
Yes, they may have some hardware-specific effects there.
A
So, KJ, one of the things that Benson's question brings up for me, and something I was thinking about, is that, going back to the beginning here, you're writing what we call a considerations draft. I was sort of wondering where your detailed procedures and so forth are, but actually in a considerations draft you're examining the problem space and trying to help other people become aware of the problems we're up against when we go on to do more with methodologies for measuring network performance in the containerized infrastructure space.
A
If that's consistent with the way you want to proceed, then I think you're probably on the right track. But as Benson mentions, I think you want to try to tip your hat toward generalizations as well, and maybe some different architectures, as you make statements and recommendations.
A
It would be good if the draft had some general recommendations that we'll be able to follow in our future development of metrics and methodology.
A
Okay, good. It's always good to make some recommendations as well: the generalization if you can, and then some recommendations. Another way to go is to look at the other architectures and say, we looked at this NUMA architecture in some detail, but it has parallels to this alternative architecture, in Arm or whatever.
A
One question I had was related to the network interfaces. It looked to me as though you had two 10-gig interfaces here,
A
and probably the same thing here, right? And then when the results come out, we must be looking at, I mean,
A
two 10-gig interfaces would be here, so we must be looking at the aggregate of bidirectional results here, right?
J
Yes, yeah.
J
Yeah. Actually, in the hackathon one of our team members was testing that. I also saw something a bit odd when I looked at that graph, but we can say that we were using T-Rex and sending traffic up to 40 gigabits per second, and those are the results that came out.
J
So actually we need a detailed check of this result, and, as you said, we can also check our hardware specification, the data port specification, and the differences between the previous hackathon and this hackathon.
A
Okay, so it sounds like you've still got some development to do here, but it's interesting work, and, you know, any time you put test results out there.
A
So next up is the discussion of future prospects for three drafts: the multiple loss ratio search, the probabilistic loss ratio search, and the network function service density draft. Maciek, are you with us?
A
Well, I don't see Maciek. He did send us a message to the list today, and one of the key things Maciek was raising had to do with the multiple loss ratio search draft,
A
which Vratko Polak and Maciek have written together. Maciek was under the impression that two meetings ago we adopted this draft as a working group draft.
A
But I don't think that was ever ratified on the mailing list, and it doesn't show up in the meeting minutes, according to Maciek's message to the list today. In general, this is a draft that is designed to search for not just the zero loss ratio but other levels of loss ratio, all in a combined search algorithm, which is pretty efficient,
A
if that's the kind of thing you want to do. It's also very straightforward, a very fast search algorithm; they use it in the FD.io open source project and their systems integration tests.
A
Has anybody read this draft, and would anybody like to show some interest in it, especially interest in reviewing it?
L
I would like to read it and comment on it, and I wanted to ask the authors if they have made an efficiency comparison between this search algorithm and the traditional binary search. Do you know whether that has happened, whether they compared their performance?
A
They have, and the results are basically the kind of thing where you would otherwise have to conduct multiple binary searches, one for each different level of loss ratio, and because this searching is pretty much going on in parallel, the multiple loss ratio search is generally quicker.
L
Good. I myself am interested in an RFC 2544 compliant zero-loss search, and unfortunately I have to perform it many times to comply with RFC 8219, and it requires a lot of time. So if there were a better algorithm which supports zero-loss search, just faster than binary search, I would be happy to use it.
A
That I can't answer directly, but there is a chance that this might be faster than a straight binary search as well. I think the very first step is to take a jump from the maximum level down to whatever the received throughput is, and then search around there, so they make a big step right at the start.
A
Very good. While you're doing that, Gabor, I'll put in a plug for the binary search with loss verification, which you can find in ETSI TST 009.
A
It's designed for testing in the virtualized environment, where we can't get rid of the transients that we have to live with, but those transients don't really reflect the true resource limitations. They're just big interrupts that happen once in a while, and they foul up the binary search because of it.
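The loss-verification idea just described can be sketched roughly as follows. This is a paraphrase of the concept, not the exact ETSI GS NFV-TST 009 procedure: when a trial at a candidate rate shows loss, repeat the trial a few times, and only treat the rate as failing if loss persists, so a one-off transient doesn't drag the search downward. `measure_loss` is a stand-in for a real trial.

```python
# Rough sketch of "binary search with loss verification" (concept only,
# not the exact ETSI GS NFV-TST 009 procedure): retry lossy trials so
# one-off transients in a virtualized DUT do not steer the search down.

def trial_passes(measure_loss, rate, retries=2):
    """A rate 'passes' if any of (1 + retries) trials sees zero loss."""
    for _ in range(1 + retries):
        if measure_loss(rate) == 0:
            return True
    return False

def binary_search_with_verification(measure_loss, lo, hi, resolution, retries=2):
    """Highest rate in [lo, hi] with verified zero loss, within `resolution`."""
    best = lo
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if trial_passes(measure_loss, mid, retries):
            best = mid
            lo = mid          # verified pass: search upward
        else:
            hi = mid          # persistent loss: search downward
    return best
```

Without the retry in `trial_passes`, a single transient interrupt at a sustainable rate would halve the search interval downward and understate the result.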
A
All right, so you're willing to review this. Are there any others? I've already reviewed it, and I will probably do that again. Are there any objections?
A
Okay, hearing none, then I think we will go ahead and do that.
A
So we're looking at a working group adoption call for MLRsearch 03. It would probably be a good idea if Maciek and Vratko brought it out of the expired state before we do that, but we won't stand on ceremony here. All right, so I think the other two we're going to have to let go, unless Maciek is with us now.
A
The probabilistic search is a lot more complicated, let's put it that way. As opposed to a deterministic search, there's an attempt to find the critical load satisfying a target loss ratio.
A
When I've looked this over, I've felt that it was a little bit experimental. I was sort of looking for some sort of verification that what's being promised is actually being delivered, and that the additional detail is valuable. I'm sure Vratko has some of those things in mind, but at this stage
A
I'm still looking for more. With that quick introduction, or reminder, about this draft, any other
A
comments? Okay, so we'll take this discussion topic to the list, as well as the network function service density topic that Maciek raised. Basically, he's talking about many network function services on the common
A
COTS platform. What does COTS stand for? It's, you know, a general-purpose computing platform. And he's got these different
A
diagrams here, where the VNFs, the network functions, are either connected through the vSwitch or the vRouter, or they have something like a memory interface that helps speed up their interconnectivity, and there are lots of considerations to take care of there.
A
But many of the problems are basically right here in the virtualization technology, and the many choices that you have when this is a cloud network function, with the cloud networking plugins that are possible.
A
So again, I think this is one we're going to have to take to the list.
A
Well then, we're on to our final topic here, which is a quick talk by Gabor Lencse from Budapest University. I'm reminded here, Gabor, that back at the beginning of March, on my very last international trip, I was in Budapest. I really should have looked you up, and I'm sorry for missing that opportunity.
A
But in any case, welcome back to BMWG. You helped Marius Georgescu with his work some time ago on the IPv4/IPv6 benchmarking topic, and it's good to have you back. Please go ahead with your talk.
L
So Marius divided these technologies into different categories, and one of them was the single translation category. siitperf addresses this category, in which case one port of the tester, and of the DUT, the device under test, is IP version 6 and the other is IP version 4, and there is a translation between the two in the device under test; the tester has to test such kinds of devices.
L
SIIT is one representative of this group; of course, SIIT is stateless NAT64. However, there is also stateful NAT64, which is not in the scope of siitperf yet. This little tester supports some traditional tests like throughput and frame loss rate; these are the same as in RFC 2544. Some tests are different; for example, latency has been redefined in RFC 8219.
L
It requires sending at least 500 tagged frames and recording their timestamps, instead of only a single one like in RFC 2544, and it also includes packet delay variation, which is a new test compared to the traditional benchmarking RFCs. My tester supports these tests. It doesn't support back-to-back frames yet because of performance issues, but I'm thinking of including that later on too. Some general features: RFC 2544 requires being able to test with a single IP address pair or with 256 destination networks.
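The summary step of the RFC 8219-style latency test mentioned above can be sketched as follows. RFC 8219 derives a typical latency (the median) and a worst-case latency (a high percentile) from the tagged frames' timestamps; treat the exact percentile used here as illustrative rather than a quote of the RFC's procedure.

```python
# Sketch of summarizing an RFC 8219-style latency test: at least 500
# tagged frames are timestamped on send and receive, and the per-frame
# latencies are reduced to a typical (median) and worst-case (high
# percentile) value. The 99.9th percentile here is illustrative.

def latency_summary(send_ts, recv_ts):
    """send_ts/recv_ts: per-tagged-frame timestamps in seconds."""
    if len(send_ts) < 500:
        raise ValueError("RFC 8219 asks for at least 500 tagged frames")
    samples = sorted(r - s for s, r in zip(send_ts, recv_ts))
    n = len(samples)
    typical = samples[n // 2]                          # median latency
    worst_case = samples[min(n - 1, int(n * 0.999))]   # high-percentile latency
    return typical, worst_case
```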
L
My siitperf supports any number between the two, and it also supports fixed port numbers, by which I mean that fixed, identical test frames are sent out. There is also RFC 4814, which Al recommended that I should support, and I implemented that; I also implemented using increasing or decreasing port numbers, which was not very easy. And of course, besides stateless NAT64, you can use it for testing pure IPv4 or IPv6 routing.
L
Just a very little about the implementation. It is important because I wanted siitperf to be both flexible and high performance. To support both, I use simple binaries for the different measurements and bash scripts which execute the binaries. For example, for throughput testing there is a binary, which is also used for frame loss rate testing, but a different bash script is used for throughput than for frame loss rate testing.
L
If you want to test throughput, the script implements a binary search, and if you want to do frame loss rate testing, then you just test different frame rates. As you can see in the slides, the binary was written using DPDK, and it is also flexible in that it has different input possibilities.
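The two driver loops just described can be sketched like this. The real siitperf drivers are bash scripts invoking DPDK binaries; `run_trial` below is a hypothetical stand-in for one binary execution returning frames sent and received, so this only illustrates the control logic, not siitperf's actual interface.

```python
# Illustrative sketch of the two test drivers: an RFC 2544-style
# throughput binary search and a frame loss rate sweep over stepped
# offered rates. run_trial(rate) -> (frames_sent, frames_received)
# stands in for one run of the measurement binary.

def throughput_binary_search(run_trial, max_rate, resolution):
    """Highest rate with zero frame loss, to within `resolution`."""
    lo, hi, best = 0.0, max_rate, 0.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        sent, received = run_trial(mid)
        if received == sent:          # zero loss: search upward
            best, lo = mid, mid
        else:                         # loss seen: search downward
            hi = mid
    return best

def frame_loss_rate_sweep(run_trial, max_rate, steps=10):
    """Loss percentage at 100%, 90%, ... of the maximum offered rate."""
    results = []
    for i in range(steps, 0, -1):
        rate = max_rate * i / steps
        sent, received = run_trial(rate)
        loss_pct = 100.0 * (sent - received) / sent
        results.append((rate, loss_pct))
    return results
```

Keeping the search in the script and the single trial in the binary, as siitperf does, means the same measurement binary serves both tests unchanged.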
L
Parameters which do not change across consecutive executions are put in the configuration file, and others are put on the command line so that they can be easily changed from the bash script. Could you please go to the next slide?
L
And here are some results. The new feature is random port numbers, so I compared its performance with the original one using fixed port numbers, and it seems that the performance didn't really decrease due to the random port numbers. I have to rewrite the test frames just before sending, and of course I have to generate two random numbers per frame.
L
I use a 64-bit Mersenne Twister, and it still remains very fast, so I think it's good performance. I tested IPv4 kernel routing, and it was enough for that, so I think for some SIIT implementations it's more than enough. I just want to offer it to the list members.
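The per-frame port randomization described here can be sketched briefly. siitperf is DPDK/C++ and uses a 64-bit Mersenne Twister; Python's `random` module also happens to be a Mersenne Twister (the 32-bit MT19937), which is close enough to illustrate the cost: two random numbers per frame, written into the frame just before sending. The port range and layer-4 offset below are assumptions for illustration, not necessarily siitperf's values.

```python
# Sketch of per-frame pseudorandom port generation (RFC 4814 style) and
# in-place rewriting of the UDP port fields just before sending. Python's
# random.Random is a Mersenne Twister (MT19937); siitperf's generator is
# the 64-bit variant. Port range and offsets here are assumptions.

import random

def make_port_generator(seed, lo=1024, hi=65535):
    """Return a function yielding a fresh (src_port, dst_port) per frame."""
    rng = random.Random(seed)                 # Mersenne Twister under the hood
    def next_ports():
        return rng.randint(lo, hi), rng.randint(lo, hi)
    return next_ports

def rewrite_ports(frame: bytearray, src_port: int, dst_port: int, l4_offset: int):
    """Overwrite the UDP source/destination port fields in place (big-endian)."""
    frame[l4_offset:l4_offset + 2] = src_port.to_bytes(2, "big")
    frame[l4_offset + 2:l4_offset + 4] = dst_port.to_bytes(2, "big")
```

In a real sender the UDP checksum would also have to be updated after the rewrite; that step is omitted here.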
L
There are some papers about it; the first has already been accepted and the second one is under review. They describe how it works, and of course there is a link to the GitHub repository with the source code. If you have any questions or comments, you can tell me now, write to the list, or write to me personally. Thank you very much.
A
Well, thanks.
A
One of the things that struck me here is, obviously, the random port numbers; as you say, they have an impact. Is this one of the things you plan to write about in your draft?
L
Not really, it's somewhat different, but I didn't have time to update the draft, sorry. I will do it for the next IETF.
A
Right, that's what I mean. In other words, some of your measurement experience is what you can bring to bear on future versions of the draft, right? Yes? Good, okay. And let's see, is there anything else you've learned here that will influence the draft in the future?
L
Well, I honestly admit that I didn't have time to think about the draft, but I will do it within two months.
A
Okay, good. I'm sure some things will come up, especially if other people pick up your tool, your utility, and give it a try; then maybe we'll all learn something more together. That would be great.
A
So let us know if you do that; if anybody downloads the source code and tries it out, please let us know on the BMWG list. Thank you. All right, thank you very much, Gabor.
A
Well, we've come to the end of the agenda.
A
I don't see anybody who was missing who has perhaps rejoined us. Let me just check in with Bill here. Bill, so far I think I've only got two action items for the chairs: to run a working group last call on the next-generation security devices draft when that appears in 06 form, and then to run the working group adoption call on the multiple loss ratio search draft.
C
Yeah, I wanted to confirm which one you were adopting, which you just indicated, so that's good.
A
Okay, yeah, just MLRsearch. I think that's the one that has received the most comments and interest so far; even Maciek was responding to some comments just the other day. So that's good. All right.
A
Well, any other business? Any other requests for the floor?
A
Going once, twice, three times. Okay! Well then, it remains for me, Bill, to thank you very much for helping out with the notes here. You typed them into CodiMD, right?
C
Well, on this thing I hit the publish button. Will that save it, or something like that?
C
Well, I just pushed it, so yeah. It says "Al Morton owns this note," so it looks like it might.
A
Good, I just pressed publish too, so maybe that's doubly good.
A
And as everybody can see, it's getting pretty late here for me, and getting late for Bill, so I think
A
we'll call it a night. We'll talk to you on the list and see you at the next meeting in the spring. Bye for now, everybody.