From YouTube: IETF110-BMWG-20210311-1600
Description
BMWG meeting session at IETF110
2021/03/11 1600
https://datatracker.ietf.org/meeting/110/proceedings/
A: As... I do have two monitors, but not today, so it's going to be a little bit of a challenge, but I think we've managed it.
A: That's right. So, according to my wristwatch, we have reached our start time, and so welcome, everybody. We will have a session for approximately two hours today. I'm Al Morton, co-chairing the Benchmarking Methodology Working Group with Sarah Banks (hello, Sarah, good morning), and we have our alternate Ops Area Director with us today, Rob Wilton. Rob, welcome.
A
And
so
we
have,
we
have.
We
have
a
actually
not
too
deep
an
agenda
today.
So
we
can.
We
can
afford
a
little
bit
to
take
some
time
with
discussions
and
and
so
forth,
and
then
that's
probably
a
good
thing,
so
we
will.
We
will
do
that
today.
A
Let's
see,
there's
something
else
I
wanted
to.
Oh
yes,
so
we
we
have
the
notewell
which
which
we
always
go
through
here,
but
our
our
own
version
of
this
is
that
we,
you
know,
we
work
as
individuals
and
we
try
to
be
nice
to
each
other
and
those
are
not
too
hard
to
do,
but
then
also
as
a
reminder
of
the
ietf
policies,
such
as
patents
in
the
code
of
conduct.
It's
meant
to
this.
This
slide
is
meant
to
remind
you
that
we
we
have
policies
in
those
regard.
A
Everything
you
say
or
do
in
a
meeting
is
a
contribution
as
such
is
covered
by
the
ipr
policy
and
if
you
have
any
questions,
there's
a
whole
list
of
best
current
practices
that
you
can
refer
to
and
the
privacy
policy,
and
if
you
have
encountered
someone
not
working
respectfully
with
you,
we
have
the
ombuds
persons
and
the
the
ops
ombuds
team
and
a
website
where
you
can
get
in
touch
with
them
or
learn
more
about
that
process.
A
Good
all
right,
so,
as
I
mentioned
here's
our
agenda,
do
we
have
any
volunteers
to
take
notes
in
the
in
the
the
normal
list
of
folks
who
are.
C
And
I'm
happy
to
volunteer
everyone
else's.
A
Well,
thanks
very
much
rob
I
I
was
hoping
to
get
hoping
to
get.
You
some
help
there,
but
you
did
volunteer
in
advance
and
we
very
much
appreciate
that
and
actually,
if,
if
anyone
wants
to
help
rob
the
the
tool
is
our
let's
see
the
tool
is,
is
our
note-taking
tool
here
on
the
on
the
the
user,
interface
and
and
and
actually
to
help
out
here
very
quickly.
A
Crash
everything
I'm
gonna,
I'm
going
to
oh
yeah.
I've
got
to
click
it
yet
all
right
and
then
oh
yeah
good.
What
is
this
all
right?
So.
A
Is
this
is
the
this
is
actually
the
past
the
agenda
version
one?
Oh,
maybe
it's
okay
yeah!
This
is
fine
all
right!
This
is
good
so
that
that's
perfect,
thank
you.
E: I'll interrupt... I'll do my best to help out. It's Magic. Hi, hi, Magic.
A
Yeah
good
to
hear
your
voice
and
thank
you
for
your
offer
of
help.
Everyone
appreciates
it
so
then
you
know
we
can
easily
monitor
jabber
and
and
other
things
in
the
in
the
medeco
interface
that
our
muteco
developers
have
kindly
provided
again.
Thank
you
for
that
guys
and
and
we've
we've
gone
through
the
ipr
and
the
you
know
the
note
well
so
any
so
then,
now
we'll
quickly
talk
about
the
agenda,
any
bashing
needed.
A
We
have
the
working
group
status
that
we'll
talk
about
some
feedback
on
the
back-to-back
frame
draft
we'll
cover
quickly,
because
it
probably
affects
a
lot
of
things.
Then
we
have
the
the
working
group
drafts.
Some
of
these
will
go
quickly.
Others
will
will
need
to
talk
about
a
bit
and
and
then
a
proposal
that
received
a
lot
of
discussion
over
the
last
interim
period
between
meetings,
the
yang
data
model,
so
so
we'll
go
in
this
order
and
any
comments
or
any
bashing
needed
on
that
all
right
sounds
good.
A
So
if
there's
any
other
business
and
and
if
there's
any
other
time,
we
will
we'll
cover
items
at
the
end.
If
folks
want
to
talk
for
a
few
minutes
and
then
we'll
we'll
move
ahead
with
the
agenda,
thank
you.
A
So
here's
the
quick
status,
the
evpn
draft
is
back
to
the
working
group
post
area,
director,
review
and
and
sarah
has
called
for
a
a
working
group
last
call
there
a
day
or
so
ago,
and
then,
after
that,
we
will
return
to
publication
requested
if
it's
a
favorable
last
call.
Thank
you
for
moving
that
along
sarah
and
and
thanks
to
brian
monkman,
for
comments
and
sudeen
the
lead
author
for
providing
a
draft
for
this
meeting.
A
So
I've
just
mentioned
the
the
back-to-back
frame
draft,
it's
approved
and
and
I'll
talk
about
those
implications
in
a
moment
or
two
they're
they're
part
of
this
slide
deck.
A
So
in
next
generation
firewall,
while
benchmarking,
we
had
a
working
group
last
call
on
version
o5,
which
really
generated
a
lot
of
good
comments,
and
we,
you
know,
I'm
going
to
shut
my
email
down
now.
I
think
good,
all
right
so
then.
So
now
we
have
the
revised
version:
zero,
six
and
looking
it
over.
I
think
we're
gonna
want
a
conformational
working
group
last
call
after
an
editor,
some
more
looking
at
it.
A
We
may
want
to
resolve
some
things
which
I'll
talk
about
today,
but
then
you
know
we're
gonna
give.
I
think
we're
gonna
give
folks
a
little
more
time
to
look
at
this
and
make
sure
that
that
all
the
many
comments,
the
first
working
group
best
call
precipitated-
have
been
resolved.
A: So thanks, and we have a main topic to talk about there too: the status of that document. So then, proposals keep coming; we're trying to make way for new work here, I think, with all the drafts we've just talked about. And we did adopt new work: as I mentioned, the Multiple Loss Ratio Search, which is now part of our working group documents.
A: Now, most of the proposals are very familiar to us, so we should probably try to make some adoptions. Let's say, you know, ones...
A: ...working group drafts, or individual drafts that are really receiving attention from the working group, we can probably consider as strong candidates for adoption. All right. So, on the milestones: we're a little bit behind on a few things here, but I think we can resolve quite a few of these fairly quickly, early this year, and then we'll be in a position to update the others.
A: All right, let's go back up here. Good, all right. So I think this was shared with the working group, but just in case it wasn't, I'm back showing a slide here, titled the Transport Area Director review of the back-to-back frame benchmarking update draft.
A
So
we
we
had
exactly
one
sentence
in
the
draft
and
it
drew
upon
the
guidance
from
rfc
2544,
which
specified
a
simple
waiting
time
for
the
device
under
test
cues
to
empty
after
the
tran
transmitted
load
ceased
at
the
at
the
end
of
a
trial
and
that
time
traditionally
and
for
for
many
years,
has
been
sufficient
at
two
seconds.
A
So
we
we
got
a
transport
area
review
of
the
back
to
back
frame
draft
which
basically
pointed
out
that
the
the
buffer
sizes
in
our
devices
under
test
today
could
be
very
much
longer
than
the
titan
longer
than
the
buffer
lengths
we've
tested
in
the
past.
A
They
could
be
one
and
a
half
seconds
long
and
they
could
be
what
the
transport
area
calls
a
buffer,
bloat
size
buffers
and
anything.
You
know
anything
in
the
one
second
realm,
one
second,
two
seconds:
whatever
it
happens
to
be,
if
you're,
if
you're,
if
you're
sending
with
a
high
enough
rate
to
fill
the
buffers
of
the
device
under
test,
then
the
implication
is
that
you
likely
have
to
wait
longer
than
two
seconds
and
to
be
safe.
A
You
might
have
to
wait
30
seconds
or
more
for
all
frames
to
exit
the
device
under
test.
A
You
know
I
kind
of
found
that
to
be
a
bit
shocking,
but
this
was
the
advice
that
we
got
and
you
know
I
have
to
say
that
that
once
this
discussion
got
rolling
our
our
old
friend
and
chairman
emeritus
scott
bradner,
weighed
in
on
on
the
topic
and
helped
to
clarify
it,
and-
and
you
know
we
all
basically
recognized
especially
it
took
some
time
for
me
to
recognize
what
what
exact
components
we
needed
to
nail
down
in
order
to
express
this
clearly,
especially
because
you
know
the
in
fact,
the
buffer
sizes
that
I've
been
benchmarking
recently
are
very,
very
much
smaller
than
anything
that
would
call
be
called
buffer.
A
We're
working
in
the
microsecond
range.
Let's
put
it
that
way,
and
and
that's
why
it's
important
to
have
the
correction
factors
that
we
that
we
have
here
and
and
so
forth.
That's
a.
A
Good,
it's
been
a
good
use
of
of
this
of
this
draft,
so
here's
where
we
ended
up
with
with
text
and
it's
a
lot
more
than
one.
Second.
Second,
I'm
sorry,
it's
a
lot
more
than
one
sentence
now,
as
as
you
can
see
so
in
the
section
on
the
test
for
a
single
frame
size
each
trial
in
the
test.
This
is
where
you're
trying
to
find
the
longest
burst
of
frames
that
will
pass
through
loss
free.
A
The
second
component
is
the
time
to
receive
the
transferred
burst
of
frames,
and-
and
this
of
course
might
overlap
the
first
component
in
time
and
then
a
third
component
of
time
at
least
two
seconds,
not
overlapping
the
time
to
receive
the
burst
in
into
and
to
ensure
that
the
buffers
have
depleted
so
longer
times
must
be
used
when
conditions
warrant,
such
as
when
buffer
times
are
greater
than
two
seconds
and
and
are
measured
and
or
when
the
burst
sending
times
are
greater
than
two
seconds.
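The three timing components just described can be sketched as a small calculation. This is an illustrative sketch, not text from the draft: the function name, the default floor, and the example numbers are invented here; only the two-second minimum and the possible overlap of sending and receiving follow the description above.

```python
def trial_duration_s(send_time_s, receive_time_s, buffer_time_s=0.0, wait_floor_s=2.0):
    """Sketch of the trial timing described above.

    Component 1: time to send the burst of frames.
    Component 2: time to receive the burst (it may overlap component 1,
                 so only the longer of the two drives the elapsed time).
    Component 3: a non-overlapping wait of at least 2 seconds, extended
                 when a measured buffer time exceeds the floor.
    """
    sending_and_receiving = max(send_time_s, receive_time_s)
    wait = max(wait_floor_s, buffer_time_s)
    return sending_and_receiving + wait

# Traditional case: short burst, default 2 s wait
print(trial_duration_s(0.5, 0.6))                      # 2.6
# Buffer-bloat case the Transport Area warned about: ~30 s to drain
print(trial_duration_s(0.5, 0.6, buffer_time_s=30.0))  # 30.6
```

This also makes Al's next point visible: the third component adds directly to every trial, so raising the wait raises the duration of the whole study.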
A: So care is needed, since this time component directly increases the trial duration, and many trials and tests comprise a complete benchmarking study. So we can't really just increase this waiting time without an overall time penalty, and those of us who have been, you know, testing and testing and testing, and waiting for results to show up, know this: the waiting time after a trial has a direct impact.
A
So
it's
a
balance
to
strike
and-
and
I
think
we've
got
some
wording
here
now
that
that
does
that
so,
and
we
also
mentioned
the
upper
limit
for
the
for
the
time
to
send
each
burst
must
be
configurable
to
values
as
high
as
30
seconds
buffer
time
results
reported
at
or
near
the
upper
limit
are
likely
invalid.
We
saw
some
of
that
in
the
open
platform
for
nfv
benchmarking
testing,
where
basically,
the
at
the
at
the
larger
frame
rates.
A
The
packet
forwarding
rate
was
equal
to
the
back
to
back
frame
at
the
large
for
the
large
frame
sizes,
and,
and
that
means
you
don't
you-
don't
accumulate
a
buffer
or
a
queue,
and
you
you,
basically,
you
basically
send
and
send
and
send,
and
then
you
report
this
maximum
configured
time
and
buffer
length.
The
the
the
resolution
for
that
is
something
else.
A
We've
attacked
in
this
draft,
where
we
have
the
the
rfc
2544
throughput
tests
take
place
first
and
then
any
any
tests
where
the
frame
size
yields
a
the
maximum
theoretical
frame
rate.
Then
you
don't
test
that
for
for
back
to
back
frame
benchmark
and
the
and
the
attempt
to
infer
the
buffer
time.
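The resolution just described (run the RFC 2544 throughput test first, then skip the back-to-back benchmark for frame sizes that already forward at the maximum theoretical rate) can be sketched as a simple check. The helper name and the tolerance are hypothetical, not from the draft:

```python
def needs_back_to_back_test(measured_throughput_fps, max_theoretical_fps, tolerance=0.001):
    """Return True when a back-to-back frame trial is meaningful.

    When the DUT forwards at (effectively) the maximum theoretical frame
    rate, frames never accumulate in a queue, so there is no buffer to
    fill and no buffer time to infer; skip the test in that case.
    """
    return measured_throughput_fps < max_theoretical_fps * (1.0 - tolerance)

# 64-byte frames on gigabit Ethernet: max theoretical rate ~1,488,095 fps
print(needs_back_to_back_test(1_200_000, 1_488_095))  # True: below line rate
print(needs_back_to_back_test(1_488_095, 1_488_095))  # False: at line rate
```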
A: No? All right. Well, this may very well impact our future drafts, because we're likely to get, you know, sort of the same Transport Area comment regarding buffer bloat sizes, and we have to be ready for it, and that's why I share this experience.
A
Maybe
we
can
head
that
one
off
at
the
pass
in
the
in
our
in
our
future
work
where
this
is
relevant.
A
A
A: So that's the chairs' status. Let's check out our agenda here quickly. So our next topic is the EVPN draft status, and Sarah, I'll give you another opportunity here to explain what happened and where we are. Thank...
A
Okay,
good,
thank
you
all
right.
Well
then,
I'll
cover
this
this
quickly.
As
I
said,
we
had
a
good
good
couple
of
reviews,
and
this
is
now
in
working
group
last
call,
I
think,
the
last
day
it
is
march
23rd,
so
that
covers
our
our
our
responsibilities
here
and
we'll
move
this
along
and
again,
thanks
to
everyone,
brian
and
sadine,
who
helped
get
the
draft
to
o7.
Much
appreciated
all
right.
A
So
then
brian
I've
got
your
next
generation
firewall.
Benchmarking
draft
queued
up
here.
Next,
so
I'll
bring
up
the
slides,
and
I
will
try
to
do
something
here
with
the.
B: Okay, all right. Just go to the second slide... got it. All right. So, just jumping right in: as Al indicated, we received a boatload of comments and suggestions. Our count estimates that it was well over 70.
B
A
significant
number
of
them
were
requests
for
clarification
and
grammatical
changes.
If
you
do
a
diff
between
five
and
six
you'll
you'll
see
you
know
what
was
changed
in
in
this
area.
B: So, on examination of the intent of the draft, and with the addition of the security effectiveness section to include network IPS, we changed the target of the draft from next-generation firewalls to network security devices.
B: We had some internal discussions and reviewed it, and we do believe that this draft should supersede 3511, and we have text at the beginning of the draft explaining why, and under the test bed setup of section 4.
B: We clarified the security definitions contained in section 4.2, Table 3. It's a pretty significant rewrite, just to make it a lot clearer and to bring some commonality to the language used, to line up with other definitions used elsewhere. Then we did a significant rewrite of section 6.3, which was originally just the KPI section.
B
We
changed
the
name
to
benchmarks
and
kpis,
and
the
goal
was
to
clarify
and
eliminate
any
of
the
ambiguities
that
existed
and
there
were.
There
were
a
number
of
them
and
we
used
rfc
2647
definitions
where
it
was
where
it
was
applicable
and
it
it
was
applicable
in
a
number
of
cases.
I
think
three
or
four
of
them
in
in
the
section.
A: So those are the terms and definitions (just to give folks a little background here who might be new to this), those are the terms and definitions that supported RFC 3511. There was a time when we traditionally wrote our terms in a terminology draft, where the definitions appeared, and then the methodology was a separate draft, and that dates all the way back to the very first pair of drafts that came out of the Benchmarking Methodology Working Group.
A
So
it's
it's
really
good
that
you're,
you
know
getting
some
consistency
with
those
terms
brian,
since
you've
got
a
lot
of
good
items
on
this
particular
slide
here.
I'd
I'd
like
to
interrupt,
and
and
maybe
we
can
hold
some
of
the
discussion
that
we
had
planned
just
so
that
we
can
close
on
a
few
items.
How
many
by.
A
All
right,
then
this
is,
let
me
take
a
quick
look
at
it
all
right.
So
there's
more
changes
there,
all
right
so
we'll
we'll.
So
let's
handle
this
set
of
changes.
First,
all
right
so
so
does
you
know
I
forgot
to
check
this.
Does
this
mean
that
the
that
the
title
of
the
of
the
document
has
changed
now,
which
is.
B
That
yeah
it
it
it
does
it
more
accurately
reflects
the
scope
of
the
draft.
A: Good, good. Because, you know, one day I suddenly began to wonder: well, what do they mean by "next generation"? You know, are we really talking about the modern generation of firewalls, the ones that are here now? And this is a much less ambiguous title, so thanks for going there with that change. Yeah, that's good as well.
A: No, absolutely not; let's not do that, in fact. Okay, that's right there.
A: There are ways to track that fairly automatically now, but it's absolutely not necessary. Get at this...
A: Oh, I get it. Yep, that's fine. All right. So then, we've got that change, and then we've got another topic here. The author proposal, since we brought this up, is to have this draft supersede RFC 3511, which is, you know, the roughly 15-year-old benchmarking methodology for firewalls.
A: So what we need to discuss is whether the working group agrees with that, and then we also have to sync it up with the IETF terminology...
A: ...where "supersede" is kind of in the middle, between what we might call an update and... of course, there are a million definitions of what constitutes an update in the IETF; I think there are some good ones out there, and they may have even tried to standardize a few. But the "obsoletes" definition is fairly clear, and I think that's what you meant with...
A
Although,
although
maybe
I
didn't
hit
it
and
and
something
else
happened,
but
now
it
looks
like
I'm,
I'm
I'm
back
and
the
screen
is
visible
as
well.
No,
no!
I
don't.
A
Oh
okay,
all
right.
Let's
try
that
again
presentation
view:
okay,
yeah!
It
looks
like
it
completely
reset
my
my
settings
here.
A
So
let's
go
with
entire
screen
again
allow,
and
now
it
looks
like
it's
going
all
right.
You
don't
run
solar
winds.
Do
you
no.
A
Yeah
it
looks
like
there
was
a
there
was
a
bobble
here
that
caused
me
to
drop,
maybe
momentarily
enough
to
lose
everything.
So,
let's
continue
forward.
I
I
I
assume
that
somewhere
in
this
discussion
of
supersedes,
I
I
was
basically
trying
to
get
the
answer
to
the
question.
Do
you
think
supersedes
means
to
make
rfc
3511
rf
obsolete.
B
That
is
our
our
sense
of
things
when
we
reviewed
3511
and
compared
it
with
where
the
state
of
the
of
the
technologies
were
today
compared
to
what
we're
looking
to
achieve.
With.
With
this
draft,
we
felt
that
that
the
changes
were
significant
enough,
that
it
would
supersede
35,
11.,
okay,.
A
All
right
so
so,
then
it
sounds
like
in
in
our
ietf
terminology.
That
would
we
would
need
to
update
the
header
to
say
that
that
this
sort
of
the
status
obsoletes
rc3511
that's
going
to
have
to
go
into
the
abstract
and
we'll
have
to
edit
the
that
intro
paragraph
right.
At
the
end,
the
last
sentence
to
use
the
terminology,
if
we
did.
B
We
did
update
the
the
introduction
paragraph
right
at
the
end.
We
we
are
now
saying
that
says
all
these
reasons
have
led
to
the
creation
of
a
new
network
security
device,
benchmarking
document
and
this
document
supersedes
35.
A
Right
so
that
so
the
key
point,
though
brian,
is
that
we
have
to
use
the
word
obsolete
or
makes
obsolete,
okay,
rc
3511.
and
that's
okay
and
that's
it
and
that's.
If
the
working
group
agrees
on
that
so
understood.
A
So
I'm
gonna,
I
mean
it's
a
you
know,
always
an
unofficial
ask
when
we're
in
the
we're
in
the.
A
This,
but
are
there,
are
there
any
objections
to
obsoleting
or
making
obsolete.
A
In
the
context
of
publishing
the
the
the
draft
that
we're
discussing
now,.
D
Hi
speaking
as
a
participant,
when
you
say
network
security
device,
I
think
that
spans
a
pretty
enormous
set
of
devices.
D
Clearly
next
generation
we
can
arm
wrestle
over
next
generation,
but
a
firewall
is
pretty
clearly
in
line
pretty
clearing
pretty
clearly
transmitting
and
receiving,
but
there's
a
whole
another
set
of
devices
that
are
just
passive,
and
so
they
wouldn't
even
be
doing
that,
but
they
would
absolutely
call
themselves
a
network
security
device
so
that
the
change
in
title.
I
was
wondering
if,
if
you
could
say
more
in
in
specifically,
why
go
to
such
a
generic
term
versus
just
shoring
up
your
potential
confusion
over
hey
next
gen
versus
current
or
modern?
A
Well
and
it's
interesting
sarah
I
just
put
up
on
the
screen
here.
Bala
has
made
a
comment
in
which
he
says
no
change
in
the
title.
The
title
is:
benchmarking:
methodology
for
network
security
device
performance
was
used
in
the
previous
version.
I
assume
he
means
rc
3511
as
well.
No,
no!
No,
I
mean
I
mean.
D: If you're really going after a firewall... I see it even in the abstract now; it's got NGIDS and IPS, which is an interesting term. But I'm just saying, hey, as a participant: it's going to take a lot to convince me that that's the right approach to take. I definitely wouldn't test my IDSes the same way.
B: So, fair enough. Given that, and given that, you know, what we've developed here, in our collective opinion, goes way beyond the scope of just the next-generation firewall...
B: And, you know, we have moved all the security effectiveness stuff regarding the next-generation firewalls and the next-generation IPSes down to an appendix, and clearly focused on the performance testing aspects of things. You know, we think that it would definitely lend itself to next-generation firewalls, next-generation IPSes, or firewalls... you know, anything, any network security device that handles traffic and makes a security decision based on that.
D: ...and then, hey, you didn't mention IDS, which is a really good example, so pulling that in, I think, helps. But, so, can I defer answering that? I mean, knee-jerk, what I would say is that benchmarking firewalls (or whatever we're going to call an NGFW) and then IPSes makes lots of sense. But let me re-read it again with that question in the back of my mind, and I'll circle back on the list to give you some feedback.
B
So
so
one
of
one
of
the
things
we're
wanting
to
avoid
is
have
a
test
lab
decide
that
they
want
to
use
this
draft
to
do
testing
and
they
say
well.
We
can
use
it
in
the
web,
app
firewall
space
from
the
performance
requirements,
aspect
of
things
and
then
have
a
waff
vendor,
say
yeah,
but
it's
not
meant
for
wafts,
because
in
the
title
it
says
such
and
such
so
I
mean
we're
trying
to
avoid
that.
D
I
understand
like
I,
I
would
say
the
same
thing,
though
let
me
re-read
it
again
with
your
question
in
mind
and
the
objection
you're
trying
to
avoid,
and
let
me
see
if
I
can
come
back
with
some
reasonable.
A: So I didn't hear any objections to making 3511 obsolete, but it sounds like we've got a little more discussion to undertake anyway. But in the next draft, the one the working group looks at, let's try that as obsoleting 3511, and see what the reactions are, because that's something that's definitely got to be...
A
You
know
confirmed
on
the
mailing
list
at
the
very
least
all
right.
So
then
one
other
one,
one
other
topic
that
came
to
me
in
in
my
review:
brian
bala
and
carson.
A
By
the
way
you
know
thanks
so
much
for
taking
on
70
comments
and
and
I'm
sure
that
was
really
honorous
toward
the
end
of
the
process.
Here,
we're
really
making
good
progress
of
toward
clarification
and
and
moving
this
up
to
the
next
step.
Yeah
bella
didn't
sleep
much,
no,
no
and
and
and
and
I
and
I
I
know
exactly
where
you're
living
bala
I
got,
I
got,
I've
got
the
same
same
situation
on
my
hands
elsewhere.
A: And in the Benchmarking Methodology Working Group, we've got a pretty solid definition of throughput right now, but it doesn't come from 2647.
A
In
fact,
what
you'll
quickly
see
here
is
that
the
term
the
term
throughput
it
it's
not
defined
in
2647,
and
I
think
if,
if
and
if
you're,
really,
if
we're
talking
about
tcp
or
or
other
you
know
reliable
transport
layer
traffic,
then
I
think
we're
talking
about.
Where
is
it?
I
think
it's
it's
good
put
yeah.
A
The
number
of
bits
per
unit
time
forwarded
to
the
correct
destination
interface-
minus
any
lost,
are
transmitted
so
well.
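The goodput definition Al reads out turns into a one-line calculation. A minimal sketch: the function name and the example numbers are invented here, and only the subtraction of lost and retransmitted bits follows the quoted definition.

```python
def goodput_bps(bits_forwarded, bits_lost, bits_retransmitted, interval_s):
    # Goodput per the definition quoted above: bits per unit time
    # forwarded to the correct destination interface, minus any bits
    # that are lost or retransmitted.
    return (bits_forwarded - bits_lost - bits_retransmitted) / interval_s

# 10 Gbit forwarded over 10 s, of which 0.2 Gbit were retransmissions:
print(goodput_bps(10e9, 0, 0.2e9, 10.0))  # 980000000.0, i.e. 0.98 Gbit/s
```

This is exactly why the later discussion turns on the test setup: the `bits_lost` and `bits_retransmitted` terms require knowing whether the DUT or the test equipment caused the loss or the retransmission.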
B: All of the... I think universally (I'm probably sticking my neck out a little bit when I say universally, but a significant majority of) the security product vendors didn't like us using goodput. Ouch.
A
Yeah,
so
do
you.
A: Do you mean reliable transport protocol throughput, or, specifically... I mean, I wouldn't want to add an adjective here like "TCP throughput," because, you know, pretty quickly people are going to want to talk about reliable transport throughput that is based on UDP and QUIC, right? And that was one of the other things that was bouncing around in my mind when we were talking about next-generation firewalls as well, and you're just throwing that terminology out. So, you know, we need to...
A
We
need
to
be
sure
the
vendors
understand
that
the
benchmarking
methodology
working
group
has
already
claimed
this
term.
Okay
and
certainly.
A
So
all
I
was
always
going
to
say
is
if
we
need
to
only
back
up
if
we
need
to
if
we
need
to
explain
what
happened
here.
A: Oh, this is... yeah, wait... and then this one: if we need to call this goodput, but then make some explanatory statement later, about, you know, devices in the marketplace sometimes calling this throughput, but it's different from the RFC 2544 throughput, and so on and so forth, then, you know, that might get us to a compromise. But we can't leave...
B
So
just
raccoon
just
made
a
comment
saying
that
throughput
is
now
defined
in
7.1.3.4.
Maybe
let's
just
scroll
down
to
that
one
and
see
all
right.
B: We can use the wording there, and then... I'm not entirely sure.
B
I
I
I,
if
we're
going
to
be
taking
it
and
taking
the
input
from
the
bmwg,
which
I
think
we
we
would
be
fools
not
to
back
to
our
our
internal
working
groups.
I
just
want
to
make
sure
we
have
a
very
clear
explanation
as
to
where,
where
the
recommendation
is
coming
from
and
why.
G: So that's why, I think... RFC 2544 is mentioning frames per second, but I think for the stateful firewall we have to measure the bit rate, because the firewall is taking a look at the bits and the packet payloads. So the bit rate is more important than the frame rate. So this is one... I...
A: Yeah, I'm not arguing that. But I will say that the RFC 2544 throughput is commonly used also at the packet layer, but the results can be expressed in bits per second; that's where we compare it with, you know, the maximum theoretical bit rates and frame rates and so forth. We just have to put the right story together here, so that we don't have...
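Al's point that a packet-layer result can be expressed either way comes down to the usual Ethernet arithmetic. A sketch, assuming the standard per-frame wire overhead of 20 bytes (8 bytes of preamble/SFD plus the 12-byte minimum inter-frame gap); the helper names are invented here:

```python
def max_theoretical_fps(line_rate_bps, frame_size_bytes, overhead_bytes=20):
    # Each frame occupies the frame itself plus 20 bytes of wire overhead.
    return line_rate_bps / ((frame_size_bytes + overhead_bytes) * 8)

def frame_rate_as_bps(frames_per_second, frame_size_bytes):
    # The same result expressed as bits per second of frame data.
    return frames_per_second * frame_size_bytes * 8

fps = max_theoretical_fps(1e9, 64)        # 64-byte frames on gigabit Ethernet
print(round(fps))                          # 1488095
print(round(frame_rate_as_bps(fps, 64)))   # 761904762 bits/s of frame data
```

The two numbers describe the same line-rate result, which is why the frames-versus-bits question is a reporting choice rather than a different measurement.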
G: Here, I think the only problem is: you see, the throughput... RFC 2647 is not explaining the throughput; it is explained in bits per second, so we just take the same definition. The only problem is if we change the title. Okay, there you see bits per second as a KPI, but in our draft you see "throughput."
A
Want
consistency,
yeah,
and
so
that's
why
I'm
pointing
to
if
you're,
if
you're
thinking,
good
put
and
and
that's
what
you
just
explained,
bala
with
re-transmissions
and
losses
that
are
are
taken
care
of
you
know,
that's
fine
and
that's
a
term
that
was
used
in
rfc
3511,
the
previous
firewall,
benchmarking
rfc,
but
that's
it.
So
I
it
sounds
to
me
as
though,
and
of
course
you
can
express
good
put
in
bits
per
second,
but
but
it
sounds
to
me
as
though
you
really
want
to
sit.
G: So, if you want to measure goodput, you need to eliminate all retransmissions, retries, and everything. And the point is, the system is complex: not only the device under test, but also the test equipment. Then we need to clarify, make sure, okay, who is dropping packets, or who is making retransmissions, and all kinds of things we need to eliminate in order to measure the goodput. So goodput is the traffic rate without any retransmissions, delays, and anything.
J: Right. So I think this is covered in the RFC 8238 definition of application goodput, but I would like to think about the majority of the readers: what is the target audience? So, of course, now somebody's calling me the target audience... and the target audience is just the ordinary guy, or girl, who just has an informal understanding of throughput.
A: It's in... well, the definition is in two places. Actually, three places. Actually, four places. It's in RFC 1242...
A
It
is
in
1944,
that's
the
original
version
of
of
2544
and
it
and
actually
the
wording
is
correct
in
1944,
because
something
went
wrong
in
the
editing
process
and
the
definition
in
2544
is
missing
some
words,
and
so
the
fourth
and
final
place,
where
the
definition
exists
is
in
the
errata
for
rfc
2544,
where
I
pointed
out
that
something
went
wrong
in
the
editing
process
and
the
full
definition
is
also
for
rc.
A: ...for the RFC 2544 throughput; it's in the errata. So yeah, there's a long history here; we're carrying some baggage, a lot of it. You know, a lot of this took place before I got very active here in 2003, but I'm responsible for noticing the mistake and entering the errata; I'll take credit for that. Nevertheless, in our context (and this is a standard as much as anything in our industry)...
A
This
is
this
is
the
definition
we
use.
You
know
they're
there.
Surely
there
are
lots
of
other
ways
to
define
this,
but
within
our
working
group
we've
got
a
good,
solid
foundation
to
stand
on
and
and
if
we,
if
we
get
close
to
that
with
another
term,
let's
let's
clarify
it
with
with
at
least
one
really
useful
adjective.
I: So, Al, Carsten, Bala, Brian, if I can add... it's Magic here. Firstly, thanks very much for driving this work; as I said in my email, excellent work. And I think this is an opportunity for us to actually standardize on nomenclature, exactly as you said, Carsten, because making it complicated for the users is not going to help anybody. And just a quick example: in the normal...
I
You
know
packet,
blasting
tests
we
use
in
our
in
our
tests
the
rfc
2504
nomenclature,
but
but
we
we
quote
throughput
in
in
bits
per
second
and
also
in
packets
per
second
here,
we're
clearly
dealing
with
a
stateful
traffic.
I: So we are after something related to transactions and transaction throughput, and we have transactions per second, and we have, you know, connections per second and other metrics. So it would be good to use this.
I: It's an excellent opportunity to actually look at defining this term well, so that there is no ambiguity for stateful traffic. And the only issue here is that, independently of what functions are performed on the packet flows, there will be quite huge dependencies not only on the packet size, but also on the protocols used, right? Whether it's TCP, QUIC, and so on.
I: So there will always be a number of, I think, adjectives, plus protocols, or some other metadata, associated with the performance metric word, whether it's throughput, or transaction throughput, or something. But it'll be good to have that clarified here, and I think this is exactly the place: this draft.
A
So
that's
great
good!
Thank
you.
I
I
think
that's
a
that's
a
decent
summary
of
what
we
talked
about
and
and
the
next
steps
and
anything
further
on
this
topic.
It's
an
important
one,
obviously
to
the
working
group
and
and
and
to
the
world
at
large.
J
I
would
like
to
point
to
two
parts
of
this
definition
which
are
not
included
in
to
my
knowledge
in
any
other
throughput
definition.
The
first
one
is
the
word
allowed
so
for
security
devices.
There
is
a
difference
between
traffic
that
could
be
forwarded
and
traffic.
That's
allowed.
Okay,
maybe
that's
not
so
much
a
problem.
The
other
problem
is
the
correct
destination
interface,
so
in
load,
balancing
or
any
other
kind
of
layer,
7
device.
A: Actually, we've got the term "correct destination" here in the old goodput definition, so that's a good start. And then, you know, you basically have to decide where in the stack this definition is going to exist, at what layer; you have to nail that down. And I think your point about allowed, permitted traffic...
A
This
is
the
you
know
that
picks
up
the
aspect
of
you
know
the
firewall
operation,
where
it's
going
to
be
tossing
away
traffic,
potentially
while
you're
trying
to
forward
allowed
traffic.
So
those
and
those
are
the
kinds
of
tests
that
people
are
interested
in.
So
I'm
sorry.
I
Too
yeah
so
carson
just
stimulated
my
my
my
my
thinking
here,
the
you
know
clearly
as
expressed
earlier.
I
think
it
was
either
balor
or
brian
we're
dealing
here
with
quite
complex
packet
processing
functions
deployed
on
those
on
those
devices
or
appliances.
I: So (and I think I mentioned this in my comments, but I didn't really get to the bottom of it, because I clearly ran out of time), there is the presence or not of CVEs, which is very well handled in the draft. So the throughput... you know, we are actually measuring performance here, and this is performance with, you know, clean traffic, and also performance under attack, right, with CVEs.
I
Is
there
so
there
will
be
a
cve
dependency
and
carson,
as
you
said
you
know,
is,
is
what
does
matter
the
permit,
throughput
and
and
the
deny
actions
are,
are
ignored.
I
I
mean
they
all
they
all
matter,
and
so
so
somehow
that
the
performance,
slash
efficiency
definition
should
capture
that
we're
dealing
with
the
security
devices
that
are
filtering
the
good
from
bad
and
and
the
definition
should
capture
that
somehow
this
is
not
just.
You
know
dropping
packet
due
to
the
capacity
issue.
It's
not
the
only
action
that
we're
measuring
here
we're
actually
measuring
the
response
to
the
the
the
malicious
traffic.
A
Yeah, that's very good. So there are absolutely some aspects of this definition that are unique, that we want to retain, and that we want to differentiate from the RFC 2544 throughput in our definition.

A
I think that's the summary of where we are at the moment.
B
So, in order to avoid us (us meaning the folks from the NetSecOPEN side and the folks from the BMWG side) going back and forth on this, I'm wondering whether it would be useful to have a call with a small group of people: I'm thinking Maciek and maybe a couple of others from the Benchmarking Working Group side who are interested, and then some folks from the NetSecOPEN side.

B
So we could just nail this down and get something where we all agree; then we'll write up a draft definition, circulate it amongst everybody, get everybody to bless it, and put it in the document. I think that might be a more efficient way of doing it.
A
Yeah, I don't think we need an interim meeting for that, necessarily, as long as everybody from BMWG who's interested is allowed to attend.

A
Yeah, so I'm just trying to avoid the overhead of an interim meeting when we're basically going to be talking about one definition, and I think that's fairly reasonable. It's really just a bunch of knowledgeable people getting together, each side expressing their perspectives and views, and then, as you say, Brian, nailing it: going away and getting it right. That's what we're talking about here, basically, yeah.
A
So I think that's a good way forward on this. Does anybody have any objection to proceeding that way: an informal call at a reasonable hour, open to anyone who has a strong interest in this topic? Basically, there were five of us who spoke up today, six across all the topics of this draft, but plenty of people have made comments along the way and might want to join us.

A
So it's a small but very interested group, and I think that would keep it within a manageable size. Then, of course, any preliminary agreement is going to be reported back to the group, both as an email status and as updated words to the draft, with working group review to follow. Nothing's going to happen here without full access to the information.
C
Yes. So, obviously, I don't know this technology very well, but I do want to echo the comment that I think a couple of you are making: this should be a different term, or at least it should have some adjective to describe how this throughput is different. I think that is key. Although it's hard to speak for what the IESG would do during review, I suspect that if you tried to redefine throughput in this document to have a different meaning...

C
I suspect that would cause angst within the IESG review. So it doesn't have to be a completely different word, but at least making sure that it is quite clearly different would, I think, be a good thing to achieve. Having a meeting to discuss this going forward sounds like a good idea to me; I just wanted to phrase it that way.
A
That's good; that sounds supportive of the general direction we're going. And as a reminder to everyone, which is something Rob is very familiar with: the IESG sometimes has people on the group who've done a significant amount of benchmarking, and from that experience they go, "oh, what the hell are you guys doing here?"

A
Our work is often looked in on at the very last stage by very knowledgeable people, performance measurement experts and so forth, who only get to weigh in at that point, and we have to be ready to defend what we decide.
A
Yeah, that's exactly right. Good, all right. So, Brian, I think that was the last one on this topic; you've got a few more topics on this highlight of changes here, so go ahead.
B
For the changes to the actual test cases, 7.1 through 7.9, the basic intent and process were unchanged; anything we changed there was for clarity's sake. We added the IANA considerations text, which wasn't in section 8 previously, and we added the associated RFCs that we referenced to section 12.2, informative references. And then we moved...

B
We moved the set classifications that we previously had in section 4 into appendix B, because they weren't applicable across the board, so it just made more sense to make them an appendix. And that's pretty much it; if you take a look at the diff, you'll see all the work that we did.
A
Yeah, tons of work there, and much appreciated.
B
And, speaking on behalf of all the folks on the NetSecOPEN side who worked on this: the input that we've received, and that we anticipate receiving going forward, is great. I think that in the long term it will work out that we'll have a much more solid document, one that I think will survive the test of time.
A
Great. Well, I'm glad that the NetSecOPEN community, which has contributed so much to this, is also open to the wider review here; that's what we've been trying to get going all along. As far as the next steps go, we've got Sarah's review, we've got to work on the throughput definition with an adjective (the something-something throughput), and we've got to get some additional review by folks...

A
...if any interest has been piqued today by these discussions and topics. And we're going to nail down that obsoletes part of it too. So, yeah.
B
If anybody... you know, Al, I'm assuming you're interested in being involved on this call, Maciek as well, I'm assuming; but if anyone else is interested, drop me an email at bmonkman at netsecopen.org and we can set that up.
A
Yeah, please drop your email into the chat, Brian; that will help people get it right away.

B
Yeah, I'll do that right now.
B
Should I post... yeah, I kind of hate the idea of saying this, but should I post an invite to the meeting to the BMWG mailing list, or is that just asking for...
A
Yeah, no, I think that's fine. I think we should be as open as possible, and I don't think we're going to get a tremendous overage that would mess up our conference.
B
Hey, Sarah, how long do you think it'll take you to go through the fifty-odd...
D
...pages? I think it's a day per page, Brian. So can you give me a couple? No, no: I will do it as soon as I can; I'll try to get to it this weekend. In all honesty, I was going to have somebody on my QA team here at work take a look as well, because it's something...
D
So, you know, if you can give me a week, formally, that would be best, but I will do my very best to get my review in this weekend.
A
I'm sorry, sorry: I was just trying to recognize Maciek in the queue. I've done a bad job of managing the mic line here today, but I think everyone's been heard who wanted to be heard. Go ahead, Maciek.
I
Yeah, sorry, I also forgot to raise my hand earlier, so apologies for that. Now I'm following the discipline in my client and I was waiting patiently; thank you, Al. I actually have one or two generic comments, if this is a good time, because we're going to leave this draft now, correct?
A
Yeah, I think, if you can just do that in five minutes, that will be great.
I
Okay, I sometimes take time, but I'll keep it short. The one question I have, and I'm not sure I asked it in my review: this is about security devices, appliances, but is the aim here also to address virtualized or cloud-based security offerings?

I
So, is that a target for the draft, directly, or...
B
Tangentially. We're going into this with an expectation that things might have to change. That may lead to a requirement for a new draft, or a new RFC, but we haven't reached that point yet; when I said we're just starting to think about it, I do mean just.
I
Okay; because if it does, then this draft should be acting as a foundation, all right. At least that's my...

I
...something to build on. There is a lot of new "network as a service" marketing, the SASE things, secure access service edge, and network security generally became a much bigger thing than before, due to COVID and the number of people living online, work life and so on. So, okay, thanks very much, because there's a direct impact there on sizing: the current sizing (and I think I raised that) is clearly focusing on physical appliances. But maybe that's something to keep in mind. The other...
I
...one is the comment I made. And I guess I still need to finish checking, because you actually did address most of my comments, so thank you very much for that.

I
But there are a few loose ends, where I guess my questions or points were not specific enough, and that concerns the features that are recommended to be configured.
I
So I guess I'm going to revisit this point and clarify further what is recommended versus optional, and the use of the word "should": consistently enabled, because IPS is always there. If people take that as the baseline, that IPS is always there, that excludes cases where IPS is not there.
B
The purpose of our recommendations, when it comes to the features being considered, was not necessarily to stringently test the security effectiveness of these features. The goal was to ensure that the features were enabled, that we verified they were acting and running in a manner that we (we being the testers) would expect, and then to leave them on during the performance testing. So we were not looking to make this a document that was an exhaustive security-efficacy test.
I
Okay, so I guess I'm going to revisit and reread that, because that's not the... and I guess I included this in my comments: whether there is any scope for mentioning that some of those features, or any of those features, could be tested in isolation, in terms of measuring their efficiency.
I
But let me think about that, and I'll come back to you with comments. And I understand that there is time for completing the review, because I actually only skimmed through section 7.1, which was good, but I would like to spend a bit more time on that. So is there time for that before the next revision, or is that not the case?
B
We welcome comments and suggestions. As evidenced by how we responded to your previous comments, there will be some that we think make a lot of sense and that we will change. But with respect to the test cases, 7.1 through 7.9, it's going to be a reasonably high bar; we'll need to discuss with you in order to make changes in that area, if it's going to affect the tests themselves, the execution of the tests.
I
You know, "the best thing I've ever seen written down"... okay, all right. But I get the point, Brian, so I will do my best to turn this around as soon as possible.
A
Okay, so we've got our next steps, and we've got a couple of additional comments; assuming all of that's been captured in the notes, we'll get it out to the mailing list very shortly. Brian, Bala, Carsten: thank you so much for your efforts. Again, like you said, thank you, and we'll now move on to the next topic, if that's okay. Actually, the next topic is the multi...
A
Oh gosh: Multiple Loss Ratio search. And what do I want to do here? I'll make this bigger. And I saw Maciek and Vratko; which one is it? Is it you, Maciek, who's going to present this, or Vratko?
A
It's you, Vratko. I need to beg a two-minute health break here; if you want, you can start...

A
Start slowly, but I will be right back. Okay.
A
All right, okay, I'm back. I guess if you start slowly, Vratko, you will pick up anyone else who needed a health break. Thank you for waiting.
K
Okay, so hello. I'm Vratko Polak, presenting an update on this draft. MLRsearch means Multiple Loss Ratio search, and now I see that this is not spelled out anywhere in this presentation, so hopefully next meeting it will be somewhere there. The draft status is the important thing, and first of all, the draft was adopted. That means that it now has a different file name with a different version number, but otherwise the contents have not changed in any meaningful way. My plan was to prepare some changes for this meeting, but I have run out of time.
K
So this is a fake result; I do not have a real run with real numbers to show this behavior, but at least these numbers are easier to follow. Sure, you can go to the next slide...
K
...where I describe the improvements. Initially, my idea was just to make the draft obviously able to support multiple loss ratio goals, because currently it focuses on just two goals. One goal is exactly zero loss, which leads to NDR, the no-drop rate, and the other is some small non-zero ratio, which leads to PDR, the partial drop rate.
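The two goals Vratko mentions can be generalized into a configurable list. A minimal illustrative sketch follows; this is not the CSIT code, and the names and the 0.5% PDR ratio are assumptions for illustration only:

```python
# Hypothetical sketch: express NDR and PDR as instances of one structure,
# so a search can iterate over any number of loss ratio goals.
from dataclasses import dataclass

@dataclass(frozen=True)
class LossRatioGoal:
    name: str
    loss_ratio: float  # fraction of frames allowed to be lost

# NDR: exactly zero loss; PDR: some small non-zero ratio (0.5% assumed here).
GOALS = [LossRatioGoal("NDR", 0.0), LossRatioGoal("PDR", 0.005)]

# A generalized search would loop over GOALS instead of hard-coding two cases.
assert all(0.0 <= g.loss_ratio < 1.0 for g in GOALS)
```

Adding a third goal is then just appending another `LossRatioGoal` to the list.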
K
It is obvious, to me at least, that the previous logic can be generalized to support any number of ratios, but it wasn't clear from the previous text how it should be generalized. Then I encountered one inefficiency, so I endeavored to fix that inefficiency while changing from a fixed to a configurable number of loss ratio goals. The main change is that, contrary to the previous version, I mean the currently published version...
K
There
is
no
coupling
between
which
measurement
belongs
to
which
ratio,
because
in
the
external
search
this
can
change,
we
have
seen
that
previously
measurement
result
that
looks
unrelated
turned
out
to
be
related.
So
the
main
change
is
that
now
there
is
quality
quantum
database
that
holds
all
the
results,
at
least
for
the
particular.
K
Measurement
duration
and
the
the
fact
which
of
those
results
are
acting
as
upper
bound
or
lower
bound
for
a
particular
ratio,
is
computed
in
the
runtime
after
each
new
result.
So
this
way
even
old,
and
it's
basically,
we
avoid
the
previous
situation,
even
measurements,
that
when
they
were
done
looked
unrelated
now
they
can
become
related
based
on
this
computation
during
ground
type.
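The idea of deriving bounds at runtime from one shared result database can be sketched as follows. This is a hypothetical illustration of the concept, not the draft's or CSIT's actual algorithm or API:

```python
def bounds_for_goal(results, goal_ratio):
    """Classify trial results against one loss ratio goal.

    results: list of (offered_load, measured_loss_ratio) tuples, all taken
    at the same trial duration. Returns (lower_bound, upper_bound), where
    the lower bound is the highest load still meeting the goal and the
    upper bound is the lowest load exceeding it. Because this is computed
    fresh from the whole database, a result measured for one goal can
    later serve as a bound for another goal.
    """
    lower = upper = None
    for load, loss in sorted(results):
        if loss <= goal_ratio:
            if lower is None or load > lower[0]:
                lower = (load, loss)
        else:
            if upper is None or load < upper[0]:
                upper = (load, loss)
    return lower, upper

# Three trials, in any order; for a 0.5% goal the bounds are 2.0 and 4.0.
db = [(1.0, 0.0), (4.0, 0.02), (2.0, 0.004)]
```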
K
There are some technicalities. For example, I am now introducing the effective loss ratio. This is to avoid false decisions when a measurement at a higher rate leads to a lower loss ratio, because the intention is still to be conservative in the search. So if this so-called loss ratio inversion happens, we do not trust those lower loss ratios, and we assume they are the same as at the next smaller rate. And this is the example... yeah, I think I have... there is...
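The effective loss ratio described above can be illustrated with a small sketch, assuming (as the talk suggests) that the conservative fix is to carry forward the largest loss ratio seen at any equal or smaller rate; the function name is made up:

```python
def effective_loss_ratios(results):
    """Replace each measured loss ratio by a monotone 'effective' ratio.

    results: list of (offered_load, measured_loss_ratio). If a higher load
    reports a lower loss ratio (loss inversion), we do not trust it and
    keep the maximum ratio seen so far, staying conservative.
    """
    effective = []
    running_max = 0.0
    for load, loss in sorted(results):
        running_max = max(running_max, loss)
        effective.append((load, running_max))
    return effective

# The inverted 0.2% at load 3.0 is lifted to the 1% seen at load 2.0.
trials = [(1.0, 0.001), (2.0, 0.01), (3.0, 0.002)]
```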
K
...for the code, so that people can check how exactly the current logic looks. I still do not have a good enough English description of how exactly it works, mainly related to what happens when you start a new phase and you do not have all the bounds at the current duration.
K
...happens with the new logic, and you can see there is a green number, 11. This is what the new logic does: it realizes that a previously unrelated measurement now works as a valid bound, so the search ends more quickly. And, by the way, it gives a different result. That is because the DUT is not behaving in a deterministic way, and there is no easy way to deal with that within this framework; I believe this is a good solution, giving an equally valid result, even if it is a different one, when the result comes sooner.
A
Yeah, it'll be interesting to see how you handled the device, as you put it, the device under test, performing in a non-deterministic way with respect to increasing load. I mean, this is the problem...
A
You
know
we
recognized
between
physical
devices,
where
we're
pretty
much
able
to
get
rid
of
the
transient
problems
and,
and
then
the
you
know
our
virtualized
versions
of
the
same
devices
where,
where
transients
come
around
and
they
they
are
a
necessary
part
of
operation,
but
they
they
bother
our
our
our
longer
term,
searching
and
and
kind
of
give
us
a
clue
that
the
resource
limitation
answer
is
is
below
a
certain
load
level
when,
in
fact,
no
that
was
just
a
transient
that
happened
and
the
resource
limitation
is
actually
above
here
somewhere
we've.
A
K
For this particular algorithm, my goal is to optimize the search logic for the case where the algorithm never finds any inconsistencies and thinks everything is good. Of course, this does not happen, because the algorithm starts with shorter durations and then needs to remeasure with longer durations. Here you can see the PDR after the five-second measurements was between 15 and 16, and then at 30 seconds it was forced to concede it is lower. So this can always happen, and sometimes it happens reliably.
K
It may be impossible for the 30-second measurement not to encounter this interrupt, and the previously good result, which was lucky, can now never be achieved, after any number of repetitions and so on. So this algorithm is prepared for that. The improvement is that things only get more stable within one specific phase. When we switch from five seconds to 30 seconds, things can break, but they will not break as badly as previously, because previously the search for PDR would break the already stabilized search for NDR, which is the inefficiency we want to avoid.
K
We, for example, decided that quadrupling the interval width gives better results, because usually, when you find out that the old bound no longer holds, the new bounds are not adjacent; they are several expansions away. So by increasing the interval width more aggressively we can save some time on average. This will probably also go in, and finally there are the uneven splits: this is some information theory applied to the problem.
K
If the current interval width is not a power of two, we can save some time by not splitting evenly, but by trying to get the widths of the resulting intervals closer to powers of two. For example, a one-to-two split is the logical thing to do if you find yourself at three times your interval width goal. So this will be another improvement on average, and I think it will be worth documenting in the next version.
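The uneven-split idea can be sketched with a small helper. This is a hypothetical illustration of one way to realize it (split off the largest power of two of the width goal), not the rule the draft will necessarily specify:

```python
import math

def split_offset(width, width_goal):
    """Distance from the interval's lower end at which to place the next
    trial load. Plain halving when the width (in units of the goal) is
    small or already a power of two; otherwise split so both sub-interval
    widths land on powers of two of the goal, e.g. a 1:2 split when the
    width is three times the goal."""
    units = width / width_goal
    if units <= 2.0 or math.log2(units).is_integer():
        return width / 2.0
    # Largest power of two (in goal units) strictly below the total width.
    k = 2.0 ** math.floor(math.log2(units))
    return (units - k) * width_goal

# Example: width three times the goal yields a 1:2 split (offset 1.0).
```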
A
Well, that's great. I think you're headed down the right path on several of these, Vratko and Maciek, and, just in my opinion as a participant, this is getting better.
A
I
I
even
encountered
a
case
like
the
one
you
described
with
the
you
know
the
binary
search
with
loss,
verification,
algorithm,
you're
a
little
bit
aggressive
on
moving
the
limits
around
and
how
to
fix
one
of
those
and-
and
you
know
it
only
it
only
really
it
only
really
showed
up
because
of
you
know
one
kind
of
corner
case
of
testing
so
but
but
it's
good
to
get
these
things
fixed
before
we
send
them
out
in
the
world.
A
So
that's
good
any
any
comments
on
the
on
the
draft
or
and
or
the
future
plans
here.
A
Okay, well, we'll continue to push for reviewers on the mailing list. Thank you for your time and your preparation today, Vratko and Maciek; much appreciated.
A
Yeah,
thank
you
and,
and
congratulations
on
your
this
is
your
first
version
of
the
working
group
draft.
So
you
know
now
now
is
when
the
working
group
really
has
to
start
paying
attention.
That's
that's
with
my
working
group
chairs
hat
on.
D
Yeah,
congratulations,
good
call,
adal.
I
Oh, yes. Vratko, in case you haven't seen it: Carsten asked a question on the chat, whether there is any plan, or an observation, of MLRsearch being used in other contexts than FD.io, which is where the code is being developed, and whether there are any other implementations on the horizon. Vratko, do you want to take that, or do you want me to talk?
K
Well, there is one version of an MLRsearch library in Python, available on PyPI, but it is an older version; that's what we are currently using in CSIT. Basically, in FD.io we do not have a good process for publishing new versions as quickly as we are able to produce them, so we definitely want to improve on that. Other than that, I am not aware of anybody else trying this algorithm; I think everybody that I know of is using this Python library.
I
Yeah
and
in
terms
of
in
terms
of
who
is
using
those,
I
know
that
the
nfv
bench
guys
were
using
it
at
some
point.
I
don't
know
what
is
the
current
situation?
I
I
mean
interacting
with
alec
hawthorne,
so
I'll
you
may,
you
may
know
better
the
situation
there
and
but
in
the
context
of
fdio,
we
do
have
podesta
fdio,
carson,
answering
questions,
and
we
have
members
of
linux
foundation,
networking
where
fdio
sits
together
with
openv
and
so
on,
and
we
have
the
we
have
the
intel
and
arm
as
at
least
two
parties
that
are
using
the
mlr
search,
but
that
is
done
in
with
system
code
and
we're
currently
testing
vbp
and
dpdk
in
fdio.
I
The
other
guys
are
testing,
you
know
whatever
they,
they
they
develop
or
or
verify.
I
I
don't
have
a
full
visibility,
the
main
domain
in
case
it
is
not
clear
for
for
folks-
and
I
don't
know
to
a
degree
we
describe
it
in
a
in
the
draft
intro.
I
The
main
goal
here
is
to
really
reduce
the
amount
of
time
it
takes
to
discover
the
rates
and
and
the
target
environment
or
or
deployment
scenario
is
automated
test
execution
for,
for
benchmarking,
run
by
as
part
of
the
ci
cicd
system
to
verify
the
performance
of
you
know
physical
appliances
or
or
virtual
appliances,
and
apply
this
algorithm
not
only
to
packets
per
seconds
or
beat
per
second
throughput,
but
we
also
have
applied
it
now
to
connections
per
seconds
and
and
and
unstateful
throughput.
I
Let's
put
it
this
way,
so
we
believe
it
is
quite
universally
applicable
as
a
as
alternative
to
a
straightforward
binary
search
and
with
addition
of
the
multi-rate
support,
one
can
now
define
not
only
zero
frame
loss
and
some
non-zero
plr,
a
packet
loss
ratio,
but
but
but
more
rates
if
one
desires.
A
Good,
thank
you.
When
it
comes
to
nfv
bench,
I
think
that's
in
its
sunset
mode
of
operation.
That's
that's
my
quick
feedback
there
from
the
parent
project.
A
Open
platform
for
nfv
has
become
a
an
egyptian
goddess
of
the
nile
and
okay,
so
yeah
yeah.
It
is
kind
of
nice.
So
we
will
we'll
we'll
look
forward
to
the
to
the
changes
and
the
updated
drafts
of
maciac
and
veraco,
and
thanks
very
much
for
your
discussion
today.
A
Thank
you
all
right
so
on
to
the
next,
and
that
is
vladimir,
and
he
wants
to
tell
us
a
little
bit
about
his
work
at
the
hackathon
and-
and
you
know
maybe
some
other.
A
H
Okay, I only had a chance to test the microphone with Meetecho, so I changed the microphone now. Yeah, it's a short sequence of slides, so you can go to the next one. This...
H
This
briefly
describes
the.
H
The
goal
for
the
project,
obviously
it
has
the
the
drafts
and
then
are
four
repositories
with
white:
the
the
test
code
for
the
the
test
case
implemented
in
python.
H
There
is
a
net
conf,
a
young
specific
code
which
implements
everything
related
to
young
and
net
conf,
passes
the
configuration
and
it
calls
the
command
line
tool
and
this
command
line
2
can
be
implemented
for
any
type
of
tester
that
already
exists,
and
there
we
have
like
a
reference
implementation
in
hardware
which
is
done
in
very
walk
and
even
unicorn
box
and
pcb,
which
connects
like
off-the-shelf
fpga
board,
so
that
we
can
synthesize
that
so
we
can
actually
measure
how
good
other
test
generators
are
implementing
this
draft
if
they
implement
it.
H
A
I can hear you just fine, Vladimir; you're much better since you changed microphones, or something.
H
Okay, good. So this should be fairly simple, even for people without an interest in testing. We even have tags for the interfaces and the device; we have a tester, and we have the same device implementing the device under test, just with different SD cards. This was the setup for the hackathon.
H
So it is not as deterministic as the other one, the hardware Verilog implementation, which is just configured through a register interface. Both devices have the same command-line interface, so it's actually very simple to select whether you want to use the software implementation or the hardware implementation, and the NETCONF code, the YANG part of the implementation, just calls command-line tools.
H
You
don't
need
to
know
anything
about
netcon
for
young
and
it's
implemented
in
a
transactional
way
like
you
have
command
line
two
that
starts
the
traffic
generator
for
a
certain
interface
and
then,
if
you
make
a
change,
then
it
stops
it
and
starts
it
again
when
you
do
a
commit
in
net
confirms,
so
I'm
trying
to
separate
the
net
complexity
from
what
people
actually
doing,
traffic
generators
and
otherwise
just
need
to
know
this
is
important
if
this
draft
is
going
to
be
a
success,
I
think
so.
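The transactional start/stop behavior Vladimir describes could look roughly like this. The CLI name (`traffic-gen`), its arguments, and the class are invented for illustration and are not taken from his repositories:

```python
import subprocess

class GeneratorWrapper:
    """On a NETCONF-style commit, restart the traffic-generator CLI only
    for interfaces whose intended configuration actually changed."""

    def __init__(self, cli="traffic-gen", run=subprocess.run):
        self.cli = cli          # hypothetical CLI tool name
        self.run = run          # injectable command runner, for testing
        self.running = {}       # interface name -> currently applied config

    def commit(self, intended):
        """intended: dict of interface name -> config dict from the datastore."""
        for ifname, cfg in intended.items():
            if self.running.get(ifname) == cfg:
                continue        # unchanged: leave the generator running
            if ifname in self.running:
                self.run([self.cli, "stop", ifname], check=True)
            args = [f"--{k}={v}" for k, v in sorted(cfg.items())]
            self.run([self.cli, "start", ifname, *args], check=True)
            self.running[ifname] = cfg
```

The point of the design choice is that the NETCONF layer only ever expresses intent; the wrapper turns the diff into stop/start calls at commit time.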
H
Very good, yeah. And I think the important point, as a next step for the draft, is to actually find serious organizations which are interested in it. I'm not seeing much of a point in pushing the draft before such a party exists. I can continue working on it; you are more than helpful in bringing focus to the work, so that anyone knows that it exists, and we can just continue doing that at the next hackathon.
H
I'm
sorry
go
ahead,
I'm
pretty
much
finished
with
the
presentation
that
the
last
slide
just
shows
the
the
amount
of
work
done
during
the
hackathon.
If
you
go
down,
it
was
one.
Yes,
it's
a
very
minimal
amount
of
work
compared
to
the
the
entire
project,
so
this
is
just
implementing
the
latest
changes
and
it
is
yeah.
We
granted
also
public
access
to
the
netconf
note.
So
there
are
some
existing
validation
tools.
H
That
can
say
if
the
implementation
is
okay
or
not,
this
really
doesn't
have
much
significance
for
the
the
standardization
work
with
the
model
like
this
small,
like
we
test
our
implementation
of
it,
but
other
people
can
have
their
obviously
own
implementations,
and
this
is
not
that
significant
for
the
workload.
This
was
more
significant
for
the
the
hackathon
presentation.
H
A
Good
all
right!
Well,
I
noticed
I'm
just
gonna,
you
know
jump
in
with
my
participants
comments,
vladimir.
I
I
wanted
to
check
on
on
something
that
I
thought
I
saw
in
the
in
the
benchmarking
page
for
this.
So
let
me
let
me
look
a
little
bit
here.
A
Where
is
it
related
draft?
So
it
would
be
down
at
the
bottom
right,
oh
yeah,
so
there's
there's
some
yang
validation,
returned
warnings
or
errors
on
on
the
data
tracker
page
that
that
includes
this,
this
draft.
So
that's
that's
something
to
take
a
look
at.
It
looks
like
it's
only
one
warning,
though,.
A
Yeah, that's good; thank you. I mean, it looks to me here as though it's really something trivial, not even worth pursuing with everybody else's time, but I did see this red YANG indication before, and I wanted to check whether it was something we needed to do and something you knew about, so we could make it a comment here today, just to bring it to your attention. It's probably something quick...
A
You can turn that indication green very quickly. Okay, so then, let's go back to the slides. Does anybody have any comments on the work? I mean, just as background, we did see some pretty good comments from Tom Petch, and I think some other reviewers plan to take a look at this, and, frankly, this is one of the more...
A
This
is
one
of
the
more
active
proposals
in
the
working
group
now,
so
you
know
we
might
consider
we
might
consider
adopting
this
work
and
pushing
the
draft
through.
You
know
our
normal
process.
If,
if
that's,
what
folks
want
to
do.
H
I think there is a very good opportunity to cooperate with the MLRsearch draft because, as I see it, MLRsearch is, okay, a command-line tool, in my view, that performs the search, while I am implementing a draft specifying the parameters for another command-line tool, which is the trial, so that the search algorithm calls the trial iteratively. That's what we have standardized in the draft. So these are two drafts which should be able to work together.
F
Okay, any response to that?

K
I will respond; I was raising my hand, but you are not looking. Oh...
K
I was just hoping he would switch back to see it; never mind. From my point of view, MLRsearch uses a pluggable piece of code called the measurer, which is doing the measurements. Basically, it gets the traffic definition and the duration and rate, and expects a result; and I can envision, I'm not sure about the end, but definitely the CLI tool being part of that measurer.
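The measurer plug-in point, as described, might be sketched like this. All class and method names here are illustrative assumptions, not the actual MLRsearch Python API; the fake measurer stands in for a CLI-driven trial tool:

```python
from dataclasses import dataclass

@dataclass
class TrialResult:
    offered_load: float
    duration: float
    loss_ratio: float

class Measurer:
    """Abstract measurer; a search would call measure() iteratively,
    handing over a duration and an offered load, and expecting a result."""
    def measure(self, duration, offered_load):
        raise NotImplementedError

class FakeMeasurer(Measurer):
    """Deterministic stand-in: loses nothing up to a capacity, then loses
    exactly the excess above it. A real measurer would invoke the trial
    CLI tool here instead."""
    def __init__(self, capacity):
        self.capacity = capacity

    def measure(self, duration, offered_load):
        lost = max(0.0, offered_load - self.capacity)
        return TrialResult(offered_load, duration, lost / offered_load)
```

Because the search only depends on the `measure()` contract, any trial tool satisfying it (CLI-based or otherwise) can be plugged in.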
K
Connecting it together... but yeah, definitely, MLRsearch has some requirements which are subsets of what this draft seems to be describing, so it should work. And yes, in FD.io we are trying to improve our code to make it more modular; basically, we already have some command-line utility with ad hoc arguments, so it will be...
K
We
are
planning
to
to
make
it
more,
let's
say
systematic,
so
I
will
definitely
look
more
deeply
into
this
draft
and
either
change
the
fdi
code
to
follow
it
more
closely
or
ask
questions
comments
on
the
mailing
list.
When
I
see
something
that
does
not
really
fit,
I'm
not
sure
if
I
will
be
doing
as
much
work
as
to
be
called
co-author,
but
you
can
definitely
count
on
me
to
do
reviews
and
comments.
A
Very
good
thanks.
I
I
note
that
rob
made
a
comment
here,
always
happy
to
see
ietf
standardizing
more
yang
models.
A
Yes,
of
course,
so
any
any
other
comments
on
on
this
and
and
we've
we've
got
one
volunteer
to
do
some
reviewing
that's
good.
A
Anything
else
on
the
the
yang
model
draft
and
the
work
done
at
the
hackathon.
H
I
I
don't
have
anything
to
add.
I
I'm
much
better
at
handling
things
business
on
the
mailing
list.
So
me
talking
is
like
exception,
so
I'm
just
happy
that
we
we
have
some
more
people
on
board
and
we
can
cooperate,
and
that
is
like
this
online
itf
session.
A
Me
very
good,
very
good,
well
that
that's
what
our
that's,
what
our
mailing
list
is
for
and
and
we'll
try
to
we'll
try
to
minimize
the
amount
of
time
that
you
have
to
be
live
on
the
on
the
calls
flat
if
you
want.
A
All
right
well:
well,
then,
I
think
we've,
I
think,
we've
reached
the
point
in
our
agenda,
which
is
what
we
call
any
other
business,
and
here
I
I
quickly
wanted
to
mention
that
there
was
an
email
from
kj
and
his
team
of
co-authors
on
the
containerized
network,
benchmarking
draft
they
they
wrote
to
us
about
a
short
update,
but
also
who
said
that
you
know
this
meeting
was
going
to
take
place
in
the
middle
of
the
night
for
them,
and
so
they
would.
A
They
would
not
try
to
join
us
this
time,
but
that
that
message
is
in
our
archive.
If
you
haven't
seen
it
and
please
take
a
look,
that
draft
is
still
active
as
well.
So
just
adding
that
point
all
right,
so
then
I'll
open
the
floor
to
any
other
any
other.
A
All
right
hearing
none
it.
It
remains
to
me
to
thank
my
my
co-chair
for
her
contributions
to
making
things
move
along
here
and
to
rob
for
taking
notes
today,
all
the
all
the
draft
developers
and
authors
and
commenters.
This
was
a.
A
This
was
a
particularly
good
meeting
to
have
a
two
hours,
because
we
certainly
had
plenty
of
comments
to
deal
with
70
on
the
list,
and
I
think
we
earned
a
two-hour
meeting
without
any
doubt
so
thanks
everybody
for
making
it
worthwhile
and
productive
time
all
around
so
we'll
see
on
the
mailing
list.