From YouTube: IETF115-BMWG-20221110-0930
Description
BMWG meeting session at IETF115
2022/11/10 0930
https://datatracker.ietf.org/meeting/115/proceedings/
B: Is this...?
A: Here, I'm just sharing my video for a few moments here — I don't think I've got the bandwidth to continue to do that. So I'd just like to say hello. Thanks for joining the BMWG session. I assume that folks in the room can hear me; I can see you on video. If you can hear me, just wave your hand.
A: Thank you — thank you, all three of you who are waving there; good to see you. Okay, and Vlad joined the queue — go ahead, Vlad. Oh, now he's gone. Okay, sounds good, all right. And we had a chat — yes, Boris said they can hear us very well.
A: Okay, so Warren is joining us right now — in fact, there he is. Warren will be our AD advisor and sergeant-at-arms in the room. And, as I said, I'm Al Morton, one of the co-chairs. Based on the messages from Sarah, I'm not sure whether she will be able to join us today, but we will muddle onward. So this is the Benchmarking Methodology Working Group session, and we've got a couple of hours here.
A: If you're not subscribed to the BMWG mailing list, this is the place where you can do that.
A: Okay — the Note Well. The special "note well" for our group is that we work as individuals and try to be nice to each other. That's usually pretty easy to do; we all have a common interest in testing and benchmarking here, so let's just leave it at that. You've probably seen the IETF policies various times while participating in the meeting, so feel free to delve into this further. Basically, any contribution you make at the meeting could be a statement at the microphone or an email to the list.
A: So here's our agenda. We've got the status and the proposals and adoptions to talk through. We've got the Multiple Loss Ratio Search — we've got some slides on that; that's actually -03 now. A quick update on next-generation firewall. We've got some new slides for the YANG data model, and also slides for the benchmarking methodology for stateful NATxy gateways. I've uploaded all of these; if you can't see them, I guess it's because the system is lagging behind the updates. So, you know, I'm sorry for that, but don't worry, we'll get the slides presented in any case. And then we'll be looking at the proposals here: the considerations for benchmarking network performance in containerized infrastructures; also the methodology for MPLS segment routing and IPv6 segment routing; and problems and requirements for evaluation of integrated space and terrestrial networks.
A: So, any bashing of the agenda needed? Hearing none, then we'll just go ahead with that.
So it's important to point out that there's a really easy notetaker facility here, made available in Meetecho. Obviously I'll be trying to take a few notes as we go along, but if anybody else would like to jump in and help type a few notes on the questions and answers that come up, that would be great. So please help out.
A: If you can. We've got a fairly easy way to follow the Jabber — I can bring that up pretty quickly if I see an alert for it. But if somebody else sees a note in the Jabber and I don't see it, feel free to go to the mic and bring it to my attention. Thank you. All right, so I think we're good there.
A: The status is that all the IESG DISCUSS ballots have been resolved. We've got a large rewrite that's come to fruition on the Multiple Loss Ratio Search — looks like some good progress there. We've adopted the YANG model draft and also the stateful NATxy gateway drafts.
A: We've still got lots of proposals to deal with, so I encourage people to read the proposed drafts and give us some comments — that'll be a basis for considering what we begin to adopt, and when. We've got some drafts in that list that have really shown lots of progress — lots of, you know, hackathon work, new drafts, and additional work going on. So there's additional work we want to adopt there, I think, and I encourage you to consider those drafts. You know, when you join the group, it really helps out if you do some reviewing — that's the way to really join the community here. So thanks for doing that, and when the time comes, please volunteer to review some drafts.
A: Okay, so no new RFCs, but our charter's been fairly stable for quite a long time. If you're looking for information about how to join the group or how the group operates, we have a supplementary working group page — it's hosted here.
A: And our milestone situation is that we're done with the first three, and we've got some work to do on some of these others. I don't think we've got much progress going on on the automation drafts that we were seeing for a while, so we may consider dropping this, and we also have to add a milestone for the benchmarking for stateful NATxy gateways. So that's some background work that the chairs have to pick up, but we'll do that in the fullness of time. So — any questions about the working group status?
A: Okay, very good! So let's see what we've got SlideShare-wise here. I see that, for whatever reason, Multiple Loss Ratio Search made it into the easy decks to share, so I'm doing that one — that's our next deck up here. And I see that we have Maciek and Vratko both present today, so you guys will be doing the presenting today. Looks like Maciek.
D: Hi — hi everybody, hi all. Hopefully you can hear me.
D: So we actually present together: I'm going to do the first few slides and then I'm going to hand over to Vratko to talk about the next level of detail. So, if we're good to go ahead, then let's get going. So, welcome everyone again. We posted the third revision of the draft after the deadline, so apologies for that. It is a bit of a rewrite, as per all your demands.
D: How do I — can I move the slides, or can you move the slides for me?
A: I have to do it — so, sorry for that.
D: Okay, so next slide, please. Okay, yeah — so we posted the updated version on the 9th of November; we did not manage to push it before the deadline, and the draft is still very much a work in progress. Next slide.
D: What are the major updates? We had very good feedback from the working group — thanks to Al, Gabor and Vladimir — and it made us realize that we need to actually spend time to better articulate two things: first, spend more time on articulating the problem statement, and then describe the approach to each and every one of those problems that we are proposing with the MLRsearch methodology, supported by the code.
D: We have been running that code in LF Networking, in the FD.io CSIT project — so that is what the third revision is about. It is a major rewrite; the concepts are very much the same, but we did spend time to really separate the problems. So again, thanks — I think it was Vladimir who articulated it very well in one of the emails, about a year ago, I think. We have also done another thing.
D: We have applied stricter discipline in the terms we're using. We had been using a number of terms from the open source project; we have now revisited a number of BMWG RFCs — 1242, 2285 and 2544, of course — and we reference a number of terms there, including offered load, maximum offered load, forwarding rate, forwarding rate at maximum offered load, and such. Those references are now there and are used quite strictly when we describe the methodology.
D: What we have removed from the draft is the direct references to the implementation. We realized that they may have been long and sort of meandering and confusing, and some of them got a bit outdated. Vratko will talk everybody through the versions of MLRsearch, as the methodology evolved a bit — and really, the focus is on the problem statement and the explanation of the methodology.
D: We are targeting the BMWG community to help us harden both of these and, of course, to validate them, first and foremost. Next slide.
D: So, a quick recap on what Multiple Loss Ratio Search is, why we're doing it, and what the goals are. MLRsearch is about improving and enhancing the throughput search that is specified in RFC 2544 — and just to emphasize: it's about enhancing and improving, not replacing. In fact, MLRsearch is compatible with RFC 2544 with certain input parameters.
D: What is driving us to define this improved network throughput search? The challenges we have faced when testing NFV systems and other software-based networking systems.
D: What is specific about those systems is that we need to execute a large volume of tests for them, and also that the behavior there is not as deterministic as what we have seen with hardware networking systems. So the three objectives, or three goals, that we are aiming to address with MLRsearch are these. First, we want to minimize the overall duration of the test execution.
D: Second, we want to search multiple loss ratios — that is, the performance of the device at different loss ratios, including zero loss. And we also want to improve the results' repeatability and comparability: across implementations of MLRsearch and throughput search, and across instances of execution, including between different labs.
D: The bullets here correspond one-to-one to the sections, or subsections, in the latest version of the draft. So we have split the problem statement into five points. The first one is about the long test duration: throughput search as specified in RFC 2544 is just too slow for software networking — specifically for environments where continuous test execution is part of the development and evaluation pipeline, basically executing a number of tests covering a multitude of different packet processing modes and different configurations.
D
So
so
we
need
to
basically
reduce
the
overall
execution
time.
Second,
problem
is
related
to
what
we
are
testing
the
system
that
we
are
testing.
So
we
have
we
we
within
the
draft
we
are
distinguishing
between
the
dut
and
Duty
being
the
the
actual
software
program.
A
software
processing
packets,
which
is
of
interest
to
the
to
Benchmark
and
Sut,
is
something
that
is.
You
know
around
the
magnets
for
a
run.
The
server
Hub
is
the
operating
system
and
we've
shared
resources
and
potentially
other
software
programs
running
on
that
very
system.
D
So
we
refer
to
it
as
the
UT
being
effectively
nested
within
within
that
system,
and,
and
we
also
recognize
that
the
performance
that
we
are
measuring
from
outside
the
SUV
performance
is
a
spectrum
and
and
what
we
are
after
is
the
noiseless
and
noiseless
ant
of
that
of
that
Spectrum.
D
Reputable
repeatability
and
compatibility
already
talked
about
it's
about
being
able
to
reproduce
the
results
easily
and
and
and
compare.
D
The
other
aspect
is
measuring
the
throughput,
but
with
non-zero
loss
and
we've
observed
in
the
industry
that
what
is
common
when
testing
nav
and
software
Solutions
networking
Solutions
the
zero
frame
loss
is
not
as
popular
as
as
the
non-zero
frame
loss
and,
and
we
would
like
to
capture
capture
that
and
make
the
search
procedure
user
friendly
to
allow
people
to
end
the
labs
to
characterize
the
a
range
of
a
range
of
performance
and
not
excluding
zero
loss,
but
rather
adding
the
non-zero
loss
performance
and
do
it
in
a
systematic
manner
and
the
last
one
is
related
to
inconsistency
of
of
trial
results.
D
That
may
happen
during
the
search
and
the
search
approach
needs
to
be
able
to
to
handle
those,
especially
when
we
are
searching
for
both
non-zero
and
zero
and
and
when
we
encounter
a
non-zero
loss
trials
that
are
with
smaller
load
that
are
a
smaller
load
than
for
zero
loss
trials.
So
handling
the
inconsistency
in
the
deterministic
manner
and
is
is
one
of
the
problems
that
we
have
also
would
like
to
recognize
next
slide.
D: Please. So, we included the terminology update. It is much more compact than in the previous versions, and again we are referencing existing specifications from the BMWG.
D: The three major ones are these. The throughput, as defined in RFC 2544 — no change there; it must be zero loss. Then we are introducing something we refer to as conditional throughput — the term we're using for non-zero loss; it's a throughput that is measured under a number of conditions.
D: It coincides with — it's equal to — the throughput when the loss ratio goal is set to zero, but in fact it refers to the forwarding rate as specified in RFC 2285, for non-zero loss ratio goals.
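As a rough, non-normative illustration of the distinction (the function names and numbers below are mine, not the draft's):

```python
def forwarding_rate(offered_load, loss_ratio):
    # Forwarding rate in the RFC 2285 sense: frames actually forwarded
    # per second when `offered_load` frames per second are offered.
    return offered_load * (1.0 - loss_ratio)

def conditional_throughput(offered_load, loss_ratio, loss_ratio_goal):
    # A trial only qualifies if its measured loss ratio meets the goal.
    # With loss_ratio_goal == 0 this reduces to the RFC 2544 throughput
    # condition (zero loss), matching the "coincides" remark above.
    if loss_ratio > loss_ratio_goal:
        return None  # goal not met; this trial cannot serve as a result
    return forwarding_rate(offered_load, loss_ratio)

# A trial at 10 Mpps with 0.3% loss, evaluated against a 0.5% goal:
rate = conditional_throughput(10_000_000, 0.003, 0.005)
```

Here the 10 Mpps trial with 0.3% loss yields a conditional throughput of about 9.97 Mpps under a 0.5% loss ratio goal, while the same trial fails a zero-loss goal.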
D: The other important terminology update is the lower bound and upper bound. We use these terms quite heavily when describing the methodology; they are always relative to the current search phase, and it is actually the handling of those bounds that is the crux — the core of the efficiency — of the approach.
C: Hello, this is now Vratko speaking, so I'll continue through the slides. So, we have simplified the description of MLRsearch. We are no longer calling it an algorithm; we are calling it a methodology, because some pieces are missing — but this is the main structure.
C: So, first of all, there are the main inputs, which control what the reported value will be at the end. As mentioned, there is the loss ratio goal, which can be zero or non-zero; there is the target trial duration, which governs how long the final trial should be — the one that will be turned into the result; and there is also the target precision — basically, when doing the binary search at the end, how close the lower bound and the upper bound should be. The search happens as a sequence of phases.
C: There is a single initial phase, and then there are multiple middle phases and the final phases. By the way, in earlier versions those middle phases were called intermediate phases, but now we chose this name. The initial phase is there to kick-start the other phases, and it is also the source of the time savings compared to the usual approaches. Basically, we are doing three trials: one is done at the maximum offered load.
C: Another one is done at the forwarding rate of that previous trial, and once again we use the forwarding rate of the second trial as the load for the third trial. MOL and FRMOL were already defined in RFC 2285, and we just introduce FRFRMOL.
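A toy sketch of the three initial-phase trials just described (the trial function and its numbers are invented for illustration; MOL, FRMOL and FRFRMOL follow the RFC 2285-style naming used in the talk):

```python
def initial_phase(run_trial, max_offered_load):
    # Trial 1: offer the maximum offered load (MOL); the measured
    # result is the forwarding rate at maximum offered load (FRMOL).
    frmol = run_trial(max_offered_load)
    # Trial 2: offer FRMOL; its forwarding rate is FRFRMOL.
    frfrmol = run_trial(frmol)
    # Trial 3: offer FRFRMOL (its result seeds the next phase).
    run_trial(frfrmol)
    # FRMOL tends to serve as an initial upper bound,
    # FRFRMOL as an initial lower bound.
    return {"upper_bound": frmol, "lower_bound": frfrmol}

def toy_trial(offered_load):
    # Invented DUT model: forwards everything up to 6 Mpps, then only
    # half of the excess above that.
    return min(offered_load, 6_000_000 + 0.5 * max(0.0, offered_load - 6_000_000))

bounds = initial_phase(toy_trial, 14_880_952)  # ~10GbE 64B line rate in fps
```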
C: It turns out FRMOL works well as an upper bound and FRFRMOL works well as a lower bound. In the definitions of upper bound and lower bound we allow each bound to be valid or invalid, so it does not matter if the FRFRMOL load still leads to non-zero loss — the middle phase will handle it. Speaking about the middle phases: now we are explicit that there are four actions executed in each middle phase, so we are not calling it "the optimizations" anymore.
C: We are still debating how to improve the terminology. Usually I say something like: there is a phase goal, which includes the trial duration for that specific phase, and so on. So we still need to clear that up, but the idea is clear: we are iteratively improving — getting closer to the target trial duration and target precision — and we are using the fact that the middle phases are shorter than the final phases.
C: This is the quantity that is reported as the conditional throughput, and there is one final phase for each loss ratio goal, with a sequence of middle phases preceding each final phase. So there is a final phase for the first goal, and only then do the middle phases for the second goal start — and so on, and so on.
C: Yeah, this is way too complicated, so it is not important for you to understand every little detail here — but this is basically an explanation of why the draft is changing so much. There is a table; the table has five rows. Basically, each row is one implementation of MLRsearch, and the logic of each implementation is different.
C: This is what we are still running in the CSIT production — version 0.6, which is not what is now uploaded. Mostly it is matching the next production-ready code, which we are not using yet but will be using in the next release. And the reason why we spent so much time rewriting this draft, and removed most of the detail, is row number four: I had an idea and implemented it, and then decided the time gains are not good enough. So I am already working on number five, and I expect that version zero...
C: I will talk about the big one later — and this is why we are not describing all the details yet in this version. As I said in some previous presentations, this draft started as a description of an algorithm — of code that we were running — and we are now abstracting it, so that there can be just one specification of the MLRsearch methodology that is compatible with multiple different code versions, and all of them can be called MLRsearch. We are not there yet, but we are hoping to get there. Next slide, please.
C: Maybe it would be a good story to talk about the second point first, because I started to talk about it in the previous slide. There is compatibility with binary search with loss verification — that is another extension of RFC 2544, which was used in a later RFC — and it is possible to introduce more input parameters to MLRsearch and make MLRsearch compatible with that.
C: Basically, you can implement binary search with loss verification inside version 4 of MLRsearch, and this will be another tool for people who want to get more stable results, even when facing an SUT that has a large performance spectrum — where there is a big difference between a load that almost always gets zero loss and a load that almost never, but sometimes, gets zero loss.
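As a rough sketch of what "binary search with loss verification" means here (the helper names and the toy device are mine, not the draft's normative procedure): a load only counts as lossless if several repeated trials all show zero loss, which damps exactly the "almost always" versus "sometimes" noise just described.

```python
def binsearch_with_loss_verification(trial_loss_ratio, lo, hi,
                                     precision, verify_trials=2):
    # Classic RFC 2544-style halving of the [lo, hi] interval, except
    # that a candidate load must pass `verify_trials` consecutive
    # zero-loss trials to be accepted as a new lower bound.
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if all(trial_loss_ratio(mid) == 0.0 for _ in range(verify_trials)):
            lo = mid  # verified lossless: throughput is at least mid
        else:
            hi = mid  # loss seen in at least one verification trial
    return lo

# Toy device: zero loss up to a load of 5.0, loss above that.
throughput = binsearch_with_loss_verification(
    lambda load: 0.0 if load <= 5.0 else 0.1, 0.0, 10.0, 0.01)
```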
C: That is reflected in the terminology and in how the algorithm handles it. Also, this -03 version does not address the important question of inconsistent trial results. This is somehow related,
C: once again, to the ends of the SUT performance spectrum. You can use decisions that prefer one end or the other end, but this is very important for comparability: if you have one implementation that prefers one end and another that prefers the other, it is natural that one of them will be reporting a lower number than the other,
C: even when testing the same DUT. Those decisions need to be introduced in the definitions of the lower bound and — yeah — the final phase goal; we will probably change that name in the next draft version. But basically, some of the definitions that are in the -03 version are not specific enough, so the comparability is not there currently.
C: Oh, okay. Basically, the last item is what we, as authors, would like from the BMWG: we would like reviews to focus on the problems, because the problems part is the part that we are happy with — and if the community agrees these are the problems, we can proceed to describing the solutions in the next draft version. Thank you.
A: Thanks — thanks, Maciek and Vratko. So I think that, you know, you've got a work in progress here that's on its way to the state where you want it to be.
A: And if folks have a chance to take a look at the current draft — especially if you've read it before — and see the direction it's changing, see the direction it's going, maybe that would be good feedback to our co-authors here: to, you know, either let them know they're in the right quadrant, or how much more they've got to do. So that's great — thank you both for your efforts in preparing this, and we'll hopefully have some more comments by the next time we get together. Okay.
So our next topic is the benchmarking methodology for network security device performance, and there's no slide deck on this today. But let's see — I've got a chat here.
A: Okay — there was some chatting going on there... that's good; I think it's been resolved. Good, all right.
A: Let's go back to this security device performance work. All the DISCUSS ballots have finally been resolved — that took about, I don't know, eight months — but it was a lot of work by the authors to deal with many, many comments here, and obviously DISCUSS ballots are blocking.
A: The main question is — I don't think the approval announcement has been sent for this yet — so we're sort of looking to Warren for a timeline on when he can review the latest draft and maybe release that approval announcement.
A: I'll wait till Warren's back in the room to pursue this a bit. Okay — so, moving on to the next topic, for which Vladimir has supplied a couple of slides: we've got the YANG data model for network tester management. And I'll wait — Warren's here. Warren?
E: Yep, yeah — Warren's been here; he was just, I guess, sitting off camera, so you didn't know. Yep — hopefully sort of just after the IETF meeting finishes I'll be able to send that; I just need to go and review and make sure they're all met. But yeah, I think the DISCUSS was cleared just before the meeting. So yep, hopefully just after the meeting I'll be able to do one final review and then click the — sort of do the final approval bit. So, yep.
A: Well, I just wanted to be able to see when you're in the room and not, so that I don't talk about you — though, you know, the only things I have to say are good things, Warren. But I guess wherever you're sitting, it was off camera — and right there, I can see your hat. So that's great. Perfect, thanks a lot. Okay!
A: Well, so I think that's a really good outcome, and we thank the co-authors for all the really hard work on the security devices draft, and we'll be looking forward to your final review there, Warren — much appreciated. All right, so let me see if I can now find and present the YANG model drafts, if for some reason they're not showing up in the decks ready to be shared.
A: You can see — I updated, I uploaded this deck, but then when I go over to the pre-loaded slides — oops — when I go over to the pre-loaded slides, YANG is not showing up. So, back here.
A: Okay, so I can't see the queue right now and I can't see the mic situation, so if you need to make some comments here, just make some noise and I think that'll work fine. Vladimir, are you ready to go?
G: Yeah — it's not a big deal with the slides; it's fine like this.
G: Yeah, so I can just go quickly through the slides. The first one is the draft status, and it's a work in progress. There are no outstanding issues raised on the list right now, and the running code is in OK shape, with significant — yet not full — coverage of the model features.
G: So, going on, the next slide is a description of the draft changes from the -00 to the -01 version, which was submitted to the mailing list some weeks before the IETF conference — and it's pretty much the same: what's in that email is what the slides contain. It states: one, modifiers were added — dynamic data-field modification functionality — to the generator model, with an example configuration: an action type, which can be either increment, decrement or random; an offset; a mask; and a repetitions count.
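A small sketch of what such a modifier does (the function, field widths and semantics here are my assumptions for illustration, not the draft's exact YANG definition; the repetitions count — how many times an action is applied before wrapping — is omitted):

```python
import random

def apply_modifier(frame, offset, mask, action, step=1, rng=random):
    # Read the bytes the mask covers at `offset`, update only the
    # masked bits with the chosen action, and write them back.
    width = max(1, (mask.bit_length() + 7) // 8)
    value = int.from_bytes(frame[offset:offset + width], "big")
    field = value & mask
    if action == "increment":
        field = (field + step) & mask
    elif action == "decrement":
        field = (field - step) & mask
    elif action == "random":
        field = rng.getrandbits(mask.bit_length()) & mask
    value = (value & ~mask) | field
    frame[offset:offset + width] = value.to_bytes(width, "big")

# Increment the low 16 bits of a destination MAC address:
frame = bytearray.fromhex("001122334455")
apply_modifier(frame, offset=4, mask=0xFFFF, action="increment")
# frame is now 00:11:22:33:44:56
```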
G: So it seems quite flexible, what we have in the draft now, and we might need to constrain it a bit, because it's not only 16-bit or 24-bit modifiers you can specify — currently it's like a bit field which specifies the mask, and it's not constrained in any way. So we should consider some sort of constraint, so that it's not impossible to implement without deviations.
G: The second point is that the Ethernet-specific static data fields were removed — like specifying the source MAC address, destination MAC address and EtherType — which existed, but were like a subset of the flexibility provided by the general frame-data mechanism, which is what most of the people programming this generator will be using anyway. So having this redundancy just increases complexity and puts a dependency on one technology, which I don't think is necessary — it could just be removed, without having all the issues with properly defining it.
G: And the third point in the changes is the editorial changes, which were proposed on the mailing list and are now applied. So, going to the next slide, which summarizes the hackathon activity — and it's not much. We just added the simplest implementation of this kind of modifier, which just takes a single modifier from the configuration — it is a list in the model, but the implementation just fails on everything that has more than one modifier.
G: So it just takes the parameters from this one modifier, and it allows you to implement a basic RFC 2889 benchmark — like, you can put a bridge under test with incrementing MAC addresses and determine what its limit is. That's like an application that proves the concept; that's why this was added, because otherwise it would not be possible to implement such a benchmark — or it would be extremely slow, like needing a commit transaction for each MAC address you want to add to the table, and that would be useless.
G: Another thing that was worked on at the hackathon was resolving issues with the toolchains — upgrading to updated dependencies, with the latest version of Debian and Python 3, and other boring things that I'm not
going to go into detail on. And there is a slide which publishes all the links, and the topology of the design of the reference implementation — so that's there. If you are interested, you can always download it and check where the repositories are and what is being committed to them. And it contains an image from the hackathon — this is just a picture of the table, of the setup, and the main screen in London.
A: Very good. Well, thanks for that update, and thanks for continuing to pursue this — simultaneously with running code, and in the context of the hackathons.
G: There was not much interest during the hackathon, though — and especially this one; I just have to be honest, there was very little, like, cross-domain cooperation at our table. I have had some success with people interested at other venues — like an FPGA conference here in Norway — and so I'm trying to find people from three different areas: people that are inside, like, the networking world in the IETF, the network community that has a stake in this work. So I'm trying to figure out how to involve people and create interest, but yeah — particularly in London, we didn't manage to get any kind of interesting connections.
A: Well, I'm sorry about that, but I'm hoping for more interest, of course, in the future.
A: Right now I'm sharing the note-taking screen.
A: All right — so I don't know what went wrong last time, but I'm going to stop sharing this, and I'm going to try this again with Gabor's slides.
A: Yeah, I still don't see Gabor's slides in the decks ready to be shared, so I don't know why that's being held up — there seems to be some, uh-huh, some really long delay there. But if I share my screen — and I share screen two...
A: Then now you guys should see a version of the BMWG Meetecho page that recurses into infinity. Would somebody confirm that?
A: Good, thanks. And then, if I bring up a slide deck here for Gabor — which is basically...
A: ...this one. Can you guys in the room see this? Yes? Okay. So I apologize if people couldn't see the slide deck last time — I did exactly the same steps. And, Gabor — let's go ahead and work your way through these; try to take no more than about 15 minutes. Okay.
I: Okay — can you set the slideshow to full screen?
I: There we go — yeah, it's much better. Yes, it's much better — nice, full screen. Thank you, thank you. So, our draft is about the benchmarking methodology for stateful NATxy gateways using RFC 4814 pseudorandom port numbers. Could you go to the next slide?
I: That's right, thank you. In my slide we just try to summarize the aim of our draft. Our aim is to achieve reproducible stateful NATxy performance measurements producing meaningful results, and to that end we want to make it possible to carry out the measurement procedures of all the old BMWG benchmarking RFCs — like 2544, 5180 and 8219 — and all the measurement procedures they define for throughput,
latency, frame loss rate, etc., to benchmark stateful NATxy gateways. Without some careful thinking it's not possible — especially if we would also like to comply with RFC 4814 and use its pseudorandom port numbers — because through the stateful gateways you cannot just send packets with any random port numbers.
I: Thank you. So, the progress of our draft: I'm very happy that during the summer, at IETF 114, our draft was adopted by the working group — thank you very much, everybody, for the support — and since then we have submitted two versions. Version -00 was just a little update.
We added a stateful NAT64 gateway as an example, because previously only a stateful NAT44 gateway was given as an example, and we did some additional checking and corrections; and in the next version — the current version — we added some more.
I: Thank you. So this is the slide which shows that we added this setup for NAT64. Of course, our methodology works with any IP version, but the idea was that not only NAT44 but also NAT64 should be displayed as an example.
So, just a quick reminder: we have two devices — the tester and the device under test, which is now the stateful NAT64 gateway — and, as I said before, you cannot just send a packet through this gateway from either direction to initiate: connections can be initiated only from the IPv6 side.
I: That is the left side of the figure. This is why the port on the left side of the tester is called the initiator — because it can send any frames — and then the stateful NAT64 gateway translates the frame, records a connection tracking table entry, and forwards it to the responder port, which records the source IP address, destination IP address, source port number and destination port number.
I: Thank you. So the basic idea is that we defined a preliminary test phase, which serves two purposes. During this preliminary test phase, the connection tracking table of the device under test is filled, and the state table of the responder is filled with the four-tuples. After that, you can do a real test phase, in which you can execute any of the classic measurement procedures — like throughput, frame loss rate, latency, etc. And in addition to that, this preliminary test phase can be used alone, as a measurement phase for the maximum connection establishment rate.
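Schematically, the two-phase structure just described can be sketched like this (the function arguments are hypothetical stand-ins for real tester actions, and the numbers in the example are invented):

```python
def stateful_benchmark(preliminary_phase, real_test_phase):
    # Phase 1: fill the DUT's connection tracking table and the
    # tester's own state table; the rate achieved here doubles as a
    # measurement of the maximum connection establishment rate.
    max_conn_rate = preliminary_phase()
    # Phase 2: run a classic RFC 2544/8219-style procedure (throughput,
    # frame loss rate, latency, ...) over the established connections.
    classic_result = real_test_phase()
    return {"max_conn_rate": max_conn_rate, "classic": classic_result}

report = stateful_benchmark(
    lambda: 1_500_000,                      # connections per second
    lambda: {"throughput_fps": 9_000_000},  # frames per second
)
```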
I: Thank you. No, it's — this is the new material. Oh yes, okay — this is one more reminder slide. Yes: to support repeatable measurements, we can use two extreme situations, which can be simply ensured. The first one is when all test frames create a new connection — this is ideal for measuring the maximum connection establishment rate. And the other extreme situation is when test frames never create a new connection.
I
This is ideal for the classic tests like throughput, frame loss rate, latency, etc. The conditions to achieve them are that we should start each elementary test with a large enough and empty connection tracking table; we should pseudorandomly enumerate all possible source port number and destination port number combinations in the preliminary phase; and we should set the timeout in the device under test properly high, that is, higher than the gap between the two phases plus the real test phase.
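A small sketch of the port-enumeration and timeout conditions just stated (the port ranges are illustrative; the point is only that every combination occurs exactly once, in pseudorandom order):

```python
import random

def pseudorandom_port_combinations(src_ports, dst_ports, seed=1):
    """Every (source port, destination port) combination exactly once,
    in pseudorandom order, so each preliminary-phase frame creates one
    new connection and no four tuple repeats."""
    combos = [(s, d) for s in src_ports for d in dst_ports]
    random.Random(seed).shuffle(combos)  # deterministic for a given seed
    return combos

def timeout_is_safe(dut_timeout_s, gap_s, real_phase_s):
    """The DUT's connection timeout must exceed the gap between the two
    phases plus the duration of the real test phase."""
    return dut_timeout_s > gap_s + real_phase_s

combos = pseudorandom_port_combinations(range(10000, 10100), range(80, 90))
```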
I
Yes, and there was something in RFC 8219, in Section 10, that mentioned a test with several network flows, although it didn't specify exactly how to create several network flows: for example, using multiple source IP addresses, multiple destination IP addresses, or multiple source port numbers or destination port numbers.
I
So if we would like to cover a wide range with a low number of measurements, we can increase tenfold, or, if we would like to make a fine-grained analysis, we can just double the size of the destination port number range; then we can do a lot of measurements. Next slide, please.
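The two step sizes mentioned generate measurement series like the following (the bounds are chosen only for illustration, not taken from the draft):

```python
def scale_series(start, limit, factor):
    """Geometric series of network-flow counts: multiply by `factor`
    (10 for coarse coverage, 2 for fine-grained analysis) up to `limit`."""
    series, n = [], start
    while n <= limit:
        series.append(n)
        n *= factor
    return series

coarse = scale_series(1, 1_000_000, 10)  # wide range, few measurements
fine = scale_series(1024, 65536, 2)      # doubling the destination port range
```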
I
Yes, and this is something maybe unusual, but it is really the reality with stateful NAT64 gateways that they are many times implemented in software. So you don't buy a device; you download the software, many times free software, and you just install it, set it up and use it. For example, I use Jool, or (it's a bit slow, but it can also be used) tayga plus iptables, because tayga is a stateless one and it needs a stateful counterpart, and I also use OpenBSD PF, and FD.io VPP also implements stateful NAT64.
I
So many times we use software for the stateful NAT64 implementation. However, our typical view, I would say my typical view, is that we have a tester and a device under test; I was talking about that a few minutes ago. But in this case we're not testing a device, because the software is not bound to a specific hardware. So it's not really useful to know that we have a given implementation, using a given server, and we measure something; in that case we don't know how it would perform on a different server. What is more useful for us in this case is perhaps the performance of a given implementation using a single core of a given CPU, and even more than that.
I
Yes, and one more thing which was missing from our draft, and which is present in several other drafts, is that we should specify a reporting format. Of course, the very first thing is that measurements must be executed multiple times, and the number of measurements must be included in the report. And if we have multiple results, then we need to use some summarizing function. We recommend the median as the summarizing function, because it is less sensitive to outliers than the average. And of course we are also interested in the dispersion of the results.
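Why the median is preferred over the average can be seen with a toy result series containing one outlier (the numbers here are invented, not measured):

```python
import statistics

# Five throughput results; one run was disturbed and is an outlier.
results = [9.8, 10.0, 10.1, 10.2, 2.0]

median = statistics.median(results)  # barely affected by the outlier
mean = statistics.mean(results)      # dragged down by the outlier
```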
I
So the first percentile and the 99th percentile can be used as an index of dispersion. And of course it's very important that all parameters and settings that may influence the performance of the device under test must be reported in the results report. In some cases there are also implementation-specific parameters, for example the hash table size and connection tracking table size for iptables, which don't exist for other implementations, so these things must also be reported. Another example here is the number-of-states limit for OpenBSD. Okay, that's it. Thank you. Thank you.
I
So of course, as an independent variable, we must have the number of sessions, which is not just arbitrary but the product of the number of source port numbers and destination port numbers. Here there are two examples for the hash size and the connection tracking table size, which are specific to iptables, and here is one very important calculated value, which is the number of sessions per hash size.
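With invented but plausible numbers, the quantities just listed relate as follows:

```python
def num_sessions(src_port_count, dst_port_count):
    """Sessions are not a free parameter: with a single IP address pair,
    their number is the product of the two port-range sizes."""
    return src_port_count * dst_port_count

def sessions_per_hash_bucket(sessions, hash_size):
    """Load factor of the connection tracking hash table (the calculated
    value reported per row of the results table)."""
    return sessions / hash_size

n = num_sessions(4000, 100)                # 400,000 sessions
load = sessions_per_hash_bucket(n, 2**17)  # about 3.05 sessions per bucket
```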
I
It means that the implementation of iptables has a hash table, and from each entry of the hash table a linked list is started, and these numbers (3.05 and 3.81, etc.) are the average lengths of these lists, which is very important: it really determines the performance of iptables. And of course there are the number of experiments and the error of the binary search, the difference between the lower and upper bound, and here come the real results.
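The "error of the binary search" mentioned here comes from a rate search of this shape (the pass/fail oracle below is a stand-in for a real trial, and the numbers are arbitrary):

```python
def binary_rate_search(lo, hi, passes, max_error):
    """Classic binary search for the highest passing rate.
    `passes(rate)` runs one trial; the search stops once hi - lo,
    the reported error, is at most `max_error`."""
    while hi - lo > max_error:
        mid = (lo + hi) / 2
        if passes(mid):
            lo = mid   # mid passed: raise the lower bound
        else:
            hi = mid   # mid failed: lower the upper bound
    return lo, hi - lo  # (result, error)

# Stand-in DUT that can sustain exactly 437,500 connections per second:
rate, err = binary_rate_search(0, 1_000_000, lambda r: r <= 437_500, 1000)
```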
I
Yes, and for discussion we put in some slides. Still on generating multiple network flows: we proposed to use only a single source IP address and destination IP address pair, and multiple port numbers. Currently I do measurements with the before-mentioned implementations, Jool, tayga plus iptables and OpenBSD PF, and I found that this solution works properly with Linux, because the servers which I use support RSS, receive-side scaling.
I
So if I set the RSS function properly, then not only the IP addresses but also the port numbers take part in the hash function, and I can distribute the interrupts of the packet arrivals among the CPU cores, so all CPU cores take part in processing the interrupts. However, it's not the case with OpenBSD. I just checked with the top command, and I saw that, in one direction of the traffic, two cores were working to process the interrupts and the other cores were not working, because there was only one IP address pair there (and in the other direction they were exchanged), and varying the source port numbers didn't help to distribute the interrupts among the CPU cores. But of course it's not the case if this device forwards Internet traffic, so maybe we could improve on the situation. Could you go to the next slide, please?
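The interrupt-distribution problem described here can be illustrated with a toy hash (this is not the real Toeplitz RSS hash, only a stand-in with the same qualitative effect): if only the addresses are hashed, every flow of a single IP-address pair lands on one core; once the ports are included, the flows spread across cores.

```python
def core_for(flow, n_cores, hash_ports):
    """Toy RSS stand-in: pick a core from a hash of the flow identifiers.
    Real NICs use a Toeplitz hash; the distribution effect is the same."""
    src_ip, dst_ip, src_port, dst_port = flow
    key = (src_ip, dst_ip, src_port, dst_port) if hash_ports else (src_ip, dst_ip)
    return hash(key) % n_cores

# One IP address pair, 400 different source ports:
flows = [("2001:db8::2", "64:ff9b::c000:201", p, 80) for p in range(10000, 10400)]

cores_without_ports = {core_for(f, 8, hash_ports=False) for f in flows}
cores_with_ports = {core_for(f, 8, hash_ports=True) for f in flows}
```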
I
So our question is whether we should add the requirement of using multiple IP addresses, because in that case the measurement results would be closer to those that one could experience when the gateway processes Internet traffic. And we have just one more slide; please go to the next slide. I was talking about scalability, and we recommend scalability against the number of network flows, using the variable-size destination port number range to set different numbers of network flows, and we also recommend scalability against the CPU core numbers, the number of active CPU cores. I think both are important, but our question is: is there any other type of scalability that would be important to examine?
A
Okay, thanks very much, Gabor. On your last couple of topics, any input from the folks in attendance?
J
If possible, I have a comment about scalability and what to test, because I had some experience many years ago testing primarily firewalls, but not just firewalls. Any stateful device has typically four limits, and those four limits were tested by me many times in the past. The first limit is the number of flows.
J
The top number of flows which a particular device is capable of handling is more or less just one number, a static number; it's just the top number which it is possible to squeeze into the box. But additionally to this it was, as usual, mandatory to test such things as Gbps (gigabits per second), PPS (packets per second) and CPS (sessions per second, new sessions per second). So it was typically four parameters, and for the static table, how many flows the device is capable of handling:
J
It was just one number, which is easy to test in a separate test. But for the combination of CPS, PPS and Gbps it was always a mix: the more Gbps you request from the box, the less PPS and CPS you will get; the more CPS you ask for, the less of the others. Therefore it was always a combination, and typically the combination was decided by looking at real customer production traffic: you look at any customer box, decide what the real ratio between Gbps, Mpps and new sessions per second is, and decide what the mix is.
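The coupling between the three rate limits can be made concrete: for a target traffic mix, the packet rate implied by a bandwidth figure follows from the average packet size, and the new-session rate from the average flow length. All numbers below are illustrative, not measured:

```python
def pps_from_gbps(gbps, avg_packet_bytes):
    """Packets per second implied by a bandwidth and an average packet size."""
    return gbps * 1e9 / 8 / avg_packet_bytes

def cps_from_pps(pps, avg_packets_per_session):
    """New sessions per second implied by a packet rate and the average
    number of packets a session carries in the production mix."""
    return pps / avg_packets_per_session

pps = pps_from_gbps(10, 500)  # 10 Gbit/s of 500-byte packets -> 2.5 Mpps
cps = cps_from_pps(pps, 100)  # 100 packets/session -> 25,000 new sessions/s
```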
I
Okay, let me also share my experience. I completely agree that usually there are some upper limits for the number of sessions. For iptables we could measure it exactly, and of course we knew it in advance, because we set the maximum number of connections, and you could measure it very accurately. But as for Jool, we were not able to measure it, because it seems that it doesn't have a real limit; the limit is the amount of memory. So we just increased and increased and increased.
I
Finally the computer stops working. I don't know how it is implemented, but it can just thrash the data structure until it runs out of free memory, and then...
J
That changes things; you're right with this comment, because you typically test a software box. I have past experience testing hardware boxes, and for a hardware box there was a limit. Of course it was primarily memory-based, as you said, but it was pretty much a hard limit, and sometimes it was a limit that really mattered for production. Maybe you're right that in your case, because you test something software-based and it's just memory, the limit is the overall number of sessions which it is capable of squeezing in.
I
Yes, I totally agree with you, and what I experienced was the same to some extent, but I also experienced something different: if I use fast enough network interfaces (currently I use 10 gigabit interfaces), then up to a point the number of packets per second is, I don't even say independent, but nearly independent of the packet size.
I
So whether I use 84-byte or 128-byte or 256-byte frames, the packet rate that we measured is in fact very similar, because the bottleneck, usually, in my measurements, is the handling of the packets and not the transmission through the network interface.
J
But for sure, even for your software-based situation, you should still have a big dependency between packets per second and new sessions per second; this particular dependency should still be pretty big.
I
Yes. I measure the maximum connection establishment rate separately, and then, with a given number of connections, I do the throughput measurement. I just separate the two: first I test the maximum number of connections per second, and then, with no new connections, I measure the throughput and the latency and other things, and I don't mix the two.
D
I do. Thanks very much for a very informative talk. I have a comment about the scalability, specifically the number of CPU cores. I noticed that the way you're assuming the flows are load-balanced across the cores is using RSS. As we are dealing here with NAT64 stateful flows, with the distinction between the forward and return flows, the forward and return flows are different from the RSS hashing perspective, i.e.:
D
They will have different IP header values: IP addresses, if it's a source NAT, and also ports. So if you scale the number of CPU cores, it is likely that the forward and return flows will end up on different CPU cores. So the more cores you have, the more work the threads running on those cores will have to do to reconcile or share the state.
D
So that's one thing maybe to consider in terms of evaluating the multi-core scalability: how to scale the software across multiple cores. And there are solutions out there that you may consider, which scale across the cores by running independent software instances and basically load-balancing the original flows across them. But that's just one aspect to take into account, how the forward and return flows are load-balanced using RSS.
A
Thanks for raising that comment. Gabor, I'm going to have to cut this discussion off, unfortunately, at this point; we've got a lot of other presentations to cover, so let's move along. Thank you very much for your work on this, Gabor.
A
So I've got a set of slides I can share here on the considerations for benchmarking in the containerized infrastructure, and I think it's Minh-Ngoc Tran who's going to be presenting; is that correct?
A
Yes, yes, I can. So let's try to make this 10 minutes, please. Okay, I'll start a timer for you.
H
Hello everyone. On behalf of my co-authors, I'll present a draft update today. Next slide, please. So the first thing we updated in this draft is that we polished the draft and reorganized the content inside it to clearly show its purpose. The draft's purpose is to provide additional considerations and specifications to guide containerized infrastructure benchmarking, compared with the previous benchmarking methodology for VNF infrastructures. So we reorganized the draft in a way that clearly shows four additional considerations.
H
There are additional deployment scenarios and additional configuration parameters; then the investigation of different container networking models, based on the use of a vSwitch and of different packet acceleration techniques; and the investigation of how different deployment settings make performance impacts on the network performance. Next slide, please.
H
So this is the comparison between our previous draft and the current version. To increase the draft's cohesion and clearly show its purpose, we have added a new section with a graphic of the benchmarking considerations. So by looking at the current version, everyone can look at our graphic and clearly see what the additional considerations for containerized infrastructures are.
H
Inside that, among the four considerations, the deployment scenarios part is actually taken from the previous Section 3, the overview of containerized infrastructures; by moving it into this one, the deployment scenario addition becomes cohesive. Then we added a completely new section on additional configuration parameters. Then we have the networking models, the previous Section 4, and here we updated the eBPF acceleration model with the AF_XDP deployment option. And the previous Section 5, on performance impacts, is moved inside this as well, as consideration number four. Next slide, please.
H
This is the new section on the additional configuration parameters, and it is just a list of additional parameters that need to be considered when benchmarking containerized networking: the container runtime selected, the container network plugin selected, the packet acceleration model, the number of CNFs, and the resource allocation to the CNFs.
H
Also, there is another deployment option, which is using an AF_XDP socket, a new Linux socket type that allows bypassing much of the kernel stack, and this option is used by AF_XDP-supporting vSwitches like OVS or VPP; it is also used by the Intel Cloud Native Data Plane. And the second deployment option for eBPF acceleration is running the eBPF program inside the traffic control (tc) hook or the XDP hook inside the NIC driver. Next slide, please. The last consideration inside the draft is performance impacts; we made no change there, we just moved it inside this section.
H
Next slide, please. So, in our hackathon projects at IETF 114 and 115, we made a benchmarking effort for the eBPF acceleration models, and we verified the eBPF acceleration with four variations. The first is using the OVS vSwitch with vhost-user as the connection between the user-space switch and the pod. The second variation is using the VPP vSwitch, this one using memif instead of vhost-user. Both vSwitches are used in their AF_XDP-support versions, meaning they have an AF_XDP poll-mode driver to get the packets from the eBPF program
H
attached inside the NIC driver. The third variation is the Intel Cloud Native Data Plane, a new project that has its own Kubernetes plugin; this plugin can move the network device out of the host network namespace and directly attach it into the pod network namespace, so the pod can get the packets from the AF_XDP socket. And the final one is the Cilium CNI.
A
Next slide, please. A quick question on this: how many flows are in use for this test? Is it just one, or multiple flows? Okay.
H
For Cilium, the ability of Cilium to accelerate performance has already been benchmarked by itself and published in some of its reports. Cilium can accelerate both north-south and east-west traffic; compared with the other variations, Cilium applies eBPF to east-west traffic, whereas in the other variations the east-west traffic acceleration is accomplished by the vSwitch itself.
H
So we think that the outcome of the hackathons completes our investigation activities for all the proposed considerations in the containerized infrastructure draft. We would like to hear any questions and comments from anyone in the working group who is interested in our work, and we would really like this kind of feedback toward WG adoption as the next step for this draft. Thank you.
A
That's great, thanks very much. So, one thing I'm going to ask: we've got a couple of authors of the multiple loss ratio search working very heavily with virtualized measurements and virtualization technology, and I think it would be really good if Maciek and Vratko could take a look at this and make some suggestions. Do you guys agree, Maciek and Vratko?
D
Sure.
A
Sorry about that, yeah; I'm tagging you guys for review specifically, since you're working in an area very close to this. Thank you, that's great. And I'm also trying to get some review from the Anuket open source project, where we still have a benchmarking project; hopefully I'll get some feedback for you there as well.
A
Thank you. Okay, well, thanks very much for the quick review here today; much appreciated. All right, so then, moving right along, we've got a couple of talks on segment routing, and we'll start out with the MPLS segment routing. Paolo, I think that's you, right?
K
So, just to introduce myself: Paolo Volpato, Huawei, speaking on behalf of the authors you see listed here. This is the first of two drafts we have submitted to benchmark segment-routing-capable devices. Specifically, this draft is about SR-MPLS, and there will be a talk provided by Eduard on SRv6 as the next presentation. Next slide, please.
A
By the way, Paolo, I'm going to give you 10 minutes here as well, which I'm starting now. Thank you.
K
Good, okay. So I think this is something already known, but just to share a few details: segment routing, as defined by RFC 8402, can be applied to two different data planes. As said, this draft focuses on SR-MPLS.
K
Let's say that our starting point is RFC 5695, which describes the methodology for benchmarking MPLS devices; that is, I would say, our foundational component. What we did is actually to use it as the basis, and to complement that paper, as well as all the references that you see listed here in the draft, to extend the capability to benchmark SR-MPLS. We just need something more than the basic MPLS benchmarking methodology.
K
Okay. As I said, the SR policy (the SR-MPLS policy, to be more precise) is instantiated on the packet as a set of labels. So basically you define a segment list, which is a set of labels you impose on the packet.
K
There is a sort of one-to-one correspondence between segment routing over MPLS and MPLS itself. There are three basic operations considering SR-MPLS. We have the push operation: basically you inject a policy, I would say, on top of a packet, and that corresponds to the label push in the MPLS terminology. There is the next operation, which corresponds to the label pop in the traditional MPLS jargon: the topmost segment of the policy is removed, and then you do whatever the next instruction tells you to do. And then we have the third operation, which is continue; basically it is a kind of one-to-one correspondence to the label swap in MPLS, applied to the topmost label.
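The three operations and their MPLS counterparts can be sketched on a list used as a label stack (the label values are arbitrary examples, not values from the draft):

```python
def push(stack, segment_list):
    """PUSH: impose an SR policy, a segment list, on the packet
    (MPLS label push); the first segment ends up topmost."""
    return list(segment_list) + stack

def nxt(stack):
    """NEXT: the active segment is completed; remove the topmost label
    (MPLS label pop) and continue with the next instruction."""
    return stack[1:]

def continue_op(stack):
    """CONTINUE: the active segment is not completed; the topmost label
    stays in place (corresponding to the MPLS label swap) and the
    packet is forwarded."""
    return stack

stack = push([], [16001, 16002, 16003])  # SR-MPLS policy with three SIDs
stack = nxt(stack)                       # first segment done, pop it
```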
K
You could use a test list of one label, or, translated into SR-MPLS, that would correspond to one SID, one segment identifier. Actually, this is not enough: we should consider at least two labels, or two SIDs in SR-MPLS, because we would like to take into consideration, for example, traffic engineering, and to do so you need at least two labels. That is to say, we are not yet considering the service layer.
K
Then again, compared to, let's say, the basic capability of RFC 5695, we have to change the reporting format a bit, and we also have to take into consideration the control protocol used to distribute the SIDs, the segment IDs. Next slide, please.
K
All of that corresponds to the changes we have made since version 00. We submitted version 01 and then version 02, one just after the other, in October, to address the very good comments we received from Gabor and Boris on the list. I have to, let's say, note in advance that those changes...
K
Those comments were equally applicable to both drafts: the one on SR-MPLS that we are discussing right now and the next one on SRv6. So I'm not going to discuss all the topics you see listed here, otherwise we would run out of time. Basically, we have provided more text to address the comments on the buffer size test and on the reference to using Ethernet or other media that may have the issue of bit stuffing or byte stuffing.
K
This is something that should be considered, especially in the performance tests. For example, we needed to reference how to deal with address randomization, to be sure that we equally distribute the traffic flows across the multiple ports of a packet forwarding engine, and so on. You see all the topics here, and I'm pretty sure that Eduard, during the next talk, can add more details on that. Next slide, please.
K
Okay, next steps. First of all, we would like to hear from the community if there are more comments, more inputs or feedback, so new text that we can add to this draft as well as to the other one. We are open to co-authoring contributions or any suggestions; here we are more than open to accept further proposals. And we would also like to share a kind of early question.
A
Okay, well, I think you had some good review last time; let's encourage that to continue. One question I've asked in the past: is there any implementation experience with this testing? Have you got any example results that show that this methodology is solid and working, and also repeatable?
A
Great, that's great. I think that we'll learn a lot from the lab results that you share with us, and that's when people really get excited, when we start to see some real numbers here. So I'll mark that down as an action item for the authors, and we'll hope to see that next time, or on the list. That's great, thanks, Paolo. Thank you.
A
Okay, so let's move on to the next one, which is the IPv6 version of this. I just have to find it. There it is, shared. And Eduard, you're going to be presenting, yeah?
J
Next slide, please. Okay. I would say that our draft, of course, is primarily based on the primary RFC of the benchmarking working group, RFC 2544, but of course there are many additional things which have been discussed deeply in different RFCs, and primarily there are two other RFCs which we should mention: the IPv6 one, RFC 5180, and the MPLS one, RFC 5695. But it's not limited just to these particular RFCs; many things have been taken, especially after the comments.
J
Many things have been taken from many other RFCs. We have here a little bit of a challenging situation, because SRv6, from one point of view, is IPv6 with extension headers; it's pretty much IPv6, right? But from another point of view, if you look at how exactly it operates, it's pretty similar to MPLS. For that reason it's a combination, a merge of the MPLS technique and the IPv6 technique, and for that reason it's a little bit special. Next slide, please.
J
Okay. SRv6 is pretty much different, unfortunately (maybe fortunately, maybe unfortunately), from MPLS, because it has not just different names for what we deal with here (the source node, the segment endpoint node, the transit node); it's not just different names and different syntax, it's different semantics. Because if you look at the transit node, for example: a transit node is a node which does not understand SRv6; it's just a normal IPv6 node. That is something pretty different from MPLS. For that reason it needs special procedures, and, as Paolo already mentioned, we have some restrictions which we put on ourselves; it's just our decision.
J
Part of the SID could be a service, and for that reason we have discussed here in the draft why we have put services aside. It's possible, of course, to discuss services, but if we put services in here, the draft would, from our point of view, become extremely big, because the number of services, especially for SRv6, could be huge. For that reason, services are out of scope here. Next slide, please.
J
As Paolo mentioned, since the last presentation we have a big update, a really big update from my point of view, because, thanks to Gabor and thanks to Boris, they have pointed us to many things which could be improved from the text point of view.
J
We changed, from my point of view, maybe 15 percent of the text, but from a logic point of view, because the changes are effectively in all chapters here, the logical change is much bigger, something like 30 percent. By the way, the current revision is not 02; we have a small additional update, 03. The current revision which you can find on the Internet is much, much different from 00.
J
For that reason we ask everybody to read it again, because after Gabor's and Boris's comments it's a pretty big change which we did here. Next slide, please.
J
As the chair said already, maybe to proceed to adoption we need to show test results from a real lab, some particular look into this particular methodology tested in a real lab. For that reason, maybe we are not ready for adoption, just because we don't have a test lab; but from the text point of view it looks pretty good already. Anyway, if somebody would read it again and make some additional comments, big thanks.
J
We would be very happy if anybody would give additional comments. That's it from my side.
A
Thanks very much, Eduard. I think that, regarding your appeal to folks who read this before and provided comments: now that you've made a huge update, as you put it, it would be good if some of those reviewers would go back and see that their wishes were realized; that would be great. So folks, if you've read the draft before, I'll ask you to go back and take a look at this, and see what's changed, and whether you now like how the authors have dealt with the situation.
A
Also, I really welcome the implementation of the tests; like I said before, we'll learn a lot from that. The authors will learn a lot from that, I think, and we'll see what implications that has for the test methods and the logic, as you put it, Eduard. So thanks for volunteering to find a way to do that; much appreciated.
A
Okay, there seems to be a comment here. Oh, Boris says he will read it again and provide feedback; thank you, Boris. Okay, well, thanks for both the pieces of work on segment routing. I think you guys have gotten some good review, and now there's some good input for making progress. So thanks again.
A
So I think, if I'm not mistaken, we've reached our last talk. We've got about 15 minutes, but I'm going to put 10 minutes on this anyway. Zeqi Lai is going to present this. Zeqi, we've got you in the queue here with your audio, yes, and it looks like you're ready to go.
F
Okay, hello everyone. I'm very happy to have this chance to introduce our latest work on the considerations for benchmarking network performance in integrated space and terrestrial networks. So here we go; the next page, please.
F
Now we are entering a new age of satellite Internet. Recently we have witnessed a rapid evolution in the aerospace industry, and many big players are actively planning and deploying their satellite constellations in low earth orbit to provide Internet access globally. For example, as of November of this year, Starlink has already launched more than 3,000 LEO satellites, providing Internet access to over 500,000 subscribers in more than 14 countries and regions.
F
It also aims for global mobile phone service after 2023, and in addition many other commercial Internet constellations, such as OneWeb, Kuiper and Boeing, are in parallel development. These emerging constellations can be integrated with existing terrestrial networks, building an integrated space and terrestrial network (ISTN) for pervasive and high-performance Internet service globally. So this is the background. The next page, please.
F
On one hand, just like in other kinds of commercial networks, network techniques such as new network topologies, protocols and functionality are expected to be carefully evaluated in an isolated test environment (we call it an ITE) before they are deployed in a live production environment. On the other hand, unlike in traditional situations, ISTN core infrastructures, for example the satellite routers and switches, are hard to upgrade after launch, especially their onboard hardware. Thus it requires a more systematic and more comprehensive evaluation before the launch.
F
Last time, at IETF 112, we shared our preliminary considerations for the problems and requirements of an evaluation methodology for ISTN; here is the link to our previous draft. At that time we also had some questions left over which we believed should be clarified before going further, for example: what aspects of ISTN-related problems can be pursued for benchmarking? What are the network DUT and SUT? And we also need to clarify the work scope that fits the charter.
F
So this time we try first to clarify the work scope that is relevant to and fits our charter. I think the major goal of our BMWG is to consider a series of recommendations concerning the key performance characteristics of internetworking technology, so here we summarize five important aspects that might fit the charter.
F
Second and third, we think we can discuss the important metrics describing the above characteristics, for example user-perceived latency, throughput, loss, or routing convergence, and we also need to clarify how to specify methodologies to collect these metrics. For example, what is the expected in-lab benchmarking methodology for ISTN, and what are the concrete approaches and test cases we need to use to benchmark the ISTN technology or functionalities? And finally, we also need to describe common and unambiguous result-reporting formats
F
to be used to report the benchmark results. On the next page we will elaborate our considerations for the methodology and the testbed for this in-lab benchmarking. Here we consider a data-driven, emulation-based benchmark approach. At a high level, this approach includes three key steps; we call them public data collection, test environment setup, and running the test. Next page, please.
F
So the first step we call community-driven public data collection. As we all know, LEO satellites are public network infrastructures operating in outer space, so a lot of constellation-level information, such as the number of orbits, the number of satellites per orbit, the inclination, and the altitude, can be obtained from the public community. Such topological information can be obtained to help us build the ISTN test environments. In addition, there are some other measurement platforms allowing users to share their network measurements; for example, this is the statistical result shared by the Speedtest platform.
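To illustrate how far such public constellation-level data alone can take you, a small sketch derives basic facts such as the orbital period via Kepler's third law; the 72-orbit, 22-satellite, 550 km figures are illustrative assumptions, not numbers from the talk:

```python
import math

# Deriving basic constellation facts from publicly known parameters.
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371.0e3          # mean Earth radius, m

def orbital_period_s(altitude_m):
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(a ** 3 / MU_EARTH)

def constellation_summary(orbits, sats_per_orbit, altitude_km):
    period = orbital_period_s(altitude_km * 1e3)
    return {
        "total_satellites": orbits * sats_per_orbit,
        "period_min": period / 60.0,                      # ~95.5 min at 550 km
        "in_plane_spacing_deg": 360.0 / sats_per_orbit,   # angular gap per plane
    }

print(constellation_summary(72, 22, 550.0))
```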
F
Next, please. Thank you. The second step we call realistic, data-driven test environment setup. In this step, we can build a test environment via virtual-machine- or container-based emulation. The emulation should mimic the satellite behaviors, such as the time-varying topology, distance, visibility, connectivity, and network conditions. This figure shows an example: if we want to build an environment to mimic a real ISTN like the one in the left picture, with two orbits, then in our lab environment we can use two machines to create a virtual representation of this environment.
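One concrete piece of behavior such an emulation must mimic, the time-varying inter-satellite distance and visibility, can be computed from simple spherical geometry. This is a sketch with illustrative parameters, not code from the presented tooling:

```python
import math

R_EARTH = 6371.0e3       # mean Earth radius, m
C_LIGHT = 299792458.0    # speed of light, m/s

def isl_geometry(altitude_m, sep_deg):
    """Distance, visibility, and propagation delay of an inter-satellite
    link between two satellites at the same altitude, separated by sep_deg
    as seen from Earth's center. The link is visible when the straight
    line between the satellites clears the Earth's surface."""
    a = R_EARTH + altitude_m
    half = math.radians(sep_deg) / 2.0
    distance = 2.0 * a * math.sin(half)   # chord length between the two satellites
    closest = a * math.cos(half)          # chord's closest approach to Earth's center
    return {
        "distance_km": distance / 1e3,
        "visible": closest > R_EARTH,
        "delay_ms": distance / C_LIGHT * 1e3,
    }

# Adjacent satellites in a hypothetical 22-satellite plane at 550 km.
print(isl_geometry(550e3, 360.0 / 22))
```

Feeding the same function a sequence of separations over time yields the time-varying delays and link up/down events the emulator has to reproduce.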
F
And in the third step, we specify the device under test or the system under test. For example, one day a satellite operator or a research group may develop a CubeSat with a satellite processor; it runs a custom or modified TCP/IP stack, and we want to test its network performance and power consumption. In our lab environment, we can use a cluster, as in the right picture, to build an emulation including a large number of virtual satellites, depending on the constellation size you want to test.
F
Then we can connect the hardware, for example a device under test running some of the protocols or functionalities, which we call the ISTN system under test, to the emulation, and we can build a hardware-in-the-loop test environment. We can generate the ISTN traffic, for example using tools like iperf, to load interactive traffic into the lab environment, and we can measure the performance, for example if you want to evaluate a new routing protocol in space, or evaluate the TCP or QUIC throughput in the emulated ISTN experimental environment.
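A sketch of how computed link properties might be turned into emulation and traffic-load commands (Linux tc/netem for delay, iperf3 for load); the commands are only constructed here, never executed, and the interface and host names are placeholders:

```python
# Build (but do not run) the shell commands that would impose a computed
# ISL delay and drive measurement traffic through the emulated path.

def netem_delay_cmd(iface, delay_ms):
    # tc/netem can impose the computed propagation delay on a veth or
    # container interface inside the emulated topology.
    return f"tc qdisc add dev {iface} root netem delay {delay_ms:.1f}ms"

def iperf_cmd(server_ip, seconds=30):
    # iperf3 drives TCP traffic end to end so throughput can be measured.
    return f"iperf3 -c {server_ip} -t {seconds}"

print(netem_delay_cmd("veth-sat0", 6.6))
print(iperf_cmd("10.0.0.2"))
```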
F
Okay, so as our next step, we hope we can have some further discussions and clarifications to narrow down the scope of our work. For example, we think these aspects may fit the charter: the class of network functions, systems, or services that are important in the emerging LEO satellite internet constellations, or ISTN.
F
The key performance characteristics pertinent to ISTN, a set of important benchmarking metrics, a concrete benchmarking methodology, and proper test cases tailored for ISTN environments. We also mentioned that we do have some collaborators from industry; for example, we have some collaborators from a satellite operator, China Telecom. They indeed have some environments for their mobile telecom satellites and broadband satellites.
F
In addition, we are also working on some tools and platforms for operators or researchers to build a lab-level test environment. For example, we have an open-source tool we call StarPerf, and we also have a container-based, large-scale satellite network emulator. We hope these tools can facilitate test environment creation for ISTN benchmarking in a more flexible and convenient way.
F
Next page, please. So that's all; I'm running out of time. Thank you very much, and any comments or questions are more than welcome. Thank you.
A
Okay, I mean, that sounds like a good place for some of this work to get started in the lab. And let me first say, number one, thanks for the talk; and I wanted to point out something I missed here: you've actually got two drafts going.
B
A
Okay, and so we've really got two drafts that we'd like people to be taking a look at, okay.
F
A
That's good, so you're kind of leaving the other one behind for now. I think this first one here is really focused on our particular needs, and I think that's the right way to go, so thanks for that.
A
So, thank you. Let me open it up: any questions in the room for Zeqi?
A
All right, well, I have a question and a comment.
B
A
I think that when you start to pursue emulation…
A
I think really all of us will benefit if you describe the emulations that you come up with in, let's say, peer-reviewed conference papers, so we can get some wider review beyond the Benchmarking Methodology Working Group, and maybe draw on some of the folks in the industry whose expertise is more in the satellite internet constellations and the work that's going on there.
A
F
Okay, so sure, thank you for your comments, and maybe next time we can show some of our emulation results. I think we can further narrow down the scope of our work; maybe we can focus on a certain protocol, service, or functionality related to the satellite internet constellations.
A
Okay, but like I said, for anything in the emulation category, I think we're going to want other people in the industry to weigh in and say that it's a reasonable emulation, or that it needs work. That's the kind of feedback, I think; just speaking as a participant, I'd feel much more comfortable if real experts weighed in on this, okay.
F
A
All right, sure, the more the merrier. In fact, I don't think we'll be able to complete this work without adding some number of real experts, beyond yourselves as the co-authors, who can help us here. So we want to get the strong review, and then a strong foundation for this, if we take it up.
A
Good. Well, I think then the path forward here is: anybody who's interested in LEO satellite communications, and it's becoming more and more popular every day, I encourage you to read this first draft that's on the screen and provide your feedback. Let's try to make some progress on this, and also on the other topics which are candidates for BMWG work, between now and our next meeting next March; that's the goal for all of us.
B
A
Some homework: reading the drafts, providing comments on the list, and just being helpful to one another. So, for example, Zeqi, you could take a look at some of the other work that has been…
A
Yeah, and really that's the way all of us help each other and get this work done.
A
So, any final comments on the topics that we've talked about today? Feel free to jump up to the mic and volunteer for something, or say that it seems like we're working in the right direction.
A
Okay, well then, with no final comments, thank you all for your participation today. We had a good meeting; it looks like we're finishing up right on time, and we covered a lot of good topics. I think with the additional work in the laboratories and the reviews, we'll make more progress on this and all our work, and really have a great meeting next time around as well. So, see you…
D
A
On the list, and I hope we'll have good holidays. I wish everybody some rest over the holidays and the new year. Let's join up together in 2023 with lots of good stuff done. See you then, bye-bye.