From YouTube: DASH Workgroup Community Meeting Aug 10 2022
Description
Keysight/Mircea new test presentation!
B: One is for a DASH, let's call it, pseudo-config generator, and one is an update to the hero test. It is not yet the full hero test, and I will explain why we call it the baby hero test, what the scale is, and how we're going to get it to the full hero test.
B: So, first, the DASH config generator, which is needed in order to generate the config that the test needs. Basically, both of them go hand in hand. I placed the generator under the test/confgen directory, and right now this config is based on the DASH reference config example. This is not the final form, so the generator will keep morphing as the DASH config gets finalized.
B: As we get more updates there, I'll update this one here. Also, in parallel, let's say, I can copy-paste this one and transform it into, like, a SAI generator, a SAI-thrift generator, a gRPC generator: whatever ways there are to program the DPU, I can make a generator for it, keeping the same logic but with different APIs or ways of configuring. So I'll change the way of configuring but keep the logic.
B
The
same
at
this
moment
is
just
exporting
everything
in
a
json
file
based
on
this
format,
a
few
things
which
are
important
here,
I
gave
here
a
few
variables
and
I'll
talk
about
it
in
the
test.
We
have
a
variable
file
that
this
kind
of
drives
the
scale
of
the
config
that
is
generated.
B: They talk about six ENIs here; in the test there are other mentions of ten, basically, so this can be changed. I multiply the ENI count by two because of the direction, inbound and outbound, plus the ACL table count, which is right now set to three; like I said, there is a mention of five, so it could be five. Then the ENI count times the ACL rules: that's basically the NSGs, plus there is a requirement to have a thousand rules per ENI, and this is controlled by the ACL-rules-per-ENI variable. Prefixes is nothing else but how many IPs, prefixes, we can have in each ACL rule. For the mapping table, I have another variable for how many IPs from each ACL rule we want to be mapped, just to keep the two million count; you don't want all of them. So let's say here you need 200 prefixes, but maybe you just want to map 40 of them to make the 2 million mappings.
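The knobs just described multiply together. Here is a minimal sketch of that arithmetic; the function and variable names are illustrative, not the actual confgen variables:

```python
# Hedged sketch of how the variables file drives the generated scale.
# All names here are hypothetical, not the real confgen parameters.
def config_scale(enis, acl_tables_per_direction, rules_per_eni,
                 prefixes_per_rule, mapped_ips_per_rule):
    return {
        # each ENI has tables for inbound plus outbound
        "acl_tables": enis * 2 * acl_tables_per_direction,
        "acl_rules": enis * rules_per_eni,
        "prefixes": enis * rules_per_eni * prefixes_per_rule,
        # only the first N IPs of each rule get a mapping entry
        "mappings": enis * rules_per_eni * mapped_ips_per_rule,
    }

# e.g. 10 ENIs, 3 tables per direction, 1000 rules per ENI,
# 200 prefixes per rule, 40 of them mapped:
print(config_scale(10, 3, 1000, 200, 40))
```

Changing one knob in the variables file changes every derived count, which is why the generator regenerates rather than stores configs.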
B: Now, based on some other documentation here from DASH, basically it's divided by eight and multiplied by three, and you get, I think, three point something million routes. Or you can make it a group of 16; it needs to be a power of two. If you have 16 IPs you divide by 16, but then you have four routes for those 16 IPs, I believe, and you get another number, which is not 1.6, it is lower. So the route count is not quite there, but again, based on the DASH scale requirements, if you are to have 64 ENIs, each with 100k routes...
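The divide-by-eight, multiply-by-three arithmetic follows from the grouping scheme: a power-of-two group of N IPs is covered by log2(N) routes (one per halving, with the deny IP skipped). A rough check, assuming the 10 million mapped IPs mentioned elsewhere in the discussion:

```python
import math

# Rough route-count check: each power-of-two group of N IPs is covered
# by log2(N) routes (e.g. a /30 + /31 + /32 for a group of 8).
def route_count(total_ips, group_size):
    return total_ips // group_size * int(math.log2(group_size))

print(route_count(10_000_000, 8))   # 3,750,000: "three point something million"
print(route_count(10_000_000, 16))  # 2,500,000: lower, as noted
```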
B: ...that is basically 6.4 million routes, so we are well within that even if we do three million routes; it shouldn't be a problem. Now, let's go and see how this is all done. Basically, the logic is trying, and here I need a lot of feedback, because you guys know better how the hardware works, trying not to make anything summarizable. Okay, I don't know what's happening.
B: Okay, sorry for that nice Windows 11 crash; I don't know what happened. Let me bring everything back on screen.
B: Okay, so in the config we have the ENIs. Each ENI has three tables for inbound and three tables for outbound; like I said, some documentation says it needs to be five. IDs are used throughout the config: if you see an ID starting with one, that's ENI 1; if it starts with seven, that will usually be ENI 7; and the last number is usually the table ID, one through six. So we have all the ENIs in here. Now, when it comes to ACLs, what do we have?
B: The hope is that the hardware cannot do any tricks and basically aggregate all this and say, hey, this big bunch is all allowed, everything else is denied, and summarize and so on. So that is the idea behind the ACLs, and then for the next group it just increments by two here. So this gives us the next group of 512 IPs that is distributed evenly between allow and deny, and it keeps going like this a thousand times, for all the ACL rules.
B: Now, when it comes to mapping, it just takes the first X IPs from each ACL rule and creates a map for them. There is nothing special here; it just adds a MAC, an IP, and the remote VTEP.
B: Okay, now let's go to routes. For the routes, in order to try to make them impossible to summarize, what it does, for example in this case, is take a group of four IPs, and from each group of prefixes we take the first two IPs and put a route for them. So this will be, let's say, 2.128.0.0/31, which accounts for two IPs.
B: Then we have 2.128.0.2; that actually coincides with the deny rule, which is not present here, so it's skipped. Then we have a route 2.128.0.3/32, which is for the allow rule. And then again we have a route for the next group, for two IPs; one IP falls on the deny rule, so we skip it, and then another IP gets a route, leaving these gaps in the routes.
B: So this is with a group of four; the same is done with a group of eight. Basically, we'll have a route with a /30 for four IPs and a /31 for two IPs, so that makes six; one IP will be skipped, and then you have another route with a /32. So you'll have seven IPs out of a group of eight having routes and one being skipped, and that skip, like I said, coincides with a deny rule. This is how the routes are made. Any questions so far?
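The gap pattern just described can be sketched in a few lines. This is a hypothetical helper, not the actual generator code:

```python
import ipaddress

def routes_for_group(base_ip, group_size):
    """Cover a power-of-two group of IPs with successively halved
    prefixes, skipping the next-to-last IP (which coincides with a
    deny ACL rule), so the routes cannot be summarized back into
    one covering prefix."""
    assert group_size >= 4 and group_size & (group_size - 1) == 0
    base = int(ipaddress.IPv4Address(base_ip))
    routes, offset, size = [], 0, group_size // 2
    while size >= 2:
        prefix_len = 33 - size.bit_length()   # size 4 -> /30, size 2 -> /31
        routes.append(f"{ipaddress.IPv4Address(base + offset)}/{prefix_len}")
        offset += size
        size //= 2
    # offset now sits on the skipped deny IP; the last IP gets a /32
    routes.append(f"{ipaddress.IPv4Address(base + offset + 1)}/32")
    return routes

print(routes_for_group("2.128.0.0", 4))  # ['2.128.0.0/31', '2.128.0.3/32']
print(routes_for_group("2.128.0.0", 8))  # ['2.128.0.0/30', '2.128.0.4/31', '2.128.0.7/32']
```

For a group of eight this yields seven routed IPs and one gap, matching the description above.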
C: I think this one is just a reference, so I don't think our JSON generation should be based on it or even look like it, because we have changed the action types and the names, etc.; like, it's not vpc_direct, instead it's vnet_direct, right? That example is there in the SONiC HLD document, towards the end. So I think that format is what we should be generating the JSON file in.
B
Yeah
definitely
I
mean
these
are
just
name
changes.
I
can
just
put
them
in
there
and
format
it,
and
if
we
go
towards
the
yamaha,
I
can
generate
the
ammo.
That's
not
the
problem.
I
mean
what
I
have
here.
It's
it's
a
python
script
that
generates
a
data
structure.
Data
structure
can
have
different
names
and
then,
if
I
use
the
json
library
to
do,
you
know
python
data
structure
to
json
or
I
do
python
data
structure
to
yamalor.
B: Whatever format we decide on, that's, you know, a one-or-two-line code change for me; it shouldn't be a problem. And like I said, this is not final. This will morph as the DASH config format gets finalized, and I will keep doing updates. So if you want me to use those parameters, I'll just change the names based on the other reference example, yeah.
B: Yeah, okay, let me talk about that. If you take this and generate it with the IP prefixes as a list, the JSON file for the full hero-scale test is about a one-to-two-gig JSON file, depending on whether you make it a list. Let me show everyone where it is and what we are talking about.
B
So
I
I
think
it
was
about
the
route
so
the
routes,
because
all
the
routes
had
same
properties.
I
put
them
in
a
list
this
way
and
this
file
generated
for
the
hero
size
is
like
1.6,
one
point
something
a
gig.
If
I
don't
put
them
like
this
and
each
route
has
you
know
other
eight
lines
of
overhead
for
providing
other
information
yeah,
then
it
will
go
to
two
gig
yeah.
It
will
keep
increasing
and
it
becomes
harder
for
me.
You
know
to
open
it
and
look
through
it
and
manage
it.
B
So
since
this
was
not
yet
a
working
sample,
I
I
put
them
as
a
list
as
this
becomes
working
and
actually
the
real
dpus
are
able
to
load
this
config.
I
will
keep
morphing
it,
as
you
know,
as
the
project
progresses,
I'm
keep
up.
Gonna
update
this
generator
to
be
representative
of
the
final
conflict.
B
Now,
when
it
comes
to
that
yeah,
I
have
a
mention
here,
and
this
is
a
request
to
everyone.
There
is
a
mention
in
one
of
the
files
in
normal
operation.
Mapping
updates
can
occur,
100
mappings
per
second
or
something.
Please
don't
it's
like
think
about
it.
You
have
10
million
prefixes,
2
million
mappings
million,
whatever
routes,
let's
say,
20
million
objects
that
I
need
to
update
on
the
dpu
at
100
per
second
tomorrow,
you'll
still
be
here.
Looking
at
how
that
config
is
getting
loaded,
we
need
tens
of
thousands.
B
You
know
hundreds
of
thousands
of
entries
being
updated
every
second
to
I
know
we
need
to
set
a
target
for
the
full
scale,
config,
actually
the
one
with
10
million
mappings
6.4
million
routes
and
so
on
what
that
should
be
loaded
and
it's
not
for
testing
purposes.
But
when
you
have
like,
I
know
it
just
goes
down.
The
device
went
down
crashes
and
you
have
to
bring
the
new
device
and
put
it
in
how
long
it
will
take
till
the
new
device
gets
all
the
config
back
up
on
it.
C
That's
the
second
point
right
like
such
a
cases
it
it.
It
should
support,
scale
configuration
so
the
the
mappings
per
second
and
the
actual
or
route
updates
is
like
once
the
system
is
stabilized
and
it
has
been
like
actively
carrying
traffic
and
enas
are
enabled.
Then
what
you
expect
is,
like
you
know,
consecutive
updates,
but
in
the
beginning,
like
that,
that's
what,
like
you,
have
to
read
all
the
points
right.
B
Yeah
yeah,
so
this
is
what
I'm
saying:
don't
don't
take
this
as
also
in
the
beginning,
in
the
beginning,
really
fast
and
yeah.
At
this
moment
some
implementations
are
fast,
some
are
not
so
I
don't
know.
I
think
there
should
be
a
requirement
for
how
long
it
should
take
to
get
full
scale
config
in
here
and
should
be.
I
don't
know
seconds
or
minutes
yeah,
so
we'll
have
to
have
a
way
to
push
this
config
to
the
dpu.
C
Yeah
again
just
simple
questions,
so
I'm
just
trying
to
understand
the
whole
idea
of
this
generator
so
just
to
understand
what
is
the
purpose
of
it
so
as
understand
so
it
takes
the
configuration
it
generates,
some
different,
let's
say
like
json
file
and
also
it
it
is
able
to
to
generate
like
kind
of
different
test
cases,
use
cases
to
be
able
to.
B
Test
the
test
to
use
a
config,
so
the
test
will
use
the
config,
and
this
ties
this
generator
ties
mostly
into
the
contribution
into
git.
So,
for
example,
we
have
a
test
with
one
ip
one,
ip
one
acl
rule
one
e
and
I
and
so
on.
Here
I
I
have
like
let's
say
it's
a
config
yeah.
It
has,
I
don't
know
100
200
lines
and
you
have
the
config
here
uploaded
in
git.
B
B
So
in
git
we
update
and
we
change
the
logic.
Maybe
the
start
ip,
maybe
some
counts
in
the
variable
files
and
like
or
maybe
the
logic
here
in
the
implementation
and
that
generates
a
config.
But
we
don't
having
git
uploaded
a
2gig
json
file
in
order
to
be
able
to
run
the
hero
test,
you
have
100k.
C: Okay, yeah, so maybe I can phrase it a different way. Think of it as an algorithmic configuration generator that can be fed parameters to produce different scale outcomes, rather than storing verbatim config files, which are gigantic, literally gigantic, the same word, you know, giga, right?
C: So that's what this really is: there are months of expertise and experience put into generating these algorithms, tested on real devices, so there's quite a bit of real-world grounding in this. The only gap that really remains is whether we have an agreed-upon canonical format, you know, an intermediate representation; if not, we'll just treat this as the de facto one. Thanks.
B
Yeah
to
what
chris
said,
for
example,
I
can
take
this,
let's
say
copy
paste
everything
and
then,
instead
of
adding,
let's
say
the
I
know
acl
group
whatever
to
the
python
data
structure,
I
can
make
here
the
grpc
call
that
actually
creates
it
when
that
becomes
available,
and
that
will
you
know,
push
the
config
generating
it
here
again
without
having
a
two
gig
file
around
and
it's
not
only
about
having
one
two
gig
files.
But
I'm
sure
somebody
will
ask
me:
hey,
can
I
have
this
at
the
different
scale
and
then
I
need
to
upload.
B
Yeah
this
way
you
just
change
being
the
variables
you
rerun
it
and
you
have
your
new
config
up.
C
Yeah
well
we'll.
We
will
also
improve
the
way
the
parameters
are
defined.
You
know,
make
it
a
little
more
uniform
and
then
what
we
could
have
as
a
series
of
profiles
that
are
stored,
those
would
just
be
parameter
files,
and
then
you
feed
those
into
the
generator
in
your
in
your
development,
environment
or
lab,
so
will
be
very
compact
representation.
A: Yeah, because what I've seen is, you know, we say: oh, we set the scale to this. Oh well, what happens if we set it to that? Well, what if we set the aging to this, or change it to that? We're trying to change things on the fly, and it's very, very difficult to do by hand, and I think that's where this came from, right, Mircea?
B
Yeah
coming
to
by
hand,
I
would
not
have
written
a
2gig
file
config
file
by
hand
in
the
first
place,
so
this
is
another.
Let's
call
it.
Maybe
it's
a
selfish
motivation.
I
had
to
do
this
because
I
did
not
want
to
you
know:
create
such
a
huge
conflict
by
hand.
I
had
to
write
codes
that
will
generate
the
config
as
a.
C
Okay:
okay,
thanks
yeah.
I
understand
the
whole
idea
what
this
gen
is
doing,
but
I'm
trying
to
understand
who
is
going
to
use
this
config
and
generated
configuration?
Yes.
So
as
an
essential,
this
is
tests
right.
So
it's
not
okay.
So
there.
B: And now Chris made the statement, so I'll move on to the test, if there are no more questions. The generator is driven by...
C
Sorry,
sorry,
okay,
I
I
have
one
one
question,
and
maybe
you
mentioned
this,
but
but
I'm
not
sure
do
the
parameters
for
the
generator
allow
you
to
to
create
prefixes,
maybe
with
like
more
like
slash
24s
instead
of
slash
31s
and
slash
22
32s.
Is
it
flexible
enough
to
like
be
able
to
sort
of
run
the
hero
test
with
somewhat
varying
the
prefix
lengths.
C: It might be useful to have that kind of flexibility in generating the prefixes.
B: Currently, what I have done in my testing, while I was doing this, and this requires a bit of coding: you basically go into the route table and say, instead of having all this logic here, just have one /9 route for each ENI, and then your whole routing table gets, you know, one route. This is how I did it initially when debugging, to make sure, let's say while I was working on the ACLs, that I didn't have to deal with the routes.
B
So
all
this
logic
was
stripped
down
and
it
was
for
each
and
I
had
one
route
with
eni,
slash
nine,
and
that
was
you
know,
avoiding
having
many
routes
while
debugging
but
yeah
currently
slash
25
and
up,
I
would
say,
based
on
you
know,
quick
napkin,
math.
B
And
of
course,
if
you
go
over
that
limit,
then
it
will
start
stepping
over
the
next
dni
and
so
on.
So
I
also
had
to
take
into
account
when
scaling
this
not
to
have
overlapping
and
stepping
on
each
other
and
also
having
enough
values
without
running
out
of
them,
and
since
we
are
supposed
to
go
to
64
and
128
enis.
B
First
group
from
the
ip
and
that
will
be
one
from
128
ips.
Maybe
if
I
do
it,
you
know
to
256.
I
can
give
two
dot
two
here.
Basically,
there
are
so
many
ips
in
the
world.
You
know
this
is
for
ipv4.
B
Now,
once
we
move
to
ipv6-
and
this
is
you
know
when
I
move
into
test
yeah-
if
we
do
this
all
over
ipv6-
and
you
know,
the
most
of
these
problems
are
gone
ip,
wise,
a
mac.
We
still
have
a
bit
because
if
I
increase
this
too
much,
I
go
into
the
vendor
portion
and
it's
not
a
problem
having
different
vendor.
Let's
say
part
of
the
mac
for
different
tnis
or
multiple.
B: Let's say a special MAC gets used, and that's a loopback, and, you know, God knows what happens. So this is work for the future: when I scale it past 64 ENIs, to start looking maybe into skipping special IPs and special MACs. That would be something to consider.
C: I think people are digesting this, the size of this thing.
B: Yeah, I'll leave the PR open for a while, so people can go through it and, you know, give some feedback. Right, when it comes to the test: the test is basically using the generated config. Why call it the baby hero test? Because it's not using all the IPs in the test at this moment; it's just using the first IP out of each ACL rule. So we are using 48,000 IPs in total.
B: When I go to the test cases, they are all in here, under vnet-to-vnet. I would ask people to cast a bit of a blind eye at these vnet-to-vnet names; some of them, to me, qualify more as VPC peering, but for now we'll keep them here as we are adding more tests. So we have the one-IP test contributed a while back, and we have the 48k-IP one, which we got to through our learning.
B
We
made
a
few
variations,
and
all
this
will
be
contributed
as
this
here,
so
the
ones
that
we
presented
few
months
back
was
one
ip
now
with
one
ip.
What
we
notice,
since
that
creates
only
one
flow.
B
If
you
have
a
design
where
it
really
encourages
parallelism,
one
flow
is
not
showcasing.
You
know
the
power
of
the
hardware,
and
actually
this
may
not
be
quite
the
best
case
scenario
for
everyone.
B
This
may
be
worst
case,
and
then
we
increase
the
number
of
udp
ports,
and
here
the
number
should
be
probably
equal
with
a
number
of
let's
say,
compute
units
that
are
present
in
the
actual
hardware,
just
to
have
at
least
one
flow
distributed
to
each
compute
unit,
and
actually
you
know
have
your
hardware
show
that
it
can
do,
and
this
becomes
the
best
case
scenario
now,
while
we
try
all
this
even
with
one
ip
and
going
through
the
dash
test
requirements,
we
figure
out
that
if
we
put
all
the
source,
udp
or
tcp
ports
and
all
destination
ports
in
the
traffic,
even
if
we
still
have
one
ip,
this
could
create
four
billion
unique
flows
for
the-
and
this
is
a
very
simple
test
that
can
show
aging-
can
show
how
many
flows
you
can
install
every
second,
as
well
as
how
big
the
flow
table
is,
and
also
what,
if
you
increase
everything
till
you
exceed
the
flow
table
size
and
after
that,
you
let
the
flows
expire.
B
You
come
back
to
a
normal
amount
of
flows,
and
you
can
see
this
dpu
just
had
to
slow
down
in
performance
because
it
exceeded
the
max
flow
table.
And
then
you
know
all
the
flows
get
the
slow
processing.
But
after
that
everything
comes
to
normal
and
it
comes
back
to
the
accelerated
performance
or
it
will
just
crash,
and
you
know
end
of
life
for
it
and
everything
will
stop
working.
B
So
this
test
is
very
small,
but
can
show
a
lot
of
performance
and
the
scale
metrics
of
the
dpu
just
by
manipulating
the
source
and
destination
ports.
B: What's the max the hardware can do? When we come to the 48k IPs, like I said, one IP per ACL rule: by using multiple TCP or UDP ports, we can actually do the full scale in terms of the flows required in the hero test. Let me see if I still have it on one of the screens.
B
Back
back
to
the
hero
test,
so
what
we
are
doing,
we
are
maintaining
the
full.
Oh
here,
six
million
flows,
so
you
have
six
million
parallel
flows
in
the
test
and
while
we
are
sustaining
six
million
power
flows,
it
does
not
have
the
tcp
and
udp
mixed
in
together.
At
this
moment
they
are
separate,
I'm
working
on
mixing
it
together,
so
it's
only
tcp
six
million
flows
all
together
and
at
that
point
we
are
measuring
the
cps.
B
Now
there
are
a
few
constituents
and
changes
from
here.
This
talks
about
six
packets.
It's
not
six
packets,
one.
We
are
making
use
of
http
traffic
to
do
actually
the
tcp
use
case.
So
you
have
the
six
packets
from
tcp
plus
the
get
and
the
response
from
http.
That's
eight
packets.
B: That also means you need to send an extra keep-alive packet, otherwise your flow will die because of the one-second flow timer. So when you consider the six packets from here, plus the two from HTTP, plus the keep-alive: if the performance of the DPU does not exceed 6 million CPS, then this becomes more like 8 to 10 packets per connection.
B
So
here,
where
it
says,
effective
pps
sustain
is
not
quite
this,
like
I
said,
this
is
multiplied
by
eight
or
six,
depending
on
the
cases.
So
this
is
something
to
consider,
and
at
this
moment
yeah
I
didn't
found
a
way
around
it.
I
can
reduce
the
number
of
packets,
but
it
will
reduce
some
of
the
tcp
packets
actually,
and
I
still
need
the
http
packets
for
the
application
on
top
of
tcp
for
traffic,
and
if
you
need
to
keep
alive,
you
need
to
keep
alive
and
there's
nothing.
I
can
do
about
that.
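So the sustained packet rate is the connection rate times the per-connection packet count. A back-of-the-envelope helper, illustrative only and not part of the test code:

```python
def effective_pps(cps, tcp_pkts=6, http_pkts=2, keepalive_pkts=0):
    """Packets per second implied by a connection rate: the TCP
    handshake/teardown packets, plus the HTTP GET and response, plus
    any keep-alives needed when flows must outlive the idle timer."""
    return cps * (tcp_pkts + http_pkts + keepalive_pkts)

print(effective_pps(6_000_000))                    # 48,000,000 pps at 8 pkts/conn
print(effective_pps(6_000_000, keepalive_pkts=2))  # 60,000,000 pps at 10 pkts/conn
```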
B: So that is regarding the scale and the packets and so on. Now, let's go back to the test cases.
C: I have questions about what you're saying here. The flow won't time out if those packets are all sent within the timeout window.
B: Say your CPS, or flow install rate, is 2 million for your hardware. That means you can install 2 million flows every second, but you need to have 6 million parallel flows. So in the first second you install 2 million; in the second second you have four million in the table. But in order to have four million in the table, for the first two million flows that you installed a second ago...
B
You
need
to
send
the
keep
alive
on
those
flows,
because
otherwise
the
second
has
passed
and
they
will
be
age
out
and
they
will
disappear
from
your
flow
table.
So
you
will
not
have
four
million
in
the
second
second
second
and
then
third,
second,
you
install
another.
Two
million
flows,
the
flows
from
a
second
ago.
You
need
to
send
the
keep
alive,
to
keep
them
and
also
on
the
floors
from
two
seconds
ago,
in
order
to
reach
six
million
flows.
Okay,.
C
I
I
hear
what
you're
saying
is
it?
Maybe
it's
not
my
understanding.
I
thought
that
there
would
be
background
flows
that
would
like
occupy
the
table
and
those
background
flows
will
have
keep
alives
and
then,
like
the
active
flows
like
don't
require,
keep
a
lives
they
they
just
send
their
six
packets,
like
within
the
timeout.
B: Perfect, okay. Five million, okay, five million background flows, and your CPS is still two million. Two million in the first second, four million in the second second: you're still not at five million flows in the table, so you still need to send the keep-alives. As long as your CPS does not exceed this number, you need to send the keep-alive.
B: Say the DPU, like I said, does two million, and you have five plus two: the CPS becomes like seven million. That changes, you know, the test numbers, but as long as your CPS is lower than this number, you need the keep-alive.
A
I
wonder
if
we
could
have
a
separate
conversation
about
this
because
it
sounds
like
it
could
take
a
bit
to
go
over
yeah.
What.
A: I think so, John and Mircea.
C: I mean, I know you don't want this dragged out, so I'll just make one statement. I think it would be better not to have the keep-alive and instead have an API or something where the data plane can report how many flows are active in the table, and then, you know, that's just part of the measurement of the test.
C
If,
if,
if
someone's
running
at
low
connection
per
second
rate,
you
you
you
can
you
can
see
that
but,
like
I
think
that
these
I
think
that
the
connection
per
second
flows
should
just
be
short-lived,
like
shorter
than
the
than
the
timeout.
B
Yeah,
so
I
I
we
we
have
done
that
and
is
the
other
option
or
what
I
can
do.
Okay,
I
don't
care
about
the
flow
table
size.
I
completely
ignore
this
line.
Yes,
and
I
just
send
the
packets
as
fast
as
possible,
where
basically
it
brings
up
the
tcp
session.
It
sends
the
data
and
the
moment
the
data
got
received.
B: ...the flow is out of the table. But when we ran the test that way and I went on the DPU and asked it, hey, what's your flow table size, it just came back and said: oh, I have 10k flows, or I have 100k flows, which is far from the number required here. So that was my problem, and this is why I put this constraint in the test: because if I do it as fast as possible, the actual flow table on the DPU is almost empty.
A: So yeah, let's explore that further. I was going to say that if anyone wants to go over that further and more fully, and has a strong opinion on this kind of stuff, let us know, and we can cover it.
B
Yeah
with
udp
traffic,
usually
what
we
see
it's
more
like
a
conformance.
We
are
sending
traffic
also
on
denied
ips
as
well,
and
we
see
if
actually
those
are
being
dropped,
hundred
percent
and
nothing
goes
through
and
making
sure
acl
are
being
respected.
B
Also,
if
sending
traffic
over
the
denied
ips
has
any
impact
on
performance,
the
tcp
and
the
hero
test
sends
traffic
only
over
the
allowed
ips.
So
that's
something
important
to
note
and
few
things,
so
all
this
will
be
contributed
in
the
next
few
days.
The
script
at
this
moment,
the
one
in
the
pr
is
the
tcp
48kp.
Is
this
one
the
baby
hero
test
as
we
call
it
since
it's
not
the
full
scale
and
few
other
tests?
This
will
come
in
the
you
know.
B
Next
week's
month
we
still
need
to
put
the
tcp
and
udp
in
the
same
traffic.
At
the
same
time
contributes
a
full
hero
test
like
generator,
it's
already
doing
the
full
hero
test
and
so
on.
I
just
need
to
run
it
validate
it
and
finish
from
my
side
before
I
upload.
It
then
do
everything
that
we
did
here
with
ipv6
and
then
do
another
one
which
will
be
probably
for
hero
test
which
will
have
a
mix
of
ipv4
ipv6.
Here
I
have
to
see
the
support
on
this.
B
If
it's
ipv6
over
ipv4,
ipv4
or
ipv6
ipv6,
you
know
all
the
combinations
what
are
supported
by
dash
and
tried
this,
so
this
is
regarding
the
test
and
the
whole
purpose
of
this.
The
way
I
see
it
maybe
not
stated
in
the
hero
test,
but
the
way
I
saw
it
is
that
if
I
am
able,
with
this
test
to
show
the
best
the
device
can
do
ever
as
well
as
the
worst
a
device
can
do
ever
and
by
the
way,
different
hardware
implementation
will
different
tests
will
be
best
or
worse.
B
That
means
the
production
performance
will
be
somewhere
in
between
min
and
max
somewhere.
So
if
your
best
is,
I
don't
know
3
million
and
you're
worth
it's
1
million.
Then
it's
like
in
production,
you'll
get
between
1
and
3
in
real
deployment
kind
of
that's
the
idea.
We
did
all
these
variations
and
tests.
This
is
why
we
have
multiple
and
not
just
one.
A
Yeah,
so
if
anyone's
passionate
about
testing,
please
sync
up
with
mercha
and
chris
and
you
know
collaborate
there,
because
this
is
awesome
mercha.
Thank
you.
B
Yeah
one
thing
that
we
are
working
on
and
it
started
it's
already
partially
contributing.
This
is
latency
measurements,
I'm
not
sure
if
they
are
necessarily
specified
in
the
hero
test,
but
also
we
are
looking
at
tcp
kind
of
application
latency
as
well
as
packet
in
packet
out
latency,
and
for
that
I
will
make
a
pr.
B
You
know
in
the
next
few
weeks
about
how
you
test
it,
because
for
latency,
what
you
want
to
do
is
basically
take
the
dpu
out
put
a
cable
like
a
one
meter,
cable
run
the
test
and
by
the
way,
since
dp
is
supposed
to
be
a
bump
in
the
wire.
B
So
that's
something
that's
coming.
I
think
application
latency.
We
are
showing
it
as
of
today,
but
I
need
to
also
pack
it
in
packet
out
latency
and
to
document
more
how
to
execute
the
test,
because
the
testbed
has
an
interesting
latency
that
you
need
to
subtract
from
the
gpu.
A: This is amazing, Mircea, thank you so much. Everyone, please take a look if you can, and I think we should have more of a conversation on the keep-alive discussion, etcetera. We'll figure out how to talk about it, and if we can't do it in the next seven days, offline, outside of this meeting, then maybe we can pick it up in the next meeting; I'll try to figure it out.
A
So
please
keep
thinking
about
it
in
the
next
seven
days
and
I
had
an
oversight
at
the
beginning
of
the
meeting
I
forgot
to
introduce
pranjal,
so
pranjal
is
on
the
call
and
he's
a
principal
software
engineer
and
works
alongside
of
michael
zigmund
and
he's
joining
the
project
and
so
pranjal.
If
you
could
say
hi
to
everyone
on
the
call,
we
have
a
multitude
of
people
in
the
industry,
some
of
the
well
the
brightest
people
from
excite
labs,
broadcom
keysight
intel
nvidia
at
pensando
amd
green,
big
semiconductor
everybody.
A: They can work with you hand in hand, and we, you know, also have Intel and Broadcom and just everybody, and so he's more like Michael's peer; good to have him on the team.