From YouTube: DASH Workgroup Community Meeting Sept 14 2022
Description
Keysight and PLVision SAI Challenger Enhancements for DASH Testing (presentation and Demo) - Chris Sommers and Anton Putrya
A
September 14th. Quite a few people today. I wanted to turn it over to Keysight this morning, unless there were some pressing agenda items that anyone had.
B
So today I'm kind of excited, because we're going to give the community a sneak preview of a new test framework that we've been working on. It's still a work in progress, and we wanted to give people a look at this to get reactions and feedback. There's still time to make course corrections or factor future plans in. We engaged the services of PLVision for this project, and many of you know PLVision; they're doing other work in this project and in SONiC in general, and they're a great company to work with.

B
So you guys can get a look at it, and then we'll talk about it and get some feedback. So, let's...

B
Yeah, thanks. So the first thing is: why do we need something new? We have existing frameworks. We have PTF, which is already being used in the DASH behavioral model pipeline. It's really well understood and popular in the SONiC and SAI community, but DASH really stretches the limits of current test methodologies.
B
The test cases can be very complex: lots of tables to set up, lots of interdependencies to configure. It takes a lot of expertise to understand how to create the tables with the proper contents to do meaningful tests. An extreme example of that is the hero test, which we've been working on for many, many months, and some of that knowledge is going to come out in this framework.

B
The table scales can be huge, and there are a lot of APIs to test. If we want to really test the entire software stack, it makes sense to test it at the different API levels, and we want to be able to test hardware platforms at line rate. We want to do more than just software virtual testing, and we want to test more than just a packet at a time; we want to test high-speed flows with millions of packets and many, many different IP addresses cycling through, etc.
B
And finally, we're in an era where software developers are really expected to write a lot of their own functional and unit tests. We're already seeing evidence of that in this project. We don't have a room full of test script writers writing tests for the diligent software engineers; we have to do it all ourselves, so we need efficient and easy methods and frameworks, and we want to reduce the barriers so it's not onerous.

B
It's just part of the software development that doesn't seem like a punishment, so to speak. So let me just do a little review of what I call the DASH test maturity stages. I skipped phase one, which is just the early stages in the companies' labs. We're really at stage two, and we're transitioning to stage three right now. Stage two is: you have packet testers, you have some standardized test cases, still using proprietary tools and APIs.
B
So
this
was
more
or
less
a
development
model.
I
just
wanted
to
restate
that,
because
it'll
put
some
things
into
context,
so
what
why?
What
and
when
are
we
doing
so?
What
we're
going
to
do
is
we're
going
to
contribute
this
enhanced
framework
to
the
github
repo
it's
already
in
ocp
and
keysight
is
sponsoring
these
enhancements
and
peel
vision?
B
Is
the
experts
in
this
framework
and
they're
doing
the
actual
development
of
this
keysight
has
been
developing
a
configuration
generator
for
dash,
and
my
colleague
mercha
presented
early
views
of
that
actually
more
than
a
month
ago,
and
we've
been
using
it
for
many
months
in
various
forms
to
configure
real
hardware
and
set
them
up
to
run
hero
test.
So
this
is
a
proven
methodology.
B
It's an algorithmic generator that can be controlled with parameters to generate very large-scale configs. So we're contributing that, and we're actually helping with its integration into SAI Challenger. What this will result in is vastly increased developer productivity, where we can focus on the configuration data and not the low-level plumbing details, and it will actually allow us to test multiple APIs with the same test cases by just saying which API to test; the code and the data configuration don't change.

B
To me, that's a really large improvement. As far as when that's happening, it's actually in process, and you're going to get to see work in progress; it's kind of a checkpoint, and we've got some proof of concept of this actually working with the behavioral model. So we're looking for feedback, and we want to have, let's say, the first meaningful release of this in October.
B
Okay, I'll keep going. I'm just going to jump right to the architecture picture that lets people see what we're doing here, and get right to the fun stuff. This is the framework that we're working on, and it already exists in a simpler form, where it's used as the SAI-Redis driver, and we're adding a number of things. Okay, the Thrift driver; we're getting some echo.

B
Okay, thanks. So the sai_thrift driver is being added, and that driver comes directly from the work that Intel did with the auto-generated sai_thrift client/server framework that we're already using in DASH. So this project actually relies on that heavily. And we have a place in this architecture for a gNMI driver.
B
It's out of scope for this first phase of the project, but I'll talk about that a little bit. And vendors can even implement their own drivers. So if they have an implementation that doesn't have sai_thrift on it yet, let's say a proprietary gRPC interface or something else, you could write a driver and use this framework and still configure your device while you're on your journey to getting the full SAI implementation.

B
This dataplane wrapper is something that PLVision already created for their SAI Challenger framework, and it allows you to control different traffic generators. Not only does it accommodate the familiar Scapy-based traffic engine, which is part of PTF, it also will control the snappi API, which I've talked about in the past, and that's a Keysight invention.
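As a rough illustration of the wrapper idea (the class and method names here are assumptions for the sketch, not the real SAI Challenger interfaces):

```python
# Illustrative sketch of the dataplane-wrapper idea: the test talks to one
# small interface, and each traffic engine gets its own implementation.
class Dataplane:
    def send_and_verify(self, port, packet):
        raise NotImplementedError

class PtfScapyDataplane(Dataplane):
    def send_and_verify(self, port, packet):
        print(f"PTF/Scapy: send {packet} on {port}, verify on the peer port")

class SnappiDataplane(Dataplane):
    def send_and_verify(self, port, packet):
        print(f"snappi/OTG: program a flow for {packet} on {port}, start it, check stats")

def traffic_step(dataplane: Dataplane):
    # the test body never changes; which wrapper gets passed in is a testbed choice
    dataplane.send_and_verify("Ethernet0", "VXLAN-encapped UDP packet")
```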
B
snappi is a client library for an open traffic generator interface, and that OTG interface is also open source. It's a model for an abstract traffic generator that can run on different platforms, and we support both software versions, like ixia-c, which I've talked about in the past and which is part of the DASH test framework already, and it will also support hardware traffic generators up to line rate, which today means 800 gigabits per second. So this is paving the way for all scales of testing, both pure virtual as well as true hardware testing.
B
Here's where the real magic is, I think: the input to this. In pytest there are utilities and drivers that allow you to take data configurations in a declarative form; for example, it could be a JSON file or a Python structure, or even a Python library that's generating configuration records on the fly. It will take those records, parse them, and translate them into the appropriate API on the fly.
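To make the declarative form concrete, here is a minimal sketch, assuming a record layout of operation, object type, and attributes; the field names and values below are illustrative, not the exact SAI Challenger schema:

```python
# One hypothetical declarative record: what to create, which SAI object type,
# and which attributes to set. A test (or a generator) can emit thousands of
# these; the drivers translate each one into the selected API on the fly.
vnet_record = {
    "name": "vnet_1",                      # illustrative entry name
    "op": "create",                        # create / remove / set
    "type": "SAI_OBJECT_TYPE_VNET",        # SAI object type as a string
    "attributes": ["SAI_VNET_ATTR_VNI", "2000"],
}

if __name__ == "__main__":
    print(vnet_record)
```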
B
What that means is you can focus on the configuration itself, and it looks just like SAI attributes and tables. Anton will be giving you a look at that, so you get a gut feel for it. It will take those data configurations and apply them to the device under test, and you don't have to write the code here; it will be done for you, so you can focus on the data.

B
We can actually feed this with our config generator. We've shown you our DASH config generator in the past, which was putting out kind of an intermediate form, but we've actually got a version that's generating SAI records appropriate for DASH, and those records look like attributes and table entries, so they're very easy to create. You don't have to know the low-level APIs that you get with these drivers; just focus on the data. And these can be tiny configurations, like just a few tables, or it can be millions of entries.
B
That logic doesn't have to change based on the interface you're using. Furthermore, the logic doesn't have to change if you have different files; you could have as many files as you want for generator configurations and say, I want to apply 10 ACL rules, or 10,000. The logic is the same, so you don't have to write lots of test cases; it's actually just parameterized.
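A hypothetical pytest sketch of that parameterization, using placeholder names rather than the real DASH SAI attributes or the framework's apply call:

```python
import pytest

# The same test logic reused for 10 or 10,000 ACL rules; only the parameter
# changes. make_acl_rules() and apply_config() are illustrative stand-ins.
def make_acl_rules(count):
    for i in range(count):
        yield {
            "op": "create",
            "type": "SAI_OBJECT_TYPE_DASH_ACL_RULE",
            "attributes": ["SAI_DASH_ACL_RULE_ATTR_PRIORITY", str(i)],
        }

def apply_config(records):
    # stand-in for pushing records through the selected driver
    return sum(1 for _ in records)

@pytest.mark.parametrize("num_rules", [10, 10_000])
def test_acl_scale(num_rules):
    assert apply_config(make_acl_rules(num_rules)) == num_rules
```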
B
So I think this will result in a lot of productivity. I'll pause for any questions here.

B
Internally, we've been using it to generate proprietary vendor formats for configuring their devices, but we've finally taken this and we're standardizing one of its versions to generate these SAI records. You can think of it as more or less loops and nested loops, driven by parameters, that can create different scales for the different types of DASH configurations.
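A toy sketch of the loops-driven-by-parameters idea; the object and attribute names are illustrative only:

```python
# Outer loop over ENIs, inner loop over ACL rules per ENI, both driven by
# parameters. Scaling the config means changing the two numbers, not the code.
def generate_config(num_enis=2, rules_per_eni=3):
    for eni in range(num_enis):
        yield {"op": "create", "type": "SAI_OBJECT_TYPE_ENI",
               "attributes": ["SAI_ENI_ATTR_VM_VNI", str(1000 + eni)]}
        for rule in range(rules_per_eni):
            yield {"op": "create", "type": "SAI_OBJECT_TYPE_DASH_ACL_RULE",
                   "attributes": ["SAI_DASH_ACL_RULE_ATTR_PRIORITY", str(rule)]}

if __name__ == "__main__":
    for record in generate_config():
        print(record)
```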
B
For example, there might be something that generates ACL groups, another one that generates ENIs, you know, route mappings, etc. These basically contain DASH configuration know-how; it's just Python code, very easy to read, and these are all aggregated by what I call an uber generator that takes all of these and creates the final aggregate output. To give you an example, if you were to run something like a full-scale hero test, generate the configuration for that, and put it into a JSON file, it's 1.3 gigabytes.

B
So this config generator can create text files, which you could use later in some downstream tool, but we can also stream it right into SAI Challenger on the fly; you don't have to generate an intermediate file. These data records can basically just be read from the data source, record by record, and applied to the API.
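The two consumption modes might look roughly like this, with generate_config() standing in for the DASH config generator and apply_record() standing in for the SAI Challenger driver call; neither name is the real API:

```python
import json

def generate_config(num_enis=2):
    # placeholder generator yielding declarative records
    for eni in range(num_enis):
        yield {"op": "create", "type": "SAI_OBJECT_TYPE_ENI",
               "attributes": ["SAI_ENI_ATTR_VM_VNI", str(1000 + eni)]}

def dump_to_file(records, path):
    # materialize everything into an intermediate JSON file for downstream tools
    with open(path, "w") as f:
        json.dump(list(records), f, indent=2)

def stream_to_dut(records, apply_record):
    # stream record by record; no intermediate file, constant memory
    for record in records:
        apply_record(record)

if __name__ == "__main__":
    stream_to_dut(generate_config(), apply_record=print)
```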
B
So think of this as kind of a DASH config wizard. I just wanted to talk one more time about this diagram, which you've seen pieces of in the past. It shows the relationship between the different layers. We have the gNMI northbound, which is defined by a YANG schema.

B
Then we have the Redis server in the SONiC stack, which has an APP_DB that stores a version of this YANG-defined schema, but in Redis. The DASH orchestrator transforms that into ASIC_DB objects, which are basically SAI; you can think of those as SAI records in some form. Those get applied through syncd into the dataplane. So that's the transformation this goes through, and based on my current understanding, there's a slight difference between the gNMI schema and the SAI schema, and that's what this daemon handles.
B
The reason I point that out is that here I show this SAI generator creating records and generating all these outputs, but in reality we might have a gNMI generator that you can invoke, that goes through this stack, and that's something we look forward to working on with others. So hopefully that'll spawn future discussions, but we think this framework can be the basis for even testing the SDN work that Prince and others are working on.

B
We could use those same data files to define sai_thrift tests, SAI-Redis tests, and gNMI tests. Then we have a single source of truth for the DUT configuration, the packets that we're going to send, and the logic we use to test it. That's kind of the vision that we set down, gosh, almost a year ago, and we're finally seeing light at the end of the tunnel.
C
Are you going to be utilizing that schema in order to convert the YANG coming in from the gNMI configuration into the APP_DB, or is it something where we are coming up with a different schema to store all that configuration that is coming into the APP_DB?
B
Well, what I think we would do is define test cases that emit some kind of gNMI configuration, with gNMI configuration objects, and those would get applied to the SDN interface. It's the DASH container here, which Microsoft is producing, that would actually transform it into APP_DB objects, and then it's the orchestrator that would transform those into ASIC_DB objects. So what we're talking about is an external stimulus at the appropriate northbound interface.

B
So we're not purporting to do that, but we could. We could make that the scope of the project, or excuse me, not this project, but another project that says: given a gNMI configuration input, let's test that the APP_DB contents are correct. That's a possible test you could do, but that's not in scope of this project right now. Does that answer it?
C
Yeah, that answers the question. So in other words, what we are doing is that the transformer from the gNMI YANG to the APP_DB is essentially the part that's already defined by SONiC, let's say this DASH agent that used to be called Bluebird, right? And then, basically, we will have to have that one. So, since you mentioned that you plan to complete this next month, do we already have the DASH agent available?
B
Okay, I'll answer that more fully later, but the scope of this project does not include any of the gNMI work right now; that's a call to action for the community, so to speak, and I'll get into that a little more, but it's out of scope for our effort. We have an architecture here that allows it to fit into this framework, but we're not doing the actual work.
E
Yeah, in general, I think for the gNMI and the northbound part we are targeting mid-October. So that's what the current plan is.
B
Okay, so as part of this project, Keysight and PLVision are not doing a gNMI generator and are not doing a gNMI driver at this time; that's out of scope for our work. What we have is a place for it to fit in, and maybe the community wants to pitch in somehow and come up with a plan to do that.
D
B
Thanks, Cheryl. So as usual, Hannah, you've already read my mind in these talks, and we've got part of the call to action, but we'll revisit that at the end. And you seem to have a good way of predicting the end of the meeting really well.
B
C
Just a follow-up, sorry. So then, in that case, for the first phase of this project, what would be the trigger for driving any test cases, for example?
B
Can we table that one and come back to it after the demo? Yeah, I think we want to get through the demo to make sure we do that, because each one of these questions might lead to another fascinating discussion, and I want to give Anton his time. We can catch up, and this can also be an ongoing discussion, right? This is a work in progress, but we will be creating some test cases in the near future; that's the quick answer. So with that,

B
I want to turn it over to Anton, who is a software architect at PLVision. We've been working together on this, and he'll present this side of things, so go ahead and take the screen if you like.
F
Yeah, sure. Connection okay? So hello, everyone. I will switch from Chris's nice slides and start showing a little bit more code in my IDE. So first of all, I want to show the example of the test cases, because Chris showed on the slide that we have a number of different possibilities for how to write test cases and what to pass, and the main format, as we understand it, is that DASH config JSON-like format, where we actually describe the configuration entities that we want, and we can scale them.
F
The gen here is the code that is actually doing that; we can write direct cycles. So, for example, yeah, here is an example. That's a pretty well-known format, as I understand, where we have the SAI object type, we have keys, we have attributes, we have the operation, and we actually do it for all the configuration entities we need.
F
For the test development, you actually choose what to do. You can have a high-level format that is generated, or you can use a low-level SAI format. Right now this is a separate file, but this is just a Python dictionary; you can define it inside of the test code and do a single call to the SAI through the selected interface. So that's kind of up to you: a separate file, an inline definition, or doing it call by call.
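As a rough illustration of the call-by-call option, the inline variant might look like the sketch below; the entry layout mirrors what is shown on screen, while the field names, the process_commands() call, and the dpu object are assumed stand-ins, not the exact framework API:

```python
# Hypothetical inline record pushed through whichever interface the testbed
# selected (thrift or redis); no separate JSON file needed.
vip_entry = {
    "op": "create",
    "type": "SAI_OBJECT_TYPE_VIP_ENTRY",
    "key": {"switch_id": "$SWITCH_ID", "vip": "192.168.0.1"},
    "attributes": ["SAI_VIP_ENTRY_ATTR_ACTION", "SAI_VIP_ENTRY_ACTION_ACCEPT"],
}

def configure_dut(dpu):
    # 'dpu' is whatever driver object the framework built for this testbed
    dpu.process_commands([vip_entry])
```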
F
So you decide according to your need. And for the hero test, we actually expect it to be written in that format, because then we will translate it where we want, and it will be passed to the saigen script, which then can scale it; because if you really want a hundred thousand of some entities, it would be too much to write everything in the code or even in an external file.
F
So we will just see the difference, to see the output of the test framework. In addition, I want to show the testbed configuration format, because, you see, both of those test cases will be running on two different setups, and I don't need to make any changes in the code to run them on one or the other setup. They will be fully applicable to both through our wrappers, which were also mentioned on Chris's slide.
F
So let's take a look. Here I have one configuration file, the one for the DPU with snappi. You see that I have a DPU entity here, and I'm defining that it's the Thrift API interface; by the way, that could be the Redis interface, which is in place right now, and something we are expecting maybe in the future is gNMI.

F
So for me, right now, this target is BMv2. The next thing, which is the very important stuff, is the dataplanes. Here I'm using snappi and ixia-c, because in the other one I will have PTF as the dataplane, so I'm putting PTF there. This section for me is completely the same. Let's take a look at the pictures, by the way.
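For readers following along, a rough sketch of the two testbed descriptions being compared might look like this, expressed here as Python dictionaries; the field names and values are assumptions for illustration, not the exact setup schema (port 9092 is the sai_thrift port mentioned later in the talk):

```python
# Same DPU endpoint in both setups; only the dataplane block differs.
SETUP_BMV2_SNAPPI = {
    "DPU": [{"client": {"type": "thrift", "ip": "127.0.0.1", "port": 9092}}],
    "DATAPLANE": [{"type": "snappi", "controller": "https://127.0.0.1:8443"}],
}

SETUP_BMV2_PTF = {
    "DPU": [{"client": {"type": "thrift", "ip": "127.0.0.1", "port": 9092}}],
    "DATAPLANE": [{"type": "ptf", "interfaces": ["veth1", "veth3"]}],
}
```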
F
Also, I wanted to show some differences from the traditional test cases that we have right now, because there we have direct calls to Thrift and to PTF in the test config, and here we've kind of decoupled it. So the test config now would be a high-level format, the one that is used as the input for the SAI generator.

F
That's separate stuff. Plus we have SAI Challenger, which contains the wrappers that allow us to run the same code on different platforms, and these are the two platforms that I have right now: one is the standard PTF-based, Scapy-based dataplane, and another one is with snappi. But you see, the test config will be completely the same, so it's just agnostic; it doesn't matter what we have in the dataplane or what API we have for the DUT.
F
Okay, so let's actually go to the real demo. I'm using Chris's scripts for spawning the whole environment; that's already from some previous demos. So I'm running the BMv2 model right now in one window, and in another one I will run the sai_thrift server.

F
Yeah, so that's the link to the DASH test cases folder. Okay, so actually that's the same folder that we just saw in my VS Code.
F
Okay, so how to run test cases: pretty simple. That's pytest; I'm using verbose mode, and I'm defining the setup that I'm going to use. So let's start from the PTF one, the standard one, and then I select the test cases I will run, both of them, because I have inbound and outbound, by the way. Each test file contains three tests: setting up the configuration, sending traffic, and destroying the configuration. So in total we'll see six test cases.
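In Python terms, the invocation Anton describes is roughly equivalent to the sketch below; the "--setup" option name and the file names are assumptions for this sketch, not the exact repository layout:

```python
import pytest

# Verbose mode, a testbed selection, and the inbound and outbound test files.
# Each file holds three tests (apply config, send traffic, clean up), which is
# why six test cases are reported in total.
if __name__ == "__main__":
    pytest.main([
        "-v",
        "--setup=setup_bmv2_ptf.json",
        "test_vnet_inbound.py",
        "test_vnet_outbound.py",
    ])
```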
F
Okay, let's go. As I mentioned, one will fail; please pay attention: zero packets received, that's for inbound; five passed. And also pay attention, that was actually a run with PTF, and now, yeah, that's the output in the PTF style, something that you're used to seeing in the existing test cases. And right now I do not change anything in the code.

F
Here it is; again the same result, one failed, five passed, yes, the same, no packets captured, same reason, but you see different output, because right now we were using snappi, and it uses separate methods for running traffic, and therefore you get a little bit different output. However, the test itself doesn't require any change; something that you write, for example, for the model, you can then run on the real hardware. That's the idea of that framework, by the way. So, any questions?
E
Hey Anton, this is Prince here. So how is it different from the current sai_thrift framework and the tests that we have? Is it just a replacement of PTF with snappi, or is there some other value-add?
F
No, we are not replacing it; we're building on top, because under the hood it's still the same stuff. But what we are adding here is that we already have an abstraction over the DUT API and the TG API, so we can use the same part of the test case but run it against the sai_thrift server, against the Redis server, and in the future against gNMI as well. So we do not need to recreate, to have separate test coverage for each level of our verification, because currently...
E
Oh, I see, I see. So, okay, the test configuration can change in the future for gNMI, and then we can run the same test to cover that scenario as well, right?
F
Okay, and so, for example, at this moment, what Chris mentioned: the SAI commands that we have here can already be translated to both, to Thrift or to Redis. If you write the test case, for example, in such a format, you can already run the same scenario on both platforms, but the real target actually is to have the test case in the declarative form for DASH.

F
So that will be completely possible, and even to do some verification at very early stages, because anyway we are building everything around SAI. So in that case, vendors can even verify that, if the framework produces, let's say, a list of SAI commands, then with their driver everything goes well, and they can validate at very early stages.
E
So
where
is
this
getting
translated
to
the
actual
psy
configuration
this
this
one
that
you're
highlighting.
F
Yeah, so right now, from that configuration to that one, we are translating it through this agent, and SAI Challenger itself takes that format of SAI objects and goes through the client, through Thrift or Redis, based on what you define in your setup JSON.
B
F
Yeah, so actually Chris showed it here. Yeah, so that's actually how it goes. What I just showed, the declarative form, is related to that block. Also we can have literal SAI records, so we could have some program, maybe some low-level one, to push something directly. And also we have the wrapper, so we have the Thrift driver, the Redis driver, and, for the future, possible translations to make, because those are just modules that are loaded dynamically. But yeah, that's the idea, actually.
E
F
B
F
They're using PTF, okay, and all those parts are actually part of that framework, so it's only a technical question of how to wrap them inside of that picture, because that's possible; we're already using Thrift. And the question is, okay: if we do not want to have the high-level format, we can do low-level calls like this one introduces.

F
That's one thing that we need to adopt here, and another is actually PTF; you don't need to do anything. So let's take a look at my insert-test code, at my traffic code: that's just almost, not almost, an exact copy-paste from the GitHub. So it's completely the same; you don't need to do anything, it's the same.

F
That's what we have at this moment. In the future, for the hardware traffic generator, we might have some specific functions that are not possible to run with PTF, but the existing functions from the PTF world would be launched through it as well, so it's no problem to transform them. From that point of view, the tests are completely the same; you only need to create the proper pytest fixtures to run the existing code.
B
Yeah, I'll touch on your question also, Prince, in a moment when I kind of wrap this up; I'll circle back, and I'm going to almost turn the question on its head, and you'll see why I say that in a moment. Anton, is there anything else you wanted to show before I go to some of the conclusion slides?
E
I have a question for Anton or Chris. I thought that, when using SAI-Redis, we are writing the entry directly into the ASIC_DB, which means we need to create an entry in a format that ASIC_DB expects, and from that point down it looks like it's SONiC on top. First of all, it uses a portion of SONiC right here, using the ASIC_DB, and that uses the pub/sub mechanism just like how SONiC would normally run: syncd will get a notification and then call the corresponding hardware APIs, right, using the SAI API implementation.
E
F
B
The simple answer is that in these test cases, the form of the DUT configuration is done in a way that can be translated directly into SAI-Redis CRUD operations or sai_thrift API calls. That was the whole point of this framework, right? The data itself sort of contains SAI objects, but they're abstracted, and with PLVision this is already in play.
B
This
is
what
sci
challenger
already
does
it
creates
psi
redis
entries
in
the
appropriate
format,
so
this
is
like
over
a
year
old
that
part
of
it
we've
added
the
scithrift
and
and
these
other
concepts
to
this
right.
So
yes,
if
the
vendor
has
implemented
this
and
has
this
all
running
on
their
device,
even
without
the
rest
of
sonic,
this
test
can
be
done.
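A minimal sketch of that idea, with stand-in driver classes rather than the actual SAI Challenger ones: the test's SAI records stay the same, and the selected driver decides whether each record becomes a SAI-Redis (ASIC_DB) entry or a sai_thrift API call.

```python
class SaiRedisDriver:
    def apply(self, record):
        print("ASIC_DB entry for", record["type"])

class SaiThriftDriver:
    def apply(self, record):
        print("sai_thrift create call for", record["type"])

def apply_config(records, driver):
    # identical records, different backend
    for record in records:
        driver.apply(record)

if __name__ == "__main__":
    records = [{"op": "create", "type": "SAI_OBJECT_TYPE_VNET",
                "attributes": ["SAI_VNET_ATTR_VNI", "2000"]}]
    apply_config(records, SaiRedisDriver())
    apply_config(records, SaiThriftDriver())
```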
E
It would be really great to have this framework that can generate large amounts of configuration, and it would be really good to use it with sai_thrift.
B
Yeah, both. So thanks for all the questions; I know this is a lot in one big gulp. That's why we wanted to give a sneak preview, to lay the groundwork for the future as well. So let's just talk about some of the deliverables we're hoping to do next month sometime. This will be upstreamed; SAI Challenger is already part of OCP, so this will just be a pull request, and PLVision has been the maintainer of that anyway, since they created it. And then there's the integration into DASH.
B
It's
already
doing
it's
going
to
be
a
pull
request
so
that
this
gets
pulled
into
dash
as
a
git
sub
module.
It
creates
containers
for
running
all
the
client
software,
so
it'll
just
be
like
the
current
dash
bmb2
pipeline
test
framework.
There'll
be
a
client
container
that
has
everything
you
need
to
run
tests
we'll
have
a
few
initial
test
cases
we've
been
focusing
on
the
framework
and
not
the
scale
of
the
tests.
We
hope
the
test
cases
are
easy
enough
to
to
mimic
that
people
can
start
writing
their
own
as
well.
B
That's always the unspoken assumption: we're trying to plant seeds here to empower people to do more of this themselves. We'll be verifying these against the behavioral model, using software traffic generators, and at some point in the future, when vendors have a device with SAI support, meaning concretely, give us a DPU card with a sai_thrift server running on it on port 9092, we'll start playing with hardware as well. So that's a TBD exercise, but we're really looking forward to it. And then there are other possibilities.
B
One
is
this
gnmi,
which
hanaf
already
planted
the
seeds
for
that
discussion,
but
we
need
to
come
up
with
formats,
generators
and
api
drivers
and
test
cases,
and
we
don't
have
the
manpower
right
now
to
do
this
ourselves,
so
we're
looking
for
people
to
volunteer
or
if
someone
wants
to
sponsor
our
wonderful
friends,
appeal
vision
to
do
this,
I'm
sure
they'd
be
happy
to,
but
this
this
is.
I
think
this
would
be
a
fun
project
and
then
there's
also
the
possibility
of
taking
some
of
this
work
and
porting
it
back
into
ptf.
B
So this is where I wanted to turn the question that Prince asked on its head a bit: can we still use PTF tests? I'll jump to this one.

B
There's no reason why they couldn't be done. It's just software; there's no high tech involved here, it's just work, and it probably can be done, and I think it'd be kind of exciting. I did a PoC a year and a half ago putting a snappi test into a PTF test; I just did that in a P4 workshop as a proof of concept, so it can be done. There are some caveats with PTF, in that it sort of assumes the dataplane is Scapy and that it's always there.
B
So
you
might
want
to
write
new
test
cases
in
a
little
more
clever
way
to
be
able
to
work
this
way,
but
it
can
be
done
so.
Ptf
can
get
kind
of
like
a
vitamin
shot
from
this
project.
If
we
want
and
then
finally
do
we
want
to
make
bmv2
do
we
want
to
make
a
virtual
version
of
sonic
running
bmb2?
I
think
that's
a
large
scale
project.
B
It
means
we
have
to
have
the
sync
d
integrated
with
vmb2
and
the
libside,
and
you
might
even
raise
questions
you
know.
Do
we
need
underlay
etc,
but
the
idea
is:
do
we
want
a
full
software
implementation
of
dash
that
we
run
in
the
cloud
for
cfcd
testing
that
there
is
work
to
be
done,
and
I
think
that
could
be.
B
You
know
future
discussion,
so
that's
some
future
possibilities
and,
finally,
that
I
guess
the
call
to
action
out
of
this
is
like
some
feedback
today,
if
possible
or
anytime,
we
probably
need
to
close
some
gaps
in
the
current
b
and
b,
two
to
run
more
meaningful
test
cases.
We
really
want
to
run
stateful,
processing
or
other
types
of
tests.
There
may
be
some
things
to
finish
before
we
can
do
those
test
cases,
so
I'm
hoping
some
of
this
can
filter
into
the
behavioral
model
working
group.
B
Then
we
can
maybe
agree
on
some
kind
of
a
goal
saying:
okay,
we
want
this
amount
of
functionality,
so
we
can
start
writing
these
test
cases,
because
what
we
really
want
is
to
have
test
cases
that
we
can
then
apply
on
hardware
and
show
that
they
match
both
in
software
and
hardware
who
wants
to
work
on
gmi
and
then
we'd
like
to
help
writing
test
cases.
A
Yep,
I
think
those
are
the
the
big
big
needs
right
there.
Chris
thank
you.
B
A
If
someone
would
want
to
participate-
and
you
know,
do
test
cases
and
have
questions
or
anything,
could
they
reach
out?
Should
they
reach
out
during
this
meeting
or
individually
or.
A
D
I
would
like
christina
for
a
formal
list
of
work
items
and
then,
where
we
can
see
volunteers
going
gain,
I
I
don't
want
to
see
two
weeks
from
now.
We
don't
have
any
volunteers,
so
I
think
we
need
to
create
the
list
of
work,
prioritize
that
make
sure
we're
aligned
with
that
and
then
start
putting
names
who
can
volunteer
and
and
help
with
this,
because
I
don't
want
this.
This
is
super
super
important
stuff.
D
This
is
quality
and
we
want
quality
built
into
the
into
dash,
and
if
we
don't
get
volunteers,
it's
really
you
know
we're
not
going
to
have
the
quality
that
we
want,
and
so
it's
as
important
as
designing
you
know
the
dash
orchestration
agent
and
it's
it's
it's
along
the
similar
lines.
D
Testing
is
job
number
one
quality.
Remember
you
know
dash,
can
only
output
a
golden
image
of
a
behavioral
model.
We
need
to
know
that
it's
correct
and
then
you
know
all
the
hardware.
Vendors
will
get
all
the
benefit
of
being
able
to
run
those
same
tests
against
their
hardware,
and
so
that's
like
huge
right.
It's
a
huge
benefit
to
the
community
overall.
So
we
we
do
need
to
formalize
the
list.
I
think
I
think
that's
the
job
number
one.
What
needs
to
get
done
make
sure
that
prince
and
others
on
this
call.
D
B
So
I
want
to
thank
fuel
vision
for
all
the
great
work
on
this
project
to
date
and
also
just
having
something
like
site
challenger,
which
was
you
know
all
ready
to
be
the
framework
for
this
once
they
once
I
saw
their
presentation.
I
had
that
in
the
back
of
my
mind
for
the
last
year
and
now
we're
finally
seeing
the
exciting
stuff
running
it's
gratifying.
B
Also,
we've
been
working,
you
know
one-on-one
with
different
dpu
vendors,
which
I
won't
get
into
specifics,
but
a
lot
of
their
knowledge
and
know-how
has
gone
into
creating
like
the
generator
and
verifying
it
actually
configures
real
hardware
and
does
real
tests,
and
you
know
microsoft
has
been
interacting
with
us
a
lot
on
the
as
reality
check
on
all
this
config
generator.
So
we
appreciate
all
the
help.
A
lot
of
hands
went
into
this
by
the
way,
some
of
them
behind
the
scenes.
Well,
thanks
everyone
yeah.