From YouTube: DASH Workgroup Community Meeting 20220608 (June 8, 2022)
Description
Review of Testing CI/CD
Q&A
A: Yes, yes, great, great! So, you know, Hanif mentioned it might be nice to have an agenda each time before the meeting, so I attempted to create an agenda. But I know that today Keysight wanted to give a talk about items that are top of their mind, and we also had a really nice discussion with Keysight yesterday internally, and then I was hoping we could talk about the SAI library generation, if we're ready. So that's what I have. Does anyone else have something we want to put on the agenda?
A: Marian, did you want to talk about this commit from yesterday?
C: That was sitting there for a few weeks. I managed to fix the conflict, and now it's in. It was actually Chris who was asking for that one, because...
C: Plans for the CI/CD pipeline in the DASH repo were pending this commit. So yeah, I finally fixed all the conflicts, and now it is in.
A: Awesome, awesome. Anything else you wanted to share while we're waiting for Chris?
A: Okay, that's fine. And just for everyone else's edification, if you're not in the high availability or behavioral model meetings: we've been working through those, trying to continue to make progress, progress with requirements, especially for HA, and we've also been working through the behavioral model items in the project. So that's been great, and I appreciate the contributions from everyone. And I know we should probably generate a list of, you know...
A: The SDN team right now is very focused on delivering their proof of concept out into our data centers, but in a couple of weeks they might have some time to come back and visit with us about our questions. I know they have quite a few things on the list.
A: And of course I know we have more as well. Hey Chris, have you joined?
A: Oh, that's okay! That's okay! Sorry about that, Chris. We talked briefly about the conflict fixed by Marian for this commit right here, and I'm going to go ahead and stop presenting. I thought Keysight had possibly wanted to talk about a few items on their end.
A: I'm gonna stop presenting in case you need to present.
D: Hello, hey everyone, nice to be here again. I wanted to give a little update on where we are with respect to CI/CD testing. I've been talking about that off and on for a while, and I wanted to just kind of give an update, get feedback from the community, and also talk about what some of the dependencies are and where we maybe could use some help.
D: So this is what I plan to go over, and I just wanted to refresh people with some diagrams presented a few times in the past. This just shows a diagram of testing two different P4 models. One is the bmv2.
D: The other is a future P4-DPDK version that Intel is developing. This shows software traffic generators testing the P4 model, and we wanted to run it as GitHub Actions in a CI/CD pipeline. The idea is, when you do a commit of some kind of change...
D: Yes, this is what's currently being built, and this is futuristic. So right now I have a...
D: I mean, there are probably a lot of other differences we could get into in the future, but the real distinction for the purpose of today's discussion is primarily just the performance of these two approaches.
D: Well, DPDK: this implementation is very performant, right? It can probably do a million packets per second or faster. This bmv2 is really more of a metadata-driven engine that runs at hundreds of packets per second kind of speed; someone can contradict me if they think it runs a lot faster than that. But generally speaking, one is like an educational simulation tool and the other a commercial-grade data plane, and they're both pure P4.
D: Yeah, PTF and beyond. There's another approach too, which I'll talk about, where we use a much more performant traffic generator; it has an open API, and actually there are quite a few deep dives on this.
D: I don't want to spend too much time on some of the details, so I'll just cover the broad strokes, but I did do a talk with Reshma from Intel about this whole approach, where we talked about DASH and about DASH testing. There's a video link at the end of this slide deck, which I will share, so you can watch that P4 workshop talk that explains this in more detail.
G: Hey Chris, this is a question for you. My question is pretty fundamental, right? The whole purpose of having a P4-based definition for DASH behavior was, you know, to have a well-understood behavior specified. Now, I understand that using bmv2 for testing what we define in the P4 makes sense, but why should we expand this to P4-DPDK and all the other methods?
G: I understand that from a P4 perspective, yes, that probably makes sense; but from a DASH perspective, does it make sense to add all these different paths? I mean, on performance targets, DPDK performance is not going to match the performance requirements of SmartSwitch or other appliances that, as you know, Microsoft needs anyway. So the intention should be testing the behavior, not really performance, for which bmv2 should be good enough, right? I'm just trying to understand the rationale.
H: Yeah, Chris, before that, if I can jump in quickly.
H: For P4-DPDK, you know, it's DPDK with P4 programmability, right? So we have the P4 program for the DPDK backend that already has the features we need for the underlay, as well as connection tracking and the overlay. And what we are doing is to have SONiC running in a VM, a SONiC soft switch, with P4-DPDK. Then you basically have the whole SONiC software stack, the same one that can be used for the soft switch as well as for hardware.
H: So once we have the DASH container and all the code required for DASH in SONiC, we can use the same stack for both, including for the development environment. And, you know, there are some NICs that will run DPDK, for example, which are not SmartNICs or IPUs with high power and high memory capacity and all that.
H: So this has a lot of advantages. Mainly, for the community, it will be PNA-compliant, it will have connection tracking and overlay features, and the SONiC software stack can be used for both the soft switch and hardware; the same stack can be used.
D: Yeah, and just to be clear, this is not, you know, in the next few weeks. Reshma can comment more on the timing of this, but this will follow the initial work, and by the time this is available for this kind of test, we'll have invested a lot of time in the common framework to make all of this work, so it should not be like twice the amount of work to test both models.
G: You know, Reshma, I have no argument against the DPDK advantages over a software simulator, right? But my concern is that when we start an effort like this, over a period of time it will diverge and create confusion about what the real target to be testing with is. That's what I've seen in the past too, and that's the reason I'm bringing it up.
H: Sure. I think, you know, with the test scenarios that he has listed in the DASH repo, there are basically several options and several levels of how we can test: below SONiC using SAI-PTF, right, and using SONiC. And when we talk about using SONiC, how we test in the soft switch, I think this will be advantageous, because we are using SONiC in a VM.
H: We can instantiate multiple VMs, use the P4-DPDK pipeline, and write our own P4 programs, and we can provide the P4 programs that we have, which we will open source as well, covering the underlay and connection tracking, etc. So yeah, it will be a full stack, even in the development environment, usable anywhere, including on some of the NICs you may have that are not as powerful as the IPU or DPU. Those are the advantages I see.
H: Yeah, there were similar thoughts earlier as well. Yeah.
D: I think it's a really good point. Why don't we just keep that as kind of a discussion item? That's probably going to be ongoing; we'll take those comments and then move along with this presentation, and that can stay an open item, because there's nothing being done right now that's going to derail one choice or the other. I think right now they're kind of like two future possibilities.
D: So let's just keep that in mind as we go forward. Thanks. So, yeah, skip the animation. I just want to talk about the P4 model testing, regardless of which model; for now I'll just say it's bmv2 for the current discussion.
D: When something happens, like a commit or a pull request, it will trigger this set of actions, and here's what gets done when you trigger the actions. Right now, all the stuff in green can be done manually with Marian's latest merge, which was very timely, and I actually reconfirmed all of this this morning and last night, that it all works by just doing these manual steps. It builds the Sirius pipeline P4 code, and it creates the SAI headers from the P4Info.
D: Yeah, so this is what's working today. This was actually working many weeks ago; I've tried this out and been kind of a guinea pig along the way, and it's pretty cool. This is what the repo looks like as of last night: you just do these steps and it all works. Some of the steps we still need to complete this test automation: we need to integrate the saithrift server, because right now this is just a SAI library binding.
D: We need to configure the device under test through saithrift, spin up a traffic generator (a software one), and actually pass traffic between the ports, from the virtual Ethernet port to the data plane. Then, longer term, we want to successfully test the higher layers in the stack so that we're ready for true SONiC integration, and finally use these same test cases for hardware tests, using hardware traffic generators and actual SmartNIC cards. So that's kind of a broad overview, and this is just for reference.
D: I've talked about this diagram a number of times, and I also talked about it in that video I mentioned. This is kind of the overall workflow, where the P4 code that's part of the behavioral model working group is used not only to make a software target, a bmv2 target, but also to generate all these artifacts, which can be used to do testing. It generates SAI headers, and it generates a SAI library.
D: That can be used to generate a saithrift server, which you can actually use to test the target, and then test scripts would configure the device using saithrift, configure a traffic generator, send packets to the device, and test. This can all be triggered manually or by a GitHub Action. So this is kind of the broad strokes.
D: One other thing I wanted to mention: I would like us to have test cases that can test the different layers of the DASH SONiC stack, starting from the bottom up. First we're testing with saithrift, the SAI API of the implementation, and this will be a software implementation; later, as you work your way up the SONiC stack, you want to test the SAI-Redis interface, which doesn't use all of SONiC.
D: It really just uses the Redis and syncd parts of SONiC. And then finally, testing through the gNMI northbound interface, which is the SDN interface. In principle, you could write one test that could be used to drive all three of these layers. It remains to be seen how well that will work, but we're going to proceed as if that is possible, by using the appropriate drivers and wrappers.
D: A test runner gets allocated in the Azure cloud, and this is free for public projects; you can read about that in one of the links at the end of this slide deck, where GitHub explains how to do it. It will build, or retrieve a pre-built, Docker image containing the toolchain. This is the 'make docker' target that Marian created.
D: So it builds all these tools into a Docker image so that you basically have a complete development environment for building everything else. It's going to pull all these tools in from, you know, the Ubuntu package repository, build them, and then you have a Docker image which is basically a nice little self-contained build environment for everything else.
D: Eventually, the saithrift server, which is not done yet: it will launch the switch and the saithrift server. Then we'll launch some traffic generators, which we'll talk about; we have this free DPDK traffic generator that's much better than scapy, and it can actually go up to quite performant levels.
D: Then it runs pytest to configure the data plane, send traffic, and analyze the results; produces some kind of report; probably does an automatic status badge in the GitHub repo, so you can see pass/fail; and then shuts down. And that can all happen just by doing a pull request. So that's the grand vision. I'll stop for questions.
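The configure / send / analyze loop described above can be sketched as a pytest-style skeleton. The data plane here is a trivial in-process stub invented for illustration; in the real pipeline it would be the bmv2 switch reached over veth ports, with traffic from the DPDK-based generator rather than plain Python.

```python
class StubDataPlane:
    """In-memory stand-in for the device under test."""
    def __init__(self):
        self.acl = set()

    def configure_permit(self, src_ip):
        self.acl.add(src_ip)

    def forward(self, packet):
        # drop anything not explicitly permitted
        return packet if packet["src"] in self.acl else None

def test_permitted_traffic_passes():
    dp = StubDataPlane()
    dp.configure_permit("10.0.0.1")           # 1. configure the data plane
    result = dp.forward({"src": "10.0.0.1"})  # 2. send traffic
    assert result is not None                 # 3. analyze the result

def test_unpermitted_traffic_drops():
    dp = StubDataPlane()
    assert dp.forward({"src": "192.0.2.9"}) is None

test_permitted_traffic_passes()
test_unpermitted_traffic_drops()
print("all checks passed")
```

Each test's pass/fail result is exactly what the CI job would aggregate into the report and status badge.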
E: So, Chris, just a quick question: this will basically start as soon as the pull request is submitted?
E: Okay, okay, yeah. So this will become like one of the gates for something to be accepted, for approval purposes, right?
D: Correct, correct. And, you know, a lot of companies use this process internally; for our development team at Keysight, this is baked right into our process. You do a commit and it kicks off the pipeline, and you can see immediately whether all your regressions passed or failed. Diana here can attest to the value of that.
D: So, let's just talk a little bit about progress to date. We literally just started this yesterday, and we've got some good progress, but we also flagged some issues which we need to work on. There are a couple of things: one is the automation part, which we started last night; the other is the overall project of the framework, getting resources, schedule, and dependencies.
D: You know, Keysight wants to make a lot of contributions in this regard, so we're still studying the overall plan for all these test cases and the larger test framework, but right now Diana's working on the automation part and testing the GitHub Action, just the basics. The first thing we want to do is reproduce all the manual steps that Marian documented in his README: making the Docker image, kicking off the switch, and making one table access over the SAI interface.
D: And here's a picture of the console output you get when you kick off a job. I guess, Diana, you did a commit to your repo and it kicked off?
B: We can even show it, Chris, if you want to, in the...
D: Oh okay, sure, we'll do that in a minute; that'd be great. So it kicks off a job, and this is the first step, 'make docker', right? And it's pretty slow: this is at an hour and five minutes and it's barely gotten started, because it's kicking off a two-core instance of a GitHub runner in the Azure cloud. You don't get the biggest, fastest CPU all to yourself.
D: You get whatever they give you, because it's free, so it takes a while to run, and you can see that after two hours it finished building the Docker image. Now, if you do it locally on a fast tower, it's much faster than that, but this is a cloud build, and I'll talk about what we're going to do to try to speed this up, because you don't want to do a commit and wait hours to see if it passed, right?
D: You want to know pretty quickly. And, you know, we're just starting this. So, yes?
D: It's like I paid you to set me up for the next part of this talk; good job! Yeah, I've got smart people on this call. You can give about half the talk and then just end the meeting, because everyone can figure out the rest. So we did another run, where that first one just built the Docker image, and then we added more steps where we tried to build the software switch, and we ran into some error; I think it's probably a resource issue, or something with the console.
D: You probably can't read this, but here it says 'docker run -it'. That means an interactive terminal, and since it's running in the cloud, you have to turn that off. So, you know, we're just starting to adapt this manual process into an automated one, and you just have to kind of peel the onion layers.
B: Let me show you... let me show it next time. Okay, yeah, let's make sure to cover the issue with the Docker image having to be built every time, as opposed to...
G: Have... I don't.
D: You know, just really quickly: what we need to do is figure out a workflow where we can build this Docker image and store it in some kind of a Docker repository, whether it's Docker Hub or somewhere in the Microsoft Azure cloud. That's a dependency that we want to discuss in this meeting, but we may as well talk about it now.
D: We need help, or suggestions, on getting a resource where we can store a pre-built Docker image and retrieve it on the fly. Is anyone in this meeting familiar with how that might already be done in SONiC? Is that a common workflow; is there some kind of Azure, SONiC, or Docker Hub registry that we can use?
D: Okay, so that's one of the dependencies that we need to explore. We'll ask around, and we might need some help from the Microsoft side to identify a resource. We might set up some temporary Docker Hub just so we can do development on this, but we'll need some kind of a place to store pre-built artifacts, because this whole CI/CD pipeline should run in a couple of minutes.
A: Prince is on the call; he might know, unless he's... okay.
I: Oh okay, so we have... okay. At least for SONiC, we have some Azure pipelines and SONiC storage. I'm not that familiar with how the build team is handling that, but if you need, we can have some offline discussion with the build team and understand how and what they are doing.
I: Sure, I think I got it. So maybe, Krishna, we can get the requirement and have an internal discussion with the build team on how to facilitate that.
F: Should I ask further: is it a possibility to get, like, a donation into this project, so we don't use a free VM and instead use a more powerful one to speed it up further?
D: I guess that would be another part of that discussion that Prince mentioned; maybe just figure out the best way to speed this up without trying to boil the ocean, so to speak: just get some faster private runner, right? That would be a private GitHub runner. There are kind of two types of runners: there are the free ones, which are floating, get assigned on the fly, and are pretty low performance.
D: You know, like 14 gig of disk space, seven gig of RAM, and two cores. And then you can have dedicated runners, which are speedy. So we want to ask those questions too; thanks for that.
D: So let me go to the next slide. Let's see: what are some of the dependencies, going forward, to make continued progress? We need to get the saithrift server linked into what Marian has already put together for us, and I'll go back and refer to this... you can see all these silly animations I should have turned off. We don't have this part grafted onto libsai right now. What we have is libsai, and then, if you replace this box with a simple C++ program that does a couple of table accesses, that's what's working right now. What we need is this saithrift server, so we have a general RPC mechanism to run test scripts against. So that's an effort, and there are two parts to that; let me go back to my list.
D: Certainly Intel is working on enhancements to saithrift, and Reshma has spoken about that before: they're modifying the existing saithrift framework to be able to handle, you know, smart devices with two ports, not full SONiC switches. Eventually that needs to get merged into the OCP SAI repo, although we could use a development branch right now; we're kind of experimenting here, so we could just use the dev branch that's underway, if it's close enough. But then we need to integrate that into the Docker build inside DASH, so that we have a full server.
D: So we need help to do that, preferably from someone who already has a lot of experience in this saithrift server area. You know, we'll need to do some kind of a Git submodule or something to pull it into the DASH project and link it all in. We're going to need to choose some exemplary test cases and agree on what we would like to test first, just as a kind of 'hello world' for this whole framework. That's something we can agree on.
D: We need the P4 model working, to be able to pass some tests. You know, you can even just test it on the command line with scapy or something: just configure the P4 model with some kind of a service config, send a packet into it, and make sure the output is right. Then we can work on automating that. But we need some kind of a model, and we need to pick milestones and checkpoints, like 'okay, the model's ready to do X'.
D: This is kind of a big one, and I've been talking about it for several months, but it's getting to the point where we'll need it. We need some kind of a configuration schema that can describe the configuration of the DASH data plane, so that we can say, 'well, here's a config'. We don't want to make everything purely procedural, where you have to write a program every time with a bunch of SAI access commands. We want, for example, a JSON file that says 'here's the config', and there are some provisional ones in the repo right now that Prince put up several weeks ago; it's kind of an example of the kind of configuration. But we really need to settle on something that's semi-stable, so we can use that as our common test format. I've been talking with Prince on the side about this, and we'll probably have some little working sessions to talk about it.
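The declarative-versus-procedural point above can be sketched as follows: the test carries a data file, and a small loader turns each entry into the corresponding SAI access. All key names here are invented for illustration; the real schema is exactly what is still under discussion.

```python
import json

# Hypothetical declarative test config: data, not a program.
CONFIG = json.loads("""
{
  "enis": [
    {"mac": "00:aa:bb:cc:dd:01", "vni": 1000, "underlay_ip": "100.64.0.7"}
  ],
  "routes": [
    {"eni": "00:aa:bb:cc:dd:01", "prefix": "10.1.0.0/16", "action": "vnet"}
  ]
}
""")

def apply_config(cfg, create_entry):
    """Walk the declarative config and emit one create call per object."""
    for eni in cfg["enis"]:
        create_entry("eni", eni["mac"], eni)
    for route in cfg["routes"]:
        create_entry("route", (route["eni"], route["prefix"]), route)

# In a real test, create_entry would be a SAI binding; here we just record calls.
calls = []
apply_config(CONFIG, lambda table, key, attrs: calls.append((table, key)))
print(calls)
```

The same JSON file could then drive any of the configuration layers, which is what makes a semi-stable schema the common test format.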
D: And then we need this Docker image repo of some sort, which we talked about, and, as was alluded to earlier, more powerful GitHub runners, so this can run even faster. So these are some of the dependencies, and I'd say the first thing we need is to talk about how to get this done. We can talk about it here, offline, or in another meeting, but this is going to be coming up pretty soon.
D: Yeah, and let me just see what the next slide is, just to see if there's any... oh, let's talk about next steps, and then we can back up on these things. I think the needs are here. So the next steps: getting this current CI runner going (Diana's working on this, you know, in real time, and I feel pretty confident that it will get resolved pretty soon), and the saithrift enhancements for DASH, which Intel has already been working on for some time.
D: The P4 working group needs to decide on some level of functionality and say, 'this is what we're shooting for', and we can confirm it works. You know, we need to be able to send a packet into the data plane, get it back out, and say, 'yes, this does what we expect'. Then what we want to do is automate that into the framework, define a JSON configuration schema, get the Docker repo (and, I should say, also maybe faster runners), and then we should decide at some point...
D: ...when we are ready to have a regular test working group meeting cadence, because this could probably be its own focus. I know we have a lot of meetings going on already, but people could just focus on this one topic and not necessarily make it part of this weekly meeting. But that's a choice.
D: So that's just kind of a snapshot in time of my thoughts on this, and I welcome discussion and feedback. This is just a proposal.
E: Well, this is great work. Thank you so much, Chris, and, you know, the Keysight team; I really appreciate it. I think this will go a long way in really ensuring that we have quality work coming in, because once we have this gate, we can really determine that whatever is getting accepted is thoroughly vetted. So it is excellent.
E: So, one question. You know, this workflow really talks about running the regression to ensure that, okay, we're protecting whatever already exists, right? Now, the question here is: if somebody is putting in something new, then that something new should come with its own set of test cases, right? How do we ensure that we actually make that part of the complete test inventory, the test suites that we have?
D: Yeah, that's a great question, and it gets into this area of policies and conventions for this project, right, as we get a little more serious; no pun intended with the Sirius pipeline.
D: So I kind of came up with this thought I was sharing with a colleague: going forward, if someone adds a feature or makes a change, they're responsible for the tests. If they break a test, or a test has to be changed because we changed the behavior, then that party is responsible for both; they can't break the repo, so to speak. That's kind of the convention a lot of companies follow internally.
A: Well, I have two thoughts on that. At Microsoft, we used to have an entire testing group: STE, software test engineer, alongside SDE, software development engineer. As we moved along through the years, you know, because testers are always intent on testing and breaking things, and developers sometimes have blind spots as to how to test, Microsoft moved to wrapping the testing into the developer's work: if you develop, you should test as well.
A: I don't know how well that works, because it's just a different mindset. That's just my opinion.
D: I'd like to comment on that. First of all, on the projects I've been working on at Keysight more recently, like with Diana and others, we do exactly that. We don't have a test group; we are the testers, and we write the tests when we do a feature.
D: But when you really step back and look at the big picture, it's actually easier, and other companies have adopted this; this is really continuous integration in a nutshell, and it's kind of the modern mindset. Throwing it over to an SQA group is sort of a thing of the past. And it's a well-known maxim that the later in the production schedule a bug is found, the more expensive it is, and that cost goes up exponentially.
F: From experience with Python projects and other projects, there are coverage tools which will give a percentage coverage, and then you can automate the pipeline and say: if you don't meet the coverage, just reject it. That could be the ultimate goal. But in this case I do not know how that could be realized, because I don't know if there are tools that can provide coverage for the tests that we will add, in order to be able to automate that portion and make it not just a policy but actually enforceable automatically.
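The enforcement half of the idea above is simple to sketch: parse a reported coverage percentage and fail the pipeline when it falls below a threshold. Tools such as coverage.py and pytest-cov can apply a threshold themselves; the minimal sketch below, with an invented threshold, just illustrates the gate logic.

```python
COVERAGE_THRESHOLD = 80.0  # illustrative value, not an agreed project policy

def coverage_gate(report_line: str, threshold: float = COVERAGE_THRESHOLD) -> bool:
    """Return True (pass) when the reported coverage meets the threshold.

    Expects a summary line such as 'TOTAL 1234 56 95%', the shape
    coverage.py prints at the end of its report.
    """
    percent = float(report_line.split()[-1].rstrip("%"))
    return percent >= threshold

# The CI step would exit nonzero on a False result, blocking the merge.
assert coverage_gate("TOTAL 1234 56 95%") is True
assert coverage_gate("TOTAL 1234 400 67%") is False
print("gate logic ok")
```

The open question from the discussion remains whether an equivalent coverage signal exists for the P4 / data-plane tests, not how to enforce it once measured.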
D: That's a good point, and, you know, this is going to be incremental, right? We're not going to suddenly dump a giant process on everyone and make their life a drag. We want to do this in an organic way, so that everyone is seeing the benefits along the way and not feeling like it's a chore. But we want to stop bugs early; that's the bottom line.
D: Companies like Arista... my ex-boss went there some years ago, and he said they hardly have any QA department; they make sure power supplies are working and cards plug in, and the developers write all their own tests. No one can argue that they aren't doing pretty well, and the newer companies pretty much do that, I think, almost from day one. So anyway, thanks for letting me present this, and I'm hoping we can...
D: I will share them; I'll upload them into the slide deck, and then Christina can get the link.
A: Put everything in... but yeah, a link would be great too. Thank you. Any other agenda items today from the rest of the crowd?