From YouTube: DASH Workgroup Community Meeting June 22, 2022
Keysight presenting automated CI/CD test w/Q&A
A
In the DASH meeting, we wanted to cover possibly a ten-minute Q&A from Prince regarding the presentation he had provided last week on the SONiC HLD, and we also wanted to allot a nice chunk of time for Chris to go over the test automation pipeline. So, if we could get started: Prince, or did anyone have comments or questions for Prince, ready to go?
B
The seeds to, you know, start building more and more test cases against the data plane. And, you know, Marian created this dash-pipeline directory, with all the build artifacts and Makefiles, etc., and Dockerfiles, quite a while ago, and so we just built upon his work, which was already pretty thorough and ready to go. So what we did was we added automation in the Git pipeline.
B
We added spinning up traffic generators, tried to add a lot of documentation, and made a few structural changes and Makefile improvements — tried to make it a little more usable, because we've added quite a bit of complexity for all the automation and testing.
B
So that's the overview, and what I want to do is walk through the README file, and I'm also going to try to do a couple of demos.
B
Let's see, that's the wrong repo; let's go to the correct repo. No, that's where I want to be. I want to go to where this pull request is. It is sitting on a fork that Deanna, my colleague, created, and that's where the development branch is right now. So: pull request — you can see it in the Azure...
B
You can see in the Azure repo it'll show all the changes, but the actual, fully running version of it is right now in this dev branch, which is the typical GitHub workflow. And so, if you want to play with this, what you want to do is go to the source, which is this repo, and clone it or do a pull request — excuse me, do a clone — and then run all the instructions.
B
So the first thing I want to do is show you this menu here, Actions. This will be new to some people. This shows all these workflows that have run in the past in this repo, and what these are: these are the results of all the last commits and pushes that have been done to this. So here's one where I deliberately created a P4 code error, on purpose, to demonstrate that it shows a failed build, and then here's where I fixed it. I'm going to go through some of this.
B
So you can see this line of code here: `import doh.p4`. It's a little joke, a deliberate error, and the build failed because there's no such file. In fact, there's not even an import statement in P4, so it's two mistakes: there's a missing file, and there's this import, which doesn't even make sense — it's the Python keyword.
B
Likewise, if I were to do a pull request on a build on a repo, for example: when I submit this pull request, if someone looks at the pull request, you should see whether this did or did not pass before you even decide to accept it. We haven't gone through that workflow yet, because this is the first pull request with this capability in. So, once this is accepted — once we accept this pull request here...
B
This automation will then be in the main azure-sonic-dash branch, and then it will do all these things for us, and we'll go through that after we do a merge of this — whenever that happens, after the review — and we'll watch how it works.
D
I do have one question for you, Chris: if you were a long-time SONiC developer, how familiar would all this look to you?
B
Okay, all right, yeah. The main difference is SONiC is using some build resources, like Git runners, that are allocated and much more high-performance than the freebie ones we're using right now, yeah. Nothing here is departing from convention, as far as I know. So, I'm going to go in and edit, in place, this P4 file.
B
While this is running: once it pulls the Docker image into the GitHub runner — and it's pulling it from the cloud — then the first thing it'll do is build the P4 switch. So while we're waiting, let's just look at a couple of things. I want to show you what's going on in terms of the workflow.
B
Okay, so here's the dash-pipeline top-level README, where I've tried to add a lot of documentation to explain everything that's happening. And I do hope that people get a chance to review this and be critical: find things that aren't clear, spelling errors, whatever; ask for more information and file an issue. Because my goal is to make this so that it's standalone and you don't need me anymore — it doesn't mean I don't want to work on this, it's just that's, to me, the goal.
B
So in this Docker build, we pulled from a registry — which is Docker Hub for now, but hopefully it'll be SONiC's; it'll be Azure Container Registry, or ACR, in the future. We're pulling this into our work environment, on demand, in the GitHub runner.
B
So you don't have to do a commit and a push to try this pipeline; you can do it on your own machine. It just won't look like this — you'll see all the steps on your terminal. So we're pulling this Docker image in, and this Docker image is basically a slightly modified version of the one that Marian's original repo contained, whereas he was building it on demand in the user space.
B
I built one and uploaded it to Docker Hub so that you can just pull it on demand, because this takes an hour or two to build, and it's much nicer just to download it in a couple of minutes and have it available all the time. So this is the environment, and this is the builder — I'll explain that in a minute. Okay, see that it took two minutes and 40 seconds to pull the image, and one second to compile it. Not bad, right?
B
So we did that, and what I'll do is exit from here for a second.
E
Hey Chris, just a quick question. — Yes?
E
How do we keep the Docker image up to date? In other words, you know, between the forks and between branches.
E
You know, the dependencies might change, and then you may need a different version of the Docker image and so forth. Do we have any means to ensure that, okay, we have the right version any time you want?
B
Yes — I'll commit this. That's a good question. Let me make a note real quick so I can remember to work out configuration management of Docker images. That's how I'll summarize the question.
B
Sure. So, question: let me go back to this README to put this into context.
B
The full name is the name of the repo, slash, and then that name. That repo right now is named after me, Chris Sommers, because that's my personal Docker Hub and that's where I'm storing it; slash, and then the name of the image, dash-bmv2. And what I'm not showing here is the default colon tag, and the tag is `latest`. So right now this pulls `latest`, but that tag can be anything we want — it can be, you know, version 1.24 or whatever. So what we would do is, when we actually get into using this workflow...
B
Hopefully we'll have a different place to store these images — I can host them for now — but we'll version them. And so a particular snapshot of this entire project, the dash-pipeline project itself, has a version, right? It's the SHA commit hash of the Git repo, or we can give it a branch that will have an explicit tag for this as well, so they're all pinned together.
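As a rough illustration of the pinning idea he describes (the registry and image names below are made up, not the project's actual ones), a tag derived from the Git commit SHA keeps the image and the repo snapshot in lockstep:

```python
# Hedged sketch: compose a Docker image reference whose tag is pinned to a
# Git commit SHA, so image and repo snapshot stay in lockstep.
# The registry/image names below are illustrative, not the project's.

def image_ref(registry: str, image: str, tag: str = "latest") -> str:
    """Compose registry/image:tag, defaulting to the implicit 'latest' tag."""
    return f"{registry}/{image}:{tag}"

# Default: an unpinned "latest" pull.
print(image_ref("example-registry", "dash-bmv2"))
# Pinned: tag taken from a (shortened) commit hash of the repo,
# e.g. the output of `git rev-parse --short HEAD`.
commit_sha = "0123abcd"
print(image_ref("example-registry", "dash-bmv2", commit_sha))
```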
E
Okay, sorry. So, as you were describing: if something gets committed into a branch, so to speak — in your case, for example, the P4 file you just mentioned — do we have, at some level, some dependency tracking which ensures that if you are modifying, let's say, a non-code file, say a documentation file, then you don't need to go through this entire, you know, pulling of all these other artifacts and building this entire thing and so forth, right? So how do you go through this?
B
This is for the main dash-pipeline... this one is actually for the Dockerfile itself. So if I make a change to the Dockerfile, it will actually build the Dockerfile to make sure it builds, and when we get the proper final resting place for this Docker image, we can actually publish it automatically to the Docker repo.
B
So if you change the Dockerfile, it'll do a build of Docker only; if you change any important code in the pipeline, it will look at that. This is something that's, you know, subject to everyone learning to do if they wish, and making improvements along the way. But here's an example — it's pretty readable, it's the YAML. So: on a push, on any branch, if these paths changed — except the Dockerfile, and except README or .md files — then run the pipeline.
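The trigger he describes might look roughly like this in a GitHub Actions workflow (a hedged sketch, not the repo's actual file; the path patterns are illustrative):

```yaml
# Hedged sketch of a GitHub Actions trigger: run on any push that touches
# the pipeline, but skip documentation-only and Dockerfile-only changes
# (GitHub Actions supports '!' negation inside the paths filter).
on:
  push:
    paths:
      - "dash-pipeline/**"   # illustrative path
      - "!**/*.md"           # documentation changes don't trigger a build
      - "!**/Dockerfile"     # Dockerfile changes go to a separate workflow
```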
B
It tells it to run in an Ubuntu 20.04 instance; it's got some environment variables, a working directory, okay. So here's a bunch of step-by-step things. Now, as a user, I might just type `make clean all` to run everything in one command, and I'll demonstrate that in a bit. What I did here — we broke it down, and Diana did this work — was break it down into the individual steps, so that if something fails... we broke it down into all the steps because it's much easier to debug things.
B
If it's broken down that way. I could have just had one job that said `make all`, but then you'd have one thing running and running and running, and you wouldn't know how far you'd gotten unless you read all the detailed logs. So here you can see all the steps. See this "install SAI submodule"? That corresponds to this step. So a job consists of steps; here we install the SAI submodule, and the actual command is this command, just like you'd type it on a command line.
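A job of that shape looks roughly like this (a hedged sketch; the step names and make targets are illustrative, not the repo's exact workflow):

```yaml
# Hedged sketch: one job broken into named steps, each running a plain
# command, so a failure shows up at the step where it occurred.
jobs:
  build:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v3
      - name: Install SAI submodule       # illustrative step name
        run: git submodule update --init
      - name: Build P4 switch
        run: make p4                      # illustrative make target
      - name: Run tests
        run: make run-test                # illustrative make target
```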
B
Okay, so I've demonstrated, you know, two basic things: committing a bad and a good build, and that's kind of the big takeaway for this. But I want to mention something else. Now I'm going to get to the next level, and I'm doing this at, say, a superficial, 20,000-foot level right now, so we can cover the highlights — because we could drill down into any one of these things and spend a whole meeting on it.
B
So I want to show something here. Okay, see, in this job we built all this stuff. We built this C++ test, and it's the same test that Marian created originally; all it does is flush and set a SAI entry. `all` does a lot of cool things: it compiles the P4, it auto-generates the SAI API to the DASH pipeline — it's really clever, a pat on Marian's back for that — it uses Jinja2 templates, generates all these headers and bindings to P4Runtime, so it creates libsai, which is the fundamental... you know.
B
It builds everything here up to libsai, so it's building this. This here is pulled from p4lang — that's actually the base of the Docker image, this behavioral model. This is the vanilla behavioral model v2; it's not the modified one that the behavioral-model working group is working on. This is vanilla BMv2, and I'll defer to Marian when we start talking about how to migrate to the modified one. It comes with the P4Runtime server.
B
So this is the native, data-plane-interfaced behavioral model v2. So Marian created all this, which is auto-generated from the P4 code. In a little more detail, it's actually generated from the P4Runtime metadata that the P4 compiler creates. So when the P4 code is compiled, it creates P4Info, which describes all the P4 objects or entities — tables, registers, counters — and that gets generated into all this code.
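The generation step can be pictured as: take a P4Info-style description of the P4 objects and emit API stubs from a template. This is a toy illustration in plain Python — the real flow renders Jinja2 templates over the P4Info the compiler emits, and the table and function names below are made up:

```python
# Toy illustration of P4Info-driven code generation: a minimal description
# of P4 objects (tables here) is turned into C-style API stub declarations
# via a string template. Names are illustrative, not the project's.
p4info = {
    "tables": ["outbound_routing", "inbound_acl"],
}

TEMPLATE = "sai_status_t create_{name}_entry(const sai_{name}_entry_t *entry);"

def generate_stubs(info: dict) -> list[str]:
    """Emit one API stub per table in the description."""
    return [TEMPLATE.format(name=t) for t in info["tables"]]

for line in generate_stubs(p4info):
    print(line)
```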
B
For this testing, you tend to use a saithrift server, which allows you to make remote procedure calls to the device under test. So a vendor would compile their libsai along with their entire data-plane implementation — which is not going to look like this; it's going to be something else: your system-on-a-chip and all the libraries and your own code — and then we'll have a saithrift server to be able to talk to it with test scripts. So, back to where we are today.
B
That is what this "test SAI library" step is. So if you were to drill down and look, it just runs this make command, `make run-test`, and it does one simple gRPC access over P4Runtime through libsai, and you can look at the code for this. It's pretty straightforward: it's just setting up a SAI table entry and writing it. So that's the first test, and that's been in place for some time.
B
This is all new, and what this is: we're spinning up some software traffic generators in this place, and we're sending UDP packets in and expecting them to echo back out. I remember that was a suggestion from Bud. That's our first hello world: without configuring this, it should just echo UDP packets back by default.
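The "hello world" idea — send UDP packets in, expect them echoed back — can be mimicked locally with plain sockets. This is a minimal stdlib sketch standing in for the real traffic-generator-to-BMv2 loop, not the actual setup:

```python
# Minimal stdlib sketch of a UDP echo check: a local echo "device" sends
# every datagram back to its sender, and the "test" verifies each payload
# returns unchanged. No BMv2 or traffic generator involved.
import socket
import threading

def echo_server(sock: socket.socket, count: int) -> None:
    """Echo `count` datagrams back to their senders."""
    for _ in range(count):
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # OS picks a free port
server_addr = server.getsockname()

t = threading.Thread(target=echo_server, args=(server, 3))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
echoed = []
for i in range(3):
    payload = f"packet-{i}".encode()
    client.sendto(payload, server_addr)
    reply, _ = client.recvfrom(2048)
    echoed.append(reply)

t.join()
client.close()
server.close()
print(echoed)  # each payload comes back unchanged
```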
B
So these are all the instructions. This is the configuration of the traffic generator, and I'll explain this a little bit, but to learn more about ixia-c you can follow the READMEs and the links. There's an open-source repository on GitHub called Open Traffic Generator, and this is described there. You can pull it — it's Docker images, the free version; you can just pull them down from Docker Hub — and this is the layout: there are several containers that comprise the controller.
B
It's a web-UI-type interface — well, I wouldn't say web UI; it's a typical, you know, HTTP interface. This client is a Python library or a Go library; in this case we're using Python, so it's the snappi Python binding, and this is a very easy-to-use library. So what you do is import this library into your test code and then talk to this traffic-generator complex, and it doesn't take very much to learn how to use it.
F
You caught me a little bit off guard, yeah. Well, definitely, these are like the lower-level details of the ixia-c traffic generator — what components it consists of. Really, to use it, there is a good way to hide all that complexity and just have the API that your test script talks to, and then...
F
Yeah, and then on the right-hand side you have outputs that are sending packets to the, you know, behavioral model and back. The interesting angle to that is: the same API that we use here with the free version, you can use with the hardware traffic generators.
F
When you want to move that into actual physical testing, your test content doesn't have to change.
B
Thank you for pointing that out. So this can be replaced by a hardware box that runs at line rate — you know, any speed available, up to 800 gig at this point — and the same program will work. So that's really why we're making this investment in doing this. It's not that we're trying to push this, so much as we want a workflow that goes from software-in-the-cloud testing to full line rate on the test bench, or in production, with the same code. — Oh, that's awesome.
B
Awesome — really like that. And if you follow the links, there's a whole repo on this ixia-c and snappi, and there are example programs you can get up and running. So, in fact, I also wanted to extend... so Alex has been helping me with this, and Diana did all the automation work; I've been doing kind of the polishing and the usability and the documentation.
B
I also had a lot of help from Madhu — I wanted to give a shout-out to him. He just joined this DASH project last week, but he did a very thorough test drive of this repo the other day and gave me a full report, and caused me to make a lot of little fixes. Because when you do it in your own workspace, everything always works; as soon as you give it to some unsuspecting user, you find all the things you assumed. So we run this test, and then that's complete.
B
So when you push a P4 code change, it's also going to run traffic through the pipeline. Every time the pipeline runs, it's going to do some tests, and we have a very trivial test right now — it's not really testing DASH, it's just testing, kind of, UDP echo and connectivity.
B
Now we've got the infrastructure to do it. And what I want to do next is — I'll cross my fingers — try to do a live run of this, because I did it just five minutes before the meeting and it didn't pass: my VM was so slow the test timed out in 10 minutes. So bear with me as I try to do this live, but we know it works in the cloud. And these test runners, the free ones — it's just two cores.
B
You
only
get
two
cores
allocated
and
14
gig
of
memory
which
we're
almost
using
up
just
for
the
docker
image,
and
so
it's
pretty
underpowered,
but
it
manages
to
run.
It
doesn't
run
on
my
machine,
it's
because
my
machine's
even
wimpier,
because
I'm
just
doing
a
vm
in
virtual
box.
B
Okay, it doesn't do that much. It just deletes things like some veth Ethernets, deletes the built C++, cleans up some things, and it also restores the SAI repo if it needs to. And I'm going to `make all`.
B
This runs — and this runs slow on my machine right now; it's much faster on a decent machine — okay, and then it ran and then built the test program. That's it, we just built everything. Wow, pretty easy, right? I'm going to `make network`, and what this does is create the virtual Ethernet pairs in order to connect BMv2 to the traffic generator.
B
So
just
as
a
bunch
of
iplink
commands,
people
who
are
used
to
virtual
ethernets
won't
be
surprising.
It's
disabling
ipv6
on
these,
because
when
you're
doing
behavioral
modeling,
if
you
don't
do
that,
you
start
getting
neighbor
discovery
calls
to
your
ethernets
automatically
by
linux.
Thank
you
very
much
and
it's
kind
of
annoying
so.
B
All these huge tools are all pre-built and you just download the Docker image. It saves hours of building, and it's reproducible because it's, you know, a frozen image. So we're going to run the switch. Okay, so we've got a P4Runtime server in the switch waiting for P4Runtime commands, and that corresponds to that diagram of that stack. Now, in a different console — because this is running like a daemon, spewing stuff out verbosely —
B
And what this is going to do now — I've already downloaded it, because I ran this several times; I've already pulled the Docker images for ixia-c — but the first time you run this, it'll install dependencies, and that includes the snappi Python library, and it pulls two Docker images: one for the ixia-c controller and one for the traffic engine.
B
So it does those dependencies, then it's going to spin them up in that configuration I showed you, and it uses Docker Compose to do that, which is a nice declarative way of, you know, instantiating containers — it's a YAML file. So you need Docker Compose in your work environment, and I explain all that in the README.
B
So let's run the test and see what happens. Okay, it's sending traffic — it's sending a thousand packets. Here's the BMv2 pipeline output; it's still spewing to the console, because it's so slow to emit it. And what is it doing? Sending a thousand packets to each port and making sure that a thousand packets come out.
B
And it's looping. It's saying: okay, how many packets have I sent? Does it equal the number? Do I get at least as many out, right? And it's getting more out, because there's other stuff going on — I can't get into it right now, nor do I fully understand it, but there are more packets coming out than we transmit, and that actually might have to do with the environment. But we actually spin up and send a thousand packets, and we try to send them at a higher rate than the BMv2 can actually handle. That's why it takes a while; if this were running on a faster machine, this test would run much quicker. But basically we went through that whole process — you know, the make target that runs the ixia-c test. So ideally we'll have a lot of tests in the future, and it will continue to run more and more tests, and I'll show you where these tests live.
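The "send N, wait until at least N come back" loop he describes can be sketched in plain Python — a hedged illustration with a stubbed metrics source standing in for the traffic generator's statistics API, not the snappi API itself:

```python
# Hedged sketch of the convergence check described above: poll a metrics
# source until the expected number of packets has been transmitted and at
# least as many received. `get_metrics` is a stub for the real stats API.
import itertools

def wait_until_done(get_metrics, expected_tx: int, max_polls: int = 100) -> bool:
    """Poll until tx == expected and rx >= tx; give up after max_polls."""
    for _ in range(max_polls):
        tx, rx = get_metrics()
        if tx == expected_tx and rx >= tx:
            return True
    return False

# Stub: transmit counts ramp up to 1000; receive trails, then overshoots
# (mirroring the "more packets out than we transmit" observation).
samples = itertools.chain(
    [(250, 200), (500, 480), (1000, 990)],
    itertools.repeat((1000, 1004)),
)
print(wait_until_done(lambda: next(samples), expected_tx=1000))  # → True
```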
B
It just keeps beeping every few seconds — so it's just waiting for completion. It doesn't know how long it's going to take, so it's just waiting until it gets all thousand packets sent. Cool that it works, yeah. And so, let's see: when we start testing the DASH data plane for real — like the VNET service — we'll craft specific packets and send them, you know; and then when we get them back — because this has a capture engine — we can actually dissect the packets and make sure the contents are exactly what we expect.
B
You
have
test
cases,
and
this
is
the
celeste
test,
hello,
world,
dot,
icon
and
look
through
here.
It's
just
an
example
of
setting
up
snappy
and
there's
various
premise
in
here.
It
doesn't
look
like
scappy
at
all.
If
people
are
used
to
ptf
and
scappy,
this
won't
look
familiar,
but
it
shouldn't
look
too
foreign
either
it's
just
a
matter
of
learning
the
conventions,
but
there's
a
number
of
things
in
here.
I
want
to
point
out
that
you
won't
see
like
in
ptap
and
scappy.
B
That's really the power of this data model, and you can create destination port lists — so there are all kinds of little syntactic things in here that are nifty, and once you get the hang of it you'll find them. And then here we're getting the metrics from the receive engine, which is all kinds of counters and statistics measured automatically, and then you can pull those. And, for example, in our tests...
B
We're saying: get the sum of all the transmit packets, get the sum of all the received packets, and we're done when we've transmitted the total and the transmitted and received packets meet this criterion. But this is just an arbitrary function — you can do anything you want. So there are all kinds of nifty things in here, and if you want to learn more about using this, there's actually a Slack channel for ixia-c support that we host.
B
So you can get help, plus there are demos and examples online. So, yeah — I kind of ran the demo and walked through some of this; again, this was the engine. So let me pause and ask if there are any questions at this point. I could walk through a little bit more of the pull request if you like, or we can, you know... And then you can clone this, look at the pull requests, and learn more, because there's a lot to take in here.
D
So I guess one thing that looks like it would be... if we wanted to write a test for the TCP state machine, it looks fairly straightforward: we should be able to generate packets and make the TCP state machine go through every single state, and then ask for the state back as we transition, to see if the state machine matches our behavioral model. That sounds like it would be relatively straightforward, to go and test the state machine.
B
Well, I guess what I would do is defer to whoever wants to talk about the behavioral model, and see if you can query its state at any point in time. But what we can do is create an arbitrary sequence of packets, right, with their exact content — send it, you know, one at a time or in a batch or whatever — and then decide what comes out. But essentially what we're doing is sending packets in and out, like a black box.
D
We'd have to be able to read — not just TCP state — anything: counters, you name it, out of the implementation. So there have to be the APIs needed to do all of that, and we need to be able to test it. So I don't see this being any different: as long as the TCP state is supported — of course, there has to be an API for it.
B
So I don't know if we've identified internal state variables as being exposed here. A lot of times they're buried in the implementation; you may not even be able to read them, necessarily, easily. In order to expose them with this structure we've created, by definition, either we have to have a handwritten SAI interface for those state variables, or we expose them in the form of P4 pseudo-tables, let's say, or registers.
D
Hopefully we can just get it through the SAI interface; I'm not understanding why we couldn't. I think they have to support the TCP state machine — that's why it's being donated and worked upon — and we need to be able to read that state, for sure. Otherwise, I don't know if they've implemented the state machine properly.
D
You should be able to create the packet and then read this state, and I should be able to create packets that make no sense and see what happens to the state machine — but I would want to ensure that that state machine is robust. And so we should have an API, and we should have the ability to always be able to read the connection state. Now, in live traffic that would be kind of hard, because connections might only live for a fraction of a second; but for functional testing...
D
Of
course,
we
should
be
able
to
read
it
because
we
can
generate
those
packets,
slow
enough
and
basically
one
packet
can
cause
an
entire
state
change
and
should
impact
in
many
cases,
and
then
that
way
we
can
look
at
this.
You
know
it's
an
easy
test
to
do,
because
we
have
the
state
machine.
We
have
all
the
transitions
we
can
generate
the
packets.
Now
we
need
to
make
sure
that
the
implementation
is
actually
creating
those
states
and
transitioning
properly.
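The kind of check being proposed — drive a connection state machine with a packet sequence and read the state back after each transition — can be sketched abstractly. This is a toy FSM for illustration only; the real TCP state machine under discussion is much richer:

```python
# Toy sketch of functional state-machine testing: feed events (standing in
# for crafted packets) into a tiny TCP-like FSM and record the state after
# each transition. States/events here are a simplified illustration.
TRANSITIONS = {
    ("CLOSED", "syn"): "SYN_RECEIVED",
    ("SYN_RECEIVED", "ack"): "ESTABLISHED",
    ("ESTABLISHED", "fin"): "CLOSE_WAIT",
    ("CLOSE_WAIT", "ack"): "CLOSED",
}

def step(state: str, event: str) -> str:
    """Apply one event; unknown (nonsensical) events leave the state alone."""
    return TRANSITIONS.get((state, event), state)

state = "CLOSED"
observed = []
for event in ["syn", "ack", "fin", "ack"]:
    state = step(state, event)
    observed.append(state)   # "read the state back as we transition"

print(observed)  # → ['SYN_RECEIVED', 'ESTABLISHED', 'CLOSE_WAIT', 'CLOSED']

# A packet that makes no sense should not corrupt the state:
assert step("CLOSED", "garbage") == "CLOSED"
```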
D
So I think it's super valuable, to make sure that we're not... basically, it mentioned a whole bunch of different scenarios where states were transitioned, and after so many seconds, and all that kind of stuff, and it's something you really do want to test.
B
Yeah, Gerald, that's right. I just don't know that it was ever, like, stated as a goal of the behavioral model, so we'd probably have to bring that up in the behavioral model meeting. These are objectives, right, and we may have to take another look at the data plane. Yeah, I couldn't agree more, as a test guy, right?
G
Oh, good. So that's a bit of an issue, because when we are talking about the API now, it will be a new requirement for vendors to actually expose the state of every connection to the control plane, and this is not very useful at a high rate. So doing that at a low rate, only for testing — it seems to me a requirement for APIs that are not actually going to be used in production.
G
Yeah. Without looking at a connection, we can look at the actual behavior: we know for certain that, after a given event, a connection should be removed — because this is the end goal of the state machine — or shouldn't be removed, and then verify whether it is removed or not by sending another packet that belongs to that flow.
H
So, Marian, are you talking about something like a conformance check or something like that, that does a TCP state-machine check to ensure that all the transitions are kind of kosher? I mean, are you looking at something, or... I mean.
H
Right, that's not going to happen from the API side, the way we are setting it up here — right, Gerald? I think that's a little bit more of a white-box, you know, protocol-correctness type of test, to ensure that all the transitions are kosher across, you know, various handshake scenarios, right?
D
You know, it needs to be this behavior, but there'll be many other things that will... you know, we can break out those tests and assign them to different people to write. But okay, it's a good topic and we can bring it up.
B
To wrap up — that's okay, yeah, good point, Gerald; but they've forced us to rethink the behavioral model API. So I'd just ask people to take a look at this PR and try it out. What you want to do is go to the, you know, pull request itself, where it specifies it — oops — you want to go to the pull requests, and then you see this one here.
B
And, Bud, to answer your question: the data-plane configuration is actually the test that runs before that, the VNET outbound one. So the test that Marian wrote on the libsai side — that has to be run first; I found that out empirically.
B
You
can
follow
that
yeah
and
so
next
step
some
more
tests
written.
We
still
need
a
skythrift
server,
which
I
want
to
start
working
on
resuming
on
that.
Now
that
we've
finished
this
pull
request,
which
was
quite
a
bit
of
work.
I
want
to
start
working
on
the
scythe
server
again
and
I'll
need
help
from
people
who
are
experts
in
that,
because
it's
not
my
real
house
and,
I
think,
someone's
doing
something
very
lazy
in
the
kitchen.
B
So
cythrift
we
we
need
a
proper
docker
repository
at
some
point,
there's
actually
a
whole
bunch
of
to
do's
here
that
I've
put
in
the
pr
itself
there's
a
couple
of
known
issues,
and
this
bud
relates
to
your
question.
We
have
to
run
this
first
test
before
the
packets
will
echo
back
and
it
may
be
obvious
people
who
know
the
model
better
than
I
do
this
doctor
image
is
too
big.
We
need
to
trim
it
down
and
there's
well-known
techniques
for
doing
that.
B
So here's just a whole list of, like, going-forward next things to do, right? Chris needs to learn how to spell — that's another to-do. And so, anyway, there's a bunch of things here people can take a look at, and, you know, hopefully we can use this as the basis going forward.
B
I
have
no
objection
to
people
writing
traditional
ptf
tests
because
they're
convenient
you
know
we
can
have
multiple
types
of
tests
in
this
repo.
So
if
that
gets
you
up
and
running
faster
go
ahead,
and
then
we
can
translate
them
possibly
into
the
xcc
snappy
version
by
using
you
know,
whatever
you
did
in
the
test,
so
that
we
can
eventually
scale
to
line
ring.
B
I
didn't
have
much
time
to
talk
about
the
git
sub
modules,
but
there's
a
lot
of
implications
previously
in
the
docker
image
marion
would
pull
the
get
he'd
clone
the
psi
psi
repo
into
the
docker
image,
while
it's
building
every
time
on
demand
or
as
needed,
and
instead
I'm
doing
a
get
sub
module,
and
I
talk
about
what
that
means
in
the
readme.
So
people,
if
they
don't
know
what
that
is,
they
can
learn
a
little
bit
about
it.
Talk
about
sub
modules
down
here
at
the
bottom.
B
So
you
can
learn
about
that
if
you
like,
but
it's
a
way
of
pinning
a
version
of
psi
to
this
project,
so
that
they're
always
in
block
step,
so
try
to
build
in
configuration
management
and
make
sure
that
we
always
have
a
reproducible
state
of
this
repo.
So
the
people
do
changes
along
the
way.
We
no
matter
what
happens
you
pull
out
a
certain
version
of
this
repo.
It
should
all
be
intact
and
all
the
versions
should
be
tightly
coupled.
B
So
that's
that's
all
I
have
to
say
about
this
today.