From YouTube: DASH Behavioral Model WG Feb 9 2023
Description
Go over bmv2 for ST/PL - push by 1 more week to 2/16/2023
https://github.com/sonic-net/DASH/pull/308
Discuss: Add/Transition from bmv2 to P4 DPDK target (P4 runtime on DPDK)
Merge https://github.com/sonic-net/DASH/pull/332/files ?
Can merge once out of Draft Status
A
What I'm showing on the screen is what we talked about last week; sorry, I'm just managing the call here. Last week we talked about Andy and Chris being super helpful, trying the p4c-dpdk compiler and seeing if there were gaps and things like that, and we talked about presenting PR 308, which was going to be service tunnel / private link. I guess this one's going to be pushed by another week, because it's not quite ready yet. I checked with Marian and Prince this morning and it's just not ready, so we'll push that one by another week. And then, let's see: Andy was going to try and see if he could execute a process here.
A
If he had time, and Marian was curious, we could execute this on real DPDK backend software. Then we talked a bit about Andy's pull request, 331. Oh, and then Sod wanted to make sure we had an open tracking item, not only for the fact that we looked at connection tracking, but also for the work item to actually implement connection tracking, so I went ahead and made a work item there.
A
So that's what we covered last week. I was wondering if anyone had an update or a comment on anything, since a week has passed.
B
Yeah, I'm asking Anton to help with that as well. Yeah.
D
So, a couple weeks ago we were talking about the challenges in getting the bmv2 model and the p4c compiler modified to handle statefulness and DASH requirements, and Andy Fingerhut from Intel started talking about progress on the p4c-dpdk backend.
D
We had a nice discussion about that, and he volunteered to try to compile our DASH code against the p4c-dpdk backend. He was able to do so experimentally with a few code modifications; he and I worked together to iron out some of that, and I created a toolchain and Docker images for him to compile it with. So we've got kind of a repeatable recipe.
D
So the next phase is: can we actually run P4-DPDK? There's an open task to actually get a P4-DPDK image running, try to execute DASH code on it, and see what happens. That's what Reshma was alluding to; there's some work beginning in that area. So it's kind of an experiment.
F
It is called the P4 DPDK target, p4-dpdk-target, and this is where, of course, the compiled code is loaded. Sorry, from the P4 code the C code is generated, the C code is then compiled, and it gets loaded into the p4-dpdk-target. Then one can go ahead and start communicating with it, interfacing through P4Runtime, to start adding table entries and so forth, right.
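As a sketch of that flow, under stated assumptions (the p4c-dpdk flag names, file names, and output artifact below are guesses for illustration, not taken from the DASH repo or the p4-dpdk-target build), the compile step looks roughly like:

```python
import shutil
import subprocess

def compile_dash_for_dpdk(p4_src="dash_pipeline.p4"):
    """Hypothetical sketch of the compile stage described above: p4c-dpdk
    compiles the P4 program for the PNA architecture, and the generated
    artifacts are later built into a shared object that the running
    p4-dpdk-target loads and exposes over P4Runtime.
    Returns None if the toolchain is not installed."""
    if shutil.which("p4c-dpdk") is None:
        return None  # toolchain not present on this machine; nothing to do
    # Flag and output names here are assumptions for the sketch.
    return subprocess.run(
        ["p4c-dpdk", "--arch", "pna", p4_src, "-o", "dash_pipeline.spec"],
        check=False,
    )
```

On a machine without the toolchain this simply returns None; the repeatable recipe Chris mentions lives in the Docker images, not in ad-hoc local invocations like this.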
F
But one thing, perhaps for everyone's sake: when we were working on the bmv2 side, it was based on the v1model architecture, and by moving to DPDK we have now moved to the PNA architecture. So that's one of the subtle differences as well.
D
Let me find the right widget to do that with.
D
This is pretty old; this was aspirational way back when, and we have had this lower part working for a long time now. That's what's running every day in the CI/CD. We're now aspiring to get this upper part working, and all we need now: we've got the code compiling and building, and I've actually got a CI/CD pipeline ready for this as well. Andy's got his pull request sitting in the back, and I added to it to do all this GitHub stuff and create a Docker image to build it.
D
So this is the "v1 plus", as I call it, because it's a modified v1model architecture; that's underway right now. It's just v1 because we really haven't got the modified version yet; it's still work in progress, kind of suspended. So this will be the true PNA architecture, which is community supported. It's like a first-class citizen in the P4 world, whereas this is kind of a DASH-specific project that's a little bit stalled out. So I think the intent is, if this...
D
If this works, and we can get this to support our use cases, it's going to be a legitimate P4 community project. It's got a number of people working on it all the time, and it gets talked about; the PNA architecture has working groups, etc. This might be a worthwhile thing to focus on long term, but it's still up in the air. Does anyone agree or disagree with that assessment?
B
Yeah, and thank you for summarizing everything. I think Andy has also joined now. I just want to say that the p4-dpdk pipeline runs in the host.
B
As we know, and as Chris mentioned, it is PNA compliant and all that, so it will work with the SONiC northbound. Suppose we have SONiC with the DASH container: since we are now going to be able to run DASH on the p4c-dpdk backend (of course, there is work there), we can have SONiC with the DASH container as the northbound programming.
B
All of the DASH APIs would work on the DPDK target in the same manner as we expect them to work on the real hardware, and then, from the northbound point of view, it would work with SAI Thrift or SONiC with DASH.
F
Yeah, that's good. So one quick question, Chris, as part of this work of making a repeatable recipe for compiling the code: I just wanted to understand where we are at. Are we able to now generate the .so file that one can basically load into the backend?
F
I was just looking at the toolchain pipeline: when you have the P4 code, it actually generates the C code, right, if I understand correctly, from the dpdk backend compiler, and then you compile the C code and eventually generate the .so, which is just the dynamically loadable library, which...
F
Oh, I see. Maybe then I had a wrong understanding, because I was just going through multiple different videos from Kristen, I guess, who is one of the architects at Intel, that were presented in 2019, 2021 and 2022.
F
No reason to, no; yeah, no, I don't want to look at it. It's just that I wanted to see, in the work that we are doing as part of this PR, what stage we are at right now.
D
Yeah, so the next big task, as a deliverable, would be something that produces an image. It doesn't have to have the P4Runtime server; that can be ignored. It produces, let's say, a Docker image that contains everything here, and all we care about is that it has a socket interface. The toolchain to do all this won't necessarily be part of DASH; DASH may invoke it, but it's probably an external...
D
You know, a project like p4-dpdk, some kind of a GitHub project, and we need to compile some kind of an image, probably a Docker image, that's all of this, and all we care about is that we can talk to it over this socket. Similarly, for what Reshma was talking about, having a full SONiC stack: there would need to be some external project that replaces this with syncd. So it's the syncd daemon that produces this entire image, and that would be something you could drop right into SONiC, all right.
B
Yeah, the same for syncd, but what will be linked into the syncd will be the .so; that will include the p4-dpdk and programming layers and the libs, I'd call it.
F
Yeah, so I don't think so, guys, I don't think this is the way it's going to be. Just imagine what we are basically talking about here: right now, at least my understanding is that the SAI interface goes over P4Runtime, right? So if we continue to basically have that part, then essentially you are just making P4Runtime calls to some...
B
We may or may not use it, but if we were to link it into the syncd, there is no need for another P4Runtime, because the pipeline will run in the host, and then, since it will all be part of the syncd, we can make direct function calls.
E
It doesn't have to; we only care about the APIs, so we don't break anything. But on the other side of it, whatever layer it uses has to be there, because otherwise you can't talk to it. Yeah.
D
So I guess, without trying to stipulate how the toolchain is going to work: ultimately we want some kind of image that has either a Thrift interface, right, or a syncd, which would actually have more of the standard SONiC interfaces to Redis, etc. The goal is just to have an image that you can spin up that has the right interfaces. However this is plumbed, we'll learn more about that, I guess, as time goes on.
B
And for bmv2 today, you are using P4Runtime?
D
Yes, it's just like this. This code is auto-generated: Marian and team wrote the code to auto-generate the shim layer, so it translates SAI calls into P4Runtime calls. So you're actually going over two sockets: SAI Thrift through an adapter layer, and then over another socket. That's what's working today, and you can see the activity if you run the code in a console; you can see the stuff going on, yeah.
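A minimal sketch of the idea behind that auto-generated shim layer (all the table, key, and attribute names here are invented for illustration; the real generated code and schema live in the DASH repo):

```python
def sai_to_p4runtime(sai_call):
    """Translate a SAI-style create call into a P4Runtime-style table
    entry, in the spirit of the auto-generated SAI-to-P4Runtime shim
    described above. The field names are hypothetical, not the real
    generated schema."""
    return {
        "table": "dash_pipeline." + sai_call["object_type"],
        "match": dict(sai_call["keys"]),
        "action": {
            "name": sai_call["action"],
            "params": dict(sai_call["attrs"]),
        },
    }

# Example: a hypothetical SAI "create ENI" call becomes one table entry.
entry = sai_to_p4runtime({
    "object_type": "eni",
    "keys": {"eni_id": 7},
    "action": "set_eni_attrs",
    "attrs": {"admin_state": "up"},
})
```

In the running system this translation happens behind the SAI Thrift socket, and the resulting entry goes out over a second socket as a P4Runtime write.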
B
Yeah, we can start with something like this and then also have SONiC as another northbound here, although this figure is upside down; SONiC will be northbound.
D
Yeah, it could have been drawn differently, that's for sure; north is down. This is for people in Australia, kidding. So anyway, that's where we're at. It's great that we've actually got to this point; we were talking about maybe this happening sometime in our lifetime.
A
Anyway, I'll stop sharing, unless we have anything else. So does anyone have any guesstimates for when we might see some progress, you know, or what next steps we might be able to see?
C
I'll chat privately with Reshma; she's been in contact with some Intel engineers who can perhaps help with the next step, which I see as trying out the P4Runtime API on DPDK, on running P4 code, seeing what happens, and writing out instructions on how to build and run that.
A
Yeah, and like I said, we'll push service tunnel / private link to next week. I won't be here next week, but Yusuf will be here, and the usual suspects, Chris, Reshma, everyone. I'm trying to think: do we have anything else for this week, guys, before I close the call?
D
Maybe review that pull request, right, 332, right there at the top.
C
There have been a few comments on it that I've responded to. I don't know if anybody else has commented or looked at it, but it's up to the group here whether they think it's worth merging in.
F
Yeah, just a quick question on the testing part of things. Of course it compiles for DPDK; but for bmv2, all the tests that we had passing, did we try with this PR that they're all still passing?
D
Exactly, exactly. All you have to do to answer that question is look at the Actions in GitHub, up top, and you can see that somewhere down below. Go down to where it starts saying "add ifdefs"; see all those greens? If you go down... whoops, you don't need to do that. Yeah, yeah, sorry.
D
Yeah, it runs the pipeline, so it's like a regression test on the repo as it stands. It stages it so that it builds everything again as if the code had been accepted, so we have confidence that once we merge this, it doesn't break. Now, there's no guarantee it won't, because for one thing it's going to have to push a Docker image to ACR for this new compile toolchain, and that doesn't happen until you accept the merge. But that's a small risk.
F
Right, no, but my question, Chris: I definitely had that knowledge, and that's why I should have been more clear. My question was, are there (I believe there are) more test cases or tests beyond just this, or does this cover everything? These actions, when somebody basically goes and commits or issues a PR, do we run each and everything that is in our repository, or is it just a subset of it?
D
Right now, every time there's a pull request or a merge is accepted, it rebuilds everything from scratch, except Docker images that can be pulled from ACR; actually, it rebuilds them anyway locally. It does rebuild everything, and it runs all these SAI Thrift and SAI Challenger tests that we've committed. Okay.
D
If you just change a readme, it's just going to do a spell check, but if you change any line of P4 code, any Python, any C code, whatever (there's a whole lot of different file types), it basically rechecks everything, and it's pretty fast; it's only like 10 minutes for the whole thing. Yep, awesome. And also, just as a refresher for people: before you do a pull request, when you're working on a branch, push it to your own fork.
D
Your own GitHub Actions should be enabled, so you can look at them and see that they pass. You don't need to do a pull request; you shouldn't do a pull request until you've verified in your own fork, because otherwise you're just running the risk of filling this up with red X's instead of green checks. You need to go into your own fork of the DASH project and enable GitHub Actions (it's not done by default), but once you do that, you can see that it's working. So, for example...
D
Christina, if you want to go back to the pull request, the main page for this pull request: you'll see at the top where it says "requests to merge" from a certain branch. If you click right there, you'll go into Andy's fork, yeah; that's his personal fork that's being merged from. Click on his Actions up there.
D
See all of these? These are all the Actions running in his fork, and that's where he merged my Dockerfiles in. So basically the point is, you can preview the success or failure of the CI pipeline in your own fork before you do a pull request to the DASH repo, and that's recommended, because you can catch a lot of mistakes quickly without cluttering up, let's say, the DASH repo's activity. It's just kind of a good way to pre-check everything first; you really shouldn't do a pull request without it.
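The fork pre-check workflow described above can be summarized as a command sequence. The GitHub user and branch names below are placeholders, and the `gh` CLI invocations are one possible way to watch the runs, not something the DASH repo prescribes:

```python
def fork_precheck_commands(user="your-github-id", branch="my-feature"):
    """Return the assumed command sequence for validating CI on your own
    fork of sonic-net/DASH before opening a pull request: push the branch
    to your fork (with Actions enabled there), watch the runs, and only
    open the PR once everything is green. All names are placeholders."""
    return [
        f"git push git@github.com:{user}/DASH.git {branch}",
        f"gh run list --repo {user}/DASH --branch {branch}",
        f"gh run watch --repo {user}/DASH",
    ]
```

Checking the fork's Actions tab in the browser works just as well; the point is only that the same pipeline runs there as on the upstream repo.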
A
All right, great. So then I guess we'll wait to see how... I'm hearing an echo, I'm sorry. So I'll go ahead and close the call for today, then, and send notes, and look forward to Reshma and Andy talking with their people about trying this out. Thank you very much for doing this, this is awesome, and thanks for the recap on the picture, Chris; I think that was really helpful.
A
Yeah, yeah, thank you, thanks. And again, next week it'll be Yusuf, Chris and everyone, and I'm going to stop the recording.
G
We have some... could we table that discussion at some point, maybe next week? I don't know if Gohan has changes; he's not on the call today, but all the northbound changes, at least, that are in the works.
G
I
at
least
want
to
understand
the
the
dash
Arc
agent
changes
that
that
I
believe
in
the
works
based
on
discussions
that
we
had
last
week
on
the
call.
So
when
that
may
be
coming
a
little
bit
more
details
around
that,
perhaps.