From YouTube: DASH Workgroup Community Meeting 20220504
Description
May 5, 2022 Community Call
A
The behavioral model simulator that we are using for the DASH pipeline, to be able to automate its configuration so that we can start doing some tests, and there will be an infrastructure for actually running this simulator in the automated environment. So let me share my screen.
A
Yeah, so the pull request itself will be ready by the end of the week. There is still some beautification that needs to be done, some cleanup, but I want to show you the flow, how I see it working. We can improve on that in the future, but it is ready to incorporate automated testing, all right.
A
With all of the source files for the pipeline in the P4 language, and then you have a Dockerfile for the environment. It was changed, by the way, to incorporate P4Runtime as well. So if anyone tried running or compiling the pipeline before this pull request, you'll need to rebuild the Docker image afterwards. So there is this command, make docker, to just rebuild it.
A
Now it contains a few additional dependencies, such as PI, gRPC and so on. This is what's used for configuration and, as usual, I try to keep everything within the same Makefile, so all the targets will be consolidated into one place, and you get a more or less convenient CLI to use this infrastructure.
A
So, first thing, just rebuild the Docker image. I won't do it right now, because it takes time.
A
Second, of course, we need to compile our P4 code, same as before. We are not doing anything different, so there is a target, sirius_pipeline.json. Excuse me; so I want to emphasize a few changes to this process. Although the command is the same, you may notice that the resulting compilation command is different: we are still using p4c, but this time we add an additional argument to generate the P4Runtime files.
A
This is needed for two purposes. The first one is that now the SAI API is based on the P4Runtime file, so we generate the SAI API implementation based on that. And second, this will be used by the simulator itself. And, by the way, everything is also automated: as long as you stick to this Makefile, you don't need to take care of the files manually. Everything is fed into the software switch automatically.
A
So we compiled our behavioral model. For now we only have the result of the P4 compilation; the next step would be to translate that to SAI, so I added a new target, make sai. To explain what it's doing, as on the previous demos I was showing you: let's switch to the SAI directory. What we have in here is... I'll make the font a little bit larger.
A
So we have the SAI API generation script; now it is enhanced to also generate the implementation as well. So if you look at the templates directory now, we have templates for both the SAI API headers, which were available up till now, and their implementation using P4Runtime, so that, having this implementation, we can...
A
What it's doing is just invoking the auto-generation with the relevant arguments, like the names of the DASH APIs and stuff like that. So what do we get as the result?
A
Now, let's compare: we have a few more files in this directory. First, of course, we get the SAI APIs; as before, they are integrated into the master SAI so that they can be...
A
Of course, it will be enhanced in the future to cover the underlay part when we define it better, but it's pretty minimal. And I have a test in here, the thing that I was talking about; it still needs to be beautified. I think it's a good idea to start a tests directory in the DASH repository, so that we will have all the tests kept there.
A
I will contribute the first one, showing how the pipeline can be configured, and I will also add a traffic test along with it that will generate the packet corresponding to what is configured by this unit test, send and receive the packet, and verify that we got what we expected. But for now it's all in the same directory, so this is what will change by the end of the week.
A
And so this is a new step: before, what you would do is just compile the pipeline and then do make run-switch. In between those two steps, you now also need to do make sai from here.
A
Next, we'll be running the switch itself, and let's see how it is configured using this SAI API. I will go into the files in a minute, but before that I want to show you what's different about running the switch. Everything is still in the script, but for you to see the difference: so, now we are using a different binary.
A
It's a different target, coming together with the P4 community behavioral model. It's called, let me find the name, simple_switch_grpc, using the P4Runtime APIs. And you may notice we do not feed it any of the files anymore; we start it as a clean switch, because the pipeline will be loaded using our SAI APIs.
A
So, this has been generated by the script, but you may find a pattern in here: for each kind of API you get the callbacks create, remove, set attribute and get attribute, and this can be extended, of course. For now, set and get attribute are not implemented, because I think initially the tests won't use those. In the future, when we try to integrate this with the SONiC software stack, with the orchagent, of course we will likely need them.
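The per-API callback pattern being described could be sketched roughly like this. This is a plain-Python illustration of the shape of the generated code, not the actual generated C++; the class name, attribute shapes, and table names are assumptions made up for this sketch.

```python
# Sketch of the generated per-API callback pattern: create and remove
# are implemented, while set/get attribute are left as stubs for now.
# All names and data shapes here are illustrative assumptions.

class NotImplementedAttr(Exception):
    pass

class SaiTableApi:
    """One instance per kind of SAI API (e.g. ENI, direction lookup)."""

    def __init__(self, table_name):
        self.table_name = table_name
        self._entries = {}   # stands in for the P4Runtime-backed table
        self._next_oid = 1

    def create(self, attrs):
        """Create an entry; in the real code this would build a
        P4Runtime table entry and insert it into the software switch."""
        oid = self._next_oid
        self._next_oid += 1
        self._entries[oid] = dict(attrs)
        return oid

    def remove(self, oid):
        """Remove a previously created entry."""
        self._entries.pop(oid)

    def set_attribute(self, oid, name, value):
        raise NotImplementedAttr("set attribute not implemented yet")

    def get_attribute(self, oid, name):
        raise NotImplementedAttr("get attribute not implemented yet")

# Usage: configure an ENI entry, then remove it.
eni_api = SaiTableApi("eni")
oid = eni_api.create({"mac": "00:11:22:33:44:55", "vni": 100})
eni_api.remove(oid)
```

The point of the pattern is that every API kind exposes the same four entry points, so tests that only create and remove entries can run today while set/get are filled in later.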
A
So, just calling those APIs, you may see that we configure some of the entries, like the direction lookup, then the ENI lookup (for example, translating a MAC address to an ENI), then ENI to VNI values. All of the transformations and lookups from the pipeline can be expressed with these, and the APIs are eventually called.
A
Yeah, so let's enter the same container; it has all the mounts ready. Probably this step can be automated as well, but for now it's manual, because I haven't put everything in the right place yet, but I hope you get the idea.
A
So let's go to our SAI directory, here it is, and let's take a look at what will happen in the simulator.
A
Okay, one more time, yeah: make sai.
A
All right, so what this is doing is two things. First one, as I mentioned: it will load the pipeline for you. So that's number one; you can see the output from the logger that it is setting up the pipeline, like all of the tables, the default actions for those tables and so on. You don't really need to worry about that. Second, it will populate those entries one by one.
A
The configuration is done step by step: an entry is added to the direction lookup table, the ENI lookup, the ENI-to-VNI mapping and so on, and then this is pretty much ready for running the traffic, yeah.
So that's what I have and, as I said, by the end of the week I will raise a pull request, organize all the files nicely, and probably you will also see there some Scapy script that will send the traffic, to have a complete picture. So we will have something to work with, and from that point we can...
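The configure-send-verify flow of the traffic test being described could look roughly like the following. This is a plain-Python stand-in for the eventual Scapy script; the toy pipeline, table contents, and field names are all illustrative assumptions, not the actual DASH schema.

```python
# Toy stand-in for the traffic test flow: configure lookup entries,
# push a packet through, and verify the expected transformation.
# The "pipeline" is deliberately simplified: it maps a source MAC to
# an ENI, and the ENI to an outbound VNI, as configured.

def make_pipeline(eni_by_mac, vni_by_eni):
    def process(packet):
        eni = eni_by_mac[packet["src_mac"]]        # ENI lookup
        return {**packet, "vni": vni_by_eni[eni]}  # ENI -> VNI mapping
    return process

# 1. Configure the entries (what the unit test does via the SAI API).
pipeline = make_pipeline(
    eni_by_mac={"00:11:22:33:44:55": "eni-1"},
    vni_by_eni={"eni-1": 100},
)

# 2. Send a packet and 3. verify we got what we expected.
pkt_in = {"src_mac": "00:11:22:33:44:55", "dst_ip": "10.0.0.2"}
pkt_out = pipeline(pkt_in)
assert pkt_out["vni"] == 100
```

In the real test the packet would be a crafted Ethernet/IP/VXLAN frame sent to the running simple_switch_grpc instance and captured on the egress port; only the assert-what-came-back structure is the same.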
A
You will have time to review it next week, the pull request itself. So we will circle back next week on the PR.
B
Marion, this is just excellent, excellent work; amazing, actually, to see this, you know, for the very first time. I'm sure people will have questions, but congratulations, and the community all appreciates the work that you've done. It's great, and everybody else who, you know, helped you put that together. And I hope that the community will come together and start writing tests, because that's just as important as the model itself.
C
A
So there is no explicit SAI API to load the pipeline, but what I mean is that it is part of the library implementation: it will load the pipeline for you before executing any API.
A
Yeah, so if you notice, I will stop the container; so here in the Makefile, where we are running the switch: first, when you compile the code, you will get the pipeline in a predefined path, right, and the other P4Runtime files needed. And when you run the switch, we mount those files, also in predefined directories, in the Docker container, like /etc/dash and the file. And then, when you execute any test, it will know to take the pipeline and the P4Runtime information from the predefined path. So, as long as you stick to the same environment, it will work.
C
A
Saithrift, yes. So saithrift is one layer above: we have the SAI APIs, and then you can implement different clients for the SAI API. So the most trivial one was the one that I did, like just manually writing my test, right, which directly calls the SAI API. No RPC, nothing; we directly link with the SAI library and we execute the calls directly. There is another way, however; there is the...
A
I don't know exactly how it's implemented, because it's been a long time since I've looked into that, but there is a Thrift RPC for SAI, so that you have the RPC server sitting somewhere, and that can understand any of the SAI APIs. And I don't know if any change is needed to incorporate the overlay APIs, which are auto-generated; whether I would need to change anything.
C
Yeah, I think that's what I kind of figured, that you have kind of a direct C interface right now. That's great for proving the proof of concept at this level. I mean, this is a great milestone; I'm really excited about this. I think what we need to do is have some subsequent discussions on how we're going to integrate that into the RPC approach. Reshma's not here on the call (I think she's out for a couple weeks), but we'll want to have that discussion about how to... yeah, so.
A
Thrift, I believe, is not really a big deal, because even the main SAI is extended with new objects all the time, and we don't do much work to support that. So it's either, like, add a new object type and then everything else will be taken care of, or it's even simpler than that. So everything's...
C
Sorry, I don't think there's any technical issues. I think it's just going to be a matter of getting the workflow right, right, yeah. You know, to get the SAI repo, where saithrift is, to see the new SAI headers that you're generating here, and then produce the final server, right, client and server.
E
Marion, thanks a lot, really; you know, excellent work. Definitely, you know, a major, major accomplishment. I have a quick question on the testing side of the thing that you carried out, right. So, currently, how it exists today, before your PR is going to get merged: the way I see it, when I carried out all the testing there, what I saw was, you know, we ran the simple_switch CLI in order to populate the tables, right?
E
So what I see right now, just trying to see the difference between what we were doing before and what you're doing right now: are you really using the SAI API now to populate the tables, right, and then, in turn, the SAI APIs are basically making those P4Runtime calls to, you know, populate those tables? Correct? Is that right?
A
Yeah, yeah. So here you see, we have everything now in the SAI format: you have the entry, you have the attributes of these entries (you can find the full definition in the headers) and then the SAI API call. So, yeah, this is conforming to the SAI APIs, but under the hood it is translated to... yeah, I can show you that. Well, let's take the SAI dash .cpp: create ENI, for example,
A
ENI to VNI, and create outbound. So what's happening here is that the signature is SAI; however, it is translated into P4Runtime values. Like, you have a table ID from the P4Runtime file, you create a match-action entry, you create an action, and so on and so forth. So you populate it using the P4Runtime APIs and eventually, at the end of this, you will see something like: you insert an entry into the table. So, yeah, now there is no need for manually adding the entries using the CLI.
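What such a generated create function does might be sketched like this. The real generated code is C++ and uses the actual P4Runtime protobufs and p4info; the table ID, field names, and function name below are invented for illustration: a SAI-shaped signature on the outside, a P4Runtime-style table-entry insert on the inside.

```python
# Sketch: a SAI-shaped create call translated into a P4Runtime-style
# table entry. IDs and names are illustrative, not from a real p4info.

P4INFO = {  # stands in for the data parsed from the P4Runtime file
    "eni_to_vni": {"table_id": 0x02000001, "match": "eni", "action": "set_vni"},
}

inserted_entries = []  # stands in for the software switch's table state

def create_eni_to_vni_entry(eni, vni):
    """SAI-style signature outside, P4Runtime-style entry inside."""
    info = P4INFO["eni_to_vni"]
    entry = {
        "table_id": info["table_id"],       # table ID from the p4info file
        "match": {info["match"]: eni},      # the match part of the entry
        "action": {"name": info["action"], "params": {"vni": vni}},
    }
    inserted_entries.append(entry)           # the INSERT write request
    return entry

entry = create_eni_to_vni_entry("eni-1", 100)
```

The caller never sees the P4Runtime details; it only passes SAI-level attributes, which is why the same test can later run against other SAI implementations.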
E
Right, awesome, this is great. So I can really see these things, since I actually ran the manual piece, and now I can, you know, really appreciate how you basically have come up with this automated one. So what you showed there were a couple of files you showed me, right: how much of that is actually auto-generated, and how much is basically manually coded?
E
I see, so the one manual test is manual, and the SAI dash .cpp was auto-generated completely. Yeah, okay, okay. So the steps were basically that, you know, when we went into this: you know, when we did the make docker, and then, you know, when we did the p4c compile part and generated the, sorry, generated the P4Runtime code,
E
then, when we went back to the SAI step, it essentially pulled all that P4Runtime code into the generated SAI APIs, and essentially that's the total implementation. Okay. This is great, this is great. Thank you, appreciate it. I think now we can connect all the dots and see how it all comes together; appreciate it.
E
SONiC, yeah, it's being used in SONiC, exactly. So is that integrated here? Because, remember, you showed that you can populate the table by making those SAI API calls. The next step would be to really use PTF to send out the traffic, to ensure that, okay, all those tables are hit, and then we can see that, you know, whatever was basically populated is really getting exercised.
A
Yeah, it's not in the Docker image yet, but I don't really see a problem with that, because that's just a Python library. Sure, sure. So, of course, yeah, when I add a traffic test I will run into the fact that PTF is missing, so it will be there as well. Thank you.
C
Yeah, and if I wanted to add on to your comments: yes, the SAI framework also uses, you know, PTF, and that's really well known in SONiC for kind of functional testing, not at line rate, and it's easy and popular.
D
Yeah, that was a lot of work. Thanks, Marion. Anything else from you, or should we give you a break?
A
For me, yeah: I talked to you personally about that, but I want to update everyone. There is still work going on with regard to connection tracking, to support that in the simulator. So we have, I think, all or almost all of the implementation for the simulator already.

We are still pending some things in the community, but I don't want to be blocked by that. So in a week or two we will also introduce the connection tracking support in the simulator.
E
Great. So, you know, today, right, as people have experienced this thing: when they try to do this, you know, all the steps to build, and go through make docker, and then go through the P4 compiler and so forth, you know, with all the behavioral model...
A
Yes, so those two not-yet-supported features, probably we can, by default, leave them out. That's a good idea, and as soon as we have support, we can turn them on. With that, yeah, yeah.
B
Actually, it's the connection tracking, yeah. So can you give us an update of what happened? Maybe it was last Monday or something; there was a meeting with the P4 group. Whatever happened there; does anybody have an update?
D
Or anyone. Next week was the smart conference, and I think it was canceled, but the one before...
A
Yeah, that would be good. I just don't know, yeah, what to start with. Probably we need some help from someone who has experience. Okay, okay.
C
Yeah, because that way, for people who might not be familiar: it's just kind of a CI/CD automation pipeline workflow where, yeah, of course, you check in code... well, not everyone on this call might be familiar with it.

So I just wanted to briefly say: if you check in code to something, you can have a Git framework recognize something that requires a test or an action or some step. And in this case, what we could do is actually all these steps that Marion just showed us, to build the switch and run it, etc.; that could all be done automatically any time there's a commit done to this particular sub-project, and it would pass or fail, and that would be part of a pull request.
E
That's excellent, you know. This is an excellent suggestion, Chris; this we definitely need. And I was also asking in previous meetings about some sort of unit testing for people who are submitting their, you know, behavior models, as we have quite a list of things in the project dashboard that people are going to bring in. Yeah, there has to be some, you know, checking to ensure that, okay, what people are submitting has been tested.
E
It gets integrated because, remember, once this thing is in play, it becomes part of, you know, when people check in their changes to the behavior model, or they start to submit, you know, their contributions. We want to ensure that everything basically remains kosher when people check out, and then when people try to build the pipelines and then try to carry out certain testing in the simulated environment,

they don't start to, you know, see failures, right? So this is a great way of actually putting these checks in place, right, to ensure that whatever is getting checked in is thoroughly vetted.
C
And
thanks
for
you
know,
backing
that
up
and
even
one
step
beyond
that
we
could
have
building
and
generating
artifacts
that
are
put
some
kind
of
repository.
Much
like
sonic
is
built
regularly
and
there's
a
status
dashboard
et
cetera.
You
know
this
particular
build
passed
or
failed.
You
know
anything.
C
It
creates
artifacts
that
you
can
just
download,
like
you
know,
sonic
build
image
for
you
know
xyz
asic,
that's
you
know
that's
going
to
take
some
work,
it's
quite
a
bit
of
infrastructure
work,
but
I
think
it's
a
good
aspiration
right
so
that
people
don't
have
to
manually
get
clone
go
through
all
the
steps
build
it
have
it
in
the
work
directory
and
then
start
playing
with
it.
There'd
be
some
artifact
already
there,
but
that's
that's
kind
of.
D
Good, good, thank you. And just FYI for everyone: we've been meeting in the high availability working group, and we met the other day to continue documenting requirements and how we might want to handle future pieces of work. We're also still doing the behavioral model work group, where we're working through the work items needed to complete the behavioral model for VNET-to-VNET and for the smart switch RFI.
D
Microsoft is going to review feedback on May 10th, and it looks like Prince had gone ahead and converted... we had the JSON, and he's converted it to YANG and given that to us and the SDN team to review and go over. And it also looks like we're still waiting on the metering document that was supposed to be delivered after a few weeks, so I'll go back and check on the metering document. I know there are other priorities right now in front of that.
B
Next week we should meet on that, to make sure that the community actually understands that northbound interface and that he's commenting on it. And, you know, I don't want him to just convert it and then, you know, it's just missing a bunch of stuff. So we should have another meeting on that, and people will ask questions, but give...
E
Let me find... yeah. I saw that PR, and thanks a lot, Prince, for really, you know, heading this effort and taking this initiative of converting from JSON to YANG. And I started looking at it, and it looks like, you know, it's a great start.
E
So definitely, you know, for people who basically want to see the data models in the YANG format, it will really help. So, thank you. I have one question about, you know... I saw some PR about some document on the holistic design or something like that. Is there something going on, and is there an update? What is that, yeah?
D
Yeah, basically, we've gone ahead and expanded on the initial document, and we wanted to add even more information around each piece. So we took a copy of what's published and we've added to it, and vetted it through Gerald and some other people, and then, as of yesterday, we'll remove the old one (I think we called it the HLD) and we've named the new document the same name. So all the pointers will stay, but it's just a more fleshed-out version.
C
Yeah, Christina, can we look at this diagram? I put the link up. Since we started talking about schema, I wanted to just share this picture and, since Prince is on the call, hopefully he can answer a question that's been burning in my mind. So I tried to capture the relationship of the different schema layers in that diagram. If you can just scroll down to that: you know, it shows how we have the gNMI northbound schema defined in YANG.
C
It gets translated, in the DASH gNMI container, into APP_DB objects, and then those get transformed into ASIC_DB objects through the orchestration, right, and then ultimately applied to the SAI interface. And sonic-cfggen is kind of the canonical way of importing and exporting app-level configurations using JSON.
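The layering being described (a YANG-modeled northbound object flowing down to APP_DB key/value entries, and eventually to ASIC_DB and SAI) might be sketched like this. The object shape, table name, and key format below are illustrative assumptions, not the actual DASH schema.

```python
# Sketch of one config object flowing from a YANG-modeled northbound
# representation to an APP_DB-style key/field-value entry, the way the
# gNMI container would write it. Names and key format are illustrative.

def to_app_db(obj):
    """Flatten a northbound vnet-like object into an APP_DB-style
    (key, field-value map) pair."""
    key = f"DASH_VNET_TABLE:{obj['name']}"
    fields = {"vni": str(obj["vni"]), "guid": obj["guid"]}
    return key, fields

northbound = {"name": "vnet1", "vni": 1000, "guid": "abc-123"}
key, fields = to_app_db(northbound)
# key is "DASH_VNET_TABLE:vnet1"; fields hold the stringified values
```

The translation is close to one-for-one, which is the point made below: the interesting work is naming conventions and value encoding, not restructuring.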
C
I was just doing this so I could understand it and ask some questions about it. And one of the tasks that's going to be coming forward, I guess, for the DASH gNMI container developers, is translating the YANG into APP_DB, and that's kind of just a mapping exercise.

As far as I understand, it's almost a one-for-one translation from one schema to another, and to me that's an opportunity for writing tests against all these interfaces with one source of truth: having one set of declarative data, which I'm proposing we think about using the sonic-cfggen format for, as a way of starting with one set of test vectors and then applying them to every interface that's applicable. That's just a proposal.
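The proposal could be sketched roughly like this (the vector shape, formats, and field names are invented for illustration): keep one declarative test vector as the source of truth and render it into each interface's representation, so the same data exercises every layer of the stack.

```python
import json

# One declarative test vector: the single source of truth.
VECTOR = {"vnet": {"name": "vnet1", "vni": 1000}}

def render_json(vec):
    """Render for a JSON-based northbound (e.g. a config-gen style file)."""
    return json.dumps(vec, sort_keys=True)

def render_app_db(vec):
    """Render as an APP_DB-style key/field-value map for a lower layer."""
    v = vec["vnet"]
    return {f"DASH_VNET_TABLE:{v['name']}": {"vni": str(v["vni"])}}

# The same vector, applied to two different interfaces.
as_json = render_json(VECTOR)
as_app_db = render_app_db(VECTOR)
```

Each renderer is a small, testable translator; adding a new northbound means adding one renderer, not rewriting the test vectors.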
F
Yeah, I think, sure, we can consider that, but again, like, sonic-cfggen is not for the APP_DB; it is mainly for the config, so...

But overall, this makes sense: the config back end of the gNMI container is the one that puts that data into the APP_DB. However, if you look at the SONiC YANG model, it's pretty much very similar to what we will eventually have in the DB, so it's kind of a one-to-one mapping, and there are no, you know, complex conversions required at the gNMI layer.
F
So, of course, we would definitely need some sort of testing of these APIs. We can...
C
But doesn't that extract... doesn't that represent data that could be all the configuration objects we need, or is it a different format?
F
So these objects are all translated directly to the APP_DB, right? Yeah. So SONiC has different DB instances: one is the APP_DB, one is the CONFIG_DB, for all those static configurations.
G
But if we do a cold boot... if we do a cold boot, SONiC usually loads the config from the DB. In this case, if we do a cold boot on this appliance, basically everything will have to be pushed again by the SDN controller.
F
Absolutely. So on a cold boot, nothing is required, so...
C
We could then use that, you know, agnostic format, or maybe it's tied to one of these interfaces, and then translate into all the other formats. Because what we want to do is have test vectors we don't have to keep rewriting for every northbound; you know, we can translate them or render them on the fly.
C
The data... you know, an IP address is an IP address. I don't care if it's represented as YANG or JSON or SAI, right? It's still the same information, so we should try to have one agreed-upon source of information for those. And so that's the question: do you have, off the top of your head, a recommendation?
F
We have some references for that, like even currently existing in SONiC for such validations. Maybe, maybe... I think we can take it offline and...
C
Yeah, get back on that, yeah. We can't hash that out now, but I just wanted to throw it out there, because you put up, you know, sort of an example JSON file to give people an idea of the kinds of config objects, but it was just treated as, sort of, an example, right, to get the conversation going. It wasn't a definitive schema.

Yes, I think it's important to have a definitive schema that test vectors can all be represented in, because otherwise we're going to be doing way too much work over and over again.
F
So, one more thing to clarify here: the JSON example is just for reference, for sure. Okay, right. The actual schema definition: you can either refer to this YANG file, or, in the SONiC HLD, there is a section for the schema definitions; the one that, I think, you have also mentioned in your diagram.
C
It seems like a good place. And, you know, what we want to do is come up with a method where we have declarative test data, and then you just write a simple translator to turn it into whatever northbound you want, and you can run all the tests at every level of the stack, and you should get the same result in the data plane. So it's a way of really, you know, verifying correctness everywhere. And, you know, I'm a lazy engineer.
C
A shared library or something that we could use, yeah. I wanted to propose that we try to do things that way, where the transformation is encapsulated and not tied into lots of plumbing. Like, in some code the transformation is the plumbing: you can't separate the database access from the transformation, because it's all kind of, you know, intertwined. If we can isolate the mapping between different schemas as a shared library or some piece of code, it would really help, you know, in testing, unit testing, validation and comprehension.
F
Yeah, I think we can. We can plan something like a test gNMI container that just accepts some API calls and then writes to the APP_DB, so there are no other dependencies required.
E
So, in the same vein, you know, we need to also start to look at the ASIC_DB, right. So we need to see what the schema for the ASIC_DB is going to look like, such that, you know, whatever gets populated in the ASIC_DB can get translated into, you know, the SAI API, eventually, to talk to syncd, which eventually goes to this one. So, is there any...?
F
So typically we don't have the definitions of the ASIC_DB schemas, because it's a little bit cryptic to understand, right, right. So, I think I got the point; we can sort out some examples to capture what it will look like, but...
E
Right, right, so yeah. But if you mandate the orchagent to write it in a format that is easily translatable from there into... because eventually it's the SAI APIs that the orchagent calls in order to populate those ones. So if you say that, okay, here are the SAI APIs and here...
D
Okay, I'll put that in the notes, where hopefully we start thinking about it in the next couple of weeks. I'm gonna go ahead and stop the recording and, you know, thank everybody for their time and their input into the conversation, and thank you, Marion, for the presentation.