From YouTube: DASH Workgroup Community Meeting Sept 21 2022
Description
Reshma hosting - thank you!
A
Good morning, everyone. I'm Reshma from Intel, and I am helping out to moderate some of these DASH community meetings this week, as Christina is out of office. We had a very good meeting yesterday on high availability, presented by AMD Pensando; I will paste the link to the document here. Please go through the document and add your review comments to it.
A
What Christina had said is that this meeting format is an open forum. So today, if you have any questions, we can discuss the open PRs that need reviews, or any other discussions that you may want to have today. Let me share the screen to show the PR that I was talking about, the AMD Pensando PR. Sharing; here it is. This is the one we discussed yesterday, and I'll paste the link in the chat. So, does anyone have a PR that they want to discuss today, or any other topics?
B
I'll take the screen. Okay, so last week we had a presentation on this framework, a sneak preview, and it took all the time, so we didn't take time to review some PRs that had been closed in the days preceding that. I just wanted to mention them so people are aware.
B
This one fixed the IPv6 packet noise, which we talked about in some previous meetings. It was a matter of disabling IPv6 in the Linux stack in the test environment, so that we don't get IPv6 control-plane packets breaking test cases because we're receiving packets we didn't expect. So that's in there. But then, shortly after that, our xcsc development team actually added a feature at my request where you can disable IPv6 when you deploy xcsc.
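The IPv6 quieting described above comes down to the standard Linux sysctls. As a minimal sketch, a test harness could render the two usual keys as a sysctl.d-style fragment before bringing up test interfaces; the helper name and structure are illustrative, not code from the DASH repo:

```python
# Minimal sketch: render the standard Linux sysctl keys that silence IPv6
# (so stray ICMPv6/neighbor-discovery control-plane packets don't break
# packet-level test cases). Helper name and dict are illustrative only.

DISABLE_IPV6_SYSCTLS = {
    "net.ipv6.conf.all.disable_ipv6": "1",
    "net.ipv6.conf.default.disable_ipv6": "1",
}

def render_sysctl_fragment(settings):
    """Render settings as an /etc/sysctl.d-style config fragment."""
    return "".join(f"{key} = {value}\n" for key, value in settings.items())

print(render_sysctl_fragment(DISABLE_IPV6_SYSCTLS), end="")
```

Applying such a fragment (e.g. via `sysctl -p`) needs root, which is part of why baking the option into the deployment, as described above, is the more convenient route.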
B
Another one was about, let's say, extra work and extra clicks: there was a level of folders called requirements, one called documentation, and one called design, and it turned out it was just annoying people. So he flattened the structure a little bit and cleaned it up, and I helped review that; it was closed yesterday. And one more happened: this one fixed the Docker and makefile permissions, which I've talked about a number of times.
B
We did merge it just before last week's meeting, so there's much less use of the sudo command in the makefiles, and that will make people's lives easier. The only time it's invoked is when you do some Linux networking, where you have to have group permission. So I just wanted to cover some of those, just for closure, and then let's look at some of the open PRs. One is, let me find it here.
B
So this was a significant amount of work to get it all working, but basically the Docker files and makefiles and everything have been enhanced so that everything is not being published to my personal Docker Hub anymore; it's now going to ACR. I want this to bake for a while, and if anyone wants to try it out, that would be nice. I've also documented the workflows for maintaining Docker files; right now it's just been a black art that a few people have had to work with.
B
It should be something that anyone with the right permissions can do, so I've documented the workflows for that in this pull request, with extensive instructions and diagrams on the workflows; there are multiple different possible workflows.
B
You can see there's quite a bit of detail here. So the ask for the community, and then for Microsoft, is that there should be a few people with maintainer access, because you have to have access to the main DASH repo in order to create branches, and we need branches in the main DASH repo in order to publish images. So there's a little bit of an administrative aspect to this.
B
Not everyone can just mess with the Docker files, because it requires permissions on the main repo to publish. So we'll probably need a few people as backup who can be nominal maintainers, who can understand and work with the Docker files as needed, because these Docker files are essentially central resources for all the build resources and some of the logic, and we need a few people who can step in when they need to. So that's just an open item.
B
Hi Marion. We can talk offline too, but we probably want to work a little bit together on this, do a little transfer of information, because you'll be making new Dockers for the bmv2, and so some of this will come into play. I think you'll probably use this workflow here, which is simpler, because you already have maintainer permissions on the repo, but I just wanted to bring it to people's attention. There's this sort of tribal knowledge that I'm trying to codify.
B
It wouldn't be a bad idea to at least have two or three people. It may look more daunting than it really is, but sometimes you have to go through it a few times to get it; GitHub operations are sort of complicated enough without adding Docker publishing to them. I would say, if a few people want, we can even have a working session sometime and just go through some of this, but it's documented here.
B
So if something happened to me tonight or tomorrow, there'd be something here.
B
So that's that, and that's baking; there's no urgency to commit this right away, so there's time for people to try it out. Then I wanted to mention that Hanoff has been working on this IPv6 ACL support, and Mukesh and I have been reviewing it and communicating with him, and we're ready to approve it, in case anyone has any last words they want to weigh in with.
B
It's pipeline support. He's also working on test cases that he's made some progress on, but he's not ready to commit those because there are some issues. But I move that we submit this change today, let's say; maybe we'll wait till end of day in case anyone wants to look at it at the last minute and weigh in, but there's already been some review.
B
End of week is fine. I suppose the main thing we want to avoid is too much backlog; if there's lots of work in progress, we start stepping on each other's toes, but I think this area is pretty quiet.
A
Sure, we have a couple of items to discuss, definitely.
B
Okay, well, I've finished what I wanted to talk about today, so I can stop my sharing. If anyone wants to talk about this today or tomorrow, that's fine.
E
Yeah, Chris, thanks a lot for doing this ACR part. I volunteered to try this one out and send you some feedback.
A
Thank you. I think Vladimir may also volunteer at some point, so we'll also try it out and let you know, Chris. And just for everybody's information: Keysight, that is Chris and team, and Intel meet often, bi-weekly, to discuss all the test-related items in terms of test cases, CI, etc.
D
Yep, sure. Do you see my screen? Yes? Okay, great. So today I want to talk about the pull requests. Actually, I will be talking about the first pull request, which is about the test cases; we also have another pull request where we started adding the test plan, which Anton will probably cover. So first of all, I'll start from this pull request, where we're adding the basic, let's say, infra and starting to add more test cases: overlay, and actually VNET-to-VNET.
D
These are PTF test cases, let's say, added to the DASH repo. So here is a brief description of this pull request: what we are doing here, the main changes, what has been added, what the current limitations are, and our future plan for these test cases. There's one review on it from Chris already, so we want to make some modifications to this pull request to avoid some duplication; this is something that we want to address. I'll also just quickly show the test cases and where everything is located.
D
So we are going to put the PTF test cases under the dash-pipeline/tests/saithrift/ptf folder. Okay, there are some files that are duplicated from the SAI side; this is what we want to remove, to avoid the duplication. But the main piece here is the SAI DASH VNET test case, which is just too big, I would say. So we have some basic, let's say, utility stuff added that each of the test cases will use, and we added a few use cases for the ENI and VNET testing.
D
Of course it will still be updated, and more use cases and fixes will be coming in over, like, the next two weeks, and more and more use cases. Anyway, this is for everyone to know where the PTF tests will be located; maybe someone will have, you know, some...
D
...confusion about why it is located in the dash-pipeline tests folder, or maybe someone will have a suggestion for a better place where it should go. So any comments are actually welcome; please go review and leave any comments, and we'll be glad to address them. But this is just the basic, first, let's say, pull request where we are going to add the VNET test cases.
D
But as I said, we're going to add more. And if we go back to the pull requests, we also started adding the overlay test plan. So, Anton, if you're on the meeting, can you cover this overlay part? Okay, I will stop my sharing now.
F
You can keep it, because I have not much to show, so you can open it. The main thing is actually the description. We put up a landing page for all the test plans, and right now we have two drafts here, for the VNET test plan and the VNET-to-VNET test plan. I would just like people to put in some comments, because we are already working on it right now: we are actually doing automation of all those test cases.
F
We even have some files that we are going to push, and basically some fixes; we're going to extend those test plans, by the way, adding some new test cases. But you can take a look already and give your comments on basically anything: the structure, the scenarios. If you see that something important is missed, please give us comments; then we'll be able to handle it as an issue or put in the test case. Right now, as for what is automated...
F
...we are putting in test names, so something that is not automated right now just doesn't have one yet. So that's a work in progress; anyway, everybody can take a look and give your comments. And one thing I also want to raise a little bit again, a question that was already asked before, is the place for the test cases, because right now there is, to my sense, some kind of confusion.
F
We have a test folder in the root, which, from my side, is kind of supposed to be the place for the tests. On the other hand, the current CI is running test cases from dash-pipeline, and I would like to stabilize a place for the production test cases, because what will be pushed there is actually production-scale tests.
F
Those are going to be used for verification of the real implementation of DASH. Let's settle the folder, because for the next pull requests, committing the ENI test plan and the VNET and VNET-to-VNET use cases according to the test plan that I pushed in the pull request, I'd like to have some stable place to push all the files: how to link utilities, tools, and everything else.
F
So if people here have some comments about the placement, whether we should use the root test folder or somehow stick to dash-pipeline, please give your comment here.
C
Well, this is Gerald. I think Intel, Keysight, Mellanox: between the three companies you've done most of this, and you could probably decide amongst yourselves. I mean, since it's all your content, maybe you guys could just decide on one area, make the decision, and do it, because I don't think anybody else in the community is going to actually be as opinionated.
B
Sure. Maybe what we can do, one way to do this, is to file an issue, and then we can kind of have a discussion thread that way; that way we don't clog up the meeting. I think it's a good idea. I knew this day was coming anyway. I think the dash-pipeline test directory was kind of a sandbox to get some test cases going in the first place, but we probably need to have production-level tests up in the main test directory.
B
It's open to the public, and that's GitHub: it can be 300 threads, it doesn't matter, it's free storage. And on a related topic: some of our meetings are kind of one-on-one, or just a few people working, and it's come up that there are discrepancies in the behavior of the bmv2 model, so some tests aren't going to pass right now or are lacking functionality.
B
Two examples right off the top of my head from today's conversations: the VNET inbound test has some questions or issues; we need the VNET inbound functionality to work the way it's supposed to, and there's a question about that. The other is ACL groups, which aren't really implemented and working in the behavioral model, and we really want to start writing test cases for all of those.
B
So what I've asked people to do is try to come up with a concise list of the gaps we need to close, so we can focus on those as a community. Maybe we'll have a list of items for tomorrow's behavioral model working group, because I feel like some of these things are just out there, they're not being closed, and they're preventing the real test work from being done. And I think, Vladimir...
B
...you brought up some things Monday, some issues in your test plan that won't actually work in the behavioral model, and Anton brought up some other things with me this morning on the VNET end. Does anyone have any comment about the bmv2 and how complete they think it is for implementing VNET inbound and VNET outbound?
A
I would let everyone here talk about it. A quick mention is that we have the behavioral model meeting as well, on Thursdays. Do we want to discuss more there and note down the points here? Both are open; it's up to everyone here, if we want to discuss it here.
B
You know, the bmv2, we've been moving along, but I think we need to have some crisp definitions of what we want working and when. It's almost like test-driven development: we want some test cases to be complete and work, so let's do everything we need to make those test cases work, and have some crisp definitions. I think it's time we do that.
E
I've worked recently on this one, and I believe we lack quite a bit in terms of test case availability. I think we have implemented a good amount of functionality for the VNET; in fact, it is almost ready. But we don't have the test cases to test a variety of different areas, and also to ensure that we are not going into any regression.
E
We thought that we had IPv6 routing and IPv6 processing working, but during my recent testing I found out that there were some issues. There were bugs that I fixed, which were not related to the IPv6 ACL. But while modifying the VNET test case, I found out that there were things that we could have caught earlier.
E
So yes, there are definitely gaps there. But at the same time, I also noticed that we do need to fix a lot of things in our infrastructure. What is going on right now is...
E
...we have a setup where we run the bmv2 switch, we run the saithrift server, and then we issue commands through the client. Right now, when some issue occurs in running the test, the cleanups are not happening very well, and you need to kill everything and restart everything in order to get to the next test. So those are things that we also need to fix.
E
From that point of view, another thing that I noticed, just giving feedback from going through the workflow, is that there is a lack of documentation that we really need: how do we introduce a new test case, how do the different test classes get called, what is the order, where do things go, and so forth. There are a lot of things that I actually figured out just by trying it out and reading...
E
...the code and reading through things like that. So I think we do need to outline quite a few of these things and then start addressing them one by one.
B
Yeah, that's good feedback, thanks. Just a quick answer to the question about the documentation.
B
Just because of lack of time, there's really no documentation on how to use PTF and some of the issues you brought up, like how to run a test case or how to add one. People who do PTF all the time know how to do that implicitly; it's kind of standard fare. But there could be a nice intro page, at least a couple of paragraphs giving the highlights, and you and I had some back-channel discussions about how to run a test case by itself.
A
This is good feedback for sure. We had added the PTF readme just when Chris was actually about to start the CI part, and he had asked similar questions, so we quickly put together a document. We will share that with you; please give us any feedback on how we can improve on it. We can certainly add more to it.
E
Thanks to Chris and Mukesh both, and especially to Chris, for helping me out over the weekend, and on this Friday night and so forth, when I was trying to get those tests done. Chris, you're a lifesaver: literally, as I was getting blocked, you were unblocking me, so thank you so much.
B
Yeah, enthusiastic users are really what we need, so thanks for being brave. As I get time, I'll try to fill in some of these gaps in the documentation too.
B
So people don't have to spend so much time learning the PTF 101. But when it comes to test cases, for example when we add functionality or change functionality in the pipeline, again I think we ought to get some kind of consensus in the community that, going forward, if the P4 developer changes something or adds something, it would be nice if, either at that time or right afterwards, we have a test case to validate that functionality and also to catch regressions in the future.
B
Otherwise we have technical debt accumulating, and then we have resources lined up, even budgets created, to get people to write tests, and then they hit blockers, because it's not even actually working yet. So I just want to impress on people the need to reduce our technical debt and keep our balance at zero or higher, not negative.
B
So I'm hoping that maybe tomorrow we can come up with a hit list of what things we need to close in on or verify, and in what order, so we can make meaningful progress. Because I know Keysight and Intel have engaged services, and there are people being hired and work being done, but we don't want to have blockers, because that just slows things down.
A
Yeah, I just want to add that Anton has a very extensive test plan. I think he has added the link to that test plan in the chat; if not, we will add it for sure. Please go through that. Anton, would you like to present it and just show how the layout is, what is where?
F
Okay, so please take a look: at this moment we have added a landing page. I put it under the test docs; that's why I asked that question, actually, because what we expect to hear is where to put the test plans. Here's the landing page; I will add the VNET-to-VNET plan link here, so right now it has some overview. Because we will have multiple different use cases and requirements, right now I put in the test suites that we expect to cover.
F
At this moment we covered the initial ENI configuration, and we need the VNET configuration also. We should have separate suites for connection tracking, for ACL, for other VNET cases, and I expect that in future we will have some links here to the existing test plans. By the way, I added requirements: I combined everything that I found in the HLD. Please take a look.
F
So if you have some comments, I would like to fix it, because in some places I found ambiguity: for example, in one place I found 100K and in another document 200K, so what do we actually expect? Also, I heard that that's a minimal number, so the performance should be more; the flow count, 64 million, for instance.
C
Okay, I don't know what version Anton is using, but we did update the numbers recently.
A
I will go through and get the latest for sure. I actually probably was thinking of a different document, but I don't remember which one that was. Maybe we can present it in the next meeting, yeah.
F
That's good, because the data, especially the numbers, appear in many places, so I'd like to clarify and bold just the places I found ambiguous, where I didn't find an exact number for what to expect. From here we can actually go to the two test plans that we have. One is ENI: we just create and remove ENIs in various scenarios. So yet again, I put in some requirements from the HLD to understand the purposes and the objectives of this plan, and here are the suites.
F
I want to give a little spoiler here: we already automated that stuff, so that will be pushed once we actually merge and stabilize the existing cases; we're not pushing these cases one by one. Then we'll know that here is our root folder for all automation, and then we will start adding more suites. So that's something that we already have and that I'm going to upstream soon; that's something I wanted to mention. So maybe even volunteers can do this.
F
For now our target is to close the VNET-to-VNET, yeah.
F
So I would also like people to comment if something is missed here; that would be really nice, so that we will know it. But anyway, this is work in progress, so we already actually extended a number of these cases in the ENI creation test plan, for example, to adjust to the recent changes, because when I created this test plan we had a slightly different ENI create API, and also we need the VNET-to-VNET, so that one is more like a fresh use case.
F
So we are currently making progress on it. The suites lead to the outbound and inbound test cases, some integration test cases, negative, scale, and performance, and I put in a section to clarify the future plans that we are not covering right now; maybe we'll add them in the next phases.
F
Okay, yeah, sure, I will show it. So this is something that we actually reused from Vladimir's tests, which already covered these cases: the outbound, inbound, integration, negative. So, for example, we want to send traffic for an invalid...
F
I see, cool. Yeah, so scale and performance we haven't defined yet; that's for the future. And yes, once we have, by the way, the stabilized framework for the high scale and high-rate traffic sending, we're going to set some use cases here; right now it's, like, for future work. So here I am not sure that we should stop on this in that meeting, but you can take a look.
F
So there are some questions that I put in to clarify. I will really be happy if somebody leaves a comment, because, for example, at this moment I just looked at the VM, VNI, VNET ID, and ENI create; I need to understand, I would like to have a comment about the relation, actually: what the reason is, how they are both used, what we expect to have in each one, because, for example, for me in these cases it's not...
A
And then we can discuss that in the behavioral model meeting, or internally as well, yeah.
F
Okay, yeah, but anyway. I saw that Prince assigned somebody from Microsoft to review this, so we would like that, because we have some statements in the HLD. So if there are some examples or use cases that I need, that's for the future anyway; I expect that those lists will end up as the scenarios here, so the questions list at least will become shorter and shorter and the list of tests longer and longer in the future. Sounds good, okay.
F
So that's our current activity: we are working on that test plan right now, and on the automation.
A
So I just want to mention to everyone that tomorrow, in the SAI community call, we will be presenting all the work that's been done in terms of the DASH automated test framework, as well as the test cases. We have the PRs for them; we need to merge them to the SAI OCP side. So we'll start with the review; code reviews are already sort of in progress, and Prince has assigned someone to take a look, which is really helpful.
A
I just want to mention that Prince also gave a comment that we need to use the bulk APIs. So yeah, we have plans to start using the bulk APIs alongside the single API tests as they get completed.
A
I see; so if we add that in the test case, you're saying the behavioral model may not work. Prince, any comments? I think it will work for hardware for sure, right; if it is implemented, it should work in the hardware, yeah, but I...
A
Absolutely, we'll have both ways for sure. Even if we implement the bulk APIs, the single API calls would also be there, so nothing that's already working should be broken, for sure.
F
Chris, if I understand, we also need to think about how to skip such test cases in the CI, because otherwise we will instantly have failures per PR, per commit, correct? Yes.
B
If we have a production test case that uses a bulk API right now, it will fail on the bmv2 model unless we do more work in that area, and PTF is not real convenient in terms of marking test cases; that's where pytest is a little better. But maybe someone else has an idea of how to do that. Whether we organize it by directory or something, we need a way to know which tests will run on hardware only and which ones will run on bmv2.
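One way to express the hardware-only split described here can be sketched with stdlib unittest; the marker decorator and the DASH_TEST_TARGET environment variable below are assumptions for illustration, not the DASH repo's actual convention:

```python
# Sketch: tag test cases that only pass on real hardware, so a CI run
# against the bmv2 software model can skip them. The decorator name and
# DASH_TEST_TARGET variable are illustrative assumptions.
import os
import unittest

RUN_TARGET = os.environ.get("DASH_TEST_TARGET", "bmv2")

def hardware_only(test_item):
    """Skip the decorated test unless we are targeting real hardware."""
    return unittest.skipUnless(
        RUN_TARGET == "hardware", "requires a hardware target"
    )(test_item)

class VnetApiTest(unittest.TestCase):
    @hardware_only
    def test_bulk_create(self):
        # Assumed unimplemented in the bmv2 model, so this case only runs
        # when RUN_TARGET == "hardware".
        self.assertTrue(True)

    def test_single_create(self):
        # Single-object create runs everywhere, including bmv2 CI.
        self.assertTrue(True)
```

The same idea could equally be carried by a directory split or by pytest markers; the point is only that the target selection lives in one environment knob the CI can set.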
C
But generally, this is Gerald again: the goal of the DASH group is to build the behavioral model. We don't have hardware in DASH; I mean, obviously we'll have hardware as a consumer, but this group is supposed to be defining the behavioral model, the APIs, the tests, and so it can't be that the hardware is doing it one way and the behavioral model does it a different way or can't support what the hardware does.
C
But my point is: there's no hardware here in this group. DASH is a behavioral model; the hardware is something that suppliers build against the DASH behavioral models and APIs. This group here is not building hardware, not even disclosing the software; in fact, there's no disclosure of people's implementations.
C
So my point is that if we're writing test cases that we want all hardware to basically pass, then you have to start with the behavioral model, and then everybody will have to match that if they want to get into the network. I don't really understand where the hardware comes in, as far as what this group needs to accomplish.
A
Rather than focusing on hardware at the moment, maybe I can say that there are these two parts. One is the test cases themselves, which should actually contain all the things that we need to test all the DASH APIs, and they should, yeah...
A
They should be enhanced to support all the requirements, all the use cases, all the code paths that we want to test, and the APIs. And the second part is to use the behavioral model to depict all the functionality in software itself. So I guess, and this is just my thought, we will have to work on both the test cases and the behavioral model, to at some point be in sync, and continue that work until they're in sync.
B
My recollection is a commitment that we're going to somehow collectively put in all the work necessary to do that, and the gaps I see are: a complete SAI library that implements all the APIs and all the semantics, including bulk set and get and all those APIs, which we just realized now are missing; and also the underlay behavior, which is not supported in our current bmv2 model. I don't have any plans to do that, and some of the production test cases are going to assume underlay capabilities and configuration.
B
So we have these pretty big gaps. What we have, I think, is an unspoken understanding among many people that the bmv2 will implement more or less the overlay test cases, but not necessarily all the SAI APIs and all the nuances. So maybe that discussion needs to be had, and maybe tomorrow's meeting can bring that up. There's a huge amount of effort to close those gaps and make the behavioral model closer to the hardware, and I...
C
I think we need to discuss that, because that was the mandate of DASH, right? DASH doesn't have hardware; there's no open source associated with what is under the covers. The hardware vendors don't open-source anything. So from the beginning we've been building a behavioral model that would be an exact replica of hardware; so yeah, we do have a disconnect we should discuss.
D
No, at the end, the whole set of test cases should run on the behavioral model too. The marking stuff that Chris is talking about is just, you know, to select which test cases currently work on the bmv2 and can be run in the CI, and that's it. But at the end, once the bmv2 has all the functionality ready, all these cases should absolutely run on the bmv2, and there should not be any difference in the test cases.
F
If there's no other question, I actually have one I'd like to ask, if Prince is here: whether there is any progress on preparing sonic-mgmt as a framework for DASH, because I'm interested and want to be ready in advance: what the topology will be, how it will look, whether some test cases, or at least plans, have started to be prepared.
G
Yeah, I think, Anton, we are referring to your VNET-to-VNET test plan, and we are starting to work on the sonic-mgmt test implementation. But the infrastructure part is already available; you can refer to the PRs related to the DPU and the appliance in sonic-mgmt. So that should already be there: say, we currently have T0 and T1 kinds of topologies, and similarly we have introduced an appliance- or DPU-based topology as well, with a two-port architecture.
A
This is great, yeah. Thank you, Prince. I think Anton and Vladimir were asking a few times about sonic-mgmt, but there is a lot of progress already, yeah.
G
So the infrastructure we are taking in different phases. One is setting up the infrastructure so that we can just run the regular, let's say, underlay BGP and those kinds of tests. The second is the VNET-to-VNET, so for...
A
Thank you. Anything else we want to discuss today? Any comments on the previous week's presentation from Keysight, or comments on AMD Pensando's HA proposal that was presented yesterday? That's an ongoing review that's been going on for quite a few weeks, but I just want to ask everyone here.
E
Okay, but in fact, for the future presentation, which is for tomorrow, I just want to call one thing out based on my recent experiences, and it is about testing of the test infrastructure itself. I just want to see, you know, if we give some thought to how we are testing our own test infrastructure itself, and also to see that, okay, all the things that we are claiming are working are really working.
E
We do have to have those examples, and then you're saying that, okay, we have tried them out and they are indeed working, so that people who want to try this out, like adding more test cases, or adding more files in certain directories, can ensure that, you know, some of the scripts are really picking up those files and then really try them out. So just, you know, testing the test infrastructure itself, right.
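A minimal sketch of that kind of self-check, verifying that a discovery step actually picks up files added to a directory, might look like the following. The glob pattern and directory layout here are assumptions for illustration, not the actual DASH scripts.

```python
import pathlib
import tempfile

def discover_tests(root: pathlib.Path) -> list[str]:
    """Collect test module names the way a simple test runner might."""
    return sorted(p.stem for p in root.glob("test_*.py"))

# Meta-test: drop a new file into a directory and confirm the discovery
# logic really picks it up (and ignores non-test files).
with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    (root / "test_new_case.py").write_text("# placeholder test module\n")
    (root / "helper.py").write_text("# not a test; should be ignored\n")
    found = discover_tests(root)
    assert found == ["test_new_case"], found
```

Running a handful of checks like this against the real scripts would document, and continuously verify, the claim that "adding a file in this directory gets it picked up."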
E
So those are the things to give some thought to as you present tomorrow's presentation: to say that, okay, yes, all of those things have been vetted out, and for people who want to add new things to it, we believe they will work fine. So just food for thought here.
A
Sounds good, yeah, that sounds good. In tomorrow's call we'll discuss this some more. Voldemort also has noted some test cases, and portions of test cases, that he has tested out on hardware, and things that may or may not work in the bmv2, etc. So if we talk about that, then, you know, we can add those items into the to-do list that Christina used to maintain, and then proceed on how to add that into the bmv2 as well. Yeah, I think that's a great idea.
B
Yeah, thank you. So, Hannah, I'd like to make a comment in response to that, and thanks for the observations; you definitely were a good litmus test for how ready the test infrastructure is for someone to just kind of wade in and try it. So maybe it's not deeply vetted in the PTF-and-test world, but I can say that I put in a lot of the infrastructure to run these test workflows and the SAI Thrift server.
B
You
know
containers
and
whatnot
I
did
that
work
a
couple
months
ago
and
it's
sort
of
a
placeholder
waiting
for
test
cases
to
drop
into,
but
there's
not
been
any
kind
of
test
of
the
test
framework
itself.
It's
being
tested
indirectly
by
people
adding
test
cases-
and
you
know-
maybe
that's
not
the
the
best
way
to
go
about
it,
but
that's
based
on
who
has
time
available
to
do
these
kinds
of
things.
But
if
you
have
a
few
specific
examples
of
things
that
broke
and
I
know,
you've
mentioned
some
things
in
in
some
threads.
B
E
Sure, sure, definitely, we'll do that. And I think this may basically come about because of a lack of documentation, too, because you keep trying things and you expect a thing to work in a certain way, but it doesn't, and then you say, oh, by the way, maybe this is the only way it works. So hopefully, I think, between the documentation and trying certain things out, we can weed those things out and then, you know, straighten out all those wrinkles.
B
Yeah, yeah, and there's always room for improvement, and every time someone new tries something, in fact someone who's less experienced in that domain, it points out the shortcomings of the documentation or even the infrastructure. So it's always welcome to have brave volunteers trying things out. And don't just sweep things under the rug; you know, report everything, and you've been good about that. But you know, this is not a commercial product, right?
B
E
Yeah, I know, that sounds good. I think, you know, we are all volunteers, right? So we can go ahead and fix issues as we run into them wherever possible, whether it's a documentation issue, an infrastructure issue, or even, you know, a bmv2 code or P4 code implementation issue, or whatnot, right? So yeah.
B
Your feedback's always been really spot on and timely, so please keep it coming, you and everyone, everyone for that matter. Don't sweep things under the rug; bring them out, shine a flashlight on them, so we can fix them.
A
Anything else that we want to discuss today? Or we could, you know, give a couple of minutes back.
A
Gerald, if you could, please stop the recording.