From YouTube: DASH Workgroup Community Meeting 20220323
March 23, 2022 Community Call
A
So we talked about that number a lot, and we wanted to make sure that people understand that those were the tests that were in the past, and that you understand what the real numbers will look like in a production environment. So it's not 16 million you should have in your head, because in actual fact the requirement is really 64 million.
B
And so that's kind of the next two bullet points. When we say we want to protect the space of one million total active connections per ENI, the implication is that, the way we run high availability, we do something called partner replication, and so you would have one card that's active.
B
That card always has a partner that it's talking to that's also active, but each card is a standby for the other, and so what that means is that you inherit the flow table of the other card's ENIs. So while we use this one million total active connections and these 32 ENIs, when you consider HA, that takes you to needing the memory footprint of a flow table that is essentially 64 ENIs' worth. Now, when you're doing high availability, you don't necessarily have to maintain everything about the ENI. Well, there's a lot you have to maintain about the ENI, but the big thing here is that you maintain the flow table even for the passive card on the active card. And so that means you have 64 million total active connections being used in your memory space on one card at a single point in time. The one million total active connections isn't arbitrary.
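A back-of-the-envelope sketch of the sizing arithmetic described above; the one million, 32, and 2x figures are the ones quoted in the discussion, and the script itself is purely illustrative:

```python
# Minimal sketch of the flow-table sizing arithmetic from the discussion.
# The figures are the ones quoted in the meeting, not a specification.
CONNECTIONS_PER_ENI = 1_000_000    # protected space per ENI
ACTIVE_ENIS_PER_CARD = 32          # active ENIs served by each card
HA_REPLICATION_FACTOR = 2          # partner replication: each card also holds
                                   # the flow state of its partner's 32 ENIs

flows_per_card = (CONNECTIONS_PER_ENI * ACTIVE_ENIS_PER_CARD
                  * HA_REPLICATION_FACTOR)
print(f"{flows_per_card:,} total active connections per card")  # 64,000,000
```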
B
That's just looking across clouds today, be that Azure, be that AWS, GCP, whatever: the total active connections for any given size of virtual machine would be around this one million mark, if not more. So if you're looking for some sort of reason why we chose that number, it's just accommodating where we need to be to optimize workloads.
A
Also, James, just one more comment: I think that if people have ever measured this before, small VMs tend to have smaller table sizes and bigger ones have larger. So this is still like a committed average. It's not necessarily that we're going to give a two-core VM a million entries in a flow table.
B
We want to protect the space of one million, but that one million is derived from the average VM capacity across clouds.
B
Yeah, and so the last implication, and I'm not sure how deeply we've gotten into this with this group yet, is that when a failover occurs, the 32 passive ENIs, that is, the 32 active ENIs on your partner card, move over to what becomes the primary.
B
So after a failover, one card may inherit 64 active ENIs with 64 million total active connections. And that's without going very deeply into what policy looks like, what the breakdown of memory is in terms of mappings, routings, and so on and so forth, and without touching on the OS memory footprint and the Redis DB.
C
James, in this HA setup the ENIs are one-to-one mapped between active and standby, correct? If that's the case, why would we get 64 active ENIs?
B
So you're correct. With an ENI, essentially you take a destination MAC address behind it that you wish to optimize. For ENI mapping, you do look at the VNET ID, I think some of the IP addressing, and the destination MAC, and that's what takes you to a single optimized target that you can refer to.
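As a rough model of the lookup just described, one could picture a table keyed on the VNET ID and the destination MAC that resolves to a single ENI; the structure and field names below are illustrative assumptions, not the DASH schema:

```python
# Hedged sketch: ENI selection as described, where the VNET ID plus the
# destination MAC resolve to one optimized target (the ENI).
# The keys and names here are illustrative only.
from typing import Optional

eni_table: dict[tuple[int, str], str] = {
    (1000, "00:aa:bb:cc:dd:01"): "eni-1",
    (1000, "00:aa:bb:cc:dd:02"): "eni-2",
}

def lookup_eni(vnet_id: int, dst_mac: str) -> Optional[str]:
    """Return the ENI owning this (VNET ID, destination MAC) pair, if any."""
    return eni_table.get((vnet_id, dst_mac))

assert lookup_eni(1000, "00:aa:bb:cc:dd:01") == "eni-1"
```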
B
We get to this 64 number because of the way that we do partner replication. When we look at high availability, and some of this is in DASH today, but it's something I think we'll go more into in the future, the way that we do it is that there are two cards and they're both active.
B
However, one card has 32 active ENIs and then 32 passive ones that it's just receiving replication for from its partner. And so if the partner ever goes down for whatever reason, those 32 that were essentially just being passively replicated over to it then become active. Does that answer your question?
C
Ah, the confusion was that it's an active-active HA system; it's not active-passive.
B
Well, it's active-active in the sense that, essentially, you could say each card takes half of the connections, but really it's that the 32 active ENIs' connections on each card are being replicated to the other card. So you do hold a flow table sized for, well, essentially both.
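A toy model of the pairing and replication just described, assuming the simple 32/32 split; this sketches the behavior only, not the actual DASH HA protocol:

```python
# Hedged sketch: two cards, each active for 32 ENIs and standby for the
# partner's 32. On partner failure the surviving card activates all 64.
class Card:
    def __init__(self, name: str, active_enis: list[str]) -> None:
        self.name = name
        self.active_enis = list(active_enis)  # ENIs this card serves
        self.passive_enis: list[str] = []     # flow state replicated from partner

    def replicate_from(self, partner: "Card") -> None:
        # Partner replication: keep a live copy of the partner's ENI flow state.
        self.passive_enis = list(partner.active_enis)

    def on_partner_failure(self) -> None:
        # The passively replicated ENIs become active: 32 + 32 = 64.
        self.active_enis += self.passive_enis
        self.passive_enis = []

a = Card("card-a", [f"eni-{i}" for i in range(32)])
b = Card("card-b", [f"eni-{i}" for i in range(32, 64)])
a.replicate_from(b)
b.replicate_from(a)
a.on_partner_failure()
print(len(a.active_enis))  # 64
```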
A
The traffic just doesn't arrive at both: you know, we direct traffic roughly half to one and half to the other until such an event occurs, and then everything goes to the one card. In the future we will actually have ECMP to make that even better, but right now everything will be directed to the one card.
B
And this is still something we want to provide more clarity around, so think of this as kind of a first communication to the group to help really define what we're looking at. In the future we'll try to provide further clarity here, but we thought this was important enough to explain what we were looking at and why we were arriving at the numbers we were, to get this into documentation and bring it to the group to discuss in case anyone had any questions.
A
This was added fairly recently, and just so you know: we've been looking at this from the angle of how you use memory, and we believe that this will take up approximately two-thirds of the 32 GB of memory. So that's what we're looking at: partitioning the memory and making sure that 32 GB is enough to support this. Currently we believe it is.
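Taking those figures at face value, the implied per-flow budget is easy to check; this is only arithmetic on the numbers stated above, not a claim about the actual flow-entry layout:

```python
# Hedged sketch: implied per-flow memory budget from the stated numbers.
TOTAL_MEMORY_BYTES = 32 * 1024**3   # 32 GiB on the card
FLOW_TABLE_SHARE = 2 / 3            # "approximately two-thirds of the 32 gig"
TOTAL_FLOWS = 64_000_000            # the 64 million requirement

budget = TOTAL_MEMORY_BYTES * FLOW_TABLE_SHARE / TOTAL_FLOWS
print(f"~{budget:.0f} bytes per flow entry")  # ~358 bytes
```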
F
Yeah, so I have some questions. It was my understanding that an ENI is actually load-balanced across many DPUs, and so it's unclear to me: for the one million connections here per ENI, would that be, for a given ENI, one million connections split across, you know, eight DPUs or however many DPUs are in the appliance?
A
There can be. This is the simplest form that we could describe; this is really a memory-calculation thing. Today we don't actually use ECMP, but very soon we will be, and yes, once you go to ECMP, it means that you're going to have even more ENIs per card, but fewer connections per ENI.
A
Each card will receive fewer connections per ENI, so it will balance the same way. It's just that oversubscription works a lot better when you do it that way, and so does redundancy, but the numbers won't change. Well, okay, some numbers will change, because you need to have space for more ENIs, but not for more flows. The flow table, which is the dominant memory user, won't change, but some of the policy tables would, of course, have to be increased. So that is the consideration for ECMP, and, you know, do you spread across all cards or some cards? That's actually what the consideration is: how much memory do you use for having more and more ENIs on a particular card? Yeah, okay.
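A small sketch of the trade-off being described: spreading an ENI across more cards with ECMP divides its flow entries per card, while its policy tables have to be present on every card that hosts a share. All figures here are illustrative assumptions:

```python
# Hedged sketch of the ECMP trade-off: per-card flow memory for an ENI
# shrinks with the ECMP width, while policy is replicated whole per card.
def per_card_bytes(conns: int, ecmp_width: int,
                   flow_entry_bytes: int, policy_bytes: int) -> int:
    flow_part = (conns // ecmp_width) * flow_entry_bytes  # split across cards
    return flow_part + policy_bytes                       # policy copied whole

for width in (1, 2, 4, 8):
    total = per_card_bytes(1_000_000, width, 358, 5_000_000)
    print(f"ECMP width {width}: {total:,} bytes per card for this ENI")
```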
A
That deserves a deeper talk, though. This is more about giving you a heads-up that this is not 16 million, that our requirement is actually 64 million, and showing some of the reasons why. That's all.
F
Okay, and I have a couple more follow-ups to that. Are these connections for any of the six or seven services? Are these v4, v6? Does this include NAT? Is this just VNET-to-VNET? Like, for example, do you expect that capacity of 64 million connections to apply to IPv6 flows?
B
That's actually an excellent question. A lot of it is in the IPv4 space, because it's in our data centers and essentially we can control the IP addressing, but there are some instances where you do need IPv6, and certainly when you start having heavy requirements for IPv6, it begins to get into these numbers quickly. I think the 64 million is still the space we're carving out, with the seven scenarios included, right. And then there are the other expectations around memory consumption for some scenarios versus others.
B
That's something we want to bring clarity on, but this is kind of our initial thinking right now. We've done some analysis, and we're still trying to nail down what that will look like, to give you guys a little bit more clarity, but at least we can discuss this right now. So more of that is on the way, and you're correct to assume there's more that makes up the entire answer than just what we're showing right now.
B
But this is kind of the framework that will help us get to the rest. As a follow-up, and I always mess this up, there should be notes: I was just going to take a quick note to dig more into IPv6 versus v4. I think the answer is that we trend heavily towards v4, but yes, v6 would eat through those addresses as well.
A
But also, I think the answer to that will become clearer when we define all the mappings, something that we were discussing last night and that I was thinking about more this morning. As we define all the mappings, the mappings actually translate to how much memory is required to perform those mappings, those transformations. And yes, they will be different based on the different services. So once we finally define those mappings, you can actually draw them from the documents.
A
Then you could more accurately calculate the memory usage. You know, different mappings translate to different encapsulations, including IPv6, including NAT and SLB and all of them. So once we do that, you'll probably be able to calculate it much more clearly, but your question is definitely a good one.
F
If you could reduce the requirement for the number of v6 connections, where the total is still 64 million but v6 will never be more than some percentage of that, that would allow us to optimize. If we just had to say 64 million, and they could all be the worst case, then that's substantially more memory, and maybe that's not the reality, and we'd just like to know that.
A
Yep, I think it's an excellent point, and we will work towards that. Yes, one advantage we have at Microsoft is that we can measure those things.
B
Yep, and I put down some notes: we need to characterize these connections, right? There's a lot more visibility we want to give you, and if there are things that you think would be very relevant, such as breakdowns of v6 versus v4 percentages, NAT connections, or other things like a breakdown by scenario or by the pieces of the seven services that might make up the total composition, let us know.
B
It's an interesting question because, depending upon the allocation of different ENIs on a given DPU, you can have different mixes, right? Part of that is your implementation, and then, on top of that, we've kind of given some guidelines around what these numbers should be.
B
And so this is an excellent discussion, but this was also something we knew we owed the group: an update. So we'll start here, but continue to give more visibility and also provide points for feedback. As you guys look at this, you can definitely direct the conversation and help us arrive together at the correct answers here. So if you have any questions, even ones we haven't thought of, please do keep them coming.
B
If you have any advice or feedback in general, let's work together and make sure that we address concerns.
B
And we're working through that now. The mins and maxes are driven in separate areas, and we can look at different things like ACLs, mappings, routes, etc., and determine a min and a max size per ENI. When I started talking about characterizing connections, that's some of the stuff we want to show you going forward, to kind of describe what that is and why it matters. What Gerald brought up earlier, though, was that the flow table, or your connections table, is a large amount of this memory footprint.
B
And so, even looking at the different policy and OS space, you can expect that most likely the largest space allocation comes from the flow tables, proportional to the rest, of course. Okay.
B
But yes, we do want a min and max for sure; we need to get there. Thank you.
B
Absolutely, and thank you, folks. Please do keep the feedback coming, and don't be a stranger. All right, thank you.
G
I'm sorry, I joined a little late, so you may have already said this. Just on the bullet point about protecting the space of one million total active connections per ENI: if I understand, you're saying there should be no way that one ENI eats up all the flow space and prevents some ENI from getting at least one million. Is that the idea?
B
That's actually an excellent question as well. In most cases, if we're provisioning out these cards, we might find that we're getting to some sort of optimization where we divide the capacity of the card to be able to accommodate 32 ENIs, but the implementation does leave open that you could have one ENI that scales to the capacity of the card, right? So if you wanted one super-duper ENI, for whatever reason, there are valid scenarios for that.
B
Maybe a stipulation here would be that some large ENI sizing may consume the total capacity. So if you want to take the 64 million total active connections and all of your CPS and make one giant ENI, that's possible. However, when you look at the workloads in clouds today, it's not always going to happen like that, and so if you're sizing realistically to your workloads, you may not have a lot of super-giant ENIs like that. In general, this is an average, right? You may have a few outliers.
A
The allocator decides how to pack these cards, and it will consider some level of oversubscription if necessary. But, for example, it wouldn't put a gigantic, huge ENI that consumes all the space and then go put a bunch of other ENIs on the same card. The allocator would actually take that into account: it takes the constraint and then says, okay, that big VM is going to take a large part, so I can't put more VMs on this card, or I can only put two more VMs on it. The allocator will decide that based on the requirement. The average is actually quite accurate, but in practice there might be a VM that has eight million connections allocated to it and then others with, you know, 500K, because they're smaller VMs. So you have to have the flexibility to be able to do that, and you have to have some flexibility to oversubscribe, because we know that it's just highly, highly unlikely that every single one of the 32 VMs is sitting at the max connections.
A
That's highly unlikely, so it makes more sense to at least accommodate some level of oversubscription, which we would set by allocating more ENIs to a particular card than the numbers actually indicate. It wouldn't be oversubscribed by too much, by the way, but we may use some level of oversubscription, so the designs need to accommodate for that.
A
What it comes down to is that you need to be able to support different numbers of connections per ENI, whatever we set. And if you have oversubscription, there's always a chance, when you go to create a flow, that there's no table space left because it was oversubscribed. That's what oversubscription means. But generally we wouldn't expect it, because we would never oversubscribe so much that you should ever run into that.
G
Say it's got, let's say, an 8-million-capacity flow table. I think you want to be able to configure it, or have it operate, such that one ENI cannot take up all 8 million and leave zero left over for the other one. You want, correct, both of them to have at least a million, and then there's a shared leftover that either one of them can take; that's sort of the behavior I think you want. Or whatever behavior you do want, it would be good to document that.
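One way the behavior just described could be modeled is a per-ENI reserved minimum plus a shared leftover pool; this is entirely a hypothetical sketch, since the group has not yet specified the mechanism:

```python
# Hedged sketch: per-ENI reserved minimum plus a shared leftover pool, so
# one ENI can never starve another below its guarantee. Hypothetical policy.
class FlowTable:
    def __init__(self, capacity: int, enis: list[str], reserved: int) -> None:
        self.reserved = reserved                            # guaranteed per ENI
        self.shared_left = capacity - reserved * len(enis)  # common leftover
        self.used = {eni: 0 for eni in enis}

    def admit(self, eni: str) -> bool:
        if self.used[eni] < self.reserved:    # still inside the guarantee
            self.used[eni] += 1
            return True
        if self.shared_left > 0:              # borrow from the shared leftover
            self.shared_left -= 1
            self.used[eni] += 1
            return True
        return False                          # table exhausted (oversubscribed)

table = FlowTable(8_000_000, ["eni-a", "eni-b"], reserved=1_000_000)
```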
A
Yeah, we need to document that behavior, and you're right: we need to document how oversubscription is actually implemented and, where possible, what you just said; there are some other consequences as well. But in any case, it can't be that you get zero, for sure; everybody needs to get a minimum. And the number that we gave you is not the minimum, that's just the average, based on what we see. It's more from looking across all of the clouds, looking at what they offer, measuring it, and then saying, yeah, we need to support this number because that's what's competitive today. But it's not your min, it's not your max, and it has nothing to do with oversubscription; that's more of the marketing requirement, and from there we get to do some more work. But 64 million, we can tell you that.
A
We think it is realistic to have 64 million flows in a DPU with 32 GB of memory; we know that piece. In the future, things like the ones you've already mentioned will, you know, change it a bit: as we go to ECMP and we have to replicate more policy, and as we get a little bit more experience, maybe those numbers will change a bit. But right now we believe that 64 million is achievable, based on our experience and calculations.
B
And I'm actually curious to turn that back around to the folks on this call: is this in line with your expectations? What number of ENIs did you think you were going to facilitate before this presentation?
F
Not by the ENI, so to speak: from our perspective we weren't that sensitive to the number of ENIs, but we are sensitive to the size of the flow table, and that's why any more details you can give us on, say, the mix of kinds of flows and things like that would be helpful.
B
Okay, yeah, absolutely, we can do that. And the ENI number actually does become very important, because essentially it's what the number of targets you're going to optimize on a single card or DPU becomes, right? There's essentially a one-to-one mapping of the ENI to the optimization target, so it does become important fast. And I think you're concentrating in the right place for the right reasons, because the flow table is the majority of, basically, your memory use.
B
So let's lay out some behavior around oversubscription, protecting the space, and mins and maxes. Let's start here and try to set some expectations, but also learn as we go forward and give you guys a chance for input, letting us know what you think is appropriate along the way.
F
I have one more question. I just want to make sure I understand what you're calling an active connection. Are you calling each direction an active connection, or are you calling the bidirectional pair one connection?
H
I think we are also in line on the total number of flows, counted bidirectionally, whether they are active-active or, you know, half of them are active. Both of the HA cases are okay, as long as the total number seems in line with what we are thinking. Yeah.
B
Yeah, we'll have opportunities to go through this in more depth very soon. So again, thank you, folks, for the feedback. And I guess, Christina, I didn't know if you had any more topics.
E
I just have one. Thanks, James, for coming and spending more time than I thought you would; I appreciate your time. I'll send out notes to you and to the community on what we talked about; I've just been documenting in the background here. And what we're going to do is have Andy summarize what happened at the P4 meeting, if he feels like it. Andy?
G
Sure. I noticed, I think, that Alan Lowe's here, so I'll invite him to jump in. Oh, he might have left. I will attempt to accurately say things that he would agree with; he did most of the presentation on Monday.
G
Yeah, that's right. Okay, Christine, you can double-check me. I mentioned a while ago that the current P4 code and the Sirius pipeline checked into the DASH repo have some keywords that are not in the P4 language standard, and I asked what they meant. So on Monday Alan presented what they meant and where they came from: they were made up by Nvidia, and Nvidia would be interested, long term, in those becoming part of the P4 language standard. Nate Foster, who is involved in the P4 language design group, was also there, and his summary, I think, is correct: the language group is fairly conservative, but they are open to things, especially things driven by actual real-world use cases, which are much more interesting than, say, more academic use cases, and this is definitely driven by real-world use cases. The recommendation to Alan and others was to create some more similar examples, beyond the DASH examples, for consideration going forward.
G
I suspect that it wouldn't be less than a four-to-six-month process; that would be a fast track to get a new thing like this into the language, I think. Potentially, in the meantime, from what I understand of what Alan presented, there are other ways to write this in P4 that have equivalent behavior and could be used for now.
I
Yeah, a couple of things about that. There's an alternative approach that already exists: it's not really in the P4 language, but it's part of the PNA architecture.
I
This is something that we want to adopt in the shorter term because, as Andy correctly mentioned, the resolution in the P4 language community may take months, if not more. So we also want to try to base our connection tracking on the PNA approach. And on that, I don't know if Mario or someone from Pensando is on the call.
I
But there is a pull request for the PNA version of connection tracking, and it needs to be updated to the latest directory structure so it can be merged as well. We are working on the simulator to support it, but we need to align what it requires with the current code structure.
G
And-
and
I
also
as
a
side
note-
mario
did
contact
me
by
email
recently
and
I'm
going
to
take
the
action
item
with
an
intel.
I
think
we
mentioned
a
couple
months
ago
that
at
some
point
there's
a
dpdk
open
source,
dpdk
backend,
a
dvda
soft
based
software
switch.
That
has
a
p4
back
end
and
it
might
be
getting
the
point
where
it's
imminent,
that
it's
featureful
enough
to
handle
some
of
these
features.
But
I
will
go
double
check
that
and
get
back
to
you
on
how
imminent
that
is,
whether
it's
a
month
or
more.
E
Right, and Mario is on the call. Mario?
D
Yes, and I got the message about pointing to the new directory structure. It's not really clear to me how to do it; I started looking into it, and it's not really clear how to do it. So in that case, Marian, I'm going to ask you for help with that.
E
Great, great. So that's what I had for this week. I believe I briefly showed this earlier; I'll show it again.
E
Okay, great. So again, just to summarize from the beginning of the meeting: we have what we believe are the work items for the behavioral model, and a short description, over on this side, of the work to be done. Michael Miele and I will today populate the assignees that we know of, the people who had kind of raised their hand, like Ben Kitt, Cynthia, and Venkat P. We'll have to assign permissions in the GitHub repo and then populate the assignees.
E
So that's what we'll do today. If we've missed anything, or if you can think of anything, please let us know. We have the compare-one-version-to-the-other item for connection tracking, and then all the UDP and IPv6 items; the ones that have to do specifically with connection tracking I prefaced with "connection tracking" here, and then there's the rest. Nvidia has taken on at least four of these items: row 3, row 7, row 9, and row 10. So anyway, we're working on that today.
E
So thanks, everyone. I guess we'll go in three, two, one... and have a good day.