From YouTube: DASH Workgroup Community Meeting 20220202
Description
February 2, 2022 Community Call
A: So, for example, customers may have their own address space, let's say 10/8, and want to go to the internet. If they go to, say, 8.8.8.8, we just let the packet out, without encap, this kind of stuff. But if the customer has the address space 10/8 and is going to, for example, some 10.x address, that is not really an internet address; or 192.168.x, that's not an internet address either.
A: So we know from configuration that the customer doesn't have this address configured as their VNET, but at the same point it's not, let's say, an internet address. We plumb those additional spoof-guard rules in one of the ACL layers. So the processing is relatively simple. From the point of view of how we do the data plane: when the VM originates a packet, the first packet goes through multiple ACL groups.
A: It needs to pass through all the ACL groups, and when it goes through an ACL group, the action on a rule can be either allow or deny. And there are two kinds of properties on those actions: an action can be, for example, allow terminating or non-terminating. Terminating means that this group and this rule decide what happens to the packet. If it is not terminating, but we already found a matching action in this group...
A: ...this means that even if one group says allow, we still go, in this case, to the next group, and the next group may actually deny this traffic. That is the non-terminating behavior: we still need to walk through the remaining groups. So this is basically what happens in the ACL layer.
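The terminating/non-terminating walk described here can be sketched as follows. This is a minimal illustration, not the DASH data model: the rule fields (`match`, `terminating`, `action`) and the default-deny when nothing matches are my assumptions.

```python
# Minimal sketch of the ACL-group walk described above (illustrative,
# not the DASH schema). A terminating rule decides the packet
# immediately; a non-terminating rule records a provisional verdict
# that a later group may override.
def evaluate_acl_groups(packet, groups):
    verdict = "deny"  # assumed default when no rule matches
    for group in groups:
        for rule in group:
            if rule["match"](packet):
                if rule["terminating"]:
                    return rule["action"]  # final decision, stop here
                verdict = rule["action"]   # provisional, keep walking
                break                      # first hit in a group wins
    return verdict
```

So a non-terminating allow in group 1 can still be denied by group 2, which is exactly the "we still need to walk through the remaining groups" behavior.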
So it's relatively simple. The only complexity, I would say, mostly from the hardware implementation, is that a customer can specify each action...
A: Let's say allow from lots of prefixes to, for example, lots of other prefixes; and they can also specify that they want to allow from those prefixes, from this port range to this port range, to the other prefixes. So that's the only complexity from the point of view of the hardware implementation, to kind of optimize on this, the only hard...
A: And customers, yeah, exactly: customers right now can also use tags. So, for example, they can say allow from, let's say, AzureCloud or Storage to, let's say, Internet. They can use those kinds of tags, which are being expanded right now. In the first proposal I believe we didn't list the tags, because we will still be expanding them, but that kind of optimization will come later: the ability to also define the tags, so that if the customer uses the tags, we don't need to expand them.
A: We can basically plumb the specific tag. And then, after the ACL layer, which is those groups, the packet goes to the routing layer, which is basically LPM routing. So it goes through LPM routing, and this is fully user controlled: the customer controls it. If the customer doesn't provide any user-defined route, so it doesn't provide its own route table...
A: ...we plumb a default route table. If the customer provides its own route table, we plumb that route table, plus we also add additional LPM rules if, for example, the customer is connected to, let's say, an ExpressRoute gateway, like a Cisco or Arista or any other device that is connected to on-prem and advertises routes through BGP, because we have the ability for the customer to connect to on-prem. And on our control plane...
A: ...we pull the routes from on-prem, from those devices, and this is usually where the lots of prefixes come from. That's why the requirement is to have, for example, 10k or 100k prefixes: the customer-defined route table is usually really small; most of the routes actually come from BGP, which we merge, and we...
A: ...don't update them very often. Even if BGP updates very often, we have an agreement, at least on our side, with the team who is doing the routing, that they will not send us updates more often than every 30 seconds. So it's not very often that we are programming. But we also have the agreement with them that, even though they will not send us updates inside that 30-second window, we will still be polling very often.
A: We poll, let's say, every five seconds or a few seconds, to react very fast on a change; but once they send us an update, the next update can come only after 30 seconds, this kind of stuff. So this basically creates a route table, and I would say it ends up as a normal route table. The complexity starts later, with a specific action: if, for example, someone says that an address, let's say 10/8, is my own VNET address space. Because we have an underlay and an overlay, and in the overlay...
A: ...10/8 is their VNET address space, but in the underlay, for example, if the overlay addresses are 10.0.0.5 versus 10.0.0.6, they may be in completely different locations in the data center, on completely different physical nodes. So basically, even though you hit this route rule, route target, let's say, VNET (the route rule is, say, 10/8 to the VNET), it then depends on the actual destination you are addressing. The LPM already won...
A: So basically, the LPM won: it matched, for example, the route rule whose target is the VNET. But then, depending on the actual destination of that 10.x address, there is a lookup table that, based on the actual overlay address, which is, say, 10.0.0.8, specifies that 10.0.0.8 is actually on this specific physical machine, and this is the outer encap that should be plumbed. And we also enforce MAC addresses.
A: So the lookup table also carries the information to plumb this specific MAC address as the, no, not the outer destination, but the inner one. So basically it overrides the MAC address and uses the key, based on the lookup, or based on the route rule in this case, to encap with the outer VXLAN and just send the packet out; and likewise when the packet comes in.
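A rough sketch of the two-stage lookup just described: LPM picks a route entry (for example, route type "vnet"), and for VNET destinations a mapping table keyed by the overlay address supplies the physical node, the VXLAN key and the inner MAC rewrite. All addresses, field names and values here are invented for the illustration.

```python
import ipaddress

# Illustrative routing stage: LPM over customer routes, then a mapping
# lookup for VNET destinations (values are invented for the sketch).
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): {"type": "vnet"},
    ipaddress.ip_network("0.0.0.0/0"): {"type": "internet"},
}
MAPPINGS = {  # overlay IP -> where it physically lives + encap info
    "10.0.0.8": {"underlay_ip": "100.64.1.7", "vxlan_key": 5001,
                 "inner_dmac": "12:34:56:78:9a:bc"},
}

def route_lookup(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    # longest-prefix match among the routes containing the destination
    best = max((n for n in ROUTES if dst in n), key=lambda n: n.prefixlen)
    entry = ROUTES[best]
    if entry["type"] == "vnet":
        # second stage: per-destination mapping decides the outer encap
        return {"action": "vxlan_encap", **MAPPINGS[dst_ip]}
    return {"action": "send_direct"}  # e.g. internet traffic, no encap
```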
D: Okay, Michael, all the things that you have described, as the packet goes through the ACL layer, the routing layer and all of those things: this is, as we understand it, the slow path; in other words, it's perhaps for the first packet. That's...
D: ...you know, so once you have established the flow based on that, then essentially subsequent packets just take the flow.
D: Exactly, yes. So now the question here is that those first packets that we were talking about are the ones going through all of this, you know, the ACL layer, the routing layer and so forth. Those are essentially the first and the last ones of a flow, right?

A: Yes, yes, yeah.
A: Exactly, and these are the packets that we need optimization on. For example, we already have full hardware optimization for the flows which are offloaded; these are FPGAs, dedicated hardware, this kind of stuff, which is very simple in a way. For example, we have VFP, which does the slow-path processing in the CPU, and once it figures out what the final transposition is, it just offloads it to the hardware. So that part is already fully optimized.
A: What we are right now asking for with DASH is to optimize the slow path in hardware, because we have tried a lot in software; we even have beefy machines, lots of CPU, this kind of stuff. But this is where the CPS, connections per second, so SYNs and the like, come into the picture. If there's already an existing flow, it's very, very fast and we don't have any problems with it. The problem is really with the processing of those rules.
D: Right. And then about the flow table: do you have an example of the flow table, in terms of what it looks like and how large the flow table entries are?
A: So the flow table has a five-tuple match: it will basically have source IP, destination IP, source port, destination port and the protocol. And then it will have information about the final transposition, whatever rule got hit. So, for example, if it should add an outer encap, it will have that information, like the VXLAN ID, the VXLAN key; it will have this.
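As a sketch, a flow-table entry along those lines, a five-tuple key plus the cached final transposition, might look like this. The field and function names are illustrative assumptions, not the DASH schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTuple:
    # the five-tuple match described above
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: int

FLOW_TABLE = {}

def offload(key, transposition):
    """Slow path installs the decided transposition for this flow."""
    FLOW_TABLE[key] = transposition

def fast_path(key):
    """Later packets skip ACL/LPM and reuse the cached transposition."""
    return FLOW_TABLE.get(key)
```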
A: However, there are different vendors, and even internally we have different implementations, where sometimes, for example for tracking metadata, they add some more bits in the flow table. So the flow table is fine as long as it can correctly match the packet and can do the final transposition. Plus there is...
A: I'm not sure if we have already described this or not, but, to be honest, once we have the final stage that does the packet processing, then there is basically metering; I think it sits kind of on top of that. It's basically: whenever some route type gets hit, say we hit the route target VNET, that goes to the VNET, and then, based on the mapping, each mapping...
A: ...will also carry an ID of the counter, of the metering bucket. Those counters are pre-programmed beforehand, and then we reference the counter in the mappings. Then, when the flow gets created, the flow links to this counter ID, and when a packet goes in or goes out through the specific flow, the specific counter should be incremented, and it should be tracked separately for inbound and outbound.
A: So basically, we are trying to separate inbound and outbound. If we reference, for example, some counter, let's say number five, then when a packet goes out, the number of bytes is incremented on that counter; when a packet comes in on the same flow, the number of bytes is incremented on that counter too. So that is basically the metering.
A: So it's more about which object we assign the counter on, and right now there are two ways to assign the counter. The counter can be assigned either on the entire route rule, which says that whatever packet is going, as long as it hits this LPM route rule, it's always the same counter; and most of the scenarios work like that: for example, if you are going to VNET traffic, they have a common counter.
A: If you are going, let's say, through, for example, VNET peering, they have a common counter for VNET peering. However, there is a subset of the scenarios, for example the private link (still TBD, because it's kind of a next step), where some of the IPs inside the VNET address space can be marked by the customer as private link IPs, with a slightly different transposition. It's kind of TBD yet, but it is a slightly different transposition.
A: In this case you should also be able to assign the counter on the specific mapping. So if the counter is on the mapping, that will take precedence. If there is no counter on the mapping, or the counter there is basically number zero, then the routing counter should take precedence. But at the same time I don't want to digress; this is kind of the next step. The first thing that we really would like to have is...
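The counter selection and the inbound/outbound accounting described above might be sketched like this, with counter ID 0 standing for "unset" as mentioned. The structures are illustrative assumptions, not the DASH API:

```python
COUNTERS = {}  # counter id -> separate byte counts per direction

def pick_counter(route_counter_id, mapping_counter_id):
    """A counter on the mapping takes precedence; id 0 means unset,
    in which case the route rule's counter is used."""
    return mapping_counter_id if mapping_counter_id else route_counter_id

def meter(counter_id, nbytes, outbound):
    """Increment the chosen bucket, tracked separately in/out."""
    bucket = COUNTERS.setdefault(counter_id, {"out": 0, "in": 0})
    bucket["out" if outbound else "in"] += nbytes
```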
D: Yeah, I think this is an extremely important topic, and I also wanted to have it as part of the agenda this week, if we have the time, to talk about some of these statistics and metering and so forth. Because again, the idea here is that we are talking about per-flow statistics; we are talking about, like, 50 million flows; we are talking about...
D: ...these in/out byte as well as packet statistics; and also, depending upon the type of flow, it could have different ones, as you mentioned in terms of this private link part, the mapping metering as well. So all in all, we probably need much more detailed documentation on the counters, metering, and statistics and so forth...
D: ...for this particular project; that's my view, because there are a lot of questions. In fact, at least I have a lot of questions in that area, and our team also has a lot of questions. So if we have time later on, after we have this discussion, I'd like to bring those questions up, and then we can probably see if we can carry on the discussion.
E: Yeah, and just as a side comment: this is really good, getting these insights, Michael, and I think it's really valuable to capture the gist of all this in writing somehow, because we're having a lot of oral presentations that are really teaching the community...
E: ...some of the inner details, but if we don't capture them in writing, it'll just be lost in a meeting somewhere, and in whatever we remember. So if we can somehow get the essence of these discussions into the documents, it'd be really valuable.
A: Yeah, definitely. I remember when we originally started, we also had some internal meetings where we were doing this kind of presentation, because the interesting thing about SDN in general, and how it started in Azure, is that it started many years ago and grew incrementally: a customer needs VNET traffic, okay; then we added another traffic type; then the customer needs this other thing.
A: So there are lots of documents, some here, some there, this kind of stuff. There was never one big common document to which we were all contributing. So this is what we are doing right now as part of this effort: aggregating into a common document, because we don't have one common document even inside Microsoft; everything was feature-based, feature after feature being added. And as part of this...
A: ...so this is basically what we are doing, and we already have, for example, the information about how the private link is done, even though externally it's still TBD. And I believe the main reason why this was not yet published is that we are still waiting for the SAI implementation in order to propose it; we were thinking about doing the two together, in a way that says: hey, this is the SAI proposal...
A: ...this is how it works, and this is how it works with the private link. But the private link is more like one specific type of transform. For us it's like: sure, some customers are using private link, and the number is increasing, yes; but at the same time everyone is using VNET. So the VNET, and being able to meter on the VNET, that's the important part.
F: But we need to fix that. What I'm basically saying here, what scares me about this project, is that we cannot comply with 20 different documents, because it's never going to comply with 20 different things; the 20 different things will partially contradict each other. For example, the flow document partially contradicts the performance one.
A: Right, that's correct: everything should be reconciled, and that's why the GitHub is where we should be updating and contributing. I know that, for example, there was also a question about fragmentation that came later, which originally was not published, and once we have answered it, there will be an additional document there; it cannot be that only the fragmentation document or only the main document gets updated. Everything should be in sync; if something is out of...
A: ...sync, the answer will link to the document, and there will be one source of truth. And if you guys notice, like, Sylvain, if you notice that something is not in sync, let's communicate it, and let's try to resolve where the problem is; we'll basically say: okay, no, this document is the source, and the other document has something maybe slightly out of sync or incorrect, this kind of stuff.
A: Yeah, that's true. The only thing that comes to my mind is the engineers who are working on the P4 model: once we release some update on the document, it will probably take them some time to update the API and the model. So there's that, but definitely the goal is to have the P4 model match the document. So that's definitely the goal; I'm just saying it may be slightly lagging.
A: So, the high availability. The view of the high availability is like this: at the moment, if the card is inside an actual physical host, then we know that the host has basically one card, and if the host dies or the card dies, the entire host is not reachable. But if the card is in this intelligent switch that is out there and serving more than one customer...
A: ...it is actually acting similarly to a ToR. And there is a slight difference between DASH and a ToR, in that a ToR usually doesn't keep any state, from the point of view that if the packet lands on one ToR or the other ToR via ECMP, it basically just goes through. From the point of view of DASH, because the transposition happens, it will create the flows, so flow state will be established.
A: So in this case, the thing with regard to HA is that usually, and this is what we are also doing with the MUXes that do the multiplexing of traffic which is, for example, coming through external load balancers, we have a set of MUXes, this kind of stuff, and a packet can land on any of them, and they work by bouncing the packet.
A: So that's true: we'll come up with a transposition and, based on this transposition, we'll send a copy of the packet, with the metadata, to the backup device, the second one (because we cannot cache anything: it's almost line rate, and we care about the CPS), saying: hey, this is the packet, and this is the flow that I created; persist it. Then this device sends it back, and then the bounced packet goes out. So this provides the capability.
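A toy model of that bounce: the active card decides the transposition for the first packet, forwards it with the flow metadata to its paired card, and the pair persists the flow and sends the packet on. Class and method names are mine; real devices do this in the data path, not in Python.

```python
class Card:
    """Toy model of a paired DASH-style device for flow replication."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}
        self.pair = None  # the backup device

    def first_packet(self, key, transposition):
        # Slow path decided the transposition; record it locally...
        self.flow_table[key] = transposition
        # ...then bounce packet + flow metadata to the pair, which
        # persists the flow and sends the packet out on our behalf.
        return self.pair.persist_and_send(key, transposition)

    def persist_and_send(self, key, transposition):
        self.flow_table[key] = transposition  # replicated state
        return ("sent", key, transposition)

a, b = Card("active"), Card("backup")
a.pair, b.pair = b, a
```

If `a` dies after this, `b` already holds the flow, so established connections survive with at most a few retransmissions.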
A: At least this is what we are doing from the point of view of load balancing; for the more advanced scenarios you can simulate that for, say, a stateful NAT, and the customer can control which kind they get. Really the arrangement is that the packet lands on one of those devices, and it needs to be fast, so it bounces to the second device with the metadata of what kind of decisions were taken.
A: In this case it is basically, let's say, the final packet transposition plus potentially some metadata, sent over, sent back, and sent out. So this is how we are handling it right now, and I would say most likely this is how we should be implementing it. Unless, because you guys are very smart, we have in the community right now people from all over the place, different companies and so on...
A: ...so if you guys think that there is a better model to provide this kind of flow consistency... The idea is: if one of those devices, like a ToR, dies, the other ToR picks up, because it's completely stateless. Here there is state, which is the flow. So if one card dies, the other card will start receiving traffic for the already-established connections, from the customer's perspective.
A: So from the HA perspective it is okay: if the card dies, there will be a few retransmissions, but the connection should not be dropped. The few retransmissions, because the endpoints need to converge, this kind of stuff, that's fine; but the packets should not be dropped just because, for example, we don't have the flow which was already established on the previous device. So that's the main thing.
H: I agree with Silvano. I think that for HA, at least for the state synchronization, that can happen on a per-packet basis: there are state changes to the flow table, and there needs to be something in the P4 code; it might just be some kind of call to an external function, and we could define what that external function is. But whenever there's a state change that is reflected in the data path, it has to be communicated to the partner. I agree.
H: I think we need to define what that synchronization event is and what the protocol is, and this is something that we've been working on and at some point we'll share, maybe. But I think it definitely is a function of the data plane to at least initiate state updates from one data plane to its partner's data plane.
A: Yeah, so we were definitely thinking about standardizing control-plane APIs, in the sense of being able to standardize an API saying: hey, add a replication partner, remove a replication partner, get the status of the replication to know whether it's healthy, this kind of stuff. Then we were thinking about standardizing...
A: ...well, we can, but right now we're thinking more about... basically how two devices communicate the state between each other, because it's also about what exactly they keep in the flow tables and this kind of stuff. That can definitely be standardized, or it can be device-internal. The thing that I definitely want to standardize on is what kind of packet transpositions we want to support.
A: So, from the point of view of what we can do with those packets, from the point of view of software-defined networking; but the stuff like how exactly the flow table is optimized or not, those kinds of things, and how those tables are being synced, that can be...
D: Yeah, Michael, just one quick question. In the earlier discussions that we had during the meetings, when we discussed an appliance or a smart switch which contains multiple DPU cards (yes), the discussion was that it could be a mix and match; the DPUs may not all be from the same vendor, depending upon... there was a discussion about...
D: ...you could replace them, depending upon, you know, if one were to die and so forth; so they're hot-replaceable ones. So the idea here is that they are DASH-compliant DPUs and they are from different vendors.
D: There has to be some standard way of defining that, when the flows are replicated and the flows have this metadata, which essentially is going to tell you to go from active to passive, or from primary to secondary cards.
H: From the perspective of, I'll call it live migration, which is not a failover, but you just want to take an operating card and move the flow table to a new card: potentially that could be vendor to vendor, but that doesn't have the same requirements as a failover. So I think that there are kind of two HA cases here: one is failover, and the synchronization required for failover is really...
E: Yeah, that would be my position. We shouldn't mandate total interoperability in flow-state synchronization if it has the negative impact of producing a least-common-denominator approach; it might prevent innovative approaches from being, you know, ten times superior, for example.
A: Exactly, and this is why the P4 pipeline kind of follows this process; but how, for example, the memory with the mappings is stored later, this kind of stuff, we don't want to enforce. So, for example, one vendor wants to optimize the memory this way, another one wants to optimize it another way, and another vendor doesn't want to optimize memory at all in order, for example, to give higher CPS.
A: We just want to standardize on what the processing of the packet should be, and what kind of transpositions happen in which order; not so much about internals, or how deep we'll go on this standardization. I agree here that we definitely want to make sure that each vendor can have their own kind of secret sauce to optimize with. And there is also the standardization of the APIs, of how we want to call it from the control plane.
A: There is the standardization of what is happening with the packet, and then we also want to leave room for the innovation part, in that we don't say how exactly it's handled internally. And there is now the question whether, for example, the state communication between two paired cards should be proprietary or not. From my point of view, if it's not proprietary, so that we can link all the cards together, awesome. On the other hand, potentially some vendor has, for example, the ability to pack some state, and those packets can be shorter, done differently, optimized, those kinds of things.
I: Hi, Michael, back to the basic ACL processing that you described earlier in the meeting. The scale requirement and the packet transform indicate it's 100k, and there's also a 10k source/destination port figure; it's 100k prefixes. Is that per ENI, or is that for the entire DPU?
A: So if we designed the card right now so that it can only handle like 7k or 10k per ENI, then we would be designing just for today, and the future will surely be somewhat bigger. So I would say that this is 100k per ENI; and definitely with the tags there will be optimization, so we'll be using a lot of optimization later with the tags.
A: ...and a memory calculator to figure out how it's being used. Because, let's be honest, at the end the cards have, let's say, 200-gig or 400-gig connectivity to the ToRs, this kind of stuff, and we know that each customer also requires some bandwidth. So even if we, for example, programmed, let's say, 1000 ENIs on one card, let's say it's theoretically possible...
A: ...those customers would not be able to use their bandwidth, because they would saturate the card; so we'll never program that many. But sometimes we have customers, for example, who are very tiny, let's say very tiny customers which are deploying smaller VMs, and for the smaller VMs we'll say...
A: ...okay, this is a smaller VM, so you get a smaller number of CPS and so on, and there will be all those kinds of limits, which means we can fit more of those on the card. So it's mostly about not constraining ourselves. But this 100k is mostly per ENI.
C: For the capacity that you're having right now, what does it look like? Do you know offhand?
A: Is the question clear? I would say the answer is actually very hard from my point of view, because we don't yet have those kinds of... The only devices that we have right now that are doing this are software-defined devices, where we basically have lots of memory and everything is software processing. So we never... we don't yet have any kind of...
A: ...hardware with which to figure out how much we can really fit in hardware. If we had it, we would tell you guys: right now we have hardware that fits so many, and we basically want x more; but we don't know. So, got it: we don't know; let's flip that to you guys as a homework item to think about.
A: To answer the question of whether this will be a direct multiplication: let's go like this, let's take it as homework, because...
A: ...I have lots of questions about this, and I know that we prepared those figures thinking: okay, we want this because, for example, our platform has this kind of stuff. But let's take it as homework to discuss. I know that James on our side, he's an awesome guy who is driving the ENI placement and figuring out what kind of SKUs we'll be offering, how many CPUs, this kind of stuff, and that drives part of the discussion.
B: So hey, this is Mel. I'm new to this forum, but a lot of this, some of this, is specified in the other document, the scale requirement document, where they have the hero test.
K: We have a hero test, and the hero test outlines worst possible conditions and some performance metrics we want you to be able to hit, and gives some configurations. But there's kind of an overall question about the given capacity of what a DPU looks like, and we have some ballpark figures. There are a lot of great questions that Michael has attempted to answer, and to a certain extent what he's telling you is that there's a little bit of a sliding scale, because things are proportional.
K: So the different vectors, the CPS vector, the bandwidth vector, the memory vector, all kind of have to work together, and we have ballpark figures, but we do need to come back and give folks a little more clarity on what that looks like. So definitely we're looking at this now, and we can come back to this forum and try to give the information in the terms that people are asking for.
I
Great. One follow-up question I have regarding that document: is this the goal for DASH, or is this the current deployment that you have? How should we interpret the scaling numbers in this document?
L
So, Lisa, maybe I can take this. For the scale numbers, you know, they're always going to increase, right? We have switches at, you know, 12 terabits, and, you know, 25 terabits; it's always going to increase as the hardware improves. So in terms of scale, the customers' load increases as well.
L
I think we probably wouldn't give you an "okay, this is the goal", right? But I would say at least this should be, you know, kind of the minimal bar.
B
Here we go, this is the one. This was eye-opening to me: this showed, for eight, this is really eight ENIs per DPU, I assume, which leads to this. Again, it sounds like these are not necessarily maximum numbers, but this is one scenario where there's an expectation to be met.
K
Let me give some context here as well. Eight ENIs is not necessarily the maximum you would ever have. What we wanted to demonstrate here is that one ENI could potentially be the entire capacity of a DPU, or you could have multiple.
K
If you took the same capacity of one DPU and divided it equally amongst eight, you might see a configuration like that. So this isn't comprehensive; again, it's a little bit of a sliding scale.
K
The purpose is showing you that if you take the entire capacity of a card, it could all be facilitated in one ENI, or it could be divided amongst a certain number of ENIs; that's why we split between eight and one. It also gives an indication of what a stressful scenario per ENI might look like. And, you know, maybe some people were referencing something like 40 ENIs.
K
So if you took the entire capacity of a card again and divided it amongst 40, you might see another test scenario for 40 ENIs, based on the capacity of a DPU.
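The "sliding scale" K describes can be sketched numerically: a DPU's total capacity can sit behind one big ENI or be divided evenly across several. This is only an illustration of the proportional split; the capacity figures below are made up, not the hero-test numbers.

```python
# Illustrative sketch of splitting a DPU's capacity vectors evenly across
# N ENIs, as described above. All figures are placeholders, not hero-test data.

def per_eni_share(dpu_capacity: dict, num_enis: int) -> dict:
    """Divide every capacity vector (CPS, flows, bandwidth, ...) by num_enis."""
    if num_enis < 1:
        raise ValueError("need at least one ENI")
    return {name: total / num_enis for name, total in dpu_capacity.items()}

# Hypothetical 200 Gbps DPU: one ENI can take everything, or 8 or 40 share it.
dpu = {"cps": 4_000_000, "active_flows": 16_000_000, "bandwidth_gbps": 200}
one_eni = per_eni_share(dpu, 1)
eight = per_eni_share(dpu, 8)
forty = per_eni_share(dpu, 40)
```

With one ENI the per-ENI numbers equal the whole card; with eight or forty, each vector shrinks proportionally, which is exactly why the one-ENI and eight-ENI test rows differ only in how the same total is partitioned.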
B
I had this question for you also, Christina: is this correct? Because it looks like this changed in the last month or two. The one-ENI scenario, if you notice, is exactly the same as the eight-ENI one, and these numbers look wrong to me. One ENI would not have 48 unique NSGs, would it? Because I thought that in the other document there was...
D
So if you're talking about the bandwidth or the capacity of a DPU with 200 gig, then we should basically spell it out and say: here we are talking about the capacity of a 200-gig DPU, and if that were to increase, the scale basically increases proportionally. That needs to be talked about as well; otherwise it just confuses, you know, some of the people who are probably working on higher bandwidth or lower bandwidth, whatever the case will be.
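D's point, that figures quoted against a 200-gig DPU should be rescaled proportionally for devices of other sizes, amounts to a simple linear normalization. A sketch, where the 200 Gbps reference comes from the discussion and everything else is hypothetical:

```python
# Sketch of proportional scaling: requirements quoted against a 200 Gbps
# reference DPU are rescaled linearly for devices of other bandwidths.
REFERENCE_GBPS = 200  # the DPU size used as the example in the discussion

def scaled_requirement(value_at_reference: float, device_gbps: float) -> float:
    """Scale a 200G-referenced requirement linearly with device bandwidth."""
    return value_at_reference * device_gbps / REFERENCE_GBPS
```

Under this convention, a 400-gig device would be expected to carry twice the quoted flow and CPS figures, and a 100-gig device half, rather than every vendor reading the absolute numbers as fixed.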
F
There is one area that is really difficult to support: if you have multiple ENIs, you can balance them among cores and share things, but if you have only one, the balancing becomes more difficult.
K
Let me interject here. I think perhaps we can give some more clarity on what some ballpark figures might look like, and we can also talk about what capacity is, how you might hit capacity, and where you're likely to hit it. There are a few things we can do here to clear this up, and we hear your feedback loud and clear.
K
This needs more clarity, so Michael, Christina, and I can work together and try to come up with a little more context to guide folks, because this has been brought up by a few folks: we need a little more clarity on what capacity should look like, what is a min bar, what is appropriate sizing. I think we can get there together.
I
Another follow-up question that I have: we briefly discussed connection tracking, how we're doing flow learning. I assume eventually the controller needs to know about all of these new flows. What can you share about your thought process: how quickly do those flows need to come to the controller, and for short-lived flows, what is the expectation?
I
So you're not interested, from the controller, in knowing the newly learned flows? But I thought we also briefly discussed that there is a statistic associated with each of the flows.
A
The statistics are mostly not about the flows but more like counters. For example, counters will be longer-lived than the flows, because counters will be pre-created before any traffic comes. The counters, like the metering buckets: all the traffic, let's say, for the VNET, for this ENI, will be using these buckets. So every time a new flow gets created and it gets some traffic, it increments this bucket, and then the flow gets removed.
A
So basically, as long as the ENI exists, those counters will exist, and we want to be able to query those counters' current value. But we are not really interested in knowing which flow incremented the counter, except for diagnosability, to debug some stuff.
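Michael's counter model, counters pre-created per ENI, incremented by flows, and surviving flow removal, can be sketched as below. The class and method names are invented for illustration; this is not a real DASH or SAI API.

```python
# Sketch of the counter lifetime model described above: counters live as
# long as the ENI, flows reference them, and a counter's value survives
# the flow that incremented it. Names are illustrative, not a DASH API.

class Eni:
    def __init__(self) -> None:
        self.counters: dict[str, int] = {}  # counter_id -> accumulated bytes
        self.flows: dict[str, str] = {}     # flow_key -> counter_id it meters to

    def create_counter(self, counter_id: str) -> None:
        self.counters[counter_id] = 0       # pre-created, before any traffic

    def create_flow(self, flow_key: str, counter_id: str) -> None:
        self.flows[flow_key] = counter_id

    def on_packet(self, flow_key: str, nbytes: int) -> None:
        self.counters[self.flows[flow_key]] += nbytes

    def remove_flow(self, flow_key: str) -> None:
        del self.flows[flow_key]            # the counter keeps its value

    def query_counter(self, counter_id: str) -> int:
        return self.counters[counter_id]    # what the controller reads
```

The key property is the last two methods together: removing a flow does not touch the counter, so the controller only ever queries counters, never flows.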
I
Just to clarify: now I heard you mention it's going to be a per-ENI counter, and earlier we also talked about potentially a per-LPM-entry counter.
A
We can add the counter globally; that's not the problem. The thing is, when later we are creating a route table, the route entry or the mapping should be able to basically reference the counter. And the reason it is per ENI is that right now we don't share the same counter across different ENIs, so there is a locality to this.
I
So let me rephrase my question, Michael: from the controller point of view, you are not interested in a per-flow counter?
A
Let me fully clarify. From the controller point of view, I'm interested in the counter as an object, set up, let's say, at DSC level or ENI level, that the flows should increment. I'll be querying those counters; the flows just carry the ID of the counter they increment.
A
No, so basically it is like this: each flow should increment some counter. For example, in this case, because of the routing rule, I will plumb lots of mappings, and each mapping will reference a separate counter object. Then, when the packets come, they will create lots of flows, and each flow will have a separate counter.
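The mapping-to-counter plumbing just described might look like the following sketch: each mapping pushed by the control plane carries a counter reference, and a flow created from that mapping inherits it. All names and addresses here are hypothetical.

```python
# Sketch: each plumbed VNET mapping references its own counter object;
# a flow created from a mapping inherits that counter reference, so flows
# to the same mapping share a counter. Hypothetical names throughout.

mappings = {  # overlay IP -> mapping, each with its own counter reference
    "10.1.0.4": {"underlay": "100.64.0.7", "counter_id": "cnt-1"},
    "10.1.0.5": {"underlay": "100.64.0.9", "counter_id": "cnt-2"},
}

def create_flow(src_ip: str, dst_ip: str) -> dict:
    """A new flow records which counter it should increment."""
    mapping = mappings[dst_ip]
    return {"src": src_ip, "dst": dst_ip, "counter_id": mapping["counter_id"]}
```

Two connections to the same destination mapping end up sharing one counter, while connections to different mappings get different ones, which is how flows come to share or hold unique counters depending on how the customer's connections originate.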
A
So in this case each flow will have a counter, and potentially those flows can share the same counter, or they can have unique counters, depending on how the connections from the customer originate. But I will not be querying flows to get the current counter value; that's not how metering works.
A
There is a separate discussion that we are not even starting here, which is diagnosability. Customers sometimes want the ability to proactively, let's say, dump the current flows which are active. It doesn't have to be fast, and of course we'll be missing flows; it's kind of like sampling. It's not tracking all the flows, but they have the ability, for example, to get the current flows.
A
They just want to know what kind of connections are happening right now through their VMs. But it's a completely separate topic. It's more that, at some point, since you guys will have the flows in hardware anyway, you could give an API to query this. It doesn't have to be fully consistent; it just returns the current state, from both points of view. But we'll not be using this method for metering.
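The flow-dump diagnosability API Michael sketches, best-effort and not transactionally consistent, could look like a paginated iterator over the flow table. The interface is hypothetical; a real implementation would read the hardware flow table rather than a Python dict.

```python
# Sketch of a best-effort flow dump for diagnosability: it pages through
# the current flow table without guaranteeing a consistent snapshot --
# flows created or torn down mid-dump may be missed. Hypothetical interface.
from itertools import islice

def dump_flows(flow_table: dict, page_size: int = 1000):
    """Yield lists of (flow_key, flow_state) pairs, page_size at a time."""
    it = iter(flow_table.items())
    while True:
        page = list(islice(it, page_size))
        if not page:
            return
        yield page
```

Pagination keeps each query cheap, and the lack of a snapshot guarantee is acceptable here because, as stated above, this path is for debugging, not for metering.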
A
For, let's say, metering the customer, we'll just be creating the specific counters. So let's say we create 100 ENI counters: one will count, say, VNET traffic inbound, with a separate one for outbound; this one can, for example, be peering traffic; this counter will count traffic for this IP, and that counter will count traffic for that IP.
A
This counter will count internet traffic, and so on. Then we'll associate those counters with specific routing rules, and as the flows get created, they'll refer to those counters and increment them. A flow will die, a new flow will arrive, and so on. And from the control plane...
D
I think those, along with everything else that I said earlier in the meeting: counters is a massive subject that we need to really talk about, especially with all the statistics, flow notifications, performance statistics, these global counter objects and their associations. There are all the different ways we want to handle it: how often we're going to read it, what type of APIs there are going to be, whether we are going to poll or push from the controller, all of those things.
A
Sorry, can you queue this for the next meeting, so we can discuss those limits and this kind of stuff? Yeah, definitely. And I can tell you the ballpark right now: I believe the max that we are using right now is something like 10k counters per ENI, and on average we are using much, much less.
C
Yeah, and if you can throw your comments and issues into the GitHub, we would appreciate it, and yes, we will work on scale and everything and get back. Thank you, everyone.
A
And I would also say: if you guys have some specific format in which you would like to see this data, whatever you believe will be most readable for you, feel free to send us an email, or send an email to the community, saying, hey, you guys are tracking something similar in this format or that format, and we will try to see whether we can also fit it in that format. All right, guys, so this...