From YouTube: DASH Workgroup Community Meeting 20220209
Description
February 9, 2022 Community Call
A
B
I thought you did, yeah. I did also; I'd made about half a dozen or more comments on his pull request and, as of yesterday, he resolved all my points and made a number of changes to his toolchain, which is great.
A
Oh, wonderful, wonderful. And then they've started working to configure the simulator, so creating definitions and things like that. So if he joins, that's great; if he doesn't, that's his update.
C
D
The data plane behavior: what do you mean exactly? Can you please clarify?
C
Yeah, struct graph is a keyword that doesn't exist in the P4 language in the open source tools. It's in your sample code list, oh.
D
I understand, okay, yeah. So missing pieces from both the language and the simulator, yeah. We have a backlog of those; however, we will take them on after we finish with the SAI to P4Runtime layer. So we are doing this one right now and the others are in the backlog.
D
We would like to get some help if anyone volunteers, but right now the plan is: first, we finish all the software layers above, so we can at least run whatever the current P4 software stack supports, and then we will extend this later.
C
E
So when you say you'll do it after you're done with the SAI to P4 layer, is that the SAI to P4Runtime work that you guys are doing?
D
We only started analyzing the amount of work there, because it needs some additional information which, luckily, is already available from the P4 compiler.
A
Great. And so last time we talked about scale numbers, we talked about counters, we talked about HA; and so this time we have our update from Marian. In the future we have a couple of homework items to focus on with respect to, you know, counters, metering, scale, those kinds of things. And so, Marian, you're saying we'll check back on numbers six, seven, and eight, maybe in a couple weeks, did you say?
D
Can I also ask a question; maybe I missed this, about the HA? Is there a document published? Because I could have missed that one.
A
And then, in the notes last week, Marian, we also talked about, and I hope you received this, we also talked about it quite a bit here, because there were questions that weren't really answered in the document, specifically related to failover and live migration, and whether we use the same card or different cards. How do we standardize how we communicate between the two cards?
A
F
Right, and we're working on this. This is exciting; we're working on this and we hope, in a future meeting, to be able to present some high level... some.
F
So I think we'll plan that with you, Christina. And so, yeah, just so everyone knows, we're working on just some high-level stuff right now related to these items. Yep.
A
And Michael Miele and I would be super grateful if any comments on the documents could be put in as suggestions, comments, or PRs. However we're doing that would be great, so we can keep track.
B
John, you said, sorry, John, you said you were going to work on some flow synchronization ideas, yeah?
F
We're currently working on what we think the right model is for interoperability and for flow synchronization at a high level, and I think we would like to present that at some point in the future, not too far off; and then hopefully, if there's some consensus around that, we would follow it up with a more detailed presentation.
B
That's great. I wanted to ask some general questions about that topic, and maybe I'll just post an issue that's kind of an open question about that. For example, basic things like: is there an assumption that all the synchronization occurs over the in-band links between the DPUs and the ToR? Is there some kind of a route between cards, or is there a dedicated hardware connection between them? Those are kind of basic things that I don't think are stated anywhere.
G
Hey Chris, so there will be no hardware connection between cards, and the main thing here is that, from the point of view of the failure domain, all the cards, for example, on one chassis will be under the same power. So the replication for high availability needs to happen to a card which is somewhere else in the data center. So that's why this will be over the network.
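The constraint described here, that an HA peer must live in a different failure domain than the primary card, can be sketched as a simple pairing check. The class and field names below are illustrative assumptions for this transcript, not part of any DASH API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dpu:
    name: str
    chassis: str  # chassis identifier; cards in one chassis share a power failure domain

def valid_ha_peer(primary: Dpu, candidate: Dpu) -> bool:
    """An HA peer only helps if a chassis-level power outage
    cannot take down both cards at once."""
    return candidate.chassis != primary.chassis

# Example: two cards in the same chassis are not a useful HA pair.
a = Dpu("dpu-a", chassis="rack1-chassis1")
b = Dpu("dpu-b", chassis="rack1-chassis1")
c = Dpu("dpu-c", chassis="rack7-chassis3")

assert not valid_ha_peer(a, b)  # same chassis: shared failure domain
assert valid_ha_peer(a, c)      # different chassis: valid peer
```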
B
Yeah, and the protocols have to respect it, and addresses and other things. I mean, there's just basic stuff before we even get to the algorithm: what can the network support in terms of messaging between cards, if someone comes up with their own scheme? Like, is it TCP or UDP over IP, or multicast addresses, or whatever; there's all kinds of assumptions we need to start with.
G
And the restriction will be whatever our ToRs are configured to transfer. Maybe Guohan later can also update this section, because he's more familiar with the physical network on the ToRs; but this will need to be a unicast communication, directly between one card and the other card, through some channel that is established there, whether it will be UDP or some potentially custom protocol. I'm not quite sure what kinds of protocols the ToRs forward, so I will ask Guohan, basically, to potentially add those limitations.
H
B
E
Follow-up question on your comment on this: you know, pairing of DPUs for HA purposes, as you mentioned.
A
E
For a power failure domain you would want to have this one across chassis, right? Yes. But what if there is only one chassis, which has multiple cards in it? Would you still consider, or at least from the controller point of view, would the controller still consider, pairing DPU cards within the same chassis for the card?
G
From the point of view of scaling, yes, to distribute part of the traffic; because we also plan, in the future, since each card will be announced on its own VIP, to, for example, split on the source side, to have like a hardware splitter, to split, let's say, one fourth of the flows to basically one VIP versus the other, versus a different card.
G
So from the point of view of scale, sure, but not from the point of view of availability, because availability basically means that the customer is not impacted in case there is a power outage or a hardware failure, this kind of stuff. If this is the same chassis and the entire chassis goes down, the customer is impacted. If the power goes down, the customer is impacted. So this doesn't serve high availability; it needs to be from a different chassis.
J
So I just want to make it clear to the community: that is the Microsoft policy. If you were an enterprise, and you wanted to do that, and that was good for your particular needs, you should be able to do it. We won't do it in Microsoft, just because we really are strict about that, but I wouldn't go as far as to say that no enterprise would ever use that feature or want to have HA within the chassis.
J
K
Yes, so on the synchronization: I heard that synchronization across chassis will happen on the network, and the thinking is maybe not multicast but TCP or UDP. Are these synchronization packets directly consumed by hardware, or is there a control plane interaction?
G
Directly consumed by hardware; there will be no control plane interaction. So here, what we are talking about with synchronization is more about replicating flows, or the metadata of how a flow got constructed from some specific packet, from one device to another. This will be fully controlled and consumed by the hardware, because the flows get created very, very fast. There is no way the control plane would do this kind of stuff.
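As a rough illustration of "replicating flow metadata from one device to another", the snippet below packs a minimal flow record into a fixed binary layout of the kind a hardware pipeline could emit and its peer could consume. Every field and the layout itself are assumptions made for this sketch, not a DASH wire format:

```python
import ipaddress
import struct

# Hypothetical fixed layout: src IP, dst IP (IPv4), src port, dst port, protocol, flow state.
FLOW_SYNC = struct.Struct("!4s4sHHBB")

def pack_flow(src_ip, dst_ip, src_port, dst_port, proto, state):
    """Serialize one learned flow for unicast replication to the HA peer DPU."""
    return FLOW_SYNC.pack(
        ipaddress.IPv4Address(src_ip).packed,
        ipaddress.IPv4Address(dst_ip).packed,
        src_port, dst_port, proto, state,
    )

def unpack_flow(payload):
    """Reconstruct the flow record on the receiving card."""
    s, d, sp, dp, proto, state = FLOW_SYNC.unpack(payload)
    return (str(ipaddress.IPv4Address(s)), str(ipaddress.IPv4Address(d)),
            sp, dp, proto, state)

msg = pack_flow("10.0.0.1", "10.0.0.2", 12345, 443, 6, 1)
assert len(msg) == FLOW_SYNC.size  # 14 bytes in this sketch
assert unpack_flow(msg) == ("10.0.0.1", "10.0.0.2", 12345, 443, 6, 1)
```

A fixed, versioned layout like this is the kind of thing a hardware pipeline can parse at line rate, which is the point G makes about keeping the control plane out of the fast path.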
K
Okay, so if that's the assumption, then I assume there has to be some kind of a protocol definition which hardware can support, right?
H
L
I think we have a lot of things to do, and if we start from the synchronization protocol, that is the most hardware-dependent, the most CPU-dependent. I mean, I think it's okay to define an interface in which we tell the DPU it needs to synchronize with another DPU, that we need to synchronize all the modality parameters, etc. That interface is perfectly fine, but a standardized synchronization protocol, in terms of packet format, I'm not sure that is fine.
L
J
We have lots to do and we may not be able to standardize it right away, but I don't want to be testing different HA approaches where one assumes one thing, another assumes another thing, and everybody does it a different way. I think that that communications protocol is relatively straightforward and we should define it. Whether it's at the top of the list to define, I don't think so, but it will be defined, and we want these things to be testable in the same way.
J
We don't want, in the future, to be testing with one company and find out it behaves differently.
K
I agree, Gerald, that's my concern. My concern is that, let's say, today I implement it: first is the interface, we also have to come up with APIs for it, and then the implementation. If I do some implementation which does not interop with another vendor, then I don't want to go back and redo all that effort. So at a very skeletal level, very basic, we do need some kind of a definition.
F
I think that there are two separate cases. There's a case of failover, where you're actively synchronizing, and I think that, for that case, there is a legitimate constraint you can put on it that says it's just two cards from the same vendor for the purposes of failover, and there should be flexibility to allow the vendor to define what's optimal for them in terms of the protocol. I agree that the SAI...
J
F
J
F
J
I'm just going to state my opinion: I strongly disagree. I want all the testing to be the same, and for there not to be one vendor doing it one way, interpreting it one way, and then saying, well, it's okay that HA took five seconds, we thought that was okay, while the other one says, oh no, ours is hitless, zero, you know, one microsecond; and it just goes on and on. So I don't think it's as complicated as you're trying to make it out to be.
J
We have some people who are wanting to propose some methodologies. I will look at those and, if they make sense, then we can standardize on them. I think, as far as priority, it may not be the highest priority, but at the same time I don't see this going into our network just willy-nilly.
J
Everybody just picks their approach; we don't have time to test everybody's different approach. They should all work the same way. What happens under the covers in the hardware, I don't care, but as far as how fast they are and what protocols they use, the high-level protocol to make this work shouldn't be that complex.
J
I think I'll talk to the company to see if they're willing to come forward with a proposal; I mean, obviously we do this today. And secondly, there's another company who's working on a proposal right now.
A
B
If we come up with basic measurements and objectives, I think that'll level the playing field in terms of how we rate things and how we measure them. Even if there is a unique protocol between them, as long as the messages can travel over the L3 network, then it doesn't have to be standardized; but certainly the requirements can be standardized, and then the metrics for how well they perform.
L
J
It should be very similar for HA and migration, by the way. So let's table it; you know, everything is always open, guys, but right now I would not want to make it a goal not to standardize. I want...
C
J
A
K
Yeah, at least I would like to see the strawman, so that you get some idea. It doesn't have to be fully hashed out, just a...
E
Yeah, so I don't know what's next on the agenda, but do we want to really go through the issues, the list of questions that are basically put forward as an issue, or list of issues, in GitHub?
A
We can. I think a couple of people had hands up real quick; let me check here. Michael, you had your hand up.
I
Yes, this is Michael Miele. I'm working on documentation with Christina, and this is more general, for the community, in order to be more effective and efficient in terms of documentation. I was wondering what the community thinks about this: obviously not all the questions can be put in writing and put in PR form, but I was wondering if it's more useful to have a PR for specific topics, like high availability and scale, and you guys can put in some comments and maybe some suggestions on how to also improve the documentation, because this is a work in progress. I think that would be more efficient and we can be more...
I
We can follow what you guys are asking: provide questions where there are questions, and provide answers where there are answers.
B
Michael, can I just add on to that? Yes, I would like to distinguish between PRs, which are something that someone wants to commit to the repo, something that's already been documented, or an addition or enhancement; and issues, which would be questions about what's already in there, or questions about what should go in there. Issues can be tracked, including complete comment threads and responses, before they ever turn into PRs, and I don't think PRs should be filed just to ask questions; PRs should be filed to add or change content.
B
I
B
I
M
A
You know, we had one inside this older connection tracking issue, where Guohan was asking Marian if this was correct.
J
A
Guohan to Marian; I don't know if it's been addressed, but it's from December.
D
Yeah, this is a bug, first of all. And second, I will review the full pull request by the end of this week; I didn't get to it yet. I will also review this proposal, to see if we can actually incorporate it.
A
I'll just put a comment there. Great, thank you. And look over here; this one had quite a few.
A
So this high-level design is a constant work in progress by Michael Miele, Chris Sommers, and I, and it's almost ready to be closed and pushed to the repo. This is just more to do with the DASH state-level interactions: we've added text on how those interactions happen, and different updates we make to the drawings and designs. So that's what this is, and then this one...
A
Yeah, and then we're restructuring the folders and things like that. Maybe we go to issues; this one was pretty interesting. I had answered the question here about how IP options and fragments are handled, and I believe we want to handle it similarly to how we do it today, in our existing implementation.
E
H
E
A
I want everyone's cell phone number; I will text you. I'm just kidding. Okay, so I feel like I answered this one, hopefully, and then we've talked about how we feel we need to work on connection tracking, and did...
J
Is there a pull request, Christina, on the behavior that we found, that we do with the fragmentation, and why we need to not close the connection for X amount of time, in case...
A
It is, it was in... I just updated it the other day, and it was in an issue, not a pull request. I believe John had asked it in an issue, and I answered it within the issue, and I was just going to try and find it.
C
J
No, that's not... that's not clear what we do at all.
J
I think the endpoints do that, but when you go to close the connection because you got a FIN, it's possible that a fragment is out of order, and what we do with our designs is we hold for X amount of time, to make sure that an out-of-order fragment doesn't follow the FIN packet. And so we've given, you know, the numbers that we use, and this is the way our cloud works today, and this was not in the behavioral model.
J
Nobody really thought to put this in there, so Mel Knox, or someone who's going to work on the VNET behavioral model, needs to add the behavior that we need here. Yeah.
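The behavior John is asking for, that a flow is not deleted the instant a FIN arrives, but held for a grace period so a late, misordered fragment can still match, can be sketched as a toy connection table. The timer value and names here are illustrative assumptions, not the numbers used in Microsoft's cloud:

```python
import time

FIN_HOLD_SECONDS = 30.0  # illustrative grace period; not a DASH-specified value

class ConnTable:
    """Toy connection table: flows that have seen a FIN are kept for a hold
    period, so late out-of-order fragments still match an existing flow."""

    def __init__(self, now=time.monotonic):
        self.now = now
        self.flows = {}  # five-tuple -> expiry time, or None while active

    def learn(self, five_tuple):
        self.flows[five_tuple] = None

    def saw_fin(self, five_tuple):
        # Instead of deleting immediately, schedule removal after the hold period.
        self.flows[five_tuple] = self.now() + FIN_HOLD_SECONDS

    def match(self, five_tuple):
        expiry = self.flows.get(five_tuple, "miss")
        if expiry == "miss":
            return False
        if expiry is not None and self.now() >= expiry:
            del self.flows[five_tuple]  # hold period elapsed: safe to forget
            return False
        return True

clock = [0.0]
table = ConnTable(now=lambda: clock[0])
ft = ("10.0.0.1", "10.0.0.2", 1234, 80, 6)
table.learn(ft)
table.saw_fin(ft)
assert table.match(ft)          # late fragment inside the hold window still matches
clock[0] = FIN_HOLD_SECONDS + 1
assert not table.match(ft)      # after the window, the flow is finally removed
```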
C
A
G
Well, to be honest, I'm not from the VFP team; they probably solve it somehow, but I don't know the answer off the top of my head.
A
So yeah, if you put it in the issue, we'll chase it for you.
P
J
That's right, let's get that done. Guys, specifically, we brought the fragments up because of the potential misordering that could happen: if we incorrectly close a connection, and there is a fragment misordered, and we close the connection right away, it is possible to lose it, and we don't want to. And so we've stated what we do in our VFP for that specific application.
J
I think somebody's talking about a different thing; we're not talking about doing reassembly, we're talking about the case where a misordered packet comes in after we close a connection because we saw a FIN.
J
O
I think the scenario here is important to capture. Was it John? Were you the one who brought up the fragments arriving after the first fragment, assuming it's IP traffic? Can you shoot me an email with this specific scenario that you're talking about, or we can post it into GitHub if you want to, but we always...
J
O
C
O
Yeah, I think I did that, to jagger at microsoft, but here, I'll send you a quick message right now. Yeah.
F
Yeah, this is John. So the thing about the FIN wasn't specifically about out-of-order fragments; it was about the fact that there are ACKs that can happen after the FIN, and perhaps this time covers the window in which you would expect to get ACKs. But it seems, if you're going to keep this flow around for 5 to 240 seconds, then, at very high connections-per-second rates...
J
I definitely agree with you on that one; that's a potential solution to the problem. We wanted to just point out that we need to resolve it and put it in the behavioral model, and potentially that is a way to do it much quicker. So...
A
N
Yeah, so basically you mentioned this is the current deployment in Microsoft right now, in terms of how long we want to hold off when we receive the FIN. But in the test document, and this is to the point I just mentioned, the timeout seems to be controllable: set it at one second, you know, to test how many connections have been cleaned up and how many are still outstanding.
G
There is some scenario with regards to load balancing: there are some rules, which are handled with load balancing, for the inactive flows, because there is a separate timer, such that if there is an inactive flow, this inactive flow also has a timer, and at some point it basically will need to be removed if there are basically no packets coming through the flow; because the source or the destination may just die, and they will never send FINs, right, so yeah.
G
Let's say, for example, the customer is going to SQL. Sometimes the SQL connections are dormant for a very long time, even though they should keep sending keepalive messages. So there is some configuration for this type of timer that, on some rules, we may sometimes need to add; but that's not the timer for the case where there's a FIN and how long the connection should be kept. That one is how long the flow should be kept, basically just to wait for the late packets and the fragments.
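The idle-flow aging G describes can be sketched as a per-flow last-seen timestamp plus a per-rule idle-timeout override for long-lived, mostly-silent connections. The timeout values and rule names below are illustrative assumptions, not documented DASH behavior:

```python
DEFAULT_IDLE_TIMEOUT = 300.0                 # illustrative default, seconds
RULE_IDLE_OVERRIDE = {"sql-lb": 3600.0}      # hypothetical per-rule override for dormant SQL flows

def expired(flow, now):
    """A flow ages out once no packet has refreshed it within its timeout."""
    timeout = RULE_IDLE_OVERRIDE.get(flow["rule"], DEFAULT_IDLE_TIMEOUT)
    return now - flow["last_seen"] > timeout

def sweep(flows, now):
    """Remove flows whose endpoints silently died and will never send a FIN."""
    return [f for f in flows if not expired(f, now)]

flows = [
    {"rule": "default", "last_seen": 0.0},
    {"rule": "sql-lb",  "last_seen": 0.0},
]
# At t=600s the default flow is aged out, but the SQL rule's longer timeout keeps its flow.
kept = sweep(flows, now=600.0)
assert [f["rule"] for f in kept] == ["sql-lb"]
```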
N
G
Yeah, if it hits a specific rule, then basically this means we need to override it for a specific time, yeah; but we didn't document it yet, right. This is more like a software load balancing scenario, so it's a slight addition to the VNET-to-VNET traffic, but this is kind of like the next potential step. So...
J
So anyways, I think that maybe Michael and Maddie, and whoever wants to be involved, should have a separate meeting to figure out what the behavior should be and how many...
N
J
E
And then, obviously, we also want to see, I guess from day one we are seeing some model there, and we are also hearing that, okay, these are not complete; but I guess we probably need to start putting some timeline on it, by which we think we want to get it completed.
E
The goal was to complete it before Christmas, but it didn't happen, so we're already basically behind. But I think we can probably set ourselves some goal in the future, so we know by when we should shoot for it.
E
Q
Can I add a comment? Yeah? Thank you. This is Niranjan. I want to read the comment that someone made earlier, that there's a lot of complexity in dealing with fragments, especially fragments in the middle of the network; there's a difference between handling them at transit devices versus handling them at the final destination.
Q
J
Q
J
G
Yeah, within the network we basically use jumbo frames, so we don't fragment; VMs usually cannot emit jumbo frames.
G
Q
We are looking at the case where the inner packet is fragmented for some reason, but the tunnel is not. Okay, so then getting to the right location may not be a problem, but doing further processing at that point may be difficult, because you don't have the layer 4 information in the fragments of the inner packet.
J
Q
O
I think we can do a deeper look at this, and just verify, Gerald, if what you say is true; and we've already taken a step to capture this scenario. So, Niranjan, if you want to follow the thread that John posts and add any details, let's do that. But I...
J
O
G
And I'm just confirming, in general, because I'm speaking separately on a chat with Guohan, and I don't think we have fragments in the overlay layer on the customer side. But I'm just confirming this with VFP, because all the VFP matching is also based on the five-tuple match.
G
If the fragments don't have the five tuple, I also don't know; they should not be matching. So I don't believe we have fragments, to be honest, on the overlay layer, but that's an interesting question. We need to confirm it, because on the underlay we don't fragment packets; we use jumbo frames and they just flow right through.
G
Yeah, we need to get confirmation, and the question will also be from the point of view of the ExpressRoute devices which are coming into our network, in case the ExpressRoute circuit is doing some fragmentation; but I don't believe this is happening. I will wait for the VFP team to comment on this. Right now my gut feeling says that we don't have fragments on the overlay layer.
O
Yeah, but it doesn't...
D
M
If the overlay packets, you know, have the additional headers, and if that size is incorporated in the MTU, in the jumbo packet size that you have configured, then it should be okay, yeah, similar to the underlay.
G
Yeah, but at the same time, I wonder, even in this case, because we cannot judge the packet against the rules one by one if it doesn't have L4, and it also will not match an established flow. So I believe we will just drop the packet if the customer basically does this; let's confirm it. Yeah, that's an interesting question, yeah.
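The problem being discussed, that a non-first IP fragment carries no L4 header, so a five-tuple match can neither evaluate ACL rules nor find an established flow, can be sketched like this. This is an illustrative model only, not VFP's actual behavior:

```python
def five_tuple(pkt):
    """Return the five-tuple for a packet, or None when it cannot be extracted.

    For IPv4, only the first fragment (offset 0) carries the L4 header,
    so later fragments have no ports to match on.
    """
    if pkt.get("frag_offset", 0) != 0:
        return None  # non-first fragment: L4 ports are missing
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

def classify(pkt, established_flows):
    """Drop packets that match neither an established flow nor any five-tuple rule."""
    ft = five_tuple(pkt)
    if ft is None or ft not in established_flows:
        return "drop"
    return "forward"

flows = {("10.0.0.1", "10.0.0.2", 1234, 443, 6)}
first = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 443,
         "proto": 6, "frag_offset": 0}
later = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6, "frag_offset": 1480}

assert classify(first, flows) == "forward"
assert classify(later, flows) == "drop"  # no L4 header: cannot match the flow
```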
A
Okay, great, thank you. So I feel like we've gone through the issues and the PRs to the point where we can... does anyone else have anything this week, or could we break?
M
Gerald mentioned earlier some offline meetings about these discussions, Maddie and everyone; please include us as well. We've done a bit of work there, so it'll be good to discuss together.
A
M
Yeah, and we do have that, and, you know, we should begin the work soon. We already have that now, I think.
N
Yeah, so, Christina, I opened an issue to try to get a confirmation, kind of a continuation from the last discussion, regarding whether the controller needs to be aware of the dynamically learned flows.
A
So we talked about that last week, where Michael said the controller didn't need to be aware, but here it is, right here, yeah.
N
Yeah, yeah, so I'm trying to get a confirmation if that is the right understanding, because I didn't quite get a firm answer last time; I may have missed it. Okay, one...
A
N
One more time, Michael. So, what's the question exactly? So, for the dynamically learned flows: is it correct that the controller does not need to know about a flow, both when it's learned and programmed and when it's deleted?
J
N
Right; no, no, no, you do not need to know. And if that is the case, similarly the statistics for the flow also, right? Because you don't know...
G
There are some sampling statistics we need to do, but basically we don't need to know about every single flow, right. However, there are the statistics on flows, which is, for example, the metering, because each flow will be incrementing some metering bucket, on which we are billing customers. We don't care per flow, but we can later query, let's say every 10 seconds, the statistics per bucket. So, for example, lots of flows may be going, let's say, across peering, and we don't care, basically, which flows are going across peering.
G
We just care that, basically, the bytes on the peering bucket get incremented, so we are querying those kinds of buckets. However, there are also scenarios where the customer can do random sampling, which means they can ask us to, for example, query the current flows. So it's not every single flow, but, for example, every, let's say, X amount of seconds.
G
We should be able to, for example, per ENI, dump the current flows, mostly from the customer perspective, which is source IP, destination IP, source port, destination port, and protocol. So the customer can sample this, and this is being used; but not very close tracking of every single flow and reporting every single flow to the control plane.
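The metering model described here, where flows increment shared buckets and the control plane periodically polls the buckets instead of tracking each flow, can be sketched as follows. The bucket names and the traffic-class-to-bucket mapping are illustrative assumptions, not DASH definitions:

```python
from collections import defaultdict

# Hypothetical mapping from traffic class to billing bucket.
ROUTE_BUCKET = {"cross-peering": "peering", "intra-vnet": "vnet"}

class Meter:
    """Per-ENI metering: every packet adds its bytes to one shared bucket,
    so the control plane only has to poll a handful of counters."""

    def __init__(self):
        self.buckets = defaultdict(int)  # bucket name -> total bytes

    def account(self, traffic_class, nbytes):
        self.buckets[ROUTE_BUCKET[traffic_class]] += nbytes

    def query(self):
        """What the control plane would read every ~10 seconds."""
        return dict(self.buckets)

m = Meter()
m.account("cross-peering", 1500)  # two different flows ...
m.account("cross-peering", 900)   # ... land in the same peering bucket
m.account("intra-vnet", 64)
assert m.query() == {"peering": 2400, "vnet": 64}
```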
N
Thanks for the question. Sorry, so, Joe, that will go into the general DASH... yeah.
J
That would be a big help, yeah, and comments from others as well; but we'll document everything that we collect, and why, and then we will put it out there, and other people can say, hey, I need something else. That's fine!
A
L
A
We have it partially documented in the HLD right now, and as we get more, we can go forward under the counter section or the SDN transforms. I'm sorry, I'm in the wrong document. Okay, thanks for the question, Lisa, yeah.
N
I'm sorry, Christina, one follow-up to what Michael is saying. So you don't need to know every single flow that's learned, but you need the capability for the user to ask, for a certain interval, how many flows, and which ones, have been learned on a particular ENI, or matching a particular ACL rule.
N
J
G
E
I just have one quick question, guys, hopefully quick. I think there was one discussion about the route scale, and we saw that the route scale basically increases substantially when we go to the ExpressRoute use case. So I just want to know: out of all the types of different use cases that we are trying to address as part of DASH, what percentage of the activity, or of the customers, use this...
E
...ExpressRoute scenario, so we know, okay, what's the percentage of the, I guess, connections that we're talking about that will be...
J
O
What you're suggesting is maybe that it would be handled separately for the ExpressRoute cases, and so we need to look at that. But I guess the observation is yes: if you're using ExpressRoute, your ENI size might increase because of the routes that you have to account for, and it would completely...
G
J
A
E
Absolutely, thanks. Now I just want to remind you that there was one comment from Michael in last week's meeting, when route scale was discussed, where it was said that, okay, for parity we're talking about 100k routes as, basically, the max number of routes that should be supported.
E
But then the subsequent comment was that, okay, in the typical case, the majority of the time, it won't be more than 10k; but when it comes to the ExpressRoute scenario, then we will hit close to 100k. And that's why I was wondering what percentage of the scenarios we are talking about are for ExpressRoute, so that we can design things accordingly. So you're...
O
A
O
To address that: I think one of the things we're considering is a memory footprint analysis, and we're breaking that down between mappings, routes, ACL layers, and things of that nature, to get a complete picture of what the min and max of each one is, and then using, essentially, the memory footprint of SONiC to measure what a min and max might look like. So we can present to you a range of what you might deal with.
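The min/max footprint exercise described here can be sketched as a roll-up across table types. The per-entry sizes and entry-count ranges below are placeholders to illustrate the arithmetic, not measured SONiC or DASH numbers:

```python
# Placeholder per-entry sizes (bytes) and per-ENI entry-count ranges (min, max).
# These are NOT measured values; they only illustrate the roll-up.
TABLES = {
    "mappings": {"entry_bytes": 64,  "entries": (1_000, 1_000_000)},
    "routes":   {"entry_bytes": 48,  "entries": (10_000, 100_000)},
    "acl":      {"entry_bytes": 128, "entries": (1_000, 10_000)},
}

def footprint_range(tables):
    """Return (min_bytes, max_bytes) summed across all table types for one ENI."""
    lo = sum(t["entry_bytes"] * t["entries"][0] for t in tables.values())
    hi = sum(t["entry_bytes"] * t["entries"][1] for t in tables.values())
    return lo, hi

lo, hi = footprint_range(TABLES)
assert lo < hi
# The range would then be compared against a device's memory budget, e.g. in MiB:
lo_mib, hi_mib = lo / 2**20, hi / 2**20
```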
O
F
So I understood what Michael said, that the counters are sort of indirect: the routes or the mappings might point you to a counter, and many flows will be incrementing a shared set of counters. What I wanted to know is, what's the actual number of counters, like per ENI, that we could expect in terms of scale?
J
That's what's being documented, so until the document's done we couldn't tell you that, but there are people working on it. Okay.
O
Yeah, I think we need to define the counters that we want to capture, what's already available in SONiC, what the deltas are for this implementation, and then, essentially, how much storage we're allocating for each of those counters. So we're going to have multiple counters, but then, for each of the counter buckets: how much memory are we giving it, and how much buffer history do we have for each of those buckets? I think that's what folks are looking for.
O
Yeah, and so we have other folks looking at what we think the deltas will be, for anything we need to account for that's different in the SONiC implementation versus what's available in the OS today, because some stuff is just there, right?
O
But then some things need to be thought of in the context of the ENI, so they may either be changing slightly, or we may need to add one or two, and that's what the exercise is. And then, from there, we can take the sizing to tell you what the appropriate numbers are.
F
J
A
Okay, so, John, I'll see you in an hour or two; I have a 10 o'clock, if everyone else does. And I want to thank you for your time and all the great questions and the participation, and, James, I appreciate you coming. Thanks, everybody; I'll get notes out in a day or so.