From YouTube: Behavioral model 20220210 (Feb 10, 2022)
Description
Initial behavioral model sync call
A: And now the ack from the first ack actually is good, because now we don't care about the other fragment, because it wouldn't have sent an ack for it if it had an out-of-order fragment that hadn't been received. But also, to close down the connection completely, we need the ack coming back from the far side.
A: So I guess that's a question that we wanted to put out there, because VFP does it quite differently: they just see the FIN, and then they set a timer, and then they wait that specific period of time, which is quite long by the way, and then they take down the connection. I don't want to do that. It's very wasteful.
B: This is John. I think if you really want to remove the connection as aggressively as possible, but allow those acks to go through, then the proper way to do that would be that the FIN itself has a sequence number, and then the subsequent ack would have to be the ack for the sequence number that covers the FIN.
B: You know, associated with that FIN packet. But if you do that, then when you get that final ack, you know that it's the final ack, and you could immediately remove the flow, right, rather than waiting five seconds or longer. So that's what we want.
A: So I think what John was saying is you could just actually look for the sequence number of the ack to make sure it matches, and as long as it does match, then we're clear to shut that down. I think that's the fastest way to get it shut down, and that's what we want. We don't want to be setting a timer.
C: Right, yes, so we weren't really looking at the sequence number. We were just looking at the flags: first you are in the state of waiting for the FIN, then you move to the state of waiting for the FIN-ACK, then you wait for the ACK after.
C: Right, so what I'm saying is that probably I will need to improve it, not only to look at the sequence of the flags, but also at the sequence number of those packets that I expect to see.
E: He can ask the question himself, I don't know. Oh: non-first IP fragments have no L4 ports in them. How do you want to classify them? Do you want to classify them in the context of the first fragment, combined with it somehow, and then make the classification? Or do you want to write your policy rules so they explicitly say what should happen on non-first fragments regardless, without having to know what their L4 ports are?
F: Yeah, and there's a flow cache issue because of that reason as well, if you, you know, try to do connection tracking for fragmented and for non-fragmented packets. So maybe, you know.
F: Yeah, I mean, so, Gerald, maybe, if you remember.
A: Right, we're just a switch, essentially, to everything. So we're not really going to reassemble anything.
H: Actually, please go ahead. Yeah.
F: So the question I think Andy is asking, and I have the same one, is: the first packet, because it does not have the L4 header, what tuple do you use for classification? Because that would mean only the L3 parts, and that would classify differently than the rest of your packets. So.
F: Sorry, okay, yeah. The first fragment has the L4; the others don't. Sorry, I mixed up my words, yeah.
F: Yeah, so what has happened, Gerald, in the past is, if you look at RSS and stuff, the input set that is used for fragmented stuff is just based on L3. It's never L3 plus L4, so that, you know, you're classifying to a single queue, right, or whatever. Otherwise you end up with the first packet going in one queue and the rest of the packets going to other queues. I'm just giving RSS as an example.
J: Yeah, so we have the VFP team, right, and I will just read exactly what they mentioned, but I'm not an expert on how exactly they are handling this. They mentioned that, and I cite: we use both IPs and IP IDs to group all the fragments of a single packet together. They also use the number of encaps of the fragment, and they are saying that on inbound they are expecting fragments to have the necessary encaps.
J: But I understand that this doesn't fully answer the question with regard to lack of enforcement. The VFP team didn't answer this fully in the email where we asked them to do this. I think they pointed only to some external documentation saying that the customer can send jumbo packets if they want, and they will be fragmented in the VNET, and there is some link. Let me just paste this link, but I don't believe this link covers this.
A: You will finish the connection by looking at the FIN, looking at the sequence number of the FIN, and making sure that the ack matches, that it's the last one. Yeah, it's the last one, so the sequence number has to be the same, and that way, when we get the second ack, we can close down the connection safely, and all packets have already been sent by that time, because we would get the ack.
A: So we've kind of solved that problem, and then I think Marian has said that he will update the behavioral model to do that. And now we've kind of moved on to how we ever got to that conversation, which is this fragmentation.
B: We're trying to recall here something that we saw that talked about virtual reassembly and a fragment cache. It seems like maybe dealing with fragments is a stateful thing, but maybe you don't have to buffer all the packets. But I'm just talking off the top of my head; I don't have a full recollection of this.
J: Yeah, Christina, I think for the fragment stuff and our actual packet handling, we should involve someone from the VFP team, because neither of us right now is on that stuff. Maybe Gohan, because they have some experience with the switches. Maybe you have some details or pointers, but I know that I don't have exact information on how VFP handles the fragments and how they are matching this, and they will know. So we need this team to also join us.
D: Maybe I can just make one small comment. When I talked to the VFP team, Michael, this is super generic, but they did say: the first fragment should contain all the headers needed, which we can send through to determine the action needed and apply it to the rest of the traffic. Okay, how they do that, I don't know, but that's what they said. Yeah.
J: Right, I know that they are definitely waiting for the first fragment, because otherwise they have no way to match the policy. Yeah.
J: Yeah, our team kind of started following up in the email, but the answer they gave doesn't yet contain the answer for exactly how the implementation works, how they are doing this. It was too generic so far.
B: Right, it's not reassembly! I think, basically, the fragments have temporal locality: all the fragments of the full packet are going to be pretty close together in time. So there's temporal locality, so some kind of caching.
B: A cache maybe works, because you're just trying to cover the time over which those fragments from the given parent packet exist. So I think the solution involved a fragment cache, but not buffering the actual packets, just maintaining the flow information associated with that packet, like the five-tuple.
J: And I wonder, and I'm not quite sure if this is done right now in hardware, but what about using the sequence number IDs to kind of tie this together, like a range of the IDs? So, whenever a packet arrives, we can always, for example, store what was the sequence number ID of the packet.
B: I think that's the solution. I don't understand all the logic of implementing that, but I think it's a stateful thing, in order to deal with the non-first fragments.
I: Yeah, and I think that's true, John: you don't need to buffer packets as long as they come in in order. If they come in out of order, then you would need to buffer.
K: But are we saying, like, are we controlling both the sender side and the receiver side? Do we have that assumption?
K: We never do this for the TCP traffic, right? We only do this for the UDP traffic, because for the TCP (I think we should have it in the spec, but I don't know if we have that) there's an MSS clamping technique that we use, so that whatever path MTU the customer negotiated will be clamped by this layer, right. So.
K: Oh, you know, it's just that we would need, you know, a flow ID or something of the kind to do the classification to a particular flow, right. So I think at the sender side you can inject some information in the encapsulation layer, and then at the receiver side you can extract that flow ID and map it to the flow. Because when you do the fragmentation, it's you who adds it to the fragments, not the customer.
A: These things are landing on existing VMs, so one VM, which is attached to an appliance, has nothing to do with the far-side VM, which is not attached to the appliance. Really, it's just, so, yeah. That's that.
J: I would say we should assume we don't, because we are also, for example, sending some packets to, let's say, on-prem, right, with different devices and this kind of stuff. So I think it would be nice, but I think it's a very strong assumption that we control both sides.
K: If we don't have control, right, so if we don't have control, I think one side must be the VFP, right, because of the existing deployment today. I think we just need to ask the VFP team to give us the exact algorithm, the secret sauce they have on the other side, and then we do it on our side, right. So, you know, I think they're talking about the encapsulation layer; I think they may have something there.
J: I know that VFP, because there was some company that was asking us to correctly handle fragments and this kind of stuff for UDP, and I know that VFP introduced some caching for this kind of stuff. So they definitely have some way of handling this, right. It may be that they require caching, so some buffer and that kind of stuff, right. And the question is, like.
J: If not, then it's awesome, so you have a different upgrade path, right. If they require caching, then the question is how big a cache can you do on the hardware, and can we avoid it, right. And the interesting observation that I want to mention here is that the connection establishment is not really a problem, because the first fragment will have the five-tuple. So the main thing is really the lookup to match.
J: The existing flows, right. And I'm just wondering, and right now this is just brainstorming, because all the fragments have a sequence number, and, John, you mentioned that there is this time locality, right, that, for example, the fragments will not arrive too late, not arrive too early, this kind of stuff. So I was wondering what we could do; here is an example of a brainstormed algorithm.
J: There can also be different approaches, right. If we do a three-tuple match, then after this three-tuple match we need to match the sequence number to be within the last sequence number that the packet saw, plus or minus x, right. And if it returns one single match in the flow table, then we know it's for sure for this flow. If it returns more than one match, then we can always discard the packet and allow them to basically retry till we get this, so this will be slightly slower.
B: Yeah, I think that makes sense. I think that's how it would work, but I think the three-tuple match would include the IP ID: all fragments of the same packet have the same IP ID, and then the sender that's doing the fragmentation increments that IP ID for the next packet that's being fragmented. So the three-tuple lookup makes sense, and then, of course, you can check sequence numbers and make sure that the fragments are in sequence.
B: Ultimately, you need a cache of those. You don't need to store the packets; you're not doing actual reassembly. You're just trying to store the five-tuple that's associated with that three-tuple.
B: You do, but you need to associate now a three-tuple with a five-tuple. In the case when you receive a first fragment, the first fragment has the three-tuple and the five-tuple, and then the following fragments just have the three-tuple, right. So you need to have some kind of a cache, just like you said. And how big is it? What's the logic for, if you get fragments out of order, do you drop them? There are a lot of details around that.
J: So, from our point of view, the CPS. So let's clarify the idea, right. The idea is that the CPS is slow because, basically, the rule processing is slow, right, because we don't have a fast engine to process the rules. But the rules are being hit by both TCP and UDP packets, right. So we want to basically optimize the rule-processing engine, so we can handle this kind of new packets that we haven't seen, and the existing-flow evaluation, fast, right. And then there is this stuff.
J: This is maybe also a question for the FPGA team, because they offload: all the flows that get created by the VFP always get offloaded right now. So we have it right now: every single flow that is created, we already offload to hardware. So the solution of matching the packet to an existing flow at the hardware level is already solved on the FPGA, right, so we can.
J: The lookup of the existing flows is the stuff that we need to rediscover. I think this is already there in hardware, right, and we just need to potentially use it. At the same time, I'm not sure if the FPGA handles fragmentation; that's an open question. Maybe they don't, and maybe they just basically ask the customer to retransmit to get this in order. That's an open question.
J: We're okay, yeah, we are okay with it. So basically, if this doesn't impact the PPS for the already established connections, right, we can do the slow path for the extreme ones. Because the fast path, hitting first the already established transformation, the flow, right, is mostly important from two points of view. One point of view is that, basically, we don't need to do the entire rule processing, which may have lots of prefixes.
J: They need to exchange the state, right, and the idea is that the goal state, so the rules that we are plumbing, is eventually consistent. Which means that, because those are in two physically different locations, there will be no way to guarantee that they will get applied at exactly the same millisecond, this kind of stuff, right. But once the customer establishes the flow, the flow needs to work in the specific way that was established; otherwise the customer connection will break.
J: So that's why part of the HA is that once one device creates the flow and sets up the specific transformation based on the rule current at that time, this flow basically needs to be transferred to the backup appliance. Because the main thing about HA, right, is that when the device dies, we can lose packets, but we cannot drop established connections. That's the main goal.
B: So, HA is a whole big topic, right, yeah. This was a much smaller point: just that the specification describes slow path and fast path. In one case, the slow path does the full ACL, the route lookup, all that, and inserts the flow; the fast path does just the transformations. And I wasn't sure whether the behavioral model should represent fast path and slow path, or whether, from a behavioral perspective, there's no difference from saying every packet just goes through the slow path.
A: I don't think anybody will implement it that way. I've heard that, but you'll never get the CPS out of it. But it's still one piece of logic: you always do the flow lookup, and if you don't find the flow.
A
You
do
the
rest
of
it,
so
it's
only
one
one
behavior,
really
it's
just.
How
far
do
you
go?
Do
you
end
at
the
flow
lookup?
Or
do
you
end
you
do
the
flow
look
and
then
you
have
to
follow
it
up
with
the
full
slope.
B: Right, I get it. I mean, the fast path is like a shortcut, you know, that you can take after the flow is established. I agree with you; probably everybody will implement some kind of optimization to do the fast path. The specification kind of specifies it that way, but the behavioral model doesn't implement it.
B: That way, and are we okay with that? I was trying to get at this maybe a week ago, when I was asking Michael about updates to the configuration. Because, in my mind, if the behavior you want is that you're going through the slow path on every single packet, then an update to the configuration would be immediate. But I think the answer we got a week or two ago was that, no, updates to the configuration don't have to be immediate, and even for existing flows that are already established, updates to the configuration would have to be applied to those existing flows, but not immediately. Exactly, yeah: they could be applied a little bit more lazily.
B: They don't have to be applied immediately. So I understand all of that, and I think I understand what we need to do to implement a data plane. But when I look at the behavioral model, it's just implementing the slow path, and, okay.
J: Yeah, looking up the table, yeah. I think it's potentially beneficial to also document exactly how to do the lookup there, especially from the fragmentation perspective as well. So we can also potentially spend some time documenting the fast path as well, and then do a behavioral model for the fast path.
A: I think so, because we're going to find all kinds of exceptions when we do HA and other things. If we don't try to keep this model close enough to how people are going to implement it, it's really not going to cover things down the road, as things get more complex, all the cases. And I don't know why it wouldn't have a fast path.
A
In
the
behavioral
model,
it's
going
to
come
up
later,
somehow,
even
in
when
you
do
memory
size,
you
always
include
the
the
flow
flow
table.
Why?
Because
it's
the
biggest
table
of
all
right
and
that
doesn't
even
even
consider
that
there's
even
such
a
thing,
so
it
wouldn't
take
anything
for
them
to
implement
the
fast
path
as
part
of
the
behavioral
model.
Why?
Wouldn't
the
question
I
would
say
is
why
not?
Why
not
implement
it?.
C: I'm trying to understand why the difference between slow path and fast path should be reflected in the behavioral model, since, even within our company, different generations of the hardware will have differences between what goes into the slow path and what goes into the fast path. I don't know how we can generalize it with the behavioral model.
C: Okay, okay, so sorry, my definition is different. My understanding of fast path is different. Okay, this.
C: This is reflected in the behavioral model: if it's not the first packet of the connection, we already know about this flow. We are not doing the processing of the ACLs and other stages for every consecutive packet of the flow. It is already there.
A: Okay, so that matches the documentation. So why, John, do we.
B: Think... okay, I mean, maybe I'm not understanding the P4; I'm not a P4 expert, maybe I'm not understanding the P4 code. But I didn't see the flow table: the only flow table I saw in there was the connection table, which basically only has the state of whether it was established or not. I didn't see transforms or anything like that in a flow table.
C: So there is. I can actually send you a link to the exact place in the code, but there is an explicit verification: if the connection is established, then we do not do ACL lookups.
B: That may be fine, but it doesn't match: the spec is trying to provide guidance on what you should do in the slow path and what you should do in the fast path. And, right, you're saying, okay, we'll skip the ACL, but we'll still do the route lookup. Now, maybe that's how your implementation will work; maybe a different implementation might store the route entry itself in the connection table.
B: As long as what Michael said before holds: that updates to the route table don't have to be reflected immediately in the data plane. Because, for example, let's say you stored the transform in the connection table: it may not be that easy to take a route update and now propagate it to every flow in the flow table. So that's going to take time, and that can be done lazily, we understand. But the behavioral model doesn't... to me, the behavioral model sort of says.
J: And also, FYI, if you guys need some information on how we are doing this in VFP: in case there is a config change, they usually increase something which they call the epoch number, not for the card, but for the ENI, right, and each flow that was offloaded has this number. So whenever a packet comes, they basically notice that it doesn't match, and then they invalidate it and redo the entire context, the slow path, again.
J: Right, so yeah, and then, with this epoch update, some interesting stuff also came up as we were doing work on some different things on our side. We noticed some differences between hardware versus software, right. The idea is, for example, that in the software model things are per ENI, which means that two ENIs may belong to the same customer VNET, let's say, the same customer virtual network, the same VPC, right; and the VPC shares mappings.
J
So
all
the
mappings
between
between
kind
of
addresses,
that
they're
used
for
transposition
are
common
for
the
vpc
right
and
two
enis
may
be
belonging
to
the
same
vpc
and
may
accidentally
land
on
the
same
card
right.
So
in
vfp,
all
the
state
is
actually
copied
per
eni.
So
in
this
case
the
the
vpc,
even
though,
is
potentially
global
object
right
it.
It
is
not
handled
as
a
global
object.
It
is
basically
copied
per
eni,
which
is
not
very
efficient.
This
kind
of
stuff
right,
but
at
the
same
point,
all
the
all.
J
The
updates
are
kind
of
like
updates
the
epoch
always
separate.
You
on
both
times,
right
and
and
here
one
thing
that
I
want
to
make
sure
that
in
this
case,
for
example,
let's
say
the
mappings
change
on
the
vpc
right,
then
there
is
also
a
need
to
know
which
enis
potentially
or
which
flows
belonging
to
which
enis
needs
to
be
handled,
and
I
know
that
there
was
some
approach
that
a
team
tried
to
in
this
case.
J
Okay,
if
potentially
multiple
enis
could
be
impacted,
then
let's
not
use
eni
based
epoch,
but
let's
use
card
base
epoch
right
so
every
time
once
anything
changes
in
the
configuration
of
the
card
in
the
respective
involved
dni
they
they
change
the
epoch
on
the
card.
But
but
this
goes
different
different
behavior
that
like,
if
one
customer
is
churning
the
the
configuration,
is
impacting
the
performance
of
the
other
customers
on
the
ca
on
the
card.
If
the
e-book
is
global.
So
that's
why
ebook
cannot
be
global
right
at
the
same
point.
J
We
would
prefer
the
epoch
to
be,
for
example,
per
eni.
At
the
same
point,
I
understand
that
there
is
a
complexity,
because
the
best
efficient
way
to
handle
the
object
like
vpc
is
to
have
them
kind
of
global
and
potentially
reference,
because
this
is
like
one
set
of
mappings
it'll
be
more
efficient
to
store
them,
especially
they
want
to
keep
basically
one
million
mappings
for
pc
right,
but
this
requires
some
ability
to
track
that
if
there
is
an
update
to
the
pc,
potentially
those
other
enis
epochs
needs
to
be
incremented.
J
So
just
fyi
that
that
we
run
into
running
to
this
before
that,
because
some
objects
potentially
can
be
shared
by
different
dnas.
We
can
definitely
implement
them
as
a
copy
right,
but
at
the
same
point
this
is
lots
of
memory,
so
those
object
having
pc
should
be
most
likely
global.
At
the
same
point,.
J
Incrementing
epoch
on
the
global
card
level
causes
this
problem
that
one
customer
can
affect
the
other
customers
if
they
were
constantly
reprogramming
this,
and
we
also
don't
want
this
to
happen.
B: Right, I thought the mappings were scoped to the ENI. You're saying that's too local a scope, that multiple ENIs might want to share mappings. But that wasn't clear to me; I mean, I think we were assuming that mappings were scoped to the ENI.
J: So, I can clarify that. The mappings are per VPC, right, so mappings are inside the VPC, and the ENI will be kind of peering with one VPC, which is its own VPC, or potentially some other VPCs, right. And it really depends on how many VMs, for example, the customer will deploy, this kind of stuff, because they may deploy.
J
To
have
this
vpc
not
be
copied
because
because
because
vpc
can
have
up
to
one
million
mappings
right
so
in
this
case,
but
this
would
be
the
same
right
from
our
control
plane.
We
are
okay
to,
for
example,
if
the,
if
the
mapping
for
one
vpc
changes
to
update
always
vpc1
and
vpc2,
because
the
eni
will
refer
to
vc1
the
inivpc2,
even
though
underneath
is
the
same
ppc.
J
So
it
makes
sense
to
to
consider
this
hardware
and
most
likely
the
mappings
will
be
in
the
hardware
in
the
memory
to
kind
of
and-
and
there
are
significant
number
of
them
right-
one
million
right,
so
it
makes
sense
to
to
make
them
potentially
as
a
top
level
object,
not
very
ni
object
and
having
enough
be
able
to
refer
to
the
stop
level
object.
Yeah.
B: And then I had one question that I posted today. It has to do with enforcement of connection-per-second limits and flow table limits. Where are those limits enforced? Would they be enforced from the DPU? I know the throughput is not enforced in the DPU.
A: That's right, that's right! They would be enforcing this in the DPU, so we can give you some examples there. Let's say I have multiple VMs going to the same DPU: we may oversubscribe, but we can't make it unlimited. So let's say a two-core VM goes through this appliance.
A: We actually need to limit that to some much lower number. So it's not going to be millions of connections per second; it might be like 50k connections per second for a two-core VM. If we didn't put limits there, the two-core VM couldn't reach millions, because it wouldn't have the bandwidth to do it, but it could certainly reach much higher, and so, if we want to oversubscribe, we have to at least have some limit. We need to know by what, right.
J: Because we will divide this limit, like into two parts, kind of, because the flow splitting will also be roughly equal: from that point of view, the standard distribution should basically be redirecting them equally.
B: Right, the same with the flow table size limits per ENI, right, yeah. Like, how accurate does that have to be? I mean, can there be a little bit of fuzz on that?
A: It would be fuzzy in that it would be a bit bigger. When you split it, you pretty much have to go a bit bigger, because, you know, it's not perfect, right. But customers don't care if they get a little bit more CPS, or a little.
J: Potentially, once we get those added, let's say UDP, this kind of stuff, right, and potentially the fast path (or maybe the fast path is already there in the behavioral model), then maybe over email we can decide if we want to meet to look into this additionally, or something like this, and this would be, like, two weeks from now, for example.
A: I think we need some kind of subgroup, to be honest, just like SONiC has subgroups, because you can't have everybody in an audience who doesn't write data path, who really doesn't know the details of fragmentation or TCP or whatever; it's impossible to get the designs done with that many people. The subgroups are to bring together more like-minded people: so if you're a data-path type of guy, that's really into the details of how to do that.
D: Let's write a couple of to-dos here, then, to help out with UDP. Can you guys chime in, and I can write a few things here? I think Marian said fast path was already in there.
F: Gerald, does the DASH document talk about the connection rate limits and stuff, or enforcing?
G: You know, between enforcing... yeah, enforcing limits, definitely.
A: Future, yeah. We just want to make sure we share with the community what we're doing, so everybody has a common understanding and learns together. But yeah, agreed, you're going to have that soon, so that'll be a crosstalk thing. Great.
H: So yeah, Christina, sorry, my internet connection was very, very bad today, you know. But definitely we can meet bi-weekly or tri-weekly, as you would prefer.
A: I think bi-weekly at first, and then, if there's less to do later... See, my concern also is that, okay, we're all on VNET, okay, great, but we have seven service models, and at some point, if we can get the VNET done, then maybe we can start splitting up this work. I don't think it has to be done serially like it's being done today, but let's get through VNET first, and we can see how we could split up some of this work.
D: Okay, thanks, everybody. And I think I mentioned to the larger group that our org at Microsoft (not all of Microsoft) has a quiet week next week, so I sent the cancellation for the community meeting; we have to focus on planning, and, you know, it gives us time to think about different things. And I know, John, you're out the 23rd, right. Okay, all right, thanks! Everybody, appreciate your time. Thank you.