From YouTube: Behavioral model 20220301 (March 1, 2022)
Description
March 1, 2022 Behavioral Model sync call
A
Like — I think — but that means that, let's just say, inbound packets would be, like, double tunneled, right? Or is that not true? They would never...
B
C
B
Yeah, we do have scenarios where sometimes things are double tunneled, mostly if the traffic is coming from SLB load balancers. So if traffic is coming from load balancers, the load balancers add an additional encap on top of what the original packet can be. So in this case — once something is coming from the inbound load balancer, the traffic will be additionally encapped.
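The double-tunnel case B describes can be sketched, very roughly, as nested tunnel headers. This is a toy model for illustration, not the actual DASH data path — the `Packet` type, the helper name, and the VNI values are all invented:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    payload: str
    encaps: tuple = ()  # tunnel headers, innermost first

def vxlan_encap(pkt: Packet, vni: int) -> Packet:
    # Each encap wraps the packet in one more outer VXLAN header.
    return Packet(pkt.payload, pkt.encaps + (("vxlan", vni),))

# Normal VNET path: a single encap added at the host.
p = vxlan_encap(Packet("tcp-segment"), vni=1000)
assert len(p.encaps) == 1

# Inbound via an SLB load balancer: the load balancer adds an
# additional encap on top of the original packet, so the
# destination sees two tunnels.
p2 = vxlan_encap(p, vni=2000)
assert len(p2.encaps) == 2  # double-tunneled
```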
D
B
No — a VNET has a single encap, right. However, there is a scenario where the destination is actually inside the VNET, but the destination is an internal load balancer entity. So in this case we basically encap — this would have been VNET traffic, but because the destination is no longer a plain VNET destination, the traffic just gets routed through the load balancer MUX device, and this load balancer MUX picks the destination, adds the double encap, and then forwards to the specific destination.
D
Yeah, so I think we're missing one drawing in all our pictures: it is possible, even without going to a load balancer, to go from one appliance to another appliance where the VM is connected.
B
But I want to emphasize that this load balancer is more like a software load balancer — a load balancer entity accepting traffic from the internet, right. That would definitely have to be in the P4 model; I'm not quite sure whether we have formally started talking about this or not, but this is how we handle internet traffic. So this is not the VNET-to-VNET traffic that we need it to be.
A
G
Recall — no, I think this one is basically the appliance header, like the current appliance, which will be used for the encapsulation. Yeah.
A
But I — like, it's my understanding, right, that the endpoint — the NIC that's on the server of the VM — will add an encap to get the packet to the appliance.
B
A
Out of the VM there's no encap, there's no tunnel, right — but out of the NIC on the server there would be this VXLAN encap that would get it to the appliance, right. And then the appliance would do, you know, an inbound or outbound route lookup. And I also had a question about that inbound route lookup.
A
I saw — I think in the P4 code — that the inbound route lookup is just based on a lookup of the VNI in that VXLAN header, and I wasn't sure if that was correct or not. Let me put it this way: I see stuff in the P4 model that is not documented in the spec. For example, in the spec I don't think I saw anything that explains how the inbound route lookup is performed.
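A's reading of the P4 code — that the inbound direction is selected by looking up the VNI carried in the VXLAN header — could be sketched as a simple table lookup. The table contents, field names, and VNI values below are invented for illustration; this is not the actual P4 pipeline:

```python
# Hypothetical inbound route lookup keyed on the VXLAN VNI.
vni_table = {
    1000: {"vnet": "vnet-a", "action": "route_vnet"},
    2000: {"vnet": "vnet-b", "action": "route_vnet"},
}

def inbound_lookup(vni: int) -> dict:
    # Unknown VNI: no mapping exists, so drop the packet.
    return vni_table.get(vni, {"action": "drop"})

assert inbound_lookup(1000)["vnet"] == "vnet-a"
assert inbound_lookup(9999)["action"] == "drop"
```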
B
D
B
Because the left-hand side is just the tunnel, right — the left-hand side, from the VM to the appliance, is just a pure tunnel, one single tunnel: every single packet from the VM goes to the appliance and then goes back, right. But the entire processing — the entire thing that the P4 pipeline specifies, or mostly specifies — will be the right-hand side, which is basically: once this raw packet from the VM gets to the rules, what we do with the packet depending on which destination it is going to.
B
Yeah, so I think what we can do is release Christina's document, right, because it will specify the right-hand part — the basic VNET routing and the outbound — and then we can follow up with an update to this document that will also specify the inbound on the right side.
B
A
D
The other way around — it went before the documents were done. Okay, so the one that Christina is working on right now, which is based on the recording that Michael did, is explaining why the lookups are more complicated than just a prefix match. Okay.
E
D
Then we will issue that document, and then the P4 model has to be updated. There's a lot of detail in the document on how to process the forwarding, because it's not — like we talked about, remember last time we said: well, it could be going to another VM, but it also could be going to a load balancer, it could be going through ExpressRoute, or it could be going to the internet.
D
That's all done — but it's never been described clearly how that is done, and how you redirect, you know, to software appliances in between and all that kind of stuff. So we're almost done with that. Okay — and I think you're just saying: okay, but that's the outgoing; now just make sure you document the incoming from the right-hand side as well.
A
I'm also happy to get the outbound documented more clearly too, so I mean we'd be happy to get both directions, I think.
H
E
This is the PR — it's in draft form and I'm almost done. I just need to do the on-prem and the private link part at the bottom, but so far we've done: explicit, peered VNET using mappings, direct communication between subnets, adding a firewall hop, filtering a default route, trusting some internet traffic but not other, and setting an on-prem route to ExpressRoute. I just need to pick it up right here and finish it.
E
A
So one other thing that I saw in the document: I think most of the text describes VXLAN encap, but the diagrams show GRE.
B
Yeah, so to be honest, there are still some scenarios that are using GRE, right — like the load balancer, those kinds of things are still using GRE. VM-to-VM inside the VNET is using both: it used to be using GRE, and now we've switched to VXLAN.
B
So, for example, VM-to-VM communication in the VNET we were able to switch to VXLAN. Some of the other communication — for example, anything involving MUXes — is still using GRE, like the SLB MUX. So in this case the protocol will also need to support encapping with GRE, because we are assuming that we will not be able to transition any earlier than a year or so.
B
I
True. For the long term, from the DASH perspective, do we just suggest VXLAN, or do we want to include NVGRE as well?
D
B
Yeah — and also, if we do only VXLAN, then there are maybe some scenarios which right now we will not be able to enable.
D
E
A
C
So this is a follow-up to earlier in the conversation — I know you want to close this out — we were talking about the FIN followed by ACK, and maybe wanting to match up sequence numbers.
C
D
You'd have to test both cases. That means the testers who are writing test cases would have to write the test case for receiving the ACKs, or closing after a short period of time — and that's the one we talked about earlier. We said: that's extra code that's going to have to exist for something that's almost never going to happen.
F
Generally — maybe I didn't understand that part. So if we have an absolute timer started when we get a FIN or RST, which is five to ten seconds or whatever time is decided: if it was a normally behaving connection, of course you would expect to get that acknowledgement, and all the pending acknowledgements, within that time. Yeah — I didn't understand the latter half.
A
D
F
Just one second — maybe I'll try to answer both of them. One is about the expense of timers versus sequence numbers in a hardware design. We debated a whole lot on that, Gerald, and actually the expense of a timer is a lot less than it is for sequence numbers, and we do have a calculation we can probably look through. The timer state is more of — because, you know, it's like a thread, typically, just like how it would be in software, or a hardware thread — you're really not...
F
You
really
really
are
not
it's
a
sweet,
timer
kind
of
thing
right,
so
it's
not
like
you're
spending
too
many
bits
for
ev
every
time
that
you
sweep
that
you
need
to
manage
a
huge
number
of
state
plus.
Of
course
you
have
to
have
the
complete
state
as
well,
which
you
will
you
know
check
against,
and
there
are
ways
to
solve
that
by
double
caching
and
stuff
about
the
you
know,
the
flow
cache
you
having
a
large
number
of
like
outstanding
flow
cache.
F
As long as your cache is doing the right thing with respect to aging out to backup memory — not removing, but, I'm saying, pushing to your backup in an LRU kind of mechanism.
F
If you're not getting anything, it will get pushed out, right. So we don't really worry that it is going to hog your cache, because it doesn't have any active traffic going on, right. So that's not...
D
...as discussed before. So it's sort of like a timer, only it's really a timestamp that's there such that background tasks can sweep through and clean these things out, right — is that what you're suggesting?
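The timestamp-plus-background-sweep scheme D restates here can be sketched in a few lines — a minimal model, assuming a per-flow last-seen timestamp and a fixed age-out threshold (the names and the threshold are illustrative, not from any implementation):

```python
flow_table = {}  # flow key -> last-seen timestamp (seconds)

def touch(flow, now):
    # Record activity for a flow; called on every matching packet.
    flow_table[flow] = now

def sweep(age_out_s, now):
    # Background task: walk the table and evict entries whose
    # last-seen timestamp is older than the age-out threshold.
    dead = [f for f, ts in flow_table.items() if now - ts > age_out_s]
    for f in dead:
        del flow_table[f]
    return len(dead)

touch("flowA", now=0.0)
touch("flowB", now=9.5)
assert sweep(age_out_s=10, now=10.5) == 1  # flowA aged out
assert "flowB" in flow_table               # flowB still live
```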
A
F
A
No — you're talking about an implementation detail of your hardware, right, that maybe other hardware doesn't have; you're talking about caches and stuff. I think — sorry, I think John is talking.
C
A
I'm just saying that the flow table in general — those entries will remain for that 10 seconds, and at very high connection rates your flow table might just be full of connections that have actually ended but, because of this 10 seconds, are still in the flow table.
F
But we also — there are always two parts, right: the flow table is your working set of flows and your total flow table, which is all your connections. And I'm pretty sure there is a well-defined — I mean, there are three parameters. One is the rate of flow additions that you want to support; B is your working set of flows, which is also decided by the rate of flow additions you want to support; and then the latency to your backup, which holds your complete millions of flows, right.
F
D
...which we haven't even mentioned anywhere in our documentation.
F
D
I think, with what you guys are talking about, there's no wrong or right there. I can tell you that VFP uses a timer, and they don't actually wait for the ACK. That doesn't mean it's wrong. Is it faster to actually close it out with the ACKs? Absolutely. But I would say that, given that we use more of Intel's method today — we just wait a few seconds before we close after we see a FIN — I think there's precedence for it. So you can't say it's wrong.
F
Yeah — the only thing, Gerald: that is something we do support, that you can close it out. You can make the timer something that is immediate — zero — on a FIN-ACK. The question is about knowing that — you know, counting the FINs, right: that you have got one from both directions or whatever, right. So...
D
That means it's not really as efficient as if you added this immediate withdrawal on a FIN-ACK, right, because nothing's more efficient than that: there's nothing more coming, you don't need to wait anymore.
F
Right — and in terms of efficiency in closing the connection down you're right, Gerald, but in terms of implementation, I believe there are trade-offs, and I — you know.
D
I think I'm okay with actually doing the timer — we do it, so I can't tell people that's wrong; I mean, that's how VFP does it. So I think we can go with that, but then optionally include language that if you add the further action of tracking the ACKs, you will close down connections quicker — and if you can do that, and it's no problem in your implementation, guess what: when we go to test it, it'll just be a little bit more efficient.
A
Well, but hold on a second — take the hero test: I don't know, 5 million TCP connections per second, right, the hero test — well...
D
A
I understood, but you'll get the six packets, and then that resource — that memory — will be consumed for 10 seconds, right? So...
D
A
So there's something sort of inconsistent here, okay: the hero test needs to do five million connections per second with a one-second aging timer, okay — but then in deployment you need to do five million connections per second, and it's okay to have a 10-second one? That's...
D
A
D
...that is how we dissect your implementation and find out how good it is. The other tests that we're going to have, that go beyond that, are going to be more realistic to what actually goes out into the field. The hero test is there to break you — it does tell us which part of the design is the weakest. So there are three parts to it: there's the hero test, then there's going to be...
D
What
we
conceive
is
going
to
be
a
a
more
of
a
how
we're
going
to
set
the
the
parameters
in
real
life
kind
of
tests
and
there's
a
conformance
test
and
a
conformance
test
will
test
for
all
kinds
of
things.
So
hero
tests
only
did
one
type
of
testing
so
far,
which
was
trying
to
break
the
unit
and
find
out.
D
If
it's,
you
know,
for
example,
if
they
can't
age
in
a
second,
it's
because
they
didn't,
even
you
know,
they're
doing
it
in
software,
and
we
can
tell
like
literally
right
away
that
they're
doing
that
in
software
or
slow,
we'll
call
it
slow
path,
because
some
people
are
all
software,
but
we'll
call
it
slow
path
and
there's
no
hardware
accelerator
for
it,
and
that's
why
the
hero
test
kind
of
tries
to
break
things
open.
That's
why
we
round
robin
packets
in
the
background
one
packet
at
a
time
at
round
robin.
Why?
D
Because
we
know
there's
caches
in
nick's
and
we
want
to
break
then
we
want
to
see
where
they
break,
and
so
that
test,
which
you
haven't
seen
quite
yet,
is
like.
Where
you
say
it's
going
to
be
a
round
robin
of
one.
Then
it's
going
to
be
a
round
robin
of
eight
packets
in
a
row
round
robin
of
60
and
32.
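The cache-breaking traffic pattern D describes — round robin across flows with burst sizes of 1, then 8, then 16 and 32 packets per flow — can be sketched as a small generator (the flow names are placeholders; one cycle is shown, while a real test would repeat it continuously):

```python
def round_robin(flows, burst):
    # Emit `burst` packets for each flow in turn. A burst of 1 is
    # the worst case for a per-flow cache in the NIC, because no
    # two consecutive packets belong to the same flow.
    for flow in flows:
        for _ in range(burst):
            yield flow

flows = ["f0", "f1", "f2"]
# Burst of 1: f0, f1, f2 — every packet misses a warm flow entry.
assert list(round_robin(flows, 1)) == ["f0", "f1", "f2"]
# Burst of 8: 8 packets in a row per flow before switching.
seq = list(round_robin(flows, 8))
assert seq[:8] == ["f0"] * 8 and len(seq) == 24
```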
D
Why?
Because
we
know
there
will
be
implementations
out
there
that
fail
the
one
packet
round
robin
and
we
want
to
find
them
and
find
out
that
that
we
broke
them
and
where
they
don't
break
and
it
goes
on
and
on
so.
The
hero
test
is
more
less
trying
to
help
the
buyer
figure
out.
Where
are
the
strengths
and
weaknesses
of
a
design,
but
as
part
of
that,
the
hero
test
is
not
done.
D
We
are
talking
about
adding
the
more
of
the
real
timers
into
the
test
so
that
we
follow
it
with
like
a
more
of
a
real
life
scenario
and
also
the
conformance
meaning
that
there's
so
many
ways
of
cheating,
a
hero
test
just
say:
allow
all
you're
done
you're
good
right,
that's
like
total
cheating,
but
so
it
would
be
backed
up
with
it
with
a
compliance
test,
which
means
that
every
rule
will
have
been
inspected.
D
A
D
A
D
A
Okay, so that number's not correct? No? Okay. Because my concern — take the 50 million: if five million connections per second is a real number, and your flow table has, let's just say, a 10-second age time to age out...
A
...connections that have ended, then your flow table will be completely overwhelmed by flows that have ended but not yet aged out, right? That's the point I'm trying to make: I think, if you don't do the sequence-number tracking to end the connection, then your flow table will have to be many times the size in order to be able to handle it.
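A's sizing concern is simple arithmetic: with a purely timer-based close, the steady state holds roughly connection-rate × age-out-time dead entries in the flow table. A rough check using the numbers quoted in the call:

```python
def resident_dead_flows(cps, age_out_s):
    # Connections that have ended but still occupy table entries
    # until the age-out sweep removes them.
    return cps * age_out_s

# 5M connections/sec with a 10-second age-out keeps ~50M dead
# entries resident, versus ~5M with a 1-second timer.
assert resident_dead_flows(5_000_000, 10) == 50_000_000
assert resident_dead_flows(5_000_000, 1) == 5_000_000
```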
D
No,
you
might
be
right
and
here's
why
even
bfp
doesn't
use
do
that
today,
but
bfp
doesn't
do
very
many
connections
per
second.
So
therefore
they
wouldn't
have
had
this
issue
right
and
we're
trying
to
do
something
that
is
a
hundred
times
faster.
So
that's
why
this
issue
is
now
new
coming
to
bear.
So
now
I'm
thinking
you're
bringing
up
a
very,
very
good
point.
A
F
So I mean, the time you keep it active — the timer is, of course, a parameter, as you mentioned, that you have to decide depending on the scale of your connection adds as well as the total connections that you're going to support: what is a good absolute timer, if you're going with the timer approach to keep them alive. But one more question I have: the CPS here is four million plus — so is it five million, is that what you're saying, Gerald?
D
So
so
these
numbers
are
arbitrary,
because
it's
whatever
you
can
do.
The
thing
is
that
when,
depending
on
when
you
started
testing,
so
let's
say
you
test
the
v-net
and
you
say:
I've
got
a
rough
v-net.
I
kind
of
did
stuff.
There's
no
counters.
D
I
didn't
handle
all
the
different
forwarding
techniques
because
I
didn't
know
them,
and
you
know,
if
you
do
that
in
the
beginning,
you
might
reach
a
higher
number
and
then
what
happens
is,
as
you
actually
implement
everything
that
you
need
to
do
it
will.
You
know
we
see
it
go
like
3
million
plus
it
won't
be
the
full
5
million.
So
5
million
is
not
a
real
number.
It's
whatever
you
can
do.
D
That's
why
the
hero
test
has
to
be
crafted,
because
you
have
to
figure
out
what
your
highest
connection
rate
is
and
then
your
background
traffic
has
to
be
against
that,
and
so
those
are
arbitrary
numbers
right
now.
What
we
see
is
we
can.
We
can
easily
achieve
three
million
plus,
even
with
the
full
aha,
with
everything
turned
on,
and
so
when
you
hear
numbers
being
thrown
around
is
because
okay
did
you
have
aha
at
that
time?
Did
you
have
other
counters
at
that
time?
Did
you
have
the
right?
D
You
know
all
the
forwarding
at
that
time,
and
and
if
you
didn't,
of
course,
your
number
is
going
to
be
higher,
or
did
you
really
do
rules?
Were
you
just
doing
tuples,
which
is
a
totally
different
thing?
That's
like
a
firewall
filter.
It's
not
really
a
set
of
rules
so
ignore
those
are
guidance
numbers
just
the
way,
but
we
do
have
some
experience
so,
for
example,
16
million
connections
don't
trivialize
that
that's
going
to
take
up
almost
all
your
32
gig
of
memories.
So
it's
not
like.
D
We
have
experience
enough
with
that.
Now
that
that
is
the
biggest
memory
consumer
is
the
actual
flow
table
right,
not
not
the
policies,
not
the
policies
that
you
plumb,
not
even
the
forwarding
tables,
but
the
actual
flow
tables
dominate
the
memory.
So
that's
why
you
know
if
we
said,
for
example,
50
million
first
of
all,
wouldn't
fit
50
million,
we
know
from
practice
would
not
fit
into
32
gig
right
and
and
so
what
ended
up
happening.
Is
we
started
at
16
just
to
give
you
some
background.
D
We
got
16,
but
then
we
said
well.
What
about
h?
A
well
aha
is
going
to
need.
You
know
twice
as
much
memory,
because
we've
got
to
plumb
like
all
the
rules
from
both
sides
of
the
aj
platform,
blah
de
blah,
and
we
ended
up
saying
we
went
through
the
memory
calculator
and
actually
implemented
it.
We
only
ended
up
with
about
eight
eight
million
flows,
there's
actually
sixteen,
but
only
eight
million
are
being
used
because
of
the
aha
function
and
how
it
was
implemented.
D
So
the
numbers
will
come
down
as
you
start
to
use
the
features
that,
as
a
feature
said
white,
you
know
everything
kind
of
comes
down,
but
where
we
settle
even
with
aha,
we
ended
up
with
on
a
200
gig,
with
a
32
gig
memory,
we
were
able
to
support
the
eight
gig
flows.
The
16
is
a
requirement
in
the
v-net
test,
because
v-net
doesn't
include
aj
and
if
you
couldn't
do
16
million
without
you
can't
do
8
million
with
aj.
So
that's
that's
kind
of
some
just
background
of
all
the
stuff.
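The memory numbers D quotes can be sanity-checked: 16 million flows nearly filling 32 GiB implies on the order of 2 KiB per flow entry, and doubling state for HA halves the usable flow count. The 2 KiB per-flow figure is inferred from the quoted totals, not stated in the call:

```python
GIB = 1024 ** 3

def flows_that_fit(mem_bytes, bytes_per_flow, ha=False):
    n = mem_bytes // bytes_per_flow
    # HA plumbs state from both sides of the platform: ~2x memory
    # per flow, so half as many usable flows.
    return n // 2 if ha else n

# 32 GiB at an assumed ~2 KiB/flow is ~16M flows without HA...
assert flows_that_fit(32 * GIB, 2048) == 16_777_216
# ...and ~8M with HA, matching the 16M -> 8M drop described.
assert flows_that_fit(32 * GIB, 2048, ha=True) == 8_388_608
```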
E
D
That gives you, you know — allows you to have larger than 32 gig and stuff like that, so we can have bigger tables, but we're kind of being realistic today with that. And we'll publish more numbers as the hero test is published; we can give a little bit more guidance. But the hero test is still being written and analyzed today — by Keysight and others, including ourselves — so we're not ready to give the final numbers yet.
F
D
F
D
F
Yeah, so the timer way — which I think is the right model, at least for what we're doing — is: you start the last, getting-done kind of timer at the FIN, but then you make the timer zero, for immediate termination, as soon as you see the FIN-ACK, right. So I think that's the model that seems to...
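F's proposed model — start the close timer on the FIN, then drop to an immediate (zero) timer on the FIN-ACK — can be sketched as a tiny state machine. The state names and timer value are illustrative; the actual flow F mentions is in their written-down diagram:

```python
CLOSE_TIMER_S = 10  # absolute timer started at FIN (5-10s in the call)

def on_segment(state, seg):
    # state: (phase, timer_s); seg: "FIN" or "FIN-ACK".
    phase, timer = state
    if seg == "FIN" and phase == "ESTABLISHED":
        return ("FIN_SEEN", CLOSE_TIMER_S)  # third timer state
    if seg == "FIN-ACK" and phase == "FIN_SEEN":
        return ("CLOSED", 0)                # immediate termination
    return state

s = ("ESTABLISHED", None)
s = on_segment(s, "FIN")
assert s == ("FIN_SEEN", 10)
s = on_segment(s, "FIN-ACK")
assert s == ("CLOSED", 0)  # timer zeroed on FIN-ACK
```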
I
A
I don't want to write that down, because I don't necessarily agree. I think that tracking the sequence number is a way of removing the flow more quickly. I thought — I thought that's what she just said.
F
As
you
know,
the
last
time
or
state
for
a
flow
finn
being
the
third
state
of
the
timer
and
fin
act
being
the
final,
I
can
write
this
down
journal
because
you
know
we.
This
is
the
flow
that
we're
using,
but.
D
E
D
C
F
D
Yeah — I thought I heard somebody saying a FIN and a FIN-ACK in the same packet, which is not possible.
F
No,
no,
no
finn
and
then
a
finn,
ack,
okay,
okay,
yeah
yeah
and
that's
the
flow,
and
I
I
could
probably
write
it
down
and
we
actually
have
a
floor.
Written
down.
I'll,
probably
just
share
that.
Okay.
E
D
...a PR, because I think John can also comment on it. So let's have the debate like we normally would in a thing like this: you do the pull request, then people can chime in and put their comments, and then we can — I think this one's worth finally closing out. Close, close, right — we didn't close it out today, so we'll have another chat after doing a pull request and getting more comments, and then we'll try to close it out.
D
E
If you need help, just call me — if you need help doing the PR, or adding John as a reviewer.
I
Oh, okay, sure — yeah, yeah. I will just mention the absolute timer and the usage of the FIN-ACK to close out without using the sequence number, and I will need your help to add John as a reviewer.
I
Okay, we'll try — previously it sometimes didn't work, so we'll try, yeah.
D
I really want to close this, because in the end the behavioral model doesn't have to be efficient in how it's written — but I do want it to be up to date, and we've got all kinds of stuff that we need to do to the behavioral model, from the forwarding work that came down from onboarding, plus finishing out this stuff. So, Christina — it's only Tuesday; why don't we meet again Friday and try to close this?
D
We
do
the
pull
request
today
from
intel
side
comments
by
friday
and
then
have
another
short
meeting,
maybe
a
half
hour
meeting
to
see
if
we
can
all
agree
to
how
to
close-
and
that
gives
john
some
time
to
make
his
comments
again
and
arguments
on
this
I'd
like
to
I
if
we
could
close
the
bike
friday,
otherwise
we're
delaying
you
know
anybody
going
up
and
doing
more
work
on
the
p4
behavioral
model,
because
they're
afraid
that
we're
not
settled.