From YouTube: Antrea Community Meeting 03/15/2022
Description
Antrea Community Meeting, March 15th 2022
A
Today's meeting is Wednesday, March the 16th, and in particular I would like to thank the team members who are joining us from the United States, as today they are joining us at 10 p.m., which is quite late for them, since they have already switched to daylight saving time. As a reminder, by the next Antrea community meeting Europe will also have switched to daylight saving time, which will shift the meeting by an hour. This means that the meeting will no longer be at 1 p.m. Beijing time, but will move to 12 p.m. Beijing time, considering that China doesn't have daylight saving. So just keep in mind that, for the next six or seven months, the meeting will move to 12 p.m. China time. And so, let's go directly to the agenda of the meeting.
A
Due to my microphone malfunction we already lost a couple of minutes, so without wasting more time I would like to introduce the agenda, which is Hang providing us a presentation on live traffic tracing. So, Hang, please go ahead with your presentation.
B
Can you see my screen and the design doc? Yes, we can. Okay, thanks! I'm just going to start. Today I'm going to introduce the design for live traffic tracing. This design wants to support the following use cases: one, we want to trace real traffic, not only a crafted packet or just the first packet; and we want to give users the ability to configure their intentions via different sampling methods.
B
For example, we may want to sample the first N packets, say the first 15, or sample the traffic at a given interval. And lastly, the user may want to trace the traffic in both directions. So these are the features we want to support in this design. First, I want to take a few minutes to introduce what we have done in the current version of Traceflow.
B
The current Traceflow supports injecting a packet into OVS, and tracing the real traffic for the first packet. A user can create a Traceflow CRD and specify the type of the Traceflow; he or she will also configure the source and the destination of the packet, and then wait until the Traceflow session ends.
B
We report the observations to the status field of the Traceflow CRD, so the user can use kubectl or the UI to start a Traceflow session, and when the session ends the results will be presented in the UI as a graph. So these are the features our current Traceflow supports. This design proposes the following:
B
We add a packet sampling step into the normal Traceflow session, and meanwhile we can reuse all of the existing OVS pipeline. This is the basic design of our Traceflow for real traffic. After the Traceflow session ends, we can use an aggregated API to aggregate the results, because now we have results for multiple packets, so it is not enough to just use the Traceflow CRD; we can use the Kubernetes aggregated API to merge all the results behind a single API.
B
After these changes, our Traceflow session has a different configuration. I have highlighted the differences between the new version of the Traceflow session and the older one. We still use the source and destination endpoints and the filter, and we still have a timeout field, but now we have a new sampling config field. Here I have a simple example: we have our familiar fields, but we have a new sampling field, which has a type and a number.
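To make the shape of this concrete, here is a rough sketch of what such a spec could look like. The `sampling` field names below are illustrative assumptions based on this talk, not a final or existing API; the other fields follow the general shape of the Traceflow CRD.

```yaml
apiVersion: crd.antrea.io/v1alpha1
kind: Traceflow
metadata:
  name: tf-live-sample
spec:
  source:
    namespace: default
    pod: client
  destination:
    namespace: default
    pod: server
  liveTraffic: true
  timeout: 60
  # Hypothetical sampling config discussed in the talk:
  sampling:
    type: FirstN      # or OneOutOfN / Interval
    number: 15
```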
B
FirstN means we want to capture the first N packets: a number of 15 means we only capture the first 15 packets in this session. This is the only visible change to the user: he only needs to add this new config and the Traceflow session will start. After this change, we have a new step in our Traceflow session. It happens very early, before the OVS pipeline, where we mark some packets based on the sampling config input by the user.
B
After this step, all the marked packets will be processed in the OVS pipeline, pretty much the same as before, with some small differences. Next I want to introduce how we can actually do the sampling step. We have a few choices. The first one is the tc sample action; tc sample actions use the psample kernel module to sample packets.
B
You can add a filter to the tc ingress or egress qdisc, and after that you can add a list of actions, one of which is the sample action. After you add the sample action, the psample kernel module will actually do the sampling for you, and it will send a copy of the sampled packet to a userland program, so you can get a copy of the packet.
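As a sketch of the mechanism just described, using the iproute2 `tc` tool; the device name, sampling rate, and psample group number here are placeholders:

```shell
# Attach an ingress qdisc, then a matchall filter with a sample action:
# one out of every 100 packets is copied to psample group 1, where a
# userland program (e.g. one using libpsample) can receive the copies.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: matchall action sample rate 100 group 1
```

Note that, as the talk points out, this only delivers a copy to userspace; it does not let us modify the sampled packet in place.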
B
The next two choices are sFlow and IPFIX. These are the sampling actions OVS supports. We can configure these sampling methods in OVS, but they only support flow- or bridge-level metrics. sFlow can only gather bridge-level metrics and send them to an external collector; IPFIX can be configured at both the flow and the bridge level, but after you gather the metrics you still need to send them to an external collector.
B
These three choices are not what we need, because we want to modify the sampled packet in place. We need to modify the original traffic, and these solutions can only gather some metrics for a flow, or send you a copy. So our last choice is using eBPF, because we can attach eBPF programs to different hook points on the host to filter and modify the packets.
B
So we will dive into the eBPF solution and see what we can do. Here is a picture which shows some of the most important eBPF hook points. The one we want to use is eBPF for TC, because this position sits just after the iptables processing and before we actually send the packet, before the net device processing.
B
This is important because we need to make sure this program runs before the GSO process happens, which will affect our choice for the packet mark; we will cover that later. And over the years, the Linux kernel has provided multiple improvements for TC eBPF.
B
For example, it has a specific qdisc called clsact, a classifier called cls_bpf, and a new mode called direct-action. It means you just need a single eBPF program which can do both the filtering and the action at one time, so you don't need to deploy two programs, one doing the filtering and the other doing the action.
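The clsact plus direct-action setup just described might be sketched like this with the iproute2 `tc` tool; the device name, object file, and ELF section name are placeholders:

```shell
# Attach the clsact qdisc, then load a single eBPF object via cls_bpf
# in direct-action (da) mode: the one program both classifies the
# packet and returns the TC action code itself, so no separate
# classifier/action pair is needed.
tc qdisc add dev eth0 clsact
tc filter add dev eth0 egress bpf da obj sample_prog.o sec tc
```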
B
So this is a huge performance improvement. The data structure our program will deal with is the sk_buff, because it contains nearly all the information we need for our packet: for example, we can access the DSCP field, the packet mark, and the IP ID field, which we will cover later, because we will need to use it as a packet ID. So this is the inside of our eBPF solution.
B
After we chose the underlying technology for the eBPF piece, for the sampling configuration we can support the following different types of sampling. The first one is sampling the first N packets: after we have sampled the first N packets, we can stop the Traceflow session. The second one is sampling one out of N packets: it means we count the packets and, out of every N packets, we pick just one to mark and send into the OVS pipeline to be matched.
B
This is the most common config, and it is already supported by the tc sample action, sFlow, and IPFIX. The last one is sampling at a given interval.
B
For example, we can sample the traffic every 10 ms or every second, which is a rather rare choice. Each of these sampling methods will have different parameters. For example, for first-N sampling we need the user to provide a number, which means the number of packets we want to sample.
B
We need to enforce the range for this number: since we need to modify the original packets, we don't want to bring a huge performance impact, so we cannot allow the user to sample too many packets. This is a number we need to design carefully, to both match the users' needs and not bring too much performance impact to our Traceflow session, so we need to choose a maximum value for this parameter.
B
The second type is packet-number sampling, for which we need a rate, or we can name it as a number. Normally this number is pretty large: for example, most of the sampling configs supported by the tc sample action, sFlow, or IPFIX use a number like 65536. Most of these number choices are intended to reduce the performance impact. The last type we support is interval sampling.
B
For interval sampling, we will have one parameter which defines at what intervals the packets are sampled. Among these three sampling configs, I think the most important one is first-N sampling, because when you capture a series of packets, these packets are more related to each other than with the other two sampling types. For example, we can capture a single TCP session if you specify a proper number; as a rough guess, for example, a value of 15.
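The three sampling modes just described can be sketched as a small decision function. This is a toy model of the semantics as I read them from the talk, not Antrea code; in the real design this decision would live in the eBPF program.

```python
import time

def make_sampler(kind, n=None, interval_s=None):
    """Return a stateful predicate: should this packet be marked?

    kind: "first_n"      -> mark the first n packets, then stop
          "one_out_of_n" -> mark 1 packet out of every n
          "interval"     -> mark at most one packet per interval_s seconds
    """
    state = {"count": 0, "last": float("-inf")}

    def should_sample(now=None):
        if kind == "first_n":
            state["count"] += 1
            return state["count"] <= n
        if kind == "one_out_of_n":
            state["count"] += 1
            return state["count"] % n == 0
        if kind == "interval":
            now = time.monotonic() if now is None else now
            if now - state["last"] >= interval_s:
                state["last"] = now
                return True
            return False
        raise ValueError(kind)

    return should_sample
```

For example, a `first_n` sampler with `n=3` marks exactly the first three packets it sees and nothing afterwards, which matches the "capture a short related burst" use case described above.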
B
And for interval sampling, we still need to remember one thing: we have a timeout field in the Traceflow session. So packet-number sampling, interval sampling, and first-N sampling will all have a hard limit, because we have a timeout field in the spec, and all of these sampling methods will be ended at a reasonable timeout. So this is the description of the three sampling configs.
B
After adding this sampling config, our Traceflow status field will still be used to aggregate the result of the first packet. There are two cases: first, we can still use Traceflow as always, with no sampling config, and nothing will be changed; and if the sampling config is present, we still report the result of the first packet, but we also need to aggregate the results of the other packets in a different place for the users.
B
In the packet sampling process we still need the DSCP field, which we already used before, to act as a data-plane tag. Besides that, because we are capturing multiple packets, we also need a unique ID field to correlate the captured packets, because we may have different packets on different hosts, and we may need to capture them at different checkpoints.
B
After the session ends, we need something to merge and correlate the results, so we need a field to act as the unique ID for the packet. I present two solutions. The first one is the packet mark: this is an internal packet field in the kernel, carried along with the packet inside the kernel. The second one is the IP ID, which is a standard field in the IPv4 header.
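The correlation step this unique ID enables can be sketched as follows. This is a toy model, not Antrea code: each capture point reports observations tagged with the packet ID, and we group them into per-packet trace paths.

```python
from collections import defaultdict

def correlate(observations):
    """Group (packet_id, node, component, action) observations into
    per-packet trace paths, preserving report order for each packet."""
    paths = defaultdict(list)
    for obs in observations:
        paths[obs["packet_id"]].append(
            (obs["node"], obs["component"], obs["action"]))
    return dict(paths)
```

With this, observations from the source node and the destination node that carry the same ID end up in the same path, which is exactly what the DSCP tag alone cannot provide once multiple packets are in flight.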
B
The preferred solution is using the IP ID, and I will explain why later, but first I want to share the packet mark solution. This field is an internal field in the Linux kernel, and we can use it to match different bits in OVS or iptables. Antrea already uses some bits of this field for the Egress scenario, so we still have the remaining 24 bits, which could be used to act as the packet ID.
B
So this is the main drawback of this solution. The next choice we have is the IP ID, which we prefer, because it is already in the IPv4 header; once we figure out its normal usage and avoid conflicts with normal traffic, we can use this field carefully.
B
However, over the years this field has been used for different purposes, and different operating systems tend to set different values for it. I did a little research and listed the different possible values for this field. The first one is a global counter: each time the OS sends a packet, it increments this field. The second one is a local counter, meaning a separate counter for different destinations.
B
There are also random numbers and constant values; a constant value usually means the value is zero. So this field can be set by different algorithms, and its value can vary in different situations, so we need to figure out all the situations for it. I actually captured multiple packets from different OSes and verified the distribution of the values.
B
The random number case is pretty real, so we have to acknowledge it in our design. Then, for different traffic: for example for TCP, since we can use PMTU discovery to figure out the actual MTU for our session, the don't-fragment bit is always set, meaning the packet does not need to be fragmented, and in this case the ID value in the packet should be ignored by all the devices on the path to the destination.
B
So we can set this field safely in this situation. For UDP traffic, the DF bit can go either way, because different OSes choose different approaches: you can see that in some of the packets of UDP traffic the DF bit is set to zero, and in some it is set to one. So we need to deal with both situations.
B
Besides TCP and UDP traffic, the GRO and GSO processes will also affect the IP ID field. For example, when multiple packets come from the same flow, GRO may use a protocol-specific algorithm to merge these packets, and different protocols have different algorithms. In the TCP case, two packets can be merged only if they belong to the same flow and their TCP flags match.
B
For
example,
the
sequel
failed
and
they
needed
to
have
the
same.
Dsap
failed
and
their
ipid
failed
should
be
continuous,
which
means
usually
is
zero
or
no
sorry
continuous,
which
means
you
should
be
increment
one
by
one
and
a
constant,
which
means-
and
it's
always
zero
and
the
df
don't
fragment
flag
should
be
the
same.
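A toy predicate capturing the TCP GRO merge conditions just listed. This is deliberately simplified; real GRO checks more than this (e.g. TCP options and payload sizes).

```python
def gro_mergeable(p1, p2):
    """Can packet p2 be GRO-merged after p1 (simplified TCP rules)?

    Packets are dicts with: flow (5-tuple), dscp, ip_id, df (bool).
    IP IDs must either increment by exactly one, or both be a
    constant zero; DSCP and the DF flag must match; same flow.
    """
    same_flow = p1["flow"] == p2["flow"]
    same_tos = p1["dscp"] == p2["dscp"]
    same_df = p1["df"] == p2["df"]
    ids_ok = (p2["ip_id"] == (p1["ip_id"] + 1) % 65536) or \
             (p1["ip_id"] == 0 and p2["ip_id"] == 0)
    return same_flow and same_tos and same_df and ids_ok
```

This is also why rewriting the IP ID of a sampled packet keeps it out of a GRO merge: the new ID breaks the "continuous or constant zero" rule relative to its neighbors.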
B
Our sampled packet will then stand alone and will not be merged, but when we receive it there are still some risks, which I will cover later. The final case is IPv6: the IPv6 ID field is different from IPv4, because this ID field only exists after the packet is fragmented, so normally you cannot assume that this field always exists.
B
There are two main cases. The first one is that this field is zero, which means the DF bit is set and all the devices on the path to the destination cannot rely on this field to perform any action; in that case we can set this field to a random value and use it to correlate the captured packets. If this field is not zero, which means it is already a flow counter or a global counter, we can reuse the existing value.
B
There are still some risks. For example, we need to carefully choose an IP ID which won't conflict with the existing ones; and the second possible risk is if packet loss or reordering happens, which means the GRO merge may fail. In that case we can still report what we have got from the packet-in messages, because we may have got some partial results from the sampled packets. We can't do much about these circumstances, but we can report the results we have found.
B
So this is what we have considered for the packet mark and packet ID part, and...
C
Sorry for interrupting here. Yeah, I think you talked about the corner cases, so I'm not sure I'm following all of it, but I trust you that this covers a lot of the details. It still sounds very tricky, right? Is this just our invention, or do you know of any other solution that also uses this field to do something similar to us, basically to trace the packet?
B
The most suitable choice besides the IP ID is still the packet mark solution, because it is a field meant to be used as a mark for a particular packet.
B
There are many papers that use this field to do some research, but I think modifying this field is not very common: the research I have seen reads this value as a metric, for example to find out how many backend servers sit behind a service or behind a NAT. So this field is a useful metric for many papers, but modifying it may have been common only in the early days; today, as I said before, the latest RFC has clarified the usage of this field.
C
Another question, for the noEncap mode: do we know of any chance that a router or any device in the underlying network would act on the packet if we set this field?
B
No, because if we set this field, it means its original value is zero, which means the devices cannot segment this packet and will ignore this field in all possible cases. So I think it's pretty safe to set this field, as long as our value won't conflict with an existing ID sequence.
B
Next, the observations: we need to get the results for our captured sampled packets, which may be multiple. Different from the current Traceflow session, we capture the resulting packets and store them in a local cache first, because now we have Traceflow paths not just for a single packet.
B
We can get the Traceflow path for each of these packets, so we capture and store them on the source endpoint and the destination endpoint, on each node, and we can store the packets in the pcapng format. There we can also record the extracted IP ID field in the comment section of the pcapng file, and we can record the Traceflow path in the comment section as well.
B
After we gather all the captured raw packets, we can use the aggregated API, just like we have done before for the supportbundle API: the user can use the antctl tool, or we may have a new button in the UI, to allow the user to download the captured packets, and the capture can be reviewed in local GUI tools such as Wireshark. I think this is pretty convenient for people to analyze their traffic in their preferred ways.
B
Besides that, we can also analyze the packets ourselves and provide some extra details, or a packet-list metadata. We can get the packet metadata from all the packets, such as the packet ID, the protocol, and the size, and we can analyze the packets and use extra storage, maybe a CRD, or a local cache, or in-memory storage, to provide extra APIs to present this analysis.
B
The raw data stays in the local cache, and the metadata part is tricky, because we need to store it in some different place, since we want these results to be retrievable in real time; we don't want the user to wait a long time for the API to respond. So this is the aggregated API part. The other thing we have not covered yet is bidirectional sampling: in the case where our destination is a Pod, we can install the eBPF program on the destination's TC hook point and sample the traffic in both directions.
B
This won't be very hard, because we can reuse most of our eBPF program and our Traceflow pipeline, and I think this will be very useful for real traffic tracing. So after all these changes I have introduced, our new Traceflow workflow becomes: first, the user creates a Traceflow request using the CRD or our antctl command-line tool.
B
After this Traceflow session is created, if it contains a sampling config, we deploy our eBPF sampling program to the target TC hook point, and this program starts to sample and mark the packets. After the sampling action starts, our live traffic results will be reported through OVS packet-in messages.
B
After we get the packet-in messages, we still report the result of the first packet to the Traceflow CRD, and the captured data for all the other packets will be saved on the local host, aggregated by the Kubernetes aggregated API, and served to the user. So this is the overall workflow for our new Traceflow session.
B
I think that will be all, as I have covered the future work, something we can do later. The last part is that we can add some UI pages to present the packets we have captured: for example, a download button for the raw packet data, and some pages or diagrams for the packet metadata and the packet-list metadata.
A
To be honest with you, I have some questions, but I still have to wrap my head around the presentation, so I will probably comment on the document. My only question is about filters, you know, when we inject a packet.
A
Typically we know what we are injecting, in a normal Traceflow. Now, when we do live sampling, are we still honoring all the filters on source, destination, packet, and, you know, source port and destination port? That's just my curiosity.
A
Basically, are we going to monitor only live traffic that matches the conditions specified here, or are we instead going to monitor all the live traffic? I think we are still only monitoring traffic between source and destination that matches the characteristics that we see here, is that correct?
A
Yeah, in this case, are you going to install the filter in the eBPF program? Like, is it the eBPF program that will only mark traffic matching the filter, or are we doing the filtering like in a normal Traceflow?
A
Okay, thanks. This leads me to the final question that I have; it's probably very stupid. Since this filtering is done before the OVS pipeline processing, what happens when we have destination matching?
B
That situation is when the target is a Service IP, so I may need to have a closer look at this part first, and I think I need to consider this situation carefully after this session, okay.
A
Okay, so thanks, that was my only question. And then I think that, you know, if you wanted to restrict the ability of doing Traceflow, that's just stuff that we can control using Kubernetes RBAC, because, you know, the only additional thing here is that, since we are monitoring live traffic, there could be a privacy concern.
A
So I don't know if we want to make a... I think it should be possible to have some restrictions, in terms of RBAC, on the users that can do live traffic tracing.
A
This is just because I love to be pedantic, but you know, since in this case we are monitoring live application traffic, there could be some privacy concerns for users, and so you may want to restrict the ability of doing live tracing, like only to the owners of the applications or to admin users.
C
Yeah, actually I have several comments. I'm not sure we can talk through all of them in this meeting, but overall, first, I feel the Traceflow is a little overloaded; I kind of feel we should probably use another CRD.
C
I just mean, from a user perspective, I feel we probably don't need to mix live packet capture and sampling into the Traceflow. That's my feeling.
C
Because it sounds like this feature is mostly for sampling and packet capturing, and I am still trying to understand why we need to tie the first packet together with this feature. I think they are kind of independent pieces of information in my mind.
B
I think creating a new CRD just for the sampling is totally acceptable for us, and we can call it LiveTraceflow or something. So I think this is just a different choice, and we can create a new CRD for this new kind of Traceflow.
C
Sure, I think it's just my personal idea. Probably we can also discuss more here and hear others' opinions.
C
About adding a tag into the packet: do we capture the packets on both sides, or can we just capture them on the source or the destination, so that we just choose one and we don't need both?
B
Yeah, I think this part can also be discussed, because we can capture the packets on only one side, either the source or the destination endpoint, which would already provide pretty useful information, and we can also choose to capture them at both the source and the destination. I think these are not final choices: we can do either case, or we can do both, because we have the ability to do so, but it does not mean we have to do all these captures.
C
Exactly. Probably my last one is, yeah, probably to understand the eBPF part, like how much complexity it is, I mean how much code we are adding, if it's too much.
B
I don't think this program will be too large, because the ecosystem for TC eBPF is pretty mature and we have all the things we need.
B
Besides that, I think most of the other parts are pretty trivial for us to implement. As for having a PoC: I don't have one yet, but I don't think it will require much effort to do so.
C
Okay, anyway, I just feel we probably really need to watch out here. I'm just concerned, I mean, if it's lots of code then it can be overkill, and it also means we need to maintain, you know, some additional code, especially with a new technology. That's my concern.
A
Okay, so Hang, the only last favor that I would like to ask you is if you can share in the chat the link to the GitHub issue, yeah, so that we have a...
A
3428, that's great. And so, yeah, everyone can go there and, you know, leave comments on your proposals.
A
All right, it seems that we are at the end of this meeting. So for the next meeting, we'll be back as usual: Monday evening for folks in the U.S., Tuesday for folks in the rest of the world. As we already announced, the only difference is that now, with daylight saving starting also in Europe, the next meeting will move to 12 p.m. China time.