From YouTube: 2021-11-04 meeting
D: My experience, it does. It turns out everyone who joins these calls from Google is using the web client, because last year Google banned the executable on all corporate laptops. If you try to install it, the laptop just goes nuts now, so they're all using the web one. It seems to work okay.
A: Great, because I've always used the client, but yes, for some...
D: So I had a customer call this morning. Actually, I don't know if I'd call it my call this morning; it was Splunk-related, with the company IKEA, and I gotta say: the Zoom backgrounds, both virtual and real, were on point. It was...
A: All right, can you see the slides? Okay, perfect. Okay, so last time we talked about the Flowmill contribution; today we're going to be doing Pixie, and hopefully in a future session we'll be doing Parca, I think in a couple of weeks, and hopefully more stuff down the road map. But today I'll be talking a little bit about Pixie.
So my goal for this was to give a little bit of background on Pixie, a high-level view of how it uses eBPF, and to focus on the modularity aspect, because that's what we've talked about a lot in previous sessions. So with that, let me just dive in. High level, what is Pixie? Our goal is to help developers debug their Kubernetes apps, so you could say that's in the APM space.
On the left I have some characteristics. We're auto-instrumented; what we mean by that is that it's eBPF-based, mostly, so there's no manual instrumentation to put into the code. There's a scriptable aspect to it as well: if anyone's played with Pixie, there's this notion of PxL scripts, which you can use to query the data that's collected. And we've had a strong focus on Kubernetes. On the right I have more of the features, things that Pixie can do using eBPF.
We do things like protocol tracing: when one pod sends a message to another one, we'll trace that message. We do application performance profiling; that's the flame-graph stuff. We do distributed bpftrace deployment, which is more of a dynamic thing that allows you to run bpftrace scripts on your Kubernetes cluster. And there are a few other things, which we'll touch on in the rest of the presentation.
Yeah, so like HTTP messages, at the application layer, right. So we're not at the... I mean, we will touch on it a little bit near the end.
If you want, we can go a little deeper into it at that point. I was debating whether... let me just do it, because I don't know how much people know about Pixie. I don't want this to be a marketing thing, but I just want to give people an idea of what Pixie can do, so that when we talk about these features, they're in context more than anything. Yeah.
Yeah, so when we say... I'm not going to go through all of this, but I'm just going to pull up one script here that we have, which is http_data. So this comes to your point, Morgan, about what we're tracing with our protocol tracer. I'm just showing the raw data here; we can do a lot more than this, but the raw data is that we're seeing all the traffic. So we're seeing, for example, that Sock Shop's front end here sent a message to the cart service.
A
The
response
code
was
a
202
things
like
that
right,
so
that
just
to
give
you
the
big
picture
of
what
we
mean
by
protocol
tracing
and
we
do
http,
but
we
also
do
a
number
of
other
protocols.
I
think
we
have
like
eight
or
nine
protocols
at
this
point
that
we
trace.
So
one
way
to
think
of
it
is
kind
of
like
I
sometimes
think
of
it
as
like
wireshark,
but
on
kubernetes
right,
except
we're
not
using
wireshark
itself,
because
that
has
a
gpl
license.
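(For illustration, a PxL script along the lines of the http_data demo might look roughly like the sketch below. The table and column names approximate Pixie's built-in http_events table and may differ between releases; treat the exact names as assumptions.)

```python
# Hedged PxL sketch: show recent HTTP traffic with the source pod resolved
# from Kubernetes metadata.
import px

df = px.DataFrame(table='http_events', start_time='-5m')
df.pod = df.ctx['pod']  # attach the Kubernetes pod for each traced connection
df = df[['time_', 'pod', 'req_method', 'req_path', 'resp_status', 'latency']]
px.display(df)
```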
A
Another
thing
that
you
can
do
so
if
I
pull
up
the
flame
graph
here
again
just
to
give
an
idea,
this
is
actually
going
to
run
a
flame
graph
across
the
entire
cluster.
So
it's
telling
you
like
all
the
different
like
you
have
sock
shop,
the
different
pods.
You
have
in
sock
shop,
there's
a
cart
service
here
and
then
what
it
is
doing,
where
it's
spending
its
time.
So
I'm
going
to
zoom
in
here
and
then
you
can
kind
of
see
where
the
application
is
spending.
Time
and
again.
D: The eBPF instrumentation that gave us that, is that like a sampling-style profiler, or is it sort of periodically querying the language runtime, basically collecting stack traces to build that?
A: Yeah, it's an eBPF-based profiler, and it's periodically grabbing stack traces. There are a bunch of different eBPF profilers seeming to pop up everywhere now, and I think there are a lot of interesting bells and whistles on the different ones, but the guts of the eBPF ones are probably all fairly similar, I would guess: they're just periodically sampling the stack trace.
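(For a concrete sense of that technique, here is a minimal eBPF sampling profiler sketched with BCC's Python bindings. This is illustrative only, not Pixie's profiler, which is its own C++ implementation; it assumes BCC is installed and root privileges.)

```python
# Minimal sketch of an eBPF sampling profiler: a perf event fires at 99 Hz on
# every CPU, and the handler records (pid, user-stack-id) counts. Those counts
# are exactly the input a flame graph is built from.
import time
from bcc import BPF, PerfType, PerfSWConfig

prog = r"""
#include <uapi/linux/bpf_perf_event.h>

struct key_t { u32 pid; int user_stack_id; };
BPF_STACK_TRACE(stack_traces, 16384);
BPF_HASH(counts, struct key_t, u64);

int do_sample(struct bpf_perf_event_data *ctx) {
    struct key_t key = {};
    key.pid = bpf_get_current_pid_tgid() >> 32;
    key.user_stack_id = stack_traces.get_stackid(&ctx->regs, BPF_F_USER_STACK);
    counts.increment(key);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_perf_event(ev_type=PerfType.SOFTWARE, ev_config=PerfSWConfig.CPU_CLOCK,
                    fn_name="do_sample", sample_freq=99)
time.sleep(10)  # let samples accumulate
for key, count in b["counts"].items():
    print(key.pid, key.user_stack_id, count.value)
```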
A: Okay, so coming back. Now that we have an overview of what Pixie does, let's talk about the architecture, because I think that's what's more interesting to this particular work group. So, in the architecture, there's a thing called the Pixie Edge Module; that's this box here in the middle, and that's the thing that runs as a DaemonSet on Kubernetes, so you want one Pixie Edge Module on every host. The Pixie Edge Module has a few components. Down at the bottom there's a component called Stirling.

That's the data collector: the thing that has eBPF in it, and the thing that's collecting all the data from Linux, like the messages or the profile data or whatever else you have. It's going to take all that stuff and store it in some data tables; we'll talk a little bit later about the interfaces here. And then there's a query engine in the PEM, too, and there's a query language.
So you can essentially send messages to the PEMs saying, "I'm looking for all HTTP traffic that had a latency greater than 500 milliseconds," or something like that, and it'll go and search the data tables and give you that data back. So there's a built-in query engine where you can query the data.
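(A query like that, written in PxL, might look roughly like this hedged sketch; in Pixie's HTTP table the latency column is stored in nanoseconds, and the table name is an assumption carried over from the earlier example.)

```python
# Hedged PxL sketch: all HTTP requests slower than 500 ms in the last minute.
import px

df = px.DataFrame(table='http_events', start_time='-1m')
df = df[df.latency > 500 * 1000 * 1000]  # latency is in nanoseconds
px.display(df)
```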
A: It's not really shown in this diagram, but because the Pixie Edge Module is one per host, there are really multiple of these side by side, and they all talk to one Pixie Cloud. I use that term vaguely; I'm abstracting a lot of stuff out there. There are aggregations that happen: if you do a query that has to go across multiple nodes, all that stuff can get aggregated together.
Right there it says UI, but you can really access it in three different ways. Okay, so the part that we're going to focus on today, mostly, is the Stirling component, which is the data collector. That's the piece that has the eBPF collectors, and it's the piece that's communicating with the Linux kernel.
So if we drill down into Stirling, it has a core part that's managing all the different data sources it has, and then internally we have this modular approach to the different pieces of data that we want to collect. So we have something that collects process stats; that's very small, it's just looking at table-stakes stuff like CPU, memory, and I/O. We have one for network stats; again, it's fairly basic.
It just uses the Linux proc filesystem to figure out, at a cgroup level, what the high-level network stats are. And then we get into the data sources that are eBPF-based. So we have the protocol tracer; that's the one that's capturing the HTTP or MySQL or Postgres or whatever it may be, which we showed at the outset.
Then we have an application CPU profiler; that's the piece that feeds the flame graph. And then we have a bpftrace module, which allows you to run, essentially, a bpftrace script. bpftrace, for those who aren't familiar, is an IO Visor open-source project that lets you write BPF code in a high-level language.
So it's a little bit easier; it's kind of like a domain-specific language, easier than writing BPF code directly, and it's used by SREs and other developers to do a lot of debugging. We just wanted to give you the power of being able to use bpftrace on your cluster, so this feature lets you do exactly that.
You can write your own script and then use Pixie to deploy it. So there are two parts: it lets Pixie manage the deployment of it, and then, the second piece of it, it collects all the data and still puts it in the tables, so you still get the power of the whole query language.
So it'll take all the data, put it in a structured format, and then you have access to it whenever you want. But it's dynamic: by default it's not on; it's something where you go in, write your own, and then deploy it.
A
And
then
the
last
one
is
dynamic.
Application
logging
and
that's
another
dynamic
source,
it's
not
on
by
default,
it's
one
that
the
developer
can
go
in
and
say,
for
example,
and
this
is
alpha
feature,
but
you
could
do
stuff
like
say:
oh
something's,
going
wrong
in
my
application.
I'd
really
love
to
know.
Every
time
my
send
message
function
was
called.
What
are
the
arguments
to
that
function
like
actually
like?
We now have plans to expand that out, but that gives you a high-level picture of what it does. By the way, feel free to jump in if I'm not explaining anything clearly or if you want more details.
If you really think about it, most of these connectors run fairly independently of the others. The protocol tracer and the application CPU profiler, for example, the piece that captures HTTP traffic versus the piece that captures flame graphs, are for the most part independent of each other. Their BPF code is completely separate; the profiler doesn't care much about what the protocol tracer is doing, and vice versa.
A
However,
there
are
the
exception.
Kind
of
to
that
is
kind
of.
There
is
some
common
state
that
they
would
both
like
to
access,
and
so
that
mostly
consists
of
kubernetes
metadata,
for
example,
the
list
of
kubernetes
processes.
A
They
would
like
to
know
what
they
are,
so
the
profiler
would
like
to
know
what
the
list
of
kubernetes
processes
that
are
active
are
so
that
it
knows
where
to
focus
its
efforts
and
the
protocol
tracer
would
like
to
do
the
same
thing,
and
so
this
core
manager,
piece
that
we
have
is
going
to
get
the
kubernetes
metadata
and
essentially
so
it
kind
of
has
two
two
functions.
The
core
manager
is
going
to
tell
each
one
when
to
kind
of
wake
up
and
run,
so
you
can
set
a
periodicity
for
each
of
these
different
sources.
A
So
you
might
say,
the
network
stats,
for
example,
runs
once
per
second,
but
the
protocol
tracer
runs
five
times
per
second,
and
then
it
gives
them.
It
gives
each
of
these
source
connectors
a
context
which
includes
the
kubernetes
metadata
so
that
they
can.
A
So,
from
a
code
perspective
like
each
source,
connector
is
pretty
much
going
to
implement
this
transfer
data
call
and
you
can
kind
of
see
this
connector
context
is
the
first
argument:
that's
what
it
gets
in
as
an
input,
so
it
can
again
it
can
choose
to
access
it
if
it
wants
to
get
the
list
of
pids
or
or
some
other
kubernetes
metadata,
and
then
it
essentially
is
going
to
populate
a
bunch
of
data
tables
that
are
provided
to
it,
which
follow
a
struct.
It's
structured
events
again,
so
it's
like
the
data
tables
are
set.
A
The
context
that
we
provide
today,
you
can
kind
of
see
I'm
going
to
highlight
this.
There's
a
few
different
things
in
here.
The
two
I'm
going
to
highlight
is
like
get
you
pids,
which
is
really
kind
of
gives
you
the
list
of
pids
to
monitor
and
then
a
little
bit
lower
down.
You
can
kind
of
see,
get
kate's
metadata
again.
These
are
part
of
the
framework
that
allow
these
different
connectors.
If
they
want,
they
can
access
the
kubernetes
metadata
that
we've
collected
and
it's
common,
so
we're
not
each
connector.
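(In rough pseudocode, the pattern described here looks something like the sketch below. Stirling itself is C++, and every name in this sketch is illustrative rather than Pixie's actual API.)

```python
# Hypothetical sketch of the source-connector pattern: a core manager wakes
# each connector on its own period and passes in shared, centrally collected
# context (Kubernetes metadata, the PIDs worth monitoring, and so on).
from abc import ABC, abstractmethod

class ConnectorContext:
    """Shared state collected once by the framework, not per connector."""
    def get_upids(self): ...          # list of Kubernetes-relevant processes
    def get_k8s_metadata(self): ...   # pods, containers, cgroups, ...

class SourceConnector(ABC):
    def __init__(self, period_s: float):
        # e.g. 1.0 for network stats, 0.2 for the protocol tracer
        self.period_s = period_s

    @abstractmethod
    def transfer_data(self, ctx: ConnectorContext, data_tables: dict):
        """Pull new data from the kernel (eBPF maps, /proc, ...) and append
        structured records to the output tables the framework provides."""
```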
A
Output,
wise,
so
each
connector
source
connector
has
to
define
one
or
more
output
tables.
Again.
We
said
these
are
structured
events,
so
I've
kind
of
shown
a
sample
on
the
right
of
an
like
what
an
http
table
might
look
like.
This
is
abbreviated
because
there's
a
lot
more
fields
than
this,
but
you
know
there's
a
time
column
for
example,
and
so
this
has
a
data
type,
a
semantic
type.
It
tells
that
it's
a
count.
A
Essentially
it's
a
metric
counter,
so
we
have
some
some
metadata
about
the
column,
but
we
have
essentially
column
name
and
the
column.
Description
are
the
first
two
fields
so
time
and
then
you
know
time
stamp.
When
the
data
was
data
records
collected,
then
we
have
request
headers,
request,
method,
request
path,
so
on
and
so
forth.
So
it
defines
that
up
front
and
then
it's
pushing
these
structured
events
out.
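(As a hedged illustration of such a table definition: the real schemas are declared in Stirling's C++ code, and these column names just mirror the abbreviated example on the slide.)

```python
# Illustrative output-table schema: each column carries a name, a description,
# a data type, and a semantic type, as described above.
from dataclasses import dataclass
from enum import Enum

class DataType(Enum):
    TIME64NS = 1
    STRING = 2
    INT64 = 3

class SemanticType(Enum):
    ST_NONE = 1
    ST_HTTP_RESP_STATUS = 2
    ST_DURATION_NS = 3

@dataclass
class Column:
    name: str
    desc: str
    dtype: DataType
    stype: SemanticType

# Abbreviated http_events schema; the real table has many more fields.
HTTP_TABLE = [
    Column("time_", "timestamp when the record was collected",
           DataType.TIME64NS, SemanticType.ST_NONE),
    Column("req_method", "HTTP request method", DataType.STRING, SemanticType.ST_NONE),
    Column("req_path", "HTTP request path", DataType.STRING, SemanticType.ST_NONE),
    Column("resp_status", "HTTP response code", DataType.INT64,
           SemanticType.ST_HTTP_RESP_STATUS),
    Column("latency", "request latency in nanoseconds", DataType.INT64,
           SemanticType.ST_DURATION_NS),
]
```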
A
And
then
coming
to
so
coming
back
to
the
diagram
I
kind
of
showed
up
front.
So
where
are
the
interfaces
so
at
the
top
we
kind
of
mentioned
before
there's
kind
of
the
top
level
pixie
interface?
A
Where
we
have
you
know
the
ui
can
connect
to
the
pixie
cloud
or
you
could
have
a
cli
if
you
want
to
kind
of
do
things
from
the
command
line
or
there's
even
an
api
for
doing
integrations,
and
we
actually
have
an
open,
telemetry,
essentially
an
adapter,
to
change
this
interface
into
an
open,
telemetry
interface
today
that
exists
and
it's
kind
of
as
a
separate
project
right
now,
but
in
the
future
we
want
to
natively.
A
Actually
we're
working
on
this
right
now
is
to
to
have
a
native
open,
telemetry
interface
kind
of
built
into
pixi
right
at
the
top
right
and
that's
kind
of,
for
I
would
say,
like
that
interface
is
for
for
the
entire
cluster.
You
could
argue
because
it's
it's
not
just
for
one
pixy
edge
module
because
it's
gone
through
the
cloud.
A
So those are the interfaces of the architecture and where things stick together. Okay, so switching... actually, let me stop here. Are there any questions?
E: I have a question regarding the role of the data tables. So you have these structured events, and you can probably have, you know, hundreds of thousands of them per, I don't know, minute or hour. When do you drain those? You can't hold on to them forever.
A: So we have essentially a size limit on each data table, and once you reach the limit, we start expiring the old data. Typically that'll give us anywhere from six to 24 hours of retention on these things, and then, if you want to persist them longer than that, that's where the API for the integrations comes in, because you can pull the data out and store it somewhere more persistent if you really want to. But essentially it's size-limited.
A
So
as
soon
as
you
like,
the
http
table
will
have.
For
example,
you
know
500
megabytes
of
space,
it's
in
memory
and
once
you
hit
that
500
megabytes,
you
know,
the
oldest
events
will
start
expiring.
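(That expiry behavior is essentially a size-bounded ring buffer; a minimal sketch, with illustrative names and sizes:)

```python
# Size-bounded table: appends always succeed, and the oldest records are
# expired once the configured memory budget is exceeded.
from collections import deque

class BoundedTable:
    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.used = 0
        self.records = deque()

    def append(self, record: bytes):
        self.records.append(record)
        self.used += len(record)
        while self.used > self.max_bytes:       # expire oldest first
            self.used -= len(self.records.popleft())

http_table = BoundedTable(max_bytes=500 * 1024 * 1024)  # e.g. 500 MB in memory
```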
A: So, by default, it's more of a pull model. Think of it this way: if you pull up the UI and you actually run a query saying, "I want to find all the really slow requests," then that query gets compiled, sent to the query engine, and it pulls the relevant records out of the data tables and surfaces them back to the UI.
A
There
is
also
a
so
having
said
that,
there's
also
kind
of
a
streaming
mode
where
you
can
set
a
query
to
say
you.
You
say
I
want
to
query
this
data
and
you
sign
a
setup
streaming
mode
where
it
says
essentially
give
me
this
data
and
then
continue
pushing
it
to
me.
Keep
streaming
it
out
to
me
as
new
events
arrive,
so
that
kind
of
is
a
little
bit
more
of
a
push
model
rather
like.
A
Right,
I
mean
so
you
have
to
essentially
it's
kind
of
like
a
subscribe
model
where
you
say
I
want
to
subscribe
to
any
events
like
you
could
say.
I
want
to
pull
all
http
events
that
have
a
latency
greater
than
500
milliseconds.
For
example,
you
can
set
streaming
mode,
which
means
it'll
return,
all
the
data
that
it
has
currently
and
then
the
connection
stays
alive
so
that
right,
if
any
new
events
arrive,
that
meet
that
criteria,
it'll
also
push
them
out.
But
yes,
it's
a
little
bit
different
than
it.
It's
like
it's
basically
like.
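(In pseudocode, the subscribe semantics being described are roughly as follows; this is illustrative, and `scan`/`tail` are hypothetical names, not Pixie's client API.)

```python
# Pull-then-push: return matching records already retained, then keep the
# connection alive and stream new matches as they arrive.
def subscribe(table, predicate):
    for record in table.scan():   # everything currently in the table
        if predicate(record):
            yield record
    for record in table.tail():   # blocks; yields new records as they arrive
        if predicate(record):
            yield record

# e.g. all HTTP events slower than 500 ms, streamed live:
# for r in subscribe(http_events, lambda r: r.latency_ns > 500_000_000):
#     handle(r)
```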
B: A cost-effective, ephemeral approach, right, where all the data is staying local and it's going to get flushed out at some point, but you can query it all live. So you're not paying any export or storage costs in order to do live querying, but if you wanted to do any kind of historical querying, you would need to egress this data, because it's not going to be kept forever. Yeah.
A: Exactly right. Pixie's vision at the outset was more about live debugging: something has gone wrong in production, and you're trying to figure it out, so a six-to-24-hour window might be sufficient for you. But if you're trying to look historically at what happened, certainly, you'd have to, yeah.
B: Just because I'm a noob and a total idiot: how does Flowmill make use of Pixie in any way, or are they just two completely separate projects?
A: So yeah, this might actually be a good place to stop anyway, because on the next slide I was going to do a deeper dive into the protocol tracer architecture itself, and that probably needs a little bit more time anyway. So, switching gears a bit: I think what we covered...
A
What
my
goal
was
to
cover
kind
of
like
the
architecture
of
how
different
kind
of
different
sources
could
play
nicely
together
in
a
single
framework,
and
so-
and
I
know
flowmail
also
has
you
know
a
similar
thing
like
you're
expanding
up
the
framework,
but
I
think
in
a
future
session.
What
would
be
interesting
is
compare
and
contrast
on
and
see
what
works
and
what's
the
best
of
both
worlds
and
how
we
could
kind
of
bring
these
things
together.
A
So
I'll
leave
that
one
I
was
going
to
say
kind
of
discussion
like
you
know,
this
is
tying
back
to
discussion.
We
had
you
know
several
weeks
ago,
but
it's
like
what
is
you
know.
What's
the
right
approach
for
us
in
this
work
group
to
bring
these
different
hotel
ebpf
collectors
together?
A
Do
we
build
a
common
framework
with
these
plugable
sources
kind
of
similar
to
pixie,
but
it
could
be
a
different
framework
or
the
framework
that
that
you
know
fomel
has
or
do
we
go
kind
of
the
other
extreme
just
say
you
know:
do
we
go
fully
independent
connectors
such
that
we're
not
trying
to
share
any
context
or
state
we're
like
we're
just
going
to
be
like
forget
it?
It's
too
much
work
to
try
to
put
all
these
things
into
a
common
framework.
A
Open
telemetry
does
provide
an
api
already,
so
everyone
can
just
go
independent,
independently,
build
their
own
apis,
there's
pros
and
cons.
You
know,
and
I
think
we
can
have
a
bigger
discussion
in
the
future
about
this,
but
I
think
like
there's
the
main
benefit
I
think
of
option.
Two
is
independence,
but
the
main
benefit
option.
One
is
kind
of
we're
sharing
work,
so
there's
less
yeah
right,
yeah
yeah
and
there
might
be
other
options
on
the
table.
So
we
should
take
the
time
to
discuss
those
as
well.
E: I'm sorry, I know we're close to time, but I think there's an important question that I kind of missed earlier: do the different eBPF modules inside Pixie share eBPF communication mechanisms? Do they share rings? Do they share polls?
A: No, and that was kind of by design. There is shared context that is given to them, and in theory that context could come from eBPF: there could be an eBPF thing monitoring all the new processes in Kubernetes, the containers, the cgroups, all that stuff, and creating metadata from it.
A
But
then
the
each
of
these
ebpf
sources
is
kind
of
operating
independently
in
terms
of
the
maps
that
it
contains
the
perf
buffers
it
contains
all
that
stuff
it
just
expects
metadata
to
come
in
and
it
doesn't
even
care
if
that
metadata
is
coming
in
from
a
different
ebpf
source
or
collected
some
other
different
way.
It
just
says
tell
me
the
list
of
kind
of
like
pins
that
exist
in
the
system
or
tell
me
about
all
the
containers
in
the
system.
A
Tell
me
about
all
the
tell
me
a
little
about
networking
in
the
system,
just
like
broad
things,
so
that
it
can.
It
can
make
its
decision.
For
example,
I
was
saying
earlier
the
flame
graph
could
say.
Oh,
if
I,
if
I
give
me
the
list
of
kubernetes
processes,
so
I
don't
waste
effort
on
processes
that
don't
belong
to
kubernetes,
so
I
won't
try
to
build
flame
graphs
for
stuff.
That's
not
relevant
to
me
right.
Protocol
tracer
will
do
something
similar.
You
could
say
like.
A
Oh,
I
now
know
the
list
that
context
tells
me
the
list
of
pids
that
are
or
processes
that
are
relevant
on
this
kubernetes
cluster.
So
I
will
just
focus
my
efforts
on
those,
so
they
can
share
that
state.
So
we
collect
that
information
once
and
share
that
state
across
all
six
of
these
source
connectors,
but
then,
once
they
get
this
state
in
as
an
input,
they
don't
there's
no
wires
between
these
modules.
They
don't
collaborate
or
communicate
in
any
way.
So
there's
independence
between
them.
B: Is there background reading I can do on trying to correlate eBPF data with all the application-level data, like traces and all that nonsense? Because I know there's trickiness there.

A: When you say correlate, what are you trying to correlate?

B: So you've got OTLP; we're trying to correlate all the different signals that come out of OTLP. So, for example, being able to correlate, say, network or processing information that came out of eBPF with traces.
B
Obviously,
the
network
information
could,
in
theory,
be
directly
correlated
with
traces.
If
you
know
all
of
the
encryption
and
and
other
trickiness
like
got
out
of
the
way
same
thing
with
metrics
and
like
like
other
stuff
yeah.
A: Let's say you want to do something between the flame graph and the protocol tracer; you can actually do that sort of stuff. Making correlations with, for example, OpenTelemetry data that's actually been collected is something I would be super excited about; it's something I've always wanted to do. But it's not something we've done yet.
Exactly right. So, if we can define what's needed: is a process ID enough? Is an IP, like endpoints, or not? What pieces of information do we need to do these correlations? And then, and I'll just speak for Pixie because I'm more familiar with it, we could make sure that information is in the tables that we export, so that we can do the correlations after the fact if need be, right.
B: ...focused on, you know, processes, right? So you can identify what process shoved all this data out and attach that to the same process that shoveled all of the OTLP data out, or, you know, the application-level stuff. But it would just be a temporal correlation at that point, basically.
A: Right, and what would be possible today would probably be exactly that, temporal. But we actually know things like what file descriptor was used for that connection, or for sending that information, or the socket, things like that. We're not necessarily exporting all of it in the tables today, but if that information is useful, we certainly could. So it would be interesting; we haven't had a use case for that, but if a use case arises, let's do it. That would be awesome.
I'm sorry, because I can't see the screen: who was that talking? Because I can't see.
C: Hey, could I jump in here for a second? I'm Christian, a first-time caller, but I do appreciate what you guys are doing. Oh, cool, I can see myself now, very good. You're probably wondering why I'm here today. Dan, I think, dropped off, but Dan was here as well; both of us have spent a good amount of time on this. And hi, Jonathan.
C
By
the
way
you
know
it's
been
a
while
we're
here,
hi
good,
to
see
your
progress.
C
You
know
dan
and
I
actually
part
of
the
logging
sick
right
and
we're
kind
of
trying
to
figure
out
or
you
know,
together
with
you
know,
you
know
ted
who
was
actually
there
yesterday
as
well,
and
so
a
bunch
of
folks
came
to
the
logging
sick
over
the
last
couple
of
weeks
and
we're
wondering
about
events
right
and
you
know
how
that
sort
of
fits
into
the
lock's
data
model
and
then
how
we
think
about
it
right.
You
know,
there's
an
easy
answer.
You know, just shove something in the log body and call it a day. But we're wondering if there's a little bit more that can be done, especially if the producer already produces some sort of structured data. I don't know how much you guys have looked into the log data model, but let's not necessarily jump into all of that, because I think we're a little over time. But fundamentally, we realized that you guys are actually on the path of sending...
I think in the Flowmill demo I saw that there's data being channeled through, you know, the logging infrastructure and the log data model, and we're basically trying to figure out, with this sort of event orientation: is there anything in addition that we need to do, or maybe need to expect? Are there semantic conventions to agree on? Are there, I don't know, any other open questions?
We are trying to get to a v1 and get this thing out the door, so yeah, that's why I'm here today; we just wanted to sort of listen in. And a great demo of Pixie, by the way; congrats to you guys as well. It's all really cool stuff.
So yeah, I just wanted to suggest that we're here to listen. Maybe we're not going to make a lot of progress on this in this meeting, but I would encourage you guys, if this is a topic for you, to also consider maybe joining the Logging SIG. We are meeting every week now in order to, you know, punch out this initial version.

Previously it was every other week; now it's every week, and we would also be happy to continue to come here, or find some other way to sort of figure this out. I'll stop there. Is the context reasonably clear?
A: Oh no, I was going to say, if I'm catching it correctly, I think at a very high level there are some opportunities to collaborate between these structured events and the log focus of things, and to see where there's space for these two things to meet at an interface. But I think you also specifically mentioned Flowmill, so I'll let Jonathan jump in on the Flowmill side.
E
I
I
would
say
that
I'm
personally
very
interested
in
in
this
type
of
collaboration,
I
think
the
log
model
is
a
bit
too
free-form
in
the
way
we've
been
at
phlomo.
We've
been
thinking
about
message.
You
know
the
the
events
that
we
relay
back
is
structured
events,
and
I,
I
think
I'm
a
little
bit
worried
about
overhead
in
kind
of
the
general
format
and
encoding
strings
everywhere,
repeating
ourselves
right
when
we
relate
telemetry,
because
it's
a
very
high
volume
telemetry
wanted
always
on
and
have
relatively
high
granularity
high
cardinality
data.
E
You
know
discussion
that
we
can
have
in
terms
of
how
to
support
high
cardinality
data
efficiently,
structured,
high
cardinality
data
efficiently,
I'd
love
to
have
that
conversation
and
if,
if
I'd
love
to
join
the
log,
sig
or
kind
of
whatever,
wherever
we,
however,
we
can
start
the
discussion.
Maybe
christian,
if
you
haven't,
do
you
have
a
an
idea
of
how
what
form
that
could
take.
C
That
would
be
one
way
to
do
it.
If
you
guys
are
available
like
next
wednesday
at
10
pacific.
We
can
also
take
your
slot
or
yet
another
slot
and
bring
the
locks
yeah.
I
don't
necessarily,
I
don't
really
care
which
way
it
goes,
but
you
know
the
logic
does
meet
every
every
wednesday
at
10
and
has
an
hour.
C
So
you
know
you
know
jonathan,
if
you
want
to
kind
of
or
if
you
want
to
take
the
lead
there,
maybe
for
now
and
and
for
this
and
and
maybe
you
know,
coordinate
with
morgan
or
tigran
all
right,
then
you
can
probably
short
circuit.
You
know
figuring
out
getting
this
on
the
program.
I'd
be
happy
to
you
know
also
help.
Of
course.
C
You
know,
you
know
it's
kind
of
just
sort
of
lead
for
all
of
this,
and
you
know
if,
if
we
can
just
basically
say
hey,
you
guys
are
there
next
wednesday,
for
example?
I
think
we
will
easily.
We
will
have
lots
of
opportunity
to
make
time
there.
Let
you
go
first,
all
of
those
types
of
things.
B: Yeah, cool. I'd just like to point you guys at something, because you brought up overhead, which I agree is my only concern with a general events model: there's less structure there and there's some repetition. But something that may help in this regard is a columnar encoding, so we're looking at a much more highly compressed version of OTLP.
So I put a link in the chat to the OTEP that describes that. I don't think this has been brought up in the logging group, Christian, so it's worth popping over there as well, because I think it's relevant to what you all are discussing.
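(To make the row-versus-columnar point concrete, here is a toy contrast; the actual OTEP proposes an Arrow-style columnar encoding, not Python lists.)

```python
# Row-oriented: field names and repeated values are re-encoded per record.
rows = [
    {"path": "/cart", "status": 202, "latency_ns": 1_200_000},
    {"path": "/cart", "status": 202, "latency_ns": 1_350_000},
    {"path": "/login", "status": 500, "latency_ns": 9_800_000},
]

# Column-oriented: one array per field; runs of repeated values (and uniform
# types) compress far better, which is the point of the columnar OTLP OTEP.
columns = {
    "path": ["/cart", "/cart", "/login"],
    "status": [202, 202, 500],
    "latency_ns": [1_200_000, 1_350_000, 9_800_000],
}
```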
B: Yeah, basically: F5 is looking at this, because they need a highly compressed form to get data out of CDNs and really high-volume stuff. But this OTEP is kind of withering on the vine, because there hasn't been a lot of time from Tigran and the core people to take a look at it. So I wanted to raise it here, and then again in the logging group.
E: Cool, and I think Josh Suresh, you know, came in, I think, last week or the week before, and he mentioned that, so... oh.
B: ...you know, stable, and so there isn't a lot of bandwidth from that group of people to think too hard about this thing. But I think the faster we can get a well-reviewed version of this into the spec and implemented, the better, because it does look to me like the next set of people coming into OTLP, and into OpenTelemetry, are these high-volume sources of data, like eBPF and data coming out of load balancers and network switches and stuff like that. So, anyway.
A: Yeah, and this is interesting, because Pixie actually represents everything columnar internally, so when we export, it's all columnar. So if that were there, for us it would actually be a much more natural fit; we're thinking of putting adapters in, but then it would be a lot more natural, right.
B: I mean, I think once we get this in, then, other than not having support for it on the other end, I see no reason why people would not be using this version of the OTLP protocol once we get it into the Collector, because it's so much more efficient. It's better, yeah, yeah.