From YouTube: 2021-08-05 meeting
A: We got a good document out, the one we talked about last week. We can talk about it with the group; it only went out yesterday.
D: The news monitor got pushed back again.
C: Yeah, I'm curious. We were just on the governance committee call before this, and we're planning to have an OpenTelemetry community day coinciding with KubeCon. We were sitting there thinking, is this actually going to happen?
C: I don't know. Sarah, who's on the CNCF board, was saying they're not certain now, so we'll see.
F: Hey folks, I just wanted to say I joined; first-time dialer-in.
F: Just curious, where's the Slack channel? I'm totally out of the loop, so I apologize.
A: In the chat you can see the meeting notes, and I think, Jana, you opened the Slack channel for us. Thank you.
A: So, I mean, if you have a CNCF Slack account...
A: Yeah, there it is.
E: Yep, I have a question. You can start, Jonathan; my item is very short, if you want to discuss it first. It's about this: I couldn't go to the SIG meeting for the spec, and I'm wondering if anyone has gone and asked about logs versus events. The other thing I actually want to suggest: if we're thinking about Flowmill, these eBPF capabilities, as a separate binary, maybe it makes sense to start with logs.
E: If it's going to be a separate binary, I think we don't have to take it to the spec meeting and so on. We can always convert the logs, I mean the export format, to something else if anything changes in OpenTelemetry.
C: As far as I know, nobody has gone to the spec SIG yet to talk about this. You were on this call last week, I think; we talked about it here, but I don't think anyone has taken it there.
A: And we actually have a schema. The code repository has a schema definition specification that we can generate from; if you want, we can generate protobuf from it. For structured events, whatever schema description structured logs or structured events would need, we could generate that schema definition. In fact, maybe that SIG would want to use the schema definition in this repository for specifying structured events.
A: We could fork the specification language out of this repository into the structured events work, if that's wanted.
C: Yeah, every Tuesday morning there's the overall OpenTelemetry specification SIG. That's definitely the right place to raise this.
C: I don't think we're going to get an answer super quickly, and as we discussed last week, I don't think this needs to block progress on using the existing data format. What it will do is set us up to eventually get an answer of, yes, we should create a new thing called events, and it's going to look like this. Then, a year or so after that, it'll actually go GA, and at that point we would switch this over, right?
A: Perfect, great. So I wanted to start with this diagram of the entities, the metadata, and the measurements that the collector outputs.
A: If I can focus your attention on the first line of entities: these are the main objects in the operating system that the collector reasons about and reports telemetry on downstream. The collector runs on one host, and the metadata you get is the instance metadata from the instance metadata endpoint: the VPC, the hostname, the instance ID, the type of machine it's running on in the cloud.
E: Is this typically like the resource metadata? It's similar to resource metadata, right, in OpenTelemetry terms?
A: Yes. Then, within the host, the collector outputs these cgroups, information about cgroups in the kernel, which translate to containers. The collector outputs the entire hierarchy of containers, so you can see the tree, and each node in the tree has an object that the collector tracks, although what you would care about are the leaves of the tree, the containers that actually have processes in them. The collector also tracks processes and associates them with containers.
A: So that is supported: you track processes and what containers they're in, and then also sockets. You get a socket, you associate it with a process, and you get both metadata for it, the NAT information about the socket, and then telemetry on the socket. When you think about traditional NPM, you think about this first path, this first row in the diagram.
A: The collector also collects information about payloads for HTTP. Right now the support is only for getting the error codes.
A: I think the eBPF side also collects the verb, whether it's a GET, a PUT, and so on, but the reporting reports just error codes at this point, if I remember correctly. There's also, and I think, Jana, you asked this last time, the question of whether this is exclusively network or broader eBPF. We've added telemetry, also in alpha right now, for processes: collecting CPU, memory...
A: ...and I/O information, like the number of involuntary context switches that Linux is asking the process to do; we track that. I would call this alpha. We don't, but it's there, and it's reported to OpenTelemetry.
A: How do you grab it; is all of it exclusively eBPF? Well, to make the most accurate statement, what the code does is this: when the collector starts...
A: ...it reads the state of the existing system, and the way it does that is by traversing /proc. But the collector doesn't read from /proc directly, because then it would race with eBPF events coming in. You'd have a bunch of eBPF events somewhere, you'd read and write, and the state the collector has would not be synchronized with what the system has, because by the time you read the eBPF updates, the /proc entry might have changed five times; a socket went away and a different one, with a different IP address, came in on the same socket address.
A: It hooks those inside locks. For most of the instrumentation, I think all of it where it matters, we took care to hook inside locked sections. The collector scans /proc and also reads the events that are flying through the system, and both of them happen underneath the relevant locks, so you get a coherent stream of all of these events. Sorry for the long-winded answer: it's all eBPF, but some of it is triggered through /proc.
A: Cool. So these are the main entities, and I wrote a little review of some of them. We call them messages internally, but I don't want to create the impression that you can only do logs or only metrics or only events.
A: I used the word signal in the document to signify all of these messages. When we talk now I might call them messages, but it's the same thing; we can choose to report them however we want out of the collector. So, for example, sockets: they are identified by a u64, a unique identifier per system (which might get reused) called sk, and when they're created they're associated with a process through the PID of that process.
A: So you know the processes, then you get a new socket, and then there are a handful of these state and metadata updates, and then reports on the sockets. For metadata updates: when a socket moves into a connected state, you get the IP addresses, the port numbers, and who initiated the connection.
A: And there's support for IPv4 and IPv6. When the kernel decides how to remap the addresses through NAT, you get a message on that, specifically the new addresses. Then there are periodic messages telling you how many packet drops you had on the socket, the number of bytes received, the number of packets transmitted, round-trip time. You can see connection failures, you can see resets, and what I mentioned about HTTP.
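As a minimal sketch, the socket signal stream described above might look like the following plain records. The message names and fields here are hypothetical, paraphrasing the discussion rather than the collector's actual wire format:

```python
from dataclasses import dataclass

# Hypothetical message shapes paraphrasing the discussion above;
# not the collector's real schema.

@dataclass
class NewSocket:          # socket created, tied to a process by PID
    sk: int               # u64 per-system socket id (may be reused)
    pid: int

@dataclass
class SocketConnected:    # metadata update on transition to connected
    sk: int
    local_addr: str
    local_port: int
    remote_addr: str
    remote_port: int
    initiated_locally: bool  # who initiated the connection

@dataclass
class SocketStats:        # periodic telemetry report on the socket
    sk: int
    bytes_received: int
    packets_transmitted: int
    packet_drops: int
    rtt_us: int           # round-trip time, microseconds

# A plausible stream of signals over one socket's lifetime:
stream = [
    NewSocket(sk=42, pid=1234),
    SocketConnected(42, "10.0.0.5", 55000, "10.0.0.9", 443, True),
    SocketStats(42, bytes_received=2048, packets_transmitted=3,
                packet_drops=0, rtt_us=180),
]
print(len(stream))  # 3 signals
```

The point of the sketch is only the shape: a creation message keyed by sk and PID, metadata updates on state transitions, and periodic stats reports, all reportable as logs, metrics, or events as discussed.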
E: Yeah, I think we need to read it offline a bit, right, yeah. One thing I was not expecting: so each individual event is being reported. I was also wondering, can we report HTTP requests and responses by aggregating some of those events together? But I guess that's not part of your goals, because you want to be this low-overhead, very thin layer that just outputs this type of event, right?
E: Yeah, something like: as soon as you have the request. Rather than me just listening to all these events going on and figuring out, oh, a request started. Maybe I'm not making that much sense right now. What I'm trying to say is: are there any high-level HTTP events you also want to expose by consuming this type of event?
G: I haven't collected this kind of data before, but I've seen it collected in Pixie, in one of their scripts; we've seen argument-level data and payload-level data from these connections. We just don't collect them because of the product effort right now; we almost certainly will. But is that the kind of thing you're asking about, Jana?
F: Yeah, and by the way, I'm from Pixie, so just a heads up. Just wanted to let you know.
F: Yeah, we do. I think, if the interest is there: it sounds like you're already starting to get some HTTP data out and are able to parse some information from it, and it's certainly possible to go beyond that and start putting pieces together. It does take some amount of effort, of course, so it's kind of a different layer, as you're probably pointing out.
A: What we have started building, and you can see it in the repository, we just haven't turned it on by default, is collecting the raw data streams from the socket at the application layer up to, sorry, to user space. There was a problem we encountered; the collection itself works...
A: ...the collection works, but we were worried about cases where a burst of high traffic would overload these shared memory rings between userspace and the kernel. As I described the system, you can see that userspace tracks processes, it tracks sockets; it really matters if you lose those messages on the way up to user space. Usually there's a relatively high volume of messages...
A: ...going from the kernel to user space, but it's not super high; it's not gigabytes and gigabytes per second of data. So we were worried that exposing that much would overload the rings on very high-performance connections. I'd love to hear how you solved it; we would really love to have this solved.
F: Yeah, it's a challenge we faced as well. Like you're saying, it's the data going into the perf buffers; just getting data out of BPF, there's really no way to guarantee that you're going to be able to grab high-throughput data...
F: ...without any loss, right. In terms of what we did at Pixie, we just had to make the processing of the data in user space robust to those sorts of losses. There was just no practical way to say we're not going to lose data, so we had to bite that bullet and say the architecture of the rest of the system has to be robust to these losses. That was the mindset we had from the beginning.
F: It just increases the complexity so much, because there are so many variations of things that can happen, and you really have to be able to handle all of them; it's tricky to get right, honestly. I don't think we have it 100% right either right now. But we try to create a system where, if it gets confused, it can reset itself and resync and resume in a healthy state again. So yeah, I totally hear what you're saying with that challenge.
A: I can tell you what we started implementing, and if you're interested in collaborating on this, I think it could be generalizable and reused across systems. I'm actually not familiar with the details...
A: ...of your architecture, the Pixie architecture, but in our eBPF code, what we started building was a system where, when you get an HTTP payload, you basically maintain two sets of rings, one for control and one for data: control plane and data plane, as network engineers would say. The control plane rings are what we have today; we don't experience losses on those. They're big enough and the volume is small enough.
A: The messages are small enough that even on very busy systems you get all the events. Then you have the data rings: when you see a payload, you submit that payload to the data ring, and you can lose it there, the ring can be full, so the code handles that. Then you submit a tiny message into the control ring...
A: ...just saying, I wrote N megabytes, like one megabyte, or however many 64-kilobyte chunks, into the data ring. So the control ring is not lossy; the data ring might be lossy, but you know how much you lost when you write to it, and you report that to the control ring. We actually have an implementation of that; the thing is, we haven't used it in production, in anger.
A: So I don't know that it's... but I'd love it if we could.
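The two-ring scheme described above can be sketched in a few lines: a bounded, lossy data ring for payloads, plus a never-dropped control record per payload counting the bytes actually written, so userspace can account for loss exactly. This is an illustrative toy model in Python, not the eBPF implementation:

```python
from collections import deque

class ControlDataRings:
    """Toy model of the scheme from the discussion: payloads go to a
    bounded (lossy) data ring; a tiny control record per payload,
    which is never dropped, records how much made it in and how much
    was lost."""

    def __init__(self, data_capacity_bytes):
        self.data_capacity = data_capacity_bytes
        self.data_used = 0
        self.data_ring = deque()     # lossy: payload chunks
        self.control_ring = deque()  # not lossy: tiny accounting msgs

    def submit_payload(self, sk, payload: bytes):
        free = self.data_capacity - self.data_used
        written = payload[:max(free, 0)]       # drop what doesn't fit
        self.data_ring.append(written)
        self.data_used += len(written)
        # The control message carries both written and lost byte
        # counts, so userspace knows exactly how much was dropped.
        self.control_ring.append(
            {"sk": sk, "written": len(written),
             "lost": len(payload) - len(written)})

rings = ControlDataRings(data_capacity_bytes=10)
rings.submit_payload(sk=1, payload=b"x" * 8)   # fits entirely
rings.submit_payload(sk=1, payload=b"y" * 8)   # only 2 bytes fit
print(rings.control_ring[0])  # {'sk': 1, 'written': 8, 'lost': 0}
print(rings.control_ring[1])  # {'sk': 1, 'written': 2, 'lost': 6}
```

The design choice this illustrates is that the control stream stays small and reliable while the bulky payload stream is allowed to drop, with the loss made visible rather than silent.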
F: Yeah, on that, that would be awesome. It's funny, because we have a fairly similar system: we also have a control perf buffer for transmitting control events, and we have one for data. It just seems like people naturally land on that solution. In spirit, I think what we do is the same.
F: What we do slightly differently is, instead of putting the byte information in the control ring, the data event that we push into the data ring actually records the byte position: this is byte number 5,000, this is byte number 7,000, this is byte number 20 million.
F: Whatever it is; these connections can be very long-lived. We include that information with the data event that we send over, so that when it comes over on the other side we can say, okay, this is where it slots in. It's pretty much the same information you have in the control ring, but we pass it in the data ring. So if an event gets lost, that's okay, because we'll detect it; we'll say, oh, the last thing we saw went from byte 5,000 to 6,000.
F: Now we see something going from 7,000 to 8,000; there was clearly a lost event somewhere in the middle, right. So yeah, happy to collaborate on that; it sounds like we have something very similar in spirit. Amazing.
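Pixie's variant, stamping each data event with its byte position in the stream so userspace can infer holes, reduces to a small gap-detection pass. A sketch (illustrative only; the function name and event shape are made up):

```python
def find_gaps(events):
    """Each event carries (start_byte, length) within one
    connection's stream. Returns the byte ranges that were lost,
    inferred from holes between consecutive positions; this is the
    '5,000-6,000 then 7,000-8,000' case from the call."""
    gaps = []
    next_expected = None
    for start, length in sorted(events):
        if next_expected is not None and start > next_expected:
            gaps.append((next_expected, start))  # lost [from, to)
        next_expected = max(next_expected or 0, start + length)
    return gaps

# Saw bytes 5000-6000, then 7000-8000: bytes 6000-7000 were lost.
print(find_gaps([(5000, 1000), (7000, 1000)]))  # [(6000, 7000)]
```

Because the position rides along with the (lossy) data events themselves, a dropped event shows up as a hole on the receiving side without needing a separate reliable channel.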
A: Yeah, that would be great; I'd really love that. There was another thing... anyway, maybe we can move on.
A: So, Jana, Michael, have you seen the doc? You've scanned the document now; is there anything obvious that you wish to add there, or do you want to read it, and then on Slack we can discuss additions if you want?
E: There's one comment I have. Currently the decoration, the metadata decoration, is only available for Nomad and Kubernetes. Is this an extensible model? How easy is it to recognize another platform, for example ECS tags? Is it easy to just add those types of things?
A: At least for the Docker metadata collection, what the collector does is query the Docker engine and fetch this entire, I don't remember if it's YAML, I think it's JSON; the Docker engine just has this huge JSON with everything about the container. It's what you get with docker inspect. Then there's a bunch of code, maybe 150 lines, that says: is there an ECS label in there, whatever, com.ecs...
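The enrichment step described here, fetching the docker inspect JSON and looking for platform labels, reduces to something like the following. The exact label prefixes checked (com.amazonaws.ecs.*, io.kubernetes.*) are illustrative guesses, not a list taken from the collector's code:

```python
# Given the big `docker inspect` JSON for a container, decide which
# orchestration platform it belongs to by looking at its labels.
# Label prefixes below are illustrative, not exhaustive.
def detect_platform(inspect_json: dict) -> str:
    labels = inspect_json.get("Config", {}).get("Labels") or {}
    if any(k.startswith("com.amazonaws.ecs.") for k in labels):
        return "ecs"
    if any(k.startswith("io.kubernetes.") for k in labels):
        return "kubernetes"
    return "unknown"

container = {"Config": {"Labels": {
    "com.amazonaws.ecs.task-arn": "arn:aws:ecs:...:task/abc"}}}
print(detect_platform(container))  # ecs
```

This is the "150 lines of code" idea in miniature: the Docker engine already carries the platform hints as labels, so recognizing a new platform is mostly a matter of adding another prefix check and mapping its labels to metadata fields.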
E: OpenTelemetry has something similar. It's not part of any of the specs, though; it's this arbitrary processor which can discover all this additional metadata. That's why I was asking: should it be part of the spec, or, since it's going to be such an extensible thing, maybe we can just document it but not represent it as part of the spec? I don't know; I'll just leave some comments.
A: Yes; whatever aspects of the eBPF collection system are general-purpose enough to have as OpenTelemetry components that are not part of the eBPF collector, we'd be excited to switch over to those.
E: The only difference is this: the collector doesn't necessarily run as a sidecar, and you want to discover these things when you're running right next to the workloads, on the cluster that you are inspecting.
E: The OpenTelemetry collector could be running as a service, for example; a lot of people do that, they just run it as a horizontally scalable thing and push their data in OTLP. In those types of cases that resource detection is not going to work, so I think we need to keep it in this particular component. It's just more a question of whether we are going to be re-implementing it, because it's probably a lot of work to re-implement...
E: ...resource detection, the resources section, in this particular binary.
G: So when the collector is run as a horizontal service, not a DaemonSet, can you do container metadata extraction, or do you just lose the resource and get the metric?
E: If you are running it outside, it's not easy; you don't have that data. So we need to produce it here regardless, for people who are running it outside of their clusters.
G: Not to be too much of a product manager about this, but that seems like a product decision, right? If you're mindfully saying, I want to run it as a horizontal service, and none of my other metrics are getting endpoint metadata about the resource, then maybe the intention is... maybe we don't need it; "need" is a strong word here. In terms of needing to re-implement it: you might be the kind of person who doesn't need that data, because you're mindfully making this decision about dropping the resource entity.
E: Yeah, I couldn't really follow that, but I do think we still need it. In terms of parity, we might not be at the same level: the OpenTelemetry collector supports a wide range of different things, all the cloud providers, container platforms, and all of that. Maybe here the parity will be limited; we won't be able to detect resources for everything else.
A: Yeah, so we wanted to catch up, maybe offline. I don't know if I have your contact details.
F: You can just reach out at, oh, my first initial plus last name at pixielabs.ai.
F: Yeah, that'd be great to connect. I'll be out next week, and I'm pretty busy for the rest of this week, but we can definitely sync up afterwards, in a few weeks. All right, cool.
A: And we always hope to see you, but you're away next week, so probably not, yeah.
F: I'll probably miss next week, but I'll hopefully be joining regularly after that. So that'll be awesome. Cool, see ya.