From YouTube: 2021-08-19 meeting
B
Good, good. I haven't been able to watch your summit talk yet, but...
A
It's on my list. Oh yeah, there's a lot of good stuff out there. Pete, actually on my team, gave the talk.
A
Yeah, so I'm actually still not hooked in completely. So maybe while we wait, I can ask you: while Pixie is now CNCF, I don't have a CNCF account, and it looks like I need one. Is that right? Are you logging into the Slack with your CNCF account? Is that how the Slack works? Yes, it's a workspace, right. Okay, but you essentially have a CNCF email? Is that how it works?
B
No, you just use... I don't remember which one I used. I think my personal email, even for this CNCF one, because it was pre-Splunk. So you can just get yourself an invite to Slack there. I can try to find you the, yeah...
Who can get me the invite, I guess, is the question. I'll send you the link, I think: in the OpenTelemetry GitHub, in the community GitHub repo, there should be a link to the CNCF Slack.
B
Here, yeah: the OpenTelemetry GitHub, in the community repo. Here, I'll just paste it into the chat. Oh thanks, you beat us to it.
C
Yeah, what's your email? I can actually invite you, I guess.
C
Yeah, maybe we can kick off with the schema. I was on vacation last week, but I left some comments. Do we want to go through that schema, or are we still waiting for others to review it? What's the current status?
B
Right, so we didn't go through the individual schemas. You had a few comments, and I left the discussion to the group; you weren't in last time. So let me open the document. I should have commented on your comments saying, you know, we discussed this.
B
So I'll just paste the document in the chat, and I'll put it on the agenda.
B
Here it is, in the meeting notes: the schema docs. Your first comment was a proper schema question: can we define messages that make sense? Like, we have a handful of messages, and they seem to overlap.
B
So I think there is no reason not to change the schema. It is very flexible. We just need to go through the schema, make the suggestions that make sense, and just go and do it. It's not even a big deal to change the messages or anything.
C
Yeah, I think the other thing I didn't understand: was it intentional, or is there anything I didn't see? I saw the same semantics twice; maybe they were different things, yeah.
B
Yeah, so some of them kind of need the documentation; I tried to give a general understanding of what's happening. For example, I think you commented on this container annotation message. A container annotation is just one key-value pair from the labels in the Docker metadata, so you have a handful of these per container, and it doesn't conflict with the container metadata; it just extends it.
B
So you can have arbitrary labels that it reports, but the documentation is incomplete. I think we should maybe just start filing issues. I don't know, maybe we should come up with a system that works, and we can file issues on the repository and just start fixing this: for example, where Nomad metadata overlaps with container metadata.
B
The idea was: let's try to normalize all of the different orchestrators, because I think Nomad, ECS, and Kubernetes all have a concept of namespace. So can we just have a container metadata message that has a namespace, so that we don't have to duplicate?
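The normalization idea above can be sketched roughly like this. The message and field names here are hypothetical, not the actual schema under discussion: one container metadata record with a namespace field shared across Nomad, ECS, and Kubernetes, plus container annotations carried as individual key/value pairs that extend, rather than conflict with, the core metadata.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ContainerAnnotation:
    """One key/value label pair from the container runtime's metadata
    (the 'container annotation' message idea: one pair per label)."""
    key: str
    value: str

@dataclass
class ContainerMetadata:
    """Normalized container metadata; names are illustrative only."""
    container_id: str
    orchestrator: str                 # e.g. "kubernetes", "nomad", "ecs"
    namespace: Optional[str] = None   # the shared namespace concept
    annotations: Dict[str, str] = field(default_factory=dict)

    def annotate(self, ann: ContainerAnnotation) -> None:
        # Annotations extend the metadata; they never replace core fields.
        self.annotations[ann.key] = ann.value
```

A usage sketch: `meta = ContainerMetadata("abc123", "kubernetes", namespace="payments")`, then `meta.annotate(ContainerAnnotation("team", "infra"))` folds one label pair in.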
C
That was also not clear from the doc, so maybe that's why I couldn't follow.
B
If it's not clear, I think we should fix it. Where the schema evolved and doesn't make sense now, we should fix it; there's no reason not to. It's an easy fix: let's just do it. Cool.
C
Actually, the biggest question is this destination IP address. If you want to run the Flowmill collector as a sidecar, we want to be able to resolve what it is. I mean, it depends, right: where we're going to be resolving what that IP address is, is actually important to me, in terms of what context I'm going to be running the collection agent in. So do we have any ideas here?
B
Yes, okay. So, IP addresses.
B
The local IP addresses, you know, the originator: the process, the container, the host. You have a very good understanding of the local side of your connection. Where you should care is the remote side of the connection. What the Flowmill collector does is enrich with DNS, where DNS is available locally; basically, that's what it is able to do locally. The other pieces, I just want to document for us so that we have it. So, for local addresses, there is...
B
...resolution at the container and host level, so you have DNS there. The problem is that the other two sources that the collector frequently uses are the AWS instance metadata APIs. So there is a collector for that; the problem is that it is a bit heavyweight.
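A minimal sketch of that local DNS enrichment idea, not the actual Flowmill implementation: the record field names are hypothetical, and the resolver is injected (in practice it might wrap something like `socket.gethostbyaddr`) so that failures stay contained and repeated lookups for the same remote address hit a cache.

```python
from typing import Callable, Dict, Optional

class DnsEnricher:
    """Enrich the remote side of a connection record with a hostname,
    when local DNS can resolve it; cache results per IP."""

    def __init__(self, resolve: Callable[[str], Optional[str]]):
        self._resolve = resolve                       # injected resolver
        self._cache: Dict[str, Optional[str]] = {}    # ip -> hostname or None

    def enrich(self, record: dict) -> dict:
        ip = record.get("remote_ip")
        if ip is not None:
            if ip not in self._cache:
                try:
                    self._cache[ip] = self._resolve(ip)
                except OSError:
                    # Resolution failed; remember that so we don't retry hot.
                    self._cache[ip] = None
            if self._cache[ip] is not None:
                record["remote_host"] = self._cache[ip]
        return record
```

Injecting the resolver also makes the caching behavior easy to test without any network access.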
B
So first, the AWS APIs are polling-based, so you have to query them, and we query them, I think, every minute, and you don't want to do that on every host. That is how you find out the RDS hosts.
B
And even if the API weren't polling-based, you still probably don't want to do it on every host. Yeah.
C
You can reuse it, you know. Yeah, I mean, I was just trying to understand the entire flow; that makes sense, yeah. So in this model we are going to be resolving those IP addresses later, after we collect.
C
I'll resolve my comments, by the way, since I got answers about the IP addresses. This context-related thing is also very similar: we probably don't want to do it on the agent; we can always query it later. By context I mean that on Kubernetes, for example, you want to be able to understand which service, pod, or namespace it is, and you can do that later. So I'm going to resolve those comments.
B
There are others. You had two questions that we did go through last time. One was: are we planning to decorate socket telemetry with context? I think we discussed this, and I think there's consensus that it should be configurable whether the collector outputs normalized data, sorry, denormalized data, so you want to have more...
B
We also followed up, and maybe it's not relevant to the schema doc, but we followed up on the discussion of whether we need to re-implement container enrichment in the eBPF collector, because we already have very good collectors for container metadata in the OpenTelemetry collector. What I understood the consensus to be was: let's try to push that functionality into the OpenTelemetry collector; let's try to have the collector enrich the container metadata.
C
Have you looked into that? Most of the processors are host-based, so they just discover the current host's context. In this case, we would basically do reverse lookups on these IPs and try to resolve their metadata and so on. We should look into the processors to see if there's anything already that does that.
C
I don't remember; that's why I was asking. But anyway, I think it makes sense to do it at the collector. That makes perfect sense. We can write a new processor.
B
Yes, I mean, we want to move it to the OpenTelemetry collector, so if there are such processors we want them to do it. I haven't looked at the processors. I don't remember who said it, I think Morgan, and I'm sorry, I don't want to put words in people's mouths, but I think there was an understanding that there is information about containers in the OpenTelemetry collector, and we can use that. But, you know...
C
Maybe there are processors like this k8s processor, which sort of actually resolves the current node-specific stuff, so it enriches things with the current namespace and pods. It doesn't necessarily care about the incoming source, where it came from and so on, but we can write a processor. Let's take a look at the existing processors to see if there's anything that does what we want to do; otherwise we can write the processor.
B
The data is already normalized. We were going to denormalize the data so that the raw logs that the collector outputs are usable to users without a custom backend.
B
Can we at least have an optional mode of operation that denormalizes the data, so that users can run some filters in the processors on their machine and filter all the traffic from a specific container or a specific process, so that they can do that without a backend?
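A rough sketch of why that optional denormalized mode helps, with hypothetical field names: when each socket record carries its container metadata inline, filtering the traffic of one container becomes a plain predicate in a processor, with no backend needed to join records against metadata.

```python
from typing import Iterable, Iterator

def filter_by_container(records: Iterable[dict],
                        container_name: str) -> Iterator[dict]:
    """Keep only denormalized socket records from the given container.
    The 'container.name' key is an illustrative choice, not the
    schema under discussion."""
    for rec in records:
        if rec.get("container.name") == container_name:
            yield rec
```

The trade-off mentioned in the talk numbers below is that the same metadata is repeated across every record, which is the cost of making each record self-describing.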
B
Okay. The Flowmill collector has a lightning talk at the eBPF Summit today, and so I expected some measurements from live systems: for every container, you see on average 230,000 socket reports on live systems.
B
Right, hundreds of thousands of times for every container. So if you want to have all the data transmitted and you care about volume, and also about processing, like CPU, handling all that container metadata, parsing it again and again and again, if you care about that, then you want to normalize the data. Yeah.
B
Yeah, I think the concern from the OpenTelemetry technical committee was that you actually need a backend in order to make use of the data, and they were uncomfortable with having vendors completely dominate the data from the collector. I think that's what I understood from the technical committee.
B
So this is why one of the mandates of this working group is to come up with the list of requirements, the roadmap that we want to tackle before approving the contribution into OpenTelemetry. I think this is one of our core initial mandates, and then of course it will transition into maintaining the roadmap and making sure we... Cool, thanks.
C
Johnson, just because you mentioned that our job is also figuring out the roadmap: I'm kind of worried that maybe we're thinking in too much detail right now. We haven't decided the big pieces yet: what the overall flow will look like, what the different components are, whether there will be a receiver or a separate process. Maybe let's try to focus on just getting that big picture first, and then the schema and everything can come later.
C
It could also be an implementation detail. One of the things I was thinking is that maybe we will aggregate them on the collector into, you know, metrics and other things, which is sort of something we've been discussing. So maybe we can keep it an internal thing for a while; I mean, we can always expose it, but we don't have to contribute a spec to OpenTelemetry. It could be just, you know, logs being exported. So maybe let's try to focus on the big-picture things. I think we have a couple of things.
C
We need to address whether it's going to be a separate thing or not; where we're going to be doing the processing, which you had an answer for; and whether we will aggregate them or not, like, is it going to be a processor or not? Those are, I think, the top three things that we need to have an answer for.
B
And hi, Evan, Chris: do you have any agenda items for today?
D
Hey, this is Chris. I'm just here to listen, actually. Dave Thaler over from the eBPF group was like, "Hey, you need to get on this call, this is where it's at." So, I'm at Microsoft, an architect working on telemetry and stuff like that for Windows and Azure, and I'm all into eBPF, so, you know, I was just here to hang out.
D
I work on a lot of core operating system stuff, like ETW and things like that within Windows, and of course we have Azure telemetry; OpenTelemetry is super popular with Riley and Tamaran. But I just think eBPF is really cool, and we've done some experiments and some prototypes. I'm curious how to help out, actually, because, yeah, I'm just trying to get the carrier frequency, honestly.
C
Let's try not to think too much about the spec and everything; let's try to figure out this high-level thing first, and then at least we can also tell: oh, these are the things that we want to contribute to the OpenTelemetry spec, if there's anything that we want to contribute.
B
Cool. I think for your first item, whether we want to have this collector as a receiver or a separate process: we had some discussion last week. What Michael raised was that, because of the elevated permissions required for eBPF, many of their users preferred eBPF collectors to be a separate process.
B
And then what Michael proposed, and I think they have been using this type of architecture in some of their collectors, is building a receiver that, using some glue scripts, spins up a separate process for the collector. There is some serialization and deserialization involved, but configuration flows from OpenTelemetry through the receiver. You have the configuration as if it's a receiver, as if it's in-process, so that is the user experience, but it is a separate process with some glue.
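That receiver-with-glue architecture could be sketched like this. The command line and the line-delimited JSON protocol are assumptions for illustration, not Michael's actual design: the receiver hands its configuration to a separate privileged process and deserializes records from that process's stdout, so the elevated permissions stay confined to the child.

```python
import json
import subprocess
from typing import Iterator, List

def run_collector(cmd: List[str], config: dict) -> Iterator[dict]:
    """Spawn the (hypothetical) privileged collector as a separate
    process, serialize the receiver's config to it on stdin, and yield
    deserialized line-delimited JSON records from its stdout."""
    proc = subprocess.Popen(
        cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    # Configuration flows through the receiver into the child process.
    proc.stdin.write(json.dumps(config) + "\n")
    proc.stdin.close()
    # Each stdout line is one serialized telemetry record.
    for line in proc.stdout:
        line = line.strip()
        if line:
            yield json.loads(line)
    proc.wait()
```

To the user this configures like an in-process receiver; only the glue knows a subprocess is involved.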
B
We've asked Bogdan and Tigran on the channel, and I think one of them replied that it is possible, and I think that would be kind of the way to go. Tigran, you can reply on whether it should be.
C
Yeah, a separate process, I think, also makes the most sense to me, because of the elevated permissions, and because of the sidecar case: you can run it as a sidecar. Not everybody wants to run the collector as a sidecar, because it's bigger, so you can run just this Flowmill agent as a sidecar.
B
Yeah, I don't have a strong leaning one way or the other.
B
There is a way to actually link the collector in: the eBPF collector is currently written in C++, but there is a way to wrap it in Go, so you could go that way. But I think, because of the elevated permissions, people would...
C
Yeah, one other thing: if people really care about this, they can always wrap it in the long term. I think our primary approach should be keeping it as a separate thing; it's easy enough to actually wrap it and turn it into a receiver if anyone wants that. So I think keeping it separate is good. Also, once you have cgo depending on C++ dependencies...
B
Yeah, right. It'll also facilitate testing: you can run a separate container process, and then you don't have to run the OpenTelemetry collector; you can split out the testing there.
C
So we also have an answer for the second one: we will enrich (resolve and enrich; I'm just really bad at typing) metadata at the collector. The only downside of this is: say you want to run the collector in a different cluster. You won't be able to do the enrichment unless you can talk to the API endpoint, on Kubernetes for example, of the cluster that your tasks are running in. I guess that's fair.
B
Yeah, so I think what we were leaning towards is enriching the metadata at the collector, with the exception of containers. Our end goal should be not to duplicate: if there is good telemetry in OTel already, and the OTel collector already has good container metadata, then we should use that. We shouldn't re-implement it, because then all the edge cases, of Nomad or ECS or Kubernetes, all those edge cases have just one project maintaining them.
B
Exactly. Yeah, I think we're approaching time, so I don't know if we want to cover another one or...
C
The spec, maybe, because it may actually be an additional implementation detail for now; I think it is going to define what we want to be exposing. Well, maybe we can keep it for next week, but I would like to hear about it, even for a couple of minutes. This is the hardest piece in the end.
B
Yep. What Michael said, I think in one of our first meetings (this is from memory, I'm sorry, I don't want to misquote), is that the OpenTelemetry collector doesn't do well with high cardinality of metrics, and so we need to be careful with network telemetry, depending on the resolution of the data.
B
You know, depending on how detailed the dimensions are and how much cardinality you're going to generate, we need to be careful with metrics. Michael was actually very comfortable with logs. He said their experience at Datadog is that you probably don't want to send high-cardinality data in as metrics, but I don't have personal experience with that.
C
Does this mean that we can still do it, but with predefined dimensions, and maybe we can't really produce arbitrary ones? Yes, otherwise it's going to completely mess up the pipeline, because each individual event, almost, will turn into a metric, and we won't be able to aggregate that much. But if you make it configurable, maybe there could be a configuration where...
C
...users can choose what dimensions they want to aggregate on, and stuff like that. We need to discuss this. I think we can do it next week or something; we don't have time now.
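The aggregation idea the discussion closes on can be sketched as follows, with illustrative field names: instead of one metric point per socket event (unbounded cardinality), counters are summed over a user-chosen list of dimensions, so the number of series is bounded by the distinct values of those dimensions alone.

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

def aggregate(events: Iterable[dict],
              dimensions: List[str],
              value_key: str = "bytes") -> Dict[Tuple, int]:
    """Sum `value_key` over events, grouped by the configured
    dimensions. Dropping a dimension from the list collapses all
    series that differed only in that dimension."""
    totals: Dict[Tuple, int] = defaultdict(int)
    for ev in events:
        key = tuple(ev.get(d) for d in dimensions)
        totals[key] += ev.get(value_key, 0)
    return dict(totals)
```

Choosing `["namespace"]` versus `["namespace", "pod"]` is exactly the user-configurable cardinality trade-off mentioned above.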