From YouTube: 2023-02-01 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
C: Yeah, he asked me to — well, just wait, what is on the agenda? So, yeah. Basically, he wanted to propose an APAC-friendly SIG meeting, something like once a month. I think what we should figure out here is, first, whether we want to consider it, and then, if so, how can we gauge the interest from the community? Should we pin some issues on the repositories and post some messages on Slack, and have people vote or voice their opinion? Or how should we go about it?
A: To have one meeting per month in an APAC-friendly slot — maybe, let's see: on the third Wednesday of the month we have two meetings, the normal one and an extra APAC-friendly Collector meeting, at four or five PM PST.
A: That's if we want to cover PST time; otherwise, we can do a Europe/APAC-friendly meeting. That's another option.
A: Or maybe we have two meetings — let's say the first Wednesday is a Europe/APAC-friendly meeting and the third Wednesday is a North America/APAC-friendly meeting. That's my thought; any other ideas are welcome.
A: Is OpenSearch not open source? I don't know what the status of OpenSearch is — is it not an open-source project that is offered by different cloud vendors as a proprietary solution?
E: It's a fork of Elasticsearch that is mostly maintained by AWS, but there are a couple of other companies involved as well. That's what I understand.
A: Okay, so —
G: It's not on our roadmap for ADOT, and the OpenSearch team at AWS — their service accepts OTLP directly, so I don't believe it's on their roadmap either.
G: Mm-hmm. They have a component called Data Prepper, which is also open source and similar to the Collector in many ways. It's a processing pipeline: it accepts OTLP, does the pre-processing necessary to build the service map and other things for storing traces in OpenSearch, and then sends the data to OpenSearch. So that's their recommended mechanism.
A: Okay, otherwise — anyway, Alex, can you take an action item just to post a link to the rules in the issue, so that people know that this is what we will do?
F: Right now, within some of the SDKs, there's some logic when you grab the db.statement attribute to apply redaction of sensitive portions of the SQL query — it's just a bunch of regexes — and there were some efforts taken, I believe; someone was looking into it upstream. We had, internally —
F: We had written some custom processors to do this in the Collector, because the performance of doing it in Ruby was bad; that ended up not going anywhere. I'm looking into collecting telemetry from a different receiver that has unobfuscated SQL in it.
F: So yeah, I wasn't sure if there was any work to allow that to occur within the agent or within the Collector, or whether it's only in clients. Is the recommended approach still just redaction using the redaction processor, or dropping the attribute, or something like that? Looking through the open issues, I saw someone make a PR about an obfuscation processor, but it was for encryption — encrypting things, not modifying the string, or just pieces of the string. I was wondering if anyone's familiar with that.
A: It was added for these specific reasons — to be able to remove PII data, to remove sensitive data from things. That's why people said, no, we're not going to put this in the transform processor; we want a specific processor for this. And I got convinced that this is the solution for this problem, and I hope that it is the solution for your problem as well.
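For reference, a redaction-processor configuration along the lines being discussed looks roughly like this — a sketch based on my reading of the contrib component; exact keys may differ between versions, so check the component README:

```yaml
processors:
  redaction:
    # Only attributes on the allow list survive; everything else is deleted.
    allow_all_keys: false
    allowed_keys:
      - db.system
      - db.statement
    # Values matching these regexes are masked even on allowed keys.
    blocked_values:
      - "\\d{3}-\\d{2}-\\d{4}"   # US-SSN-shaped values
    summary: debug
```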
F: You can drop the stuff on the floor — that's what the redaction is. But what I mean is: you would keep the table names, but drop the "insert social security number" value. I don't think there's functionality to do that, and I'm interested in that functionality. There's some half-done work internally that I was thinking of upstreaming; I'm not sure if a processor is the right way to do it, or whether it should be something in OTTL or something like that.
F: Would it replace the value, or does it just drop it? That might actually be it — I'm sorry, you would think I'd be more prepared. Does it just —
A: It masks values — take a look at that, can you take a look at that? Now, in terms of whether we need, for example, a subset of spans, or selecting some telemetry where we apply the regex: I definitely agree with you that we should include OTTL for the conditions part, as we did in other processors. Back then everyone told me that this was so important for GDPR and everything else that it shouldn't be in a generic transform processor.
A: It should be documented — I mean, I searched for "mask" and got five or six results in that README. If it doesn't do masking and is only doing deletion, then something is off.
I: One thing to note is that it looks like it only supports traces. So if you have to mask things in other data types, I don't think it supports that yet.
F: Yeah, I guess it does — cool. It replaces it with four asterisks, of course, as one does. Well, I think I'm good to go. Thank you.
A: So you have what you need. Please, please help us — offer to maintain this. If you think it's useful for you and you think it's useful functionality, please offer us help, maintain the component, and adopt it.
A: Next is Christina.
D: Hi everyone. I just wanted a little bit of discussion on my open PR, so I know how I can move forward and get it merged. I'm trying to add ldflags support to the Collector Builder.
A: For me, in general, I'm trying to keep the number of flags minimal and to put as much as possible in the YAML config, because I'm a config-as-code guy — which means I prefer to have my whole configuration committed somewhere instead of using flags. But it seems that there is a real need for having a flag here.
E: Yeah, I think one part that really helps the case is that this is really an environment variable — environmental information that is being passed in — and not so much information about the build itself. I don't know, I feel like a flag is indeed the right place for this one.
A: Yeah. Now, in terms of position — you're right, okay, a flag. Let's make it a flag for the moment, and we can debate, deprecate it, and switch to an environment variable or whatever later. But should this be part of the distribution and also allowed to be configured as part of the YAML file? My idea here was: there are things like — if you want to enable things like, I don't know, we have a build tag, enabling some build tag — I think you can still do that with this ldflags, correct?
A: Not necessarily the version number, but, for example, somebody may put a Boolean variable somewhere and say, hey, if this is enabled I'm doing something — and you can enable that with ldflags, correct?
E: That specific case we handle with a special value there. Let me check, because I remember seeing a PR that added support for this exact case — for keeping the build flags — but I'm failing to see another concrete example of that.
A: I mean, we can merge, because this one is only the flag, and then later we can consolidate on this. But I would like to pick up, as an action item, to consolidate — because if we have multiple ways of configuring parts of ldflags, that's another issue for me. I'm not going to block this PR, which adds the most generic thing, but we should probably consolidate the other options on this.
A: Just create a PR — sorry, an issue — to track the fact that, hey, there may have been that build-tags thing, and let's consolidate on this. For the moment, let's move forward with the PR. Makes sense? All right, yep.
J: All right, the next one's mine. This issue: basically, some folks at my company have seen some panics from the Collector when components start or stop sort of incorrectly, and I think —
J: Ultimately, this probably just comes down to each individual implementation needing to be correct. But because it's happened a few times, I'm kind of thinking about this at a systemic level: should we have some kind of resiliency built in, such that if a Start or Stop fails on a component and it throws a panic — or maybe even a panic is thrown while the component is running — is it possible for us to have something where we can recover from that? Should we even try, or should we consider these individual correctness issues?
A: There was a discussion about — I think Tigran has a PR to replace the panic. So, right now — let me explain. Right now, if a component reports a fatal error, like using that ReportFatalError channel, we panic and we crash. First of all, I think there is a proposal to remove that, so that every component reports its status and we can decide at the Collector level what we want to do with that status — whether we want to shut everything down nicely, and so on.
A: That's one effort in that direction. The second one was the PR where I think Sean added some helpers, which I think got closed for whatever reason — I can look into that. But there is still open the PR to allow things to be started and stopped correctly, essentially.
A: Okay, yes, so there is that effort, but I think we need a consolidated story. There are multiple efforts, but nobody has a consolidation of all these stories — where do we want to get? I think it would be helpful to have somebody champion the entire thing and say: here is the current state, here is where we want to get, and here are the efforts that we will follow together.
J: Sure, I can organize that. I'll create a tracking issue and pull these things together.
A: Perfect, that will help. I mean, for the moment, you can come and say, hey, we need this, we need that, but I have no clue where we are trying to get to, or what a reasonable state is. Because another thing is: there will be errors — for example, a component that is trying to bind to a port and the port is occupied. What do we do? I mean, do you have to crash? I don't know, do you? But —
K: It should return an error, but the thing that was happening was that we were panicking inside the Stop methods, and that was just a big defect. So what would happen is: it would start, one of the components would return an error, then we would stop all the components, and one of them would panic because it wasn't started. So we got that nailed, but I think what I'd love to have a guideline on is: can we make sure we never panic — like, can we make sure, wherever possible?
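A defensive wrapper of the kind being discussed — recovering a panic from a component's Start/Stop and turning it into an ordinary error — might look like this. A generic sketch, not the Collector's actual code; the function name is made up:

```go
package main

import (
	"errors"
	"fmt"
)

// callSafely invokes fn and converts any panic into a returned error,
// so one misbehaving component cannot take the whole process down.
func callSafely(fn func() error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("component panicked: %v", r)
		}
	}()
	return fn()
}

func main() {
	// A Stop that panics because the component was never started.
	err := callSafely(func() error { panic("stop called before start") })
	fmt.Println(err)

	// An ordinary startup error (e.g. port already bound) passes through.
	err = callSafely(func() error { return errors.New("bind: port in use") })
	fmt.Println(err)
}
```

Whether recovering is wise (versus crashing fast) is exactly the policy question raised in this discussion; the wrapper only shows the mechanism.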
A: Yeah, I think the only panics that I'm okay with are things like initialization — for example, a "must create instrument" for an OpenTelemetry metrics instrument, which shouldn't fail — things that shouldn't fail, let's say. But yes, I don't want runtime to have any panic.
J: What's the level of resiliency we're going for there? Is there a reasonable approach there? Anyway, I can maybe open a separate issue on that one too. Okay.
A: So, in general, yes, let's have some rules. I found that in this community it works great if we document some of these things as rules. So happy to debate there, but definitely happy to have a rule like "no panics in Start/Stop", stuff like that, and then we can debate whether we allow other places to panic, or never.
K: Yeah, I had a bit of a weird thing that I built over the winter break, which was a crash-report extension. It works as an extension of the Collector, has a recover call in it, and if there are any panics it recovers, sends that information to some backend of your choosing, and then eventually exits with the right code. But the feedback I've got from folks so far has been: we'd forget that you failed — like, if everything else is —
K: Yeah, I'm not married to any of this — I'm just playing around, you know. So, great.
H: Yep — hey everyone, my first SIG meeting; I work as an engineer at Sumo. Thank you. So, I created a new issue for adding a syslog forwarder/exporter. We seem to have a syslog receiver already in the contrib repository.
A: Yeah. So, because this is not vendor-specific, based on our rules we have to have a volunteer who wants to do this. We have a bunch of people as approvers, and I think a couple of them are from Sumo, if I'm not mistaken, so you may want to engage with some of them to convince them to sponsor you. In general, yeah — I don't know what more to say, but I'm not good at logging, so I'm definitely not going to be able to help you.
H: Okay, yeah, I can ask other folks in the company as well, but I just wanted to bring this to the SIG meeting as well. Yeah.
H: So, there is a customer that wants this, and we are trying to build a vendor-agnostic solution for it that they can use from upstream. That's the idea, yeah.
A: So, guys, please, if you have interest in this — I'm looking at you, Dan: I don't know if you have interest in this, but I know you have expertise in it, so if you want to help, that would be good.
A: Yeah — again, no pressure; it doesn't mean you have to do it. It's just that I know you have the expertise.
K: Hey everybody. We have this telemetrygen thing that, you know, allows you to send traces, and I have added support for metrics — for reasons related to your own usage of the Collector. I'm getting some really good feedback from Pablo on the best way to go about this, and frankly I need to be schooled in the best way to do it. I'm sure that on this meeting we're going to get a lot of folks with very good opinions about the best way to do metric generation, and a plan —
K: — that could, you know, make sure that we get some meaningful metrics. So, yeah. Also, I've had some breakage because I upgraded the Go SDK and all of a sudden things were a little different. Pablo is on the call — awesome. So yeah, I promised I would come here and mention this. I'm, yeah, very incompetent about some of this stuff, if that doesn't show — please don't, yeah.
C: I think there are really lots of people on this call who know more than you and me about metrics. Maybe I can, like, pitch it — what my comment was about is the thread on another —
C: So yeah, I think the default behavior of the metrics subcommand should be as simple as possible, and one of the things that I suggested is that we should produce a gauge by default. Initially — I mean, we should add support for all metric types eventually, but this is about the default behavior when passing no options. So there was some discussion here on how to best do this, given that gauges are only async and we want fine-grained control over the data points we produce. I think what we want to know here is whether people agree on gauges being the thing that we should produce by default and, if so, what's the best way to do it, leveraging the OpenTelemetry SDK.
B: Just a quick question: are you using the SDK? You're not just producing the data directly, using something like pdata?
K: Yeah, I copied what's in the traces subcommand, so maybe that was the worst choice. Let me know — I can definitely just use pdata instead; we're sending over OTLP. I didn't know what to think. If there's a better approach, we should take it.
K: Yeah — but maybe I can give you a little bit of the use case here. We're just trying to saturate a backend service with as many metrics as possible. We're not trying to be nice; we're not trying to make sense of the metric itself. We're just sending data.
A: Then fake data points — a fake gauge. I agree with Pablo: just start with a fake gauge and put in a random value, or whatever you want. Awesome.
A: The counter is nice for the property that you can do the plus-one, and by that you measure the number of times — the number of metrics — so you kind of have the throughput as well, measured with the counter.
K: Yeah, I mean, the thing that hit me when I was trying to do this is that there's aggregation on gauge values by default, to some level. If you use the async gauge, then the exporter only runs on its own schedule, and the problem is that for this particular command all I want is to blurt out as many data points as possible per second, at the rate that is passed in; when I use the Go SDK, I don't get as much control over that.
A: Okay, okay — ping me on Slack if you want; I have a decent knowledge of this, so I can help.
I: Tyler, yeah — I added one last late item because he brought up conditions in OTTL. Back in November we added OTTL filtering to the filter processor, and we wrote up an issue around what needs to happen — like, could we replace all the other conditional logic in the filter processor config with OTTL? The outcome was yes, but we had to do some PRs to fill some gaps in the capabilities.
I: The PR adding length is the last gap to fill — if it's a gap we want to fill. So if we were to merge that PR, we could start deprecating the old config options in the filter processor. If we think that this type of capability shouldn't be allowed, then we can still deprecate the old stuff, but we'd have to do it as a breaking change.
A: Oh, okay — length, okay.
I: Yeah, the big thing that this PR allows — or, I guess, what the filter processor allowed you to do — was to specify either the presence of a data point on a metric, or a data point on a metric that had a specific value, and if that was true for any data point on the metric, drop the whole metric.
A: I think I like it — I like the functions. One quick comment in general, and probably you've figured out that I'm not a fan of changelog entries: can you put the description that you have in the note in the changelog entry into the PR description as well? I think after I read that, I immediately understood what you were doing, but from the PR description it was not clear.
I: Once that gets merged in, and if we like it, I'll move forward with the deprecation of the other config options in the filter processor, which will then also start the ball rolling for deprecations — potentially of the internal filter, and moving the internal filter to OTTL.
A: Let's — we can discuss, okay, but I like it on my side. By the way, a quick update: I was focusing on building a bunch of milestones — I don't know if you saw that — creating milestones with a bunch of things that we need to do to achieve API stability for different modules. I created something for consumer, I created something for — what else — featuregate, and there is one for component, but I think that one, Alex, has to wait for all the others to finish, because it has a dependency on all the others. My next target will be to create similar things for confmap, which is still used by component and such. Then, after that, I can focus on component and Alex can do his work there.