From YouTube: 2023-03-01 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
F: In this case it was just the two of us, so we went through the feature-gate milestone. Yes, it could be that we're going to look into what Kubernetes does for feature gates and how they deprecate them, and see if we can do the same. And there are a couple of PRs that, if anyone wants to give feedback on them, they are on the agenda.
C: So we have a PR open from one of my colleagues, Danny, who's looking to see how we can do a bit better, and there was a discussion.
C: Okay, so I'm looking through this. It was Dan, actually, who brought up that we should have a feature gate for this, but it's a resource attribute, and we don't do feature gates on resource attributes yet. We think we do that on metrics.
J: Yeah, I don't believe we do. In the past, that document about the definition of how we handle breaking changes on the receivers only covers metric names and metric attributes; those are considered breaking changes, but not the resource attributes. Given that, I'm not sure. We can discuss it here, or maybe in a separate issue.
J: Yeah, and again, most of the backends merge resource attributes with metric attributes. We can do the same and say that any change to the resource attributes in the scrapers is breaking. And it doesn't mean that we cannot do anything: we have pretty clear guidelines for how we change metric names and how we change metric attributes within the scrapers, and we wouldn't even need feature gates for resource attributes.
D: Okay, well, basically, we were reviewing the scaling documentation and we noticed that components have states, but that's not something that's really well covered in the current docs. So I asked Juan what was going on with that, and he said that I should bring this to the forum. I see that a lot of people replied already.
D: Sorry, yeah. My big question, though, is: is a component always stateless or stateful, or is this something that can change depending on other factors? We don't know.
K: The other one is, yeah, that's the tail sampling, but the other one, cumulative-to-delta, is also stateful. The SignalFx exporter is a stateful exporter, FYI.
K: Span metrics is not necessarily stateful; somebody mentioned that. I mean, it has a state, don't get me wrong, Juraci, but the state is not... you don't require that all the spans are getting to the same metric. Or maybe I'm wrong. But if you are building multiple deltas of these, you can send them to the backend independently, in that case, Juraci. Unless you build a cumulative, in which case it becomes stateful in a way that you need to route everything to that component.
K: So I also think that there is a big difference: is batch processing considered stateful because we keep everything for 10 milliseconds, or one second, to batch?
K: By that definition, yes. But I think what we care about in terms of scalability is whether we need special routing to this component to make it work. So tail-based sampling, for example, needs all the spans with the same trace ID to hit the same component. True, so that's exactly what the problem for scalability is: not the fact that I have a small state from keeping everything for the last 10 milliseconds.
K: That's not a problem for scalability. So I don't know if statefulness is what we're looking for here, Anna, or the capability to scale linearly, independently, or whatever the term is.
D: As the writer, my focus is always the user, and my concern is what they are going to need. My guess is that this is not super important, since it doesn't appear very often in the docs. So probably the statelessness or statefulness, or whatever it is, doesn't need to be taken into account, because otherwise we would mention it more often. But it does appear in scaling.
D: So at some point it has an impact, so the user needs to be warned about it. Right now it's in the warnings, but my question here was: should we bring this up and make it more noticeable? Because it was not until I checked the scaling documents that I noticed this.
D: So is this something that the user needs to know and needs to be concerned about? Is it okay if we leave it like this, and only when scaling do people need to pay attention to this factor?
D: Is this something we should add in each component, as a piece of information that the user needs to know when using that component? Those are my questions. I'm just thinking: if you were the user, if you were going to use this, is this something you would like to know from the beginning? Or is this something that's not that impactful, and it's okay if I don't take it into account when deciding my architecture?
K: Yeah, I mean, I think it's important to know, if you are looking to scale your things, which components can be linearly scaled and what the requirements are to scale different components, because even those with state can be scaled if you do the right thing. So it's just a matter of how hard or how easy it is to scale some of these components.
K: So far you are the first one who complains about the documentation. There are a bunch of people who want us to make it easier to scale different components, but that's a technical requirement; they really want us to work more on the component so it's easier to scale. But I haven't heard so far somebody complaining, "hey, I did not know that this component cannot be scaled."
L: So I guess one point that I would like to make is that it's not a question of when to scale, because I see the Collector as always having at least three instances for high availability.
L: So we are not talking about an "if" scenario; it is most likely that people need more than one instance at all times anyway, right? So we're talking about a very realistic production scenario, and there are components, like tail sampling, that need to be handled when part of a deployment like this. Now, the tail sampling processor and the load-balancing exporter do have decent documentation, I suppose, as part of their readmes, but I think what was being proposed there...
L: ...what originated this conversation here is something that could be adopted for those components, right? So an entry on the readme file saying "this is a stateful component; it needs special attention when part of such a scenario, when scaling up," and so on and so forth. So I think it is useful to have that information. Now, what I...
L: ...don't think we need is for all of the components to have an entry in their readme files stating that they are stateless. So on Juan's proposal, I think Tyler actually proposed having the default be that everything is stateless, and whatever is an exception needs to make that clear in the readme file, and then...
L: ...we can have something to parse the readme files, or whatever, to make a list somewhere, you know, of what warnings they provide and so on. About the span metrics processor specifically: we had a thread on this some days ago, and I think the documentation for the span metrics processor does mention that specific metrics have to go to the same instance, because of aggregation issues. I think that was the comment from the...
L: And a user was asking, you know, "can you give me an example of such a problem? What should I be aware of? What kind of scenario should I think about when scaling and using the span metrics processor?" So I think it is a very valid scenario, and one of the things that we need to document.
L: Also explicitly telling users what needs to be done when scaling up.
M: I agree with Juraci. We had this discussion back in July, and we said we would use the transform processor and the cumulative-to-delta processor as a test for how this warning/caveat section works, and that's been out there for a while.
M: We've had that warning section; the standard warnings are in core, not contrib, so they can be linked from any component, no matter where it's living. And so I think it sounds like this is the time that code owners should go through components and determine, yes, this is a stateful component that should be listed in the warning section on the readme, and whether there are any other, you know, dangerous aspects of the component. The transform processor, for example, can have identity conflicts with spans and metrics, or you can drop spans and orphan things, or whatever.
M: And we didn't create a big issue at the time, because we wanted to see if it was going to be a useful thing, but we could maybe make an issue like we had for stability. I know we just closed that one out; it was really exciting, and it had like a hundred PRs linked to it about adding the stability of each component. But maybe we should do something similar for warnings now, for the components that need it, to track all that work.
C: I'm in favor. I even went a bit further and said in that issue that maybe, if you want to be a beta component, then you need to have these warnings at least thought through: at least do a first pass, at least have it in your readme. I think that would be a good gate for beta, yeah.
M: I think that's fair.
K: I think it's fair to actually extend the stability. So right now we have the stability thing; maybe we have something more like a status, or whatever we call it, that includes stability, may include things about scalability, and may include other things in the future. I think it's reasonable. Maturity, yeah, something like that; but maturity is still related to the code, and there may be components, Anthony, that will always have trouble scaling, because by their nature you need to route everything to the same instance and stuff like that.
K: You probably have a lot of experience with that, but I can easily see a way to do some kind of routing by trace ID using Kafka, and then on the other side there's still the sampling. It's not a trivial, but an easy, solution, in other words; but I think we should document something like that.
C: I'll just say one thing quickly: I think Anna can help. If you give her the materials she can make it really shine, so it's also a way we can collaborate on that, and she's definitely looking into that. Okay, Juraci, go.
L: Yeah, so I was going to mention a link that Tyler also shared, which is probably the source for Anna's question. Actually, I think Juan mentioned the scaling docs, and I assume it is this page here. This page, I think, specifically mentions the tail sampling processor and shows an example of how to scale that with the load-balancing exporter; at least that's my recollection. But we should certainly expand the doc here to show other cases, like scaling...
L: ...other scenarios. But with this one here in mind, what I had in the back of my mind was: what should a user know in order to accomplish a specific scenario? So it's not really about a specific component, but more about achieving something; in this case, how to get the load-balancing exporter together with the tail sampling processor to achieve a scalable tail sampling solution. And I'd like to keep the user-facing docs use-case based.
L: But perhaps we do need better per-component documentation, and as much as I appreciate the help that Anna can provide us, I think the best people to write those docs are the component owners, at least for the first version of those.
D: Yeah, definitely, but I'm here to help and edit once the content and the concepts are there; just count on me if you need me.
M: How does it sound if I make a big issue to track adding the warning section? I think that's the right place to start, Bogdan. It sounded like maybe you had some bigger ideas: is there something that we need to flesh out more, or is the warning section that we have on the transform processor and the cumulative-to-delta a good starting spot?
K: ...a status that includes stability, and where we include other information; I think one of them may be statefulness, or whatever the information is, at least to print it initially in all components.
K: Just because there may be components that we use from third parties, or whatever, that may not have good documentation; what we usually trust is the code. So I believe it's good to extend that stability to return more than just the stability, and then start from there. I don't know what the terms that we return would be, but it's what we discussed as well.
J: I can go next if you're done with this. Yeah, so I just want to give you an update about the work around the pdata mutation issue, and to reiterate for those who missed the previous meeting, I can share some slides.
J: Yeah, this is the current state: in order to allow any mutation of the pdata by any kind of component, the code owners of the component need to mark it as either mutable or immutable, and it works fine until some component marks it wrong: it says that it doesn't mutate the data, but it does. We had a few of them already, and, given that the collector will grow more and more, more third-party components will be added.
J: We need to handle this in a more restrictive way. So yes, and if you run into this thing here, you'll get this kind of stack trace that is really hard to read, hard to debug, and hard to understand.
J: So, before declaring pdata as 1.0, we need to handle that issue. The first approach, the one we stuck with, is having a state for pdata: its state can be either shared or exclusive. For example, in the first solution we mark it shared for any second or third, etc., fan-out component, and then, if users need to change anything, they check the state and do the copy manually. It can still give us panics...
J: ...but at least they are pretty clear about what's going on: you need to check the status before making any mutation. And the first solution for compile time that I discussed last time was to introduce a new interface; we went through several iterations, and I posted the link in the...
J: ...doc notes, and the last one already has a working state of how it can be applied.
J: It needs to go through several iterations, but it's ready and works pretty well. But with this compile-time solution we have this bit-clunky interface when we need to make a decision about mutation somewhere deep in the logic. For example, we find some span that we need to change only by iterating over all of them, and once we realize that these spans need to be mutated, we need to access the mutable interface through the traces, through the upper object of the trace, and it gives us a new type.
J: So we cannot reuse the old type; the interface is a bit clunky. It's working, but for those scenarios where we don't know in advance whether we want to mutate or not, it's not that simple. So the latest thing, which I tried out, thanks to Bogdan for suggesting the idea, is copy-on-write. We changed the internals of pdata to make sure every object has a link to its parent...
J: ...and given that we have a link to its parent, we can always copy the whole object on any access from immutable data to the existing mutable methods. And yeah, I tried this out; it works pretty well, and you can take a look at the pull request, it's there. The data is working fine; the only errors here are about equality, because we changed the internals of pdata, so the regular asserts fail.
J: I ran through some benchmarks, and for now it doesn't look good, but I have some ideas for how to improve it and will work on them after that. Once I get an idea of the minimum impact we can get with this approach, we can evaluate whether we'll go with this one or stay with the mutable/immutable interface, yeah.
J: And, folks, yeah, any feedback: take a look at the pull request if you have a chance, etc. That's it. Any questions?
J: I'll provide all the results once I get them. For now, it adds a few allocations for all the operations, but I'm trying to reduce them; I'm looking at different approaches for how to handle this, etc. Once I get the minimal-overhead results, I'll share them. Cool, thank you.
N: Yeah, I think that's me. So there's this issue that was opened a while back by somebody in the community asking for an implementation of the queue sender, like the queued retry sender, where there could be two tiers: the first one is an in-memory one, and the second one can be persistent.
N: This way, you wouldn't have to have most of your data written to disk before exporting; the disk would just be used as a backup to write to when the collector went down, or if there were too many messages, or too many requests, and things like that.
N: So I took on the issue and made a PR, and Alex suggested bringing it up so I can get feedback.
N: The approach that I took for the PR was just to have two of the existing queued retry senders and have them managed by an in-between tiered sender. If the primary one overflows, then the overflowed, oldest requests would be sent to the other one and handled there; and then, if the collector gets shut down, all the requests from the in-memory one would get flushed out and sent to the backlog one as well.
N: The idea was that you want to prioritize the newer data over the older data, because if we are blocked on old data and send it to the backlog, then it's possible that those will never get sent if they're running at a slower pace.
N: And, for an existing retry setup, if it fails it can re-queue, and it's possible that you'll just be stuck with a bunch of older requests.
N: So I think the main driving force for the issue that was created was to improve the performance when you want to have some persistence, because if you enable persistence, it will just write to disk each time before sending and then delete it, which is slower than just having it in memory.
N: I want to use the existing persistent storage and the existing queued retry, I mean, the existing in-memory queue as well; I'm just trying to have an in-between to coordinate and manage both of them.
N: So, by default, if you don't enable the backlog, it'll just be the same behavior as currently exists, where you can enable either the in-memory or the persistent queue; that configuration is the same. If you have this other configuration, this backlog configuration, which is the same as the original queue configuration, then it'll try to, you know, manage both of them.
J: Okay, that makes sense, makes sense, thanks. Yeah, we'll probably take a look at the PR pretty soon.
O: So, for everyone else's benefit: we're looking at how to implement routing connectors. The idea here, and this is kind of related to what Dimitri showed, is basically that if you have a connector like the routing processor, if you wanted to implement that as a connector, you want to pick and choose which pipelines you're going to send the data to, and in order to do that safely...
O: ...with the current implementation of mutable data, we need to use a fan-out node, and basically we're trying to figure out how to make that easy and correct: how to pass that into the constructor of the connectors, so that the connector has access to it and can send data to the pipelines that it wants. If it wants to send data to multiple pipelines, then it doesn't have to worry too much about mutability.
O: I guess at this point, do you feel like you and I are on the same page about what's intended with the constructor and passing in that object?
O: It's not exactly how I'm looking at it. I'm thinking of it like this: this is the signature for the constructor of the routing connector, for example, right? I expect that the next component is a fan-out; that's inherent in the constructor there, because I'm building a routing connector. It's not an assumption; it's something I have to validate, and I can error out if that's not the case, but the expectation is that it will be there, correct?
K: Okay, that's great. You are moving that problem to the top level: by doing what you suggested, you are moving the problem of checking whether I am in the right place or not to the next component, to the top component, which in our case is the graph construction, which will have to check whether this interface implements this constructor or that constructor and, if it's not configured well, it will error.
O: Yeah, I mean, I don't see how that's different than, for example, trying to use a connector in a way that it's not usable for. Like, if I use the count connector and have it outputting to a logs pipeline, that's going to be checked when the service is building the graph, right?
K: Yeah, but I think we got into a very deep conversation. So what is our problem right now? Do you want... I can present for everyone. So the latest argument we got to is this one, correct? Like you proposed...
K: You proposed an interface which embeds the traces consumer and has this extra method, and my suggestion was: I don't think we need to embed the consume-traces. We can propose only this as an optional interface, which a consumer may implement to offer this capability, which is common in Go. And the usage will become what you are seeing: like, "hey, I want this one to be converted to this," and my suggestion is, okay, if you fail to convert to this, it means that you have only one connector.
K
Why
would
I
need
to
wrap
it
into
this
thing
if,
if
I'm
not
having
multiple
pipelines,
so
my
idea
was
like,
if
I'm,
not
if
I'm,
not
filling
out
I'm,
just
gonna
give
you
the
next
Consumer
and
you
it's
on
you
to
figure
out
that
hey.
If,
if
I'm
not
filling
out,
it
means
that
you
have
only
one
consumer,
and
this
is
it.
O: As a regular one, we create the type assertion and it fails. So basically the idea there, then, is that we're just saying that a routing connector in general, no matter what implementation it is, may not be used in a way where it's used with only one export pipeline, basically, right? Like, it has to have two or more.
K
Mean
it's
it's
the
implementation
of
the
routing,
what
it
does
like
it.
It
may
fail
here
so
here
on
this
line
that
I'm
presenting
and
I'm
hovering
here
like
on
this
line,
you
may
fail,
you
may
return
an
error
because
you
can
return
an
error
and
say:
hey
I'm,
expecting
at
least
two
or
I'm
going
and
happily
sent
to
just
this
pipeline.
It's
it's
implementation,
detail
for,
for
from
the
service
perspective,
I'm,
giving
you
all
the
information
the
next.
This
is
the
next
Consumer.
K: The reason why I don't want to do this wrapping always is because it's extra wrapping for no reason, but we can argue. And also, in my opinion, it is more of a Go thing to have an optional interface, where, yeah, this interface may be implemented by this type of component, and this is the extra functionality that it brings; it doesn't necessarily have to include the other one. We can...
O
This
part's,
that
makes
sense
to
me
I'm
on
the
same
page
with
you.
There
I
understand
that
one,
the
the
part
that
to
me
is
more
important.
Design
implication.
Is
the
fan
out
part
right
like
how
do
we
prevent
mutability
problems
and
and
I'll
I'll
say
it
right
up
front
I
mean
this.
If
we
Implement
what
Dimitri
just
presented,
then
this
is
not
even
a
problem
to
worry
about,
but
until
then.
K
Even
there
it's
a
problem,
because
every
time
when
you
put
when
every
time
when
you
put
it
on
a
new
pipeline,
you
need
to
mark
it
as
shared
or
something
you
need
to
call
an
extra
method.
So
it's
still
a
problem
there,
yeah
in
all
the
solutions
that
we
are
working
on
with
Dimitri
on
that,
but
I'm
all
about
exposing
fan
out.
Don't
don't
get
me
wrong.
I
I
mentioned
there
I'm
all
about
exposing
phenol.
K: Now, between the interface that returns the map versus the other one that accepts IDs: the downside of the one that accepts IDs, as we discussed, is that you need to know the IDs in advance. So an alternative to that, maybe, is to actually have two methods: one that returns only the IDs, and one that gives you this constructor thing that you have. So that's actually...
O: I commented that a few minutes ago; you probably... oh.
O: Same page there. So I prefer that, because then we don't have to export fan-out, which I think is preferable: the connector doesn't even have to worry about fan-out at all. All it has to think about is which pipelines it wants to send to, and, if fan-out is necessary, it's taken care of; the fan-out package is not exported. Perfect.
O: It makes sense. Yeah, that's a good point that still needs to be solved in the other one, but I think we're on the same page there; we'll just add that to the interface. So I'm going to update the other PR then, 7179. I think that's essentially where we're at. Okay, cool, thanks for talking that through.
K: Perfect. Sorry, everyone else, if we were not clear; we had a lot of history on this.
I: Yeah, thank you. Yes, I was just bringing up an issue I opened a couple of days ago, specifically about the process CPU utilization metric. I could be misunderstanding it, but when I started collecting with it I got very unexpected values, like above 2000 and below negative 1000.
I
If
the
issue
is
what
I
think
it
is,
I
also
made
it
PR
to
quickly
fix
that.
Like
I
said
I'll
just
check
me
if
I,
if
I'm
missing
something
or
if
so,
they're
able
to
review
that
fix.
If
it
is
appropriate,
yeah.
K: Numbers like that don't seem right, so I don't know where the bug is, but I would expect the numbers to be between zero and one. Okay, yeah.
I: Yeah, I actually have the PR open now; it's linked to the issue. Yeah, yeah, cool.
B
That's
from
Nicolas
Takashi,
I'm,
gonna
think
he's
with
us
and
I
actually
have
a
comment
on
that
on
that
issue.
That
might
be
controversial,
maybe
worth
discussing
I
can
I
can
share
that.
If
you
want.
B
So
I
think
the
proposal
is
to
add
an
alert
manager,
receiver
and
exporter.
So
my
problem
with
this
is
that
alerts,
and
neither
logs
nor
metrics
in
our
traces
right.
So
my
comment
was
basically
like
this
sounds
like
shoehorning
and
arbitrary
data
type
into
the
collector
and
of
course
it
can
be
used
for
that.
The
question
is
whether
it
should
be
used
for
that
and.
O: Yeah, I would challenge the assumption there that alerts are some completely arbitrary data type that we're just shoving into logs. I think you have a point about the exporter; I'll come back to that. But as far as the receiver goes, I think alerts are very, very similar to events, right, and to logs: it's basically just "something happened and you're describing it." I think a case can be made that you can model an alert as a log in a reasonable way.
O: Like, you know, there's probably some little difference, but it's effectively the same thing. But I think you have a point about the exporter, because, in order to export that, you're making a lot of assumptions about the exact contents of that log, right: the structure of the attributes, whatever. And maybe that's okay, but I do see where there's a problem there, because you can't just send any logs to this exporter and expect it to work; it's making a ton of assumptions. So, anyways.
J: Do we have any process for defining that kind of arbitrary data in logs? Because, I mean, even in the spec, right: even if we accept it, and we kind of agree on some protocol or schema within logs for this type of data, there can be a different opinion from the spec side, etc. I'm curious: do we have anything in the specification for such scenarios?
H: The event spec doesn't really say anything about how to process it, though. Basically, what we're looking to specify for events is that they will carry information that describes how a consumer can find a schema to use to process them, such as a domain and a name, or a URI, or something like that. But we haven't talked at all about how to define events for particular use cases, yep.
O
In
after
that,
again,
I
don't
see
how
this
is
very
different
than
just
regular
logs
right
like
like,
if
you
Injustice
this
log,
it
has
a
certain
structure.
You
might
put
the
attributes
in
certain
places
and
it's
just
a
schema
that
represents
syslog
and
it
should
be
codified
in
semantic
conventions,
and
there
should
be
some
way
to
know
that
this
is
a
syslog
same
thing
with
events
same
thing,
with
with
any
type
of
alerts
that
we
want,
like
I,
don't
see
how
that
there's
a
whole
lot
of
difference.
There
really
I'm
missing.
H
That
but
so
I
think
maybe
the
question
is:
what
do
we
want
to
be
supporting
and
building
in
collector
and
collector
control?
Like
I've
said
many
times
before,
a
collector
is
basically
a
framework
for
building
Telemetry
processing
applications.
We
can
build
anything
we
want
out
of
it.
Is
this
something
that's
in
the
scope
of
what
we
think
that
belongs
in
The
Collector
contrib
repo?
Or
is
it
something
that
we
can
say
here's
a
framework?
If
you
want
to
go
build
this,
but
it's
not
part
of
our
mission,
it's
not
part
of
our
vision.
O
For
what
it's
with
my
opinion
is
that
that
just
boils
down
to
whether
someone
wants
to
sponsor
the
component
I,
don't
see
that
as
categorically
something
that
we
should
not
do
any
more
than
you
know.
Should
we
have
a
syslog
receiver,
because
it's
you
know
kind
of
a
specific
structure
that
we
you
know
have
to
then
support.
J: We should at least utilize the event, right? We define the event domain, the event name, etc., and then, if we can fit it into the log as seamlessly as possible, for example, if it's JSON we put it in as-is, I think that would work. But otherwise, if we need to come up with some other schema, then in that case I'm worried that we should do it a bit differently.
C: How about a middle ground here, where we would actually have him work first at the specification level, to do a semantic convention to create a mapping from his other type to logs, and then, once that is actually adopted, he can create his component based on that convention? He's going to have to go do the work of doing the mapping anyway, so he might as well go to the specification work first.
J: Right, if this translation is required. If it's just an event with an existing predefined schema, that schema is defined somewhere else; that's the idea of the event, as far as I remember. So the schema definition belongs to some other entity; what's being represented in that event is their responsibility.
J
For example, if it's Alertmanager, we need to define this as an Alertmanager alert, probably with the version of the Alertmanager schema, at least so that if they change schemas it can somehow be matched, and then we just put the payload into the log body. But yeah, it needs to be aligned with what is going on in the events space of the specification.
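The mapping described here, an event tagged with its domain and name while the raw payload is kept in the log body, could look roughly like the following Go sketch. The `event.domain` and `event.name` attribute keys follow the log event semantic conventions; the `logRecord` type and the rest of the code are illustrative stand-ins, not the actual collector pdata API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// logRecord is an illustrative stand-in for an OTLP log record:
// attributes identify the event, and the body carries the raw payload.
type logRecord struct {
	Attributes map[string]string
	Body       json.RawMessage
}

// alertToLogRecord sketches the mapping discussed above: tag the record
// with event identity attributes and keep the Alertmanager payload as-is.
func alertToLogRecord(payload []byte) (logRecord, error) {
	// Validate that the payload is well-formed JSON before passing it through.
	var probe map[string]any
	if err := json.Unmarshal(payload, &probe); err != nil {
		return logRecord{}, err
	}
	return logRecord{
		Attributes: map[string]string{
			"event.domain": "alertmanager",
			"event.name":   "alert",
		},
		Body: json.RawMessage(payload), // payload goes into the log body as-is
	}, nil
}

func main() {
	payload := []byte(`{"status":"firing","labels":{"alertname":"HighLatency"}}`)
	rec, err := alertToLogRecord(payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(rec.Attributes["event.domain"], rec.Attributes["event.name"])
	fmt.Println(string(rec.Body))
}
```

The point of the sketch is that no new schema is invented: the record only carries identity attributes plus the upstream payload verbatim, which matches the "put it in as-is" suggestion above.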
G
Yeah, I can try to be quick. There is this OCB tool that we have, and there was some initial work, I don't know in which repository, probably opentelemetry-collector, to containerize it, and I would like to pick this up. The question is: where should it live? Juraci pointed out that we should move it over to the releases repository, but.
K
G
Yeah, that's what I meant, and I just wanted to verify, to clarify. I discussed it with him whether this is the way to go or not.
L
Yeah, I think we talked a few weeks ago about it being confusing for users to have some things released as part of core and some things as part of the releases repository already.
L
But you know, I think he's the one who brought up that we should actually be providing things only in the releases repository, so everything in there... It wasn't my idea. I don't know; I guess I'm neutral here, I don't have strong opinions. The way I see it, OCB is perhaps for a different audience than the collector itself, so perhaps core would be suitable, but I think everything could also be inside releases. I also see the point here, so I don't know.
M
Would you move something like telemetrygen out of contrib and into releases? Because I don't see that one living in releases. I feel like images come from releases: releases spits out images, and that's what it spits out. But also, if we spit out an image in one spot and the binary in another spot, I recognize that that's confusing.
L
M
L
Well, the technical reason for that is: we use OCB to build the core distribution and the contrib distribution, so the actual compilation of the distributions happens during the releases, within the releases repository. Now, the compilation of OCB itself, which we use GoReleaser for, happens on core. We could move it to releases as well, but then we would have to check out the code for OCB.
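For context on what a distribution build consumes, an OCB run is driven by a builder manifest along these lines. The module paths correspond to real collector components, but the versions and the `dist` values below are illustrative examples, not the actual manifests used by the releases repository.

```yaml
# Illustrative builder (OCB) manifest; names and versions are examples only.
dist:
  name: otelcol-custom
  description: Example custom collector distribution
  output_path: ./otelcol-custom
receivers:
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.73.0
processors:
  - gomod: go.opentelemetry.io/collector/processor/batchprocessor v0.73.0
exporters:
  - gomod: go.opentelemetry.io/collector/exporter/loggingexporter v0.73.0
```

OCB generates the distribution's Go sources from a manifest like this and then compiles them, which is why the release process needs both the manifest and access to the referenced modules.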
L
Now, I don't know if you all know what OCB does, but what it does is it actually generates the code for the distribution, and it makes references to the Go modules that exist in core and contrib. So in the background it generates Go code that is then compiled and released as part of this process.
F
M
If we had some sort of release process in the releases repository that spit out... well, I guess it's the same problem whether we use a binary or an image; it doesn't solve it. We would still have to have something that happens in releases before we could release core. We would have to do an independent release of OCB, essentially; that would be its own step, and then we'd have to release core and contrib with that.
A
L
I guess one way of solving this, very naively, is getting a simple markdown page on opentelemetry.io and having that be our releases page. That page would list everything that we have for specific versions, and then we can automate it: when doing a release using the releases repository, we can create a PR automatically against opentelemetry.io with the new release page, someone eventually approves and merges it, and users have one page to download everything.
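A sketch of what such a generated release page might contain. The layout, artifact names, and source repositories listed here are hypothetical placeholders, not an agreed-upon design:

```markdown
# Collector releases: v0.78.0

<!-- Hypothetical layout for an auto-generated per-version release page -->
| Artifact        | Source repository                       | Download location     |
|-----------------|-----------------------------------------|-----------------------|
| otelcol         | opentelemetry-collector-releases        | GitHub release assets |
| otelcol-contrib | opentelemetry-collector-releases        | GitHub release assets |
| ocb             | opentelemetry-collector                 | GitHub release assets |
| telemetrygen    | opentelemetry-collector-contrib         | Container registry    |
```

The value of a single page like this is that users never need to know which repository produced which artifact; the page stays stable even if artifacts move between repos.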
L
M
If we leave the OCB binary and release in core, would you make core start releasing an image for OCB in that repository, or would you then do a different release in releases that publishes an image?
K
Start with that, Juraci. Let's start with having the image published, because whether you download it from Docker Hub or from any other place, it doesn't matter where you get it from. So let's start with having that, because people keep asking for it, and then we can debate which repo releases it, which I think is really not that important for things like this, because once we publish to Docker Hub, the source doesn't matter.
K
L
But there is, there is absolutely... we could change a release. The release that we've made for core, we can change it from the releases process. So the releases process can change the GitHub release that we've made for core and update it with the binaries that we've generated there. So it is certainly doable and scriptable.
L
If we go with, you know, the website being the source of truth, perhaps another path or another solution is to create links on the website. If people access opentelemetry.io/releases/collector/v0.78.0, they're seamlessly redirected to the binary wherever we have it, be it on the core repository, on the releases repository, or whatnot.
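If the site's hosting supports redirect rules, the idea could look something like this Netlify-style `_redirects` entry. The URL paths, the asset file name, and even whether the website can use such a file are all assumptions for illustration:

```
# Hypothetical redirect rule: stable download URL -> actual asset location
/releases/collector/v0.78.0/otelcol_linux_amd64  https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.78.0/otelcol_0.78.0_linux_amd64.tar.gz  302
```

A 302 keeps the stable URL authoritative while letting the actual binary move between repositories without breaking anyone's download scripts.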
K
That would be cool, because then we have this gate, this proxy, whatever thing that allows us to move the binaries wherever we want, and then we recommend everyone download from there, not from the source where we actually have them. And then we don't break anything if we move one tool from one place to another, or anything like that. And that's my goal: to not break people.
L
Next week, so I cannot follow up very quickly on that. So perhaps you could ask in the otel-comms channel whether it is possible to create such links from the website. I know we can create links for pages, but I don't know if we can do redirects from Hugo, from the platform we have for the website, to external sources like GitHub.
G
But only quickly, to summarize: the idea is that we have, on the opentelemetry.io page, a section with the different links to all the binaries, no matter where they are coming from. So maybe on the release page of the collector, or the release page for something else. And we want to have the image for OCB in the opentelemetry-collector repository directly, so we publish it there, and since it's linked, it doesn't matter where it currently is, yeah.
L
That's the rough idea. I guess, I guess there are three things, three separate things. The first one is getting an image published as soon as possible, and that can be done from the main repository.
L
The second thing is getting an opentelemetry.io page listing all of the artifacts for individual releases: have a 0.78.0 page, a 0.77.0 page, or whatever version we have right now, with all of the artifacts for that, including OCB and core and contrib and telemetrygen, and so on and so forth. And the third one, more longer term, is seeing if it is possible to have the binary links point to opentelemetry.io, so that when people download, they get redirected to the proper binary location, like GitHub releases for core, or GitHub releases for releases.