From YouTube: 2021-06-23 meeting
B: Oh, okay, man: well, we were wondering what you must be grabbing.
B: Okay, cool, cool: we were presenting a Helm chart in our Prometheus working group today, one that we are building, and we wanted to get your feedback, so we'll reach out to you. It's just a heads-up.
C: No, I'm back since Tuesday, so... oh.
B: Are you planning to present your updates on the doc? I think you don't have an agenda item yet.
D: I'll put it here to let people know: we added a few sections to the previous design doc. Please help to review and, you know, add comments. And hopefully, because right now, I think, for the new processor redesign discussion, we actually have two available options there. You know, we really want help from people, you know, to help the group decide.
B: And I'm waiting for either Tigran or Bogdan to join.
B: Okay, so just the first item again as we get started: please add any PRs you may be blocked on, that you're looking for reviews on, or any kind of, you know, merge, and I can take a look.
H: You're right on time; it's good. I apologize if I'm in the middle of eating, so I apologize for the background. Yeah, I just was curious: what's the current release process on contrib? Is it still every other Tuesday?
B
I
think
my
understanding
is
and-
and
I
think
we
have
bogdan
now-
what
the
release
process
is-
oh
yeah,
we
have
walked
in.
Finally,
so
this
is
good
logdon.
Is
it
once
a
month
at
this
point,
not
sure
if
he
can
respond.
H: Okay, so I guess the follow-up question is: is there a release today? Or, yeah, there should have been a release yesterday then, right? There was a release.
I: You can see there is already the PR to do the release, but it doesn't pass the... right, right... the tests, okay.
B: Okay, cool. Thanks, Eric, thanks for flagging that. I think there were a couple of topics that we had. One was the multi-config support, and again we decided last week that we would only do one design review. So this is the doc that we are looking at. Bogdan, is it okay to review this now?
I: We need to understand that this is kind of out of scope for the stability work. So, yes, it may be a bit of a delay in increments, but yeah.
B: Let's start discussing. Yeah, I mean, again, I think that that's understood, because, given that you are focused on trace stability, you know, requirements first. Rahan, do you want to share your screen and just walk through this?
J: So our core requirement for this feature, which I think makes it a little bit different from the Splunk feature, is that we can take in a large number of config files. Each config file should be able to have its own receivers, processors, exporters and pipelines. So you could have, you know, one config file having processors and receivers but no exporters, another config file having all four, and they should all be able to be combined.
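The combination rule being described here, where each partial file contributes whatever components it defines and the results are unioned, can be sketched roughly as follows. This is a hedged illustration in Python rather than collector code (the collector itself is written in Go), and the layout is simplified: in a real collector config the pipelines live under a `service:` key, which is omitted here.

```python
def combine_configs(*configs):
    """Union the top-level sections of several partial configs.

    Each partial config is a dict that may define any subset of the
    four section kinds; the merged result contains all of them.
    Simplified sketch: a real collector config nests pipelines under
    a "service" key, omitted here for brevity.
    """
    sections = ("receivers", "processors", "exporters", "pipelines")
    merged = {s: {} for s in sections}
    for cfg in configs:
        for s in sections:
            merged[s].update(cfg.get(s, {}))
    return merged

# One file defines a receiver and a processor but no exporters;
# the other defines an exporter and a pipeline referring across files.
a = {"receivers": {"otlp": {}}, "processors": {"batch": {}}}
b = {"exporters": {"logging": {}},
     "pipelines": {"traces": {"receivers": ["otlp"],
                              "processors": ["batch"],
                              "exporters": ["logging"]}}}
merged = combine_configs(a, b)
```

Note that the pipeline in the second file can reference `otlp` and `batch` even though they are defined in the first file, which matches the cross-file reference behavior demonstrated later in the walkthrough.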
J: I believe it was you who asked about the audience. Our main audience for this is, one, end users: we've seen some complaints where, as the configs are getting larger and larger, it's desirable to be able to split them up into multiple pieces. And then, secondly, as providers, as AWS, we often just give our customers a config file to use, but they may want to add their own, you know, pipelines on top of that.
J: Right now, they have to actually open up the config file, understand what we've done, and then add pipelines or processors into it. This would allow them to just import our config file into whatever they're doing already. So Splunk's solution is solving a slightly different problem, although I know Paolo said they might eventually be solving a similar problem. So if that's something they plan on doing, I would appreciate hearing what the exact solution they have is. But it allows you to include files as YAML fragments and insert them at various points into a config file.
J: Another quick thing to note about this is that it implements delete-files and watch-files, which is nice, and we might include that by running Splunk's script in front of ours, although watch-files is natively supported in koanf, so it might need to be moved there for parsing. We would like to simply use the Cut function for processors, which will give us all the processors in each file, then all the receivers, then all the exporters, and so forth.
J: So we don't have to do parsing by hand, and it should be fairly robust. And then, to recombine them, we would prefer to use koanf's write-to-config functionality with the MergeAt function, but we could also use Splunk's solution to insert each of the partial sections as YAML fragments, or concatenate them by hand. Other details to consider later: how can we make it possible for these to come from anywhere, like S3 and Vault?
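The cut-and-recombine flow just described could look roughly like this. Python stand-ins are used purely for illustration: `cut` and `merge_at` here are hypothetical simplifications of the `Cut` and `MergeAt` operations of the Go koanf library that the discussion appears to reference, not the real API.

```python
def cut(config, section):
    """Stand-in for koanf's Cut: return the sub-map under one key."""
    return config.get(section, {})

def merge_at(dest, src, path):
    """Stand-in for koanf's MergeAt: merge src under dest[path]."""
    dest.setdefault(path, {}).update(src)

# Two partial config files, already parsed into plain maps.
files = [
    {"processors": {"batch": {"timeout": "5s"}}},
    {"processors": {"memory_limiter": {}}, "receivers": {"otlp": {}}},
]

# Gather each section kind from every file, then recombine:
# all the processors from each file, then all the receivers, etc.
combined = {}
for section in ("receivers", "processors", "exporters"):
    for f in files:
        merge_at(combined, cut(f, section), section)
```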
J: I think, hopefully, koanf should handle this fairly cleanly. And then, if components with the same name are defined in different partial config files, we have a couple of options. We can always take the later ones, since customers will be submitting config files in some sort of order. My preference is to throw an error, but there are various options here. And then, just as an example of what we would like to do: here we see config one.
J
It
has
certain,
it
has
receivers,
exporters
and
pipelines,
and
then
here
we
see
config
two
and
its
own
receivers,
its
own
exporters.
But
we
see
that
in
the
in
this
section
it's
using
exporters
defined
up
here.
So
it's
able
to
refer
to
other
sections
and
then
we
combine
them
together.
J: We see pipelines from both sections. So, are there any questions I can address about the design, about how it's different from Splunk's functionality, or anything else?
K: Aditya, I have a question; this is Punya. This may be kind of built into the koanf handling; I just want to check: will error messages identify the file that contains the violation?
J: It's not built into koanf by default, but I would do it; it's not too hard to implement, because koanf implements configs underneath as basically hash maps, so you can check where the keys came from.
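The hash-map observation suggests a simple provenance scheme: while merging, record which file each key came from, so that a later error can name the offending file. A hedged sketch, with hypothetical names rather than actual collector code:

```python
def merge_with_provenance(files):
    """Merge per-file section maps, recording each key's source file.

    files: list of (filename, config-dict) pairs, in submission order.
    Returns (merged, origin), where origin maps (section, key) to the
    filename that defined it, for later error attribution.
    """
    merged, origin = {}, {}
    for name, cfg in files:
        for section, table in cfg.items():
            dest = merged.setdefault(section, {})
            for key, value in table.items():
                dest[key] = value
                origin[(section, key)] = name
    return merged, origin

merged, origin = merge_with_provenance([
    ("file1.yaml", {"processors": {"x": {}}}),
    ("file2.yaml", {"exporters": {"logging": {}}}),
])
# A later validation error about processors/x can now name file1.yaml.
```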
K: I meant more that, you know, if something fails deep inside one of the unmarshalers, will the user find out? Because, at that point, are people even aware that there is a koanf thing going on?
K: So this could be... I think now I'm running up against the limits of my understanding of how config parsing works. But I guess, as an end user, if some validation logic fails somewhere inside one of the other packages, will it be attributed correctly? I guess I would just like that to be a requirement.
J: Okay. So if there's an error right now in the merging stage, it's very possible for me to generate the error, and I'm currently planning on generating the error with the files those errors were in. So, for example, if a user defines processor X in file 1 and then file 3, I will say: you defined processor X in file 1 and file 3; that's not allowed.
J: It could be invalid in a different way, correct. And then, once an error is propagated up by OpenTelemetry's config validator, there could be problems there. So that's...
J: Yeah, this is a bit trickier. The config file validator throws errors in a set style; if the error is about a specific component, or something that showed up somewhere, it is possible to see where it came up with koanf.
J: The problem with this is, one, I don't think there are currently any errors other than this. But if we're just propagating errors up from the validator, it would require some hard-coding to parse the error to figure out where the component came from. I'm happy to add that; I'm not sure what the balance would be between the maintainability of the hard-coding versus the ability to parse these errors.
J: Ideally, at least, our goal is that, rather than changing the validator or the way we handle config files, we will essentially just be running an init script which will combine config files and then hand the result to the existing procedures, to minimize the amount of changes we need to make. The downside...
J: Sorry, the one downside of this, as I was mentioning before, is that if the downstream validator throws an error and we propagate it up, we essentially have to parse the error to understand what the validator was saying, because we don't have any more privileged access to the validator's internals than the customer would.
K: So I think this is why I was asking the question about the audience. If this is meant for providers, then I have no concerns with letting it proceed with this notion. Otherwise, as an end user, if I try to use this feature and I make a mistake, like a change that doesn't break the YAML merging but breaks the semantics of a particular component, then I will get back an error message referencing a file that, as far as I'm concerned, is an implementation detail. Yeah, right. So that's my caution here. I would love...
J: Yeah, I mean, I guess one question that I would ask, because I'm fairly new to this project, is: right now the config validator throws a fairly small number of, like, templates of errors. If that number remains small, and I think it's like six or seven right now, it's not super hard to just parse them and then intelligently attribute. But if that number grows, then hard-coding it and maintaining it would be quite painful. Do people have any sort of understanding of whether that number of errors is expected to grow significantly?
J: The best approach would be to try not to modify the existing config loader. That's something we could look into if we want, like, long-term error propagation, where we're directly modifying the existing config loader; in that case the script would essentially sit in the same file and would have access to some of these issues, and that way, rather than propagating an error, we would know where the error came from in the first place, at the moment we generated it.
J: I'm actually curious: does the existing Splunk solution manage this? I've read through the code a little bit; I'm curious if they had a solution that worked well for them.
I: So we have only one file, and the only thing that we have... we have these sources imported via the YAML templates and stuff. So what we can do is tell you which, so, whether it's from the main file or from a source. So it's good enough for people to know where that error comes from.
I: Because of the fact that we embedded that into the loader; not directly, but yeah.
J: So, to try to summarize: maybe the best approach at the moment is to build this outside of the loader, especially because I think the bigger use case for this is likely to be partially cloud providers, and then handle the error propagation with some somewhat dirty, hard-coded parsing of the errors generated by the config validator. And if that grows significantly in the future, it will be fairly easy to move the code into the config loader, to just sit at the top of the function, and then you would have...
I: What we would have to do in the loader, for the future, is to define our own error that contains maybe the section in the file, and a couple of other things that we can include while we load the file. So then, once we have that information, by reverse-engineering the merge we can tell the user which file it came from, correct.
J: Yeah. So, yeah: if we return the problematic key in question, then it will be very easy to add this outside of the loader.
A: Hey, I have a very odd question: why are we doing this?
J: So, the main two reasons. The reason to want to split config files in general is that, you know, as config files get larger and larger, it's nice to be able to partition them, for the same reason you can partition code files. And there's a reason to specifically insist on the requirement that each config file should be able to, like, define a bunch of components of various kinds in isolation.
J: One of the specific use cases we have in mind is that we, as a cloud provider, would give a customer a config file which has, you know, a lot of components and a lot of pipelines, and then they're able to add directly on top of that their own pipelines and such, without having to even look at our config file, because it might be, like, a thousand lines long and not something we want customers to be looking at.
J: We could merge arrays, but for the most part the repeated keys in question would be names of pipelines, names of processors or names of receivers. So none of those should be arrays.
J: It does seem like trying to intelligently merge components would get extremely messy, because, while some component values are arrays, there are also lots of other values that are booleans or ints or strings.
K: So my understanding, please correct me if I'm wrong, is that we're not talking about merging generic maps; we're describing a very specific merge operation that applies at the top two levels of the structure.
K
Yes,
we're
not
saying:
let's
take
two
generic
yaml
data
structures
and
try
to
smoosh
them
together.
We're
saying
this
is
specific
to
the
collector,
and
so
we
know
that
we
know
the
structure
of
the
collector
and
that's
all
we're
signing
up
to
do.
If
we
do
anything
else,
I
personally
would
be
quite
conservative,
as
you
said
about
it,
we're.
J: We're only merging, like, the list of processors, the list of receivers, the list of exporters and the list of pipelines, and we don't expect that structure to change significantly in the long term. So we feel fairly comfortable hard-coding that bit, unless we plan on adding a new kind of component, and even if we did, that should happen pretty rarely.
M: I mean, there is some benefit to merging the whole map, though. I mean, yeah, sometimes you may have, like, an exporter that the user wants to set another option on, right, without overriding the entire exporter that you've already configured inside your base.
M: Right. Like, let's say they want to turn on debugging; there's some debug flag or whatever on the exporter. It's kind of annoying to have to rewrite the entire thing just to set one additional flag. Merging the map seems relatively safe; it's just, I feel like the arrays are a little bit trickier. It feels like just merging the map seems okay.
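The overlay case being discussed, adding one flag to an already-configured exporter without restating the whole thing, corresponds to a recursive map merge where the overlay wins on scalar values. A hedged sketch follows; the `awsxray` settings and the `debug` flag are made-up illustrations, not a real exporter schema.

```python
def deep_merge(base, overlay):
    """Recursively merge overlay into a copy of base.

    Nested dicts are merged key by key; scalars and lists in the
    overlay simply replace the base value.
    """
    out = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

# Provider-supplied base config, then a user overlay that only
# flips one hypothetical flag on an existing exporter.
base = {"exporters": {"awsxray": {"region": "us-west-2",
                                  "endpoint": "localhost:2000"}}}
overlay = {"exporters": {"awsxray": {"debug": True}}}
merged = deep_merge(base, overlay)
# The existing region/endpoint settings are preserved; debug is added.
```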
K: I think so, Jay. I think there's totally value in doing that, but it depends on how important you consider error reporting, for example, to be. I think it's very important, because, as an end user, I want to know what went wrong, and once you start merging individual components, the attribution becomes really hard.
K
Right,
especially
as
a
like,
as
a
provider
as
someone
who's
trying
to
achieve
a
lot
of
flexibility,
that
merging
is
powerful
as
an
end
user.
Once
I
have
these
overlays,
which
part
is
responsible
for
the
error
right,
all
the
system
will
tell
me
is
there's
an
error
in
file
foo,
maybe
file
four
bar,
both
of
which
reference
top
level
component
c.
M: Well, I mean, if you know the key... yeah, I think, at least for me: if you have the key that failed, and you had, you know, three configs that applied, it seems like you would check at the end and see.
M: Like, what's the latest one that references that key? But, I mean, yeah, anyway, I don't want to get into the weeds. I guess we should also bring this up with Tigran, because I know Tigran has talked about this in the past, and I think his proposal for something similar was for remote config. Yeah, I'd say let's see what Tigran thinks about it as well, since I know he had some thoughts about it in the past.
C: Yeah, one thing we discussed in the past, during the review of the Splunk solution, was that it has to be very clear to the user where the problem is coming from, if the end file is going to be merged.
C: You know, what is the final configuration YAML file? Yeah, one question that I would have is: do you have any references for other solutions or other projects using a similar approach? Because I have a feeling that we are reinventing the wheel here. Of course, there are things like kustomize for Kubernetes YAML files and so on and so forth, and I wonder if we are at the right place here to fix this problem, at the collector's side, and whether it shouldn't instead be on the provisioning side.
J: I haven't spent a huge amount of time looking at other places doing multiple-config-file support, especially because our desired solution was to be fairly OTel-specific. I know that Fluentd has a similar structure, but I'm not especially familiar with it. I don't know if others are more so, whether they have a similar ability to merge config files in a similar sort of way.
C: Yeah. Because if it's only about, you know, adding or concatenating, like, snippets of config, then it's very much like, you know, a regular daemon process on Linux: it just loads all the files within a specific directory and uses that as one file. But if you're talking about combining YAML nodes with YAML nodes from other files, then what you are really talking about is something like kustomize, right: basically telling it where to override which value from which file. And that is really complex, and I'm not quite sure.
J: Yeah. I think, in terms of overwriting for duplicate keys, the only two solutions we were considering seriously were: one, to just throw an error to the customer and say, you've defined the same component in file one and file two, remove it from one of them; or to just take the later one and then issue a warning saying: you defined this component in file one and file two; we took the definition from file two.
C: Yeah, I think the most sane approach here is to throw an error, because, you know, if it is an automated deployment, then users are not going to read the log files, yeah.
O: I do want to mention that no one has mentioned the packaging case here: eventually, when the collector is packaged in a distro, the distro is probably going to have some default config file, and that is a great application for this multiple-config-files thing. So, yeah, I would encourage you to consider that there's not always a config management system involved here.
B: Yeah, I agree; there will be multiple config files, especially in distros.
A: Hey, in the spirit of again asking why we are taking this approach: it just occurred to me, have we tried the option of, like, giving a customer templates? Because you're saying, you know, the main use case is you want a customer to write their own definitions over what has already been provided, as a cloud provider.
A: Maybe you should present them, like, some form of template and then just have them fill that out, because multiple-configuration, like merging files, to me seems like overkill for that kind of problem. Do you catch what I'm saying?
A: The error messages might not be correct. If you provided, like, a template of sorts, it might be even easier to catch these kinds of problems. Whereas, I'd bet, if the only way to catch that kind of error is when you do a DNS resolution, at that point there are all these issues that come up when you're trying to do, like, multi-file merging; you obviously can't catch that problem. So that's... that's...
J: The only complication around... the reason why... oh, I'm sorry. The only complication with multiple files that I think could confuse users is that, if we name components in our files, then they can't use those component names.
H: I want to point out the audience...
B: The doc doesn't address that. But, rather, what were you saying?
O: When I said distro creators, I was simply using that as a proxy for traditionally packaged software, as opposed to something that Ansible or Chef has installed, not necessarily coming from the distro, right? I mean, I know Google has its own repos; I assume Amazon also has ways of distributing software that are not direct.
O: Right, yeah. So in those scenarios you need some kind of default config files that come with the package, and then the user probably wants to add on to those. But, you know, the receiver that you've configured to write to CloudWatch, the user has no interest in touching that; and if that's in the same config file as the one the user is editing, then as soon as the user touches it, you know, the package manager will stop updating that config file and you can no longer change those defaults.
O: I don't think that's actually critical. I agree that's a useful thing, but I don't think it's a hard requirement. With careful design of the default configuration, you know, you could put each receiver in a separate config file, and the normal mechanism with packages would be that the user would edit the one config file they want to override, and at that point they, you know, take over that file without interfering with whatever other files the package ships. So, yeah.
B: I would like to time-box this discussion, because I think that we have other topics after this. If you can, you know, just summarize... we have a lot of good feedback here; maybe we should take some action items. Yeah.
J: So, yeah: thank you so much for the feedback. I think the big takeaways are, one, that we need to have intelligent errors that point to the attribution, not just in the merging phase but also in the validation phase. I don't think it's possible in the live phase; like, if it throws an error while it's running, I think attribution then would be extremely difficult, but it's something I can look into.
J: Secondly, figuring out how to deal with duplicate components. I think that setting it up to do merging of components right now would be a poor idea. But, to Quentin and Punya's point, it wouldn't be too hard to set a flag where, if you set the flag to strict, it throws an error when you have duplicate components; and, since you submit the files in some order, if you set the flag to loose, it will take the later one.
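The strict-versus-loose behavior proposed in this takeaway might look like the following sketch. The flag names and error wording here are hypothetical; the actual design was still open at this point in the discussion.

```python
import warnings

def merge_components(files, mode="strict"):
    """Merge {name: component} maps from files submitted in order.

    mode="strict": a duplicate name raises an error naming both files.
    mode="loose":  the later file wins, with a warning.
    """
    merged, origin = {}, {}
    for fname, components in files.items() if isinstance(files, dict) else files:
        for name, comp in components.items():
            if name in merged:
                if mode == "strict":
                    raise ValueError(
                        f"component {name!r} defined in {origin[name]} "
                        f"and {fname}; that's not allowed")
                warnings.warn(
                    f"component {name!r} defined in {origin[name]} and "
                    f"{fname}; took the definition from {fname}")
            merged[name] = comp
            origin[name] = fname
    return merged

# In loose mode the later definition wins (with a warning).
loose = merge_components(
    [("file1.yaml", {"batch": {"timeout": "1s"}}),
     ("file2.yaml", {"batch": {"timeout": "9s"}})],
    mode="loose")
```

In strict mode the same input would raise an error naming both files, matching the "throw an error" preference voiced earlier in the discussion.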
J: That's if we want to allow overrides to be a possibility. So I think that's a fairly easy thing to add. And then I think there's agreement that we should keep the parsing and the merging strictly inside koanf, to avoid adding extra bloat on top.
J: So I will note those down as action items before my review this afternoon. Thank you so much, everyone.
B: Awesome, awesome; thanks for going through that. So, folks, again, there were a couple of items that we have on the agenda, and one of them, as some of you may have already seen, is that we have been discussing, you know, how to get the collector, especially the collector core, to be stable for the tracing signal: heading towards trace stability, at least on the collector.
B
That's
left
in
the
tracing
stability
effort
that
the
project
has
been
making
so
to
that
effect.
Again,
our
maintainers
bogdan
and
tigran.
You
know
have
been
discussing
on
the
issue.
I
think
it's
three
seven.
B: I can find the exact one: 3474. There's an issue where we've been discussing, you know, how we can actually pare down the collector core components and figure out, you know, what is the basic functionality that is considered to be core as well as stable, and that can be made available in a stable release by the end of next month.
B: So there are a couple of items on the backlog that we are working on: semantic versioning, as well as some versioning documentation, and also components that we'd like to move, as a full list, to contrib; that discussion is ongoing, so, again, for context, just please read the issue. And the Prometheus receiver, as well as the other Prometheus components we have discussed, is maybe something that we'd like to keep in core, but that's a discussion that's still in progress, and the list is still being finalized.
B
That
said,
there
was
bogdan.
Did
you
want
to
add
any
other
parts
to
this
specific
discussion,
because
what
I
wanted
to
bring
up
is
you
know?
How
do
we
ensure
that,
given
that
we
will
have
an
existing
trace
processor
that
will
be
available
till
a
new
trace
processor
with
the
redesign
is
built
in
pintrip?
B: How would we guarantee, you know, customer migration and customer interoperability, right? That is, if we continue to keep the name of the new trace processor the same. So that was a discussion that we had, you know, following the discussions we had earlier yesterday, and I just wanted to kind of understand that better.
B: That is: how do we eliminate the configuration changes that the customer will have to go through when we have the latest version of the collector, you know, with a new trace processor, available later?
I: And the only thing that we really require for stability is to define the names of these new processors, because what we try to do is ensure that we don't use these names for other components that we will deprecate, in order to simplify our life around deprecation.
I: Instead of replacing an old processor, we will come up with a new one, and we'll make a transition plan for users of the old ones. So that's the only thing that we need from last week's discussion and the design doc: like, finalize the names, and that's everything.
B: Okay. So, Min, I think that you had... and this is following up from Min's proposal earlier, Bogdan, on the design of the new processors. Again, the discussion: you know, we did an evaluation, and Min has added that section to the doc. I've reviewed it, and I'd like the larger community to also take a look at it. This was based on the discussion we had in last week's session, and this was again about trade-offs.
This
was
based
on
the
discussion
we
had
in
last
week's
session,
and
this
was
again
trade-offs.
B
You
know
with
the
current
naming
conventions
versus
keeping
new,
adding
new
names
and
new
implementation
for
the
tracing
processor
and
what
the
you
know,
migration
options
are
there.
B
Mint,
can
you
quickly
share
your
doc
and
just
walk
through
that
section.
D: Yeah, sure. Also, first, I have a question for Bogdan. Are you just saying we only need to finalize the names of the new processors? That's not enough, right? Because the name is... yeah, of course, the name's a key thing, but we are also definitely going to change the configuration attributes under that name. So it looks like there's no way, as you're saying, we can, you know, emulate the configuration.
D: Yeah, yeah, that's right. That's actually one of the options I was trying to... I put it here, saying... so I probably just skipped the trade-off thing. I actually added two sections to the old doc: one is a trade-off section, to compare the current solution with the new, single-signal-based solution, so it has both pros and cons.
D: I want, you know, everyone to help review, and, probably, if you can, add some critical things I missed in the pros and cons; then we can...
D: We can decide, at least at the high level; we want to understand which way we want to go. That's basically the solution part. And also, for the migration, we assumed there will definitely be a breaking change on the configuration side, so we have to think about a way to help customers, you know, the older customers who are already using the old processor, to onboard onto the new processors.
D
So
I
think
that
two
options
option
one
like
is
really
traditional
one.
So
we
we
moved
all
the
you
know
the
processor
to
country
repo
as
we're
doing.
Right
now
and
we're
going
to
implement
the
new
processors,
and
then
you
know,
I
think
we're
gonna
have
two
plus
sets
of
processors
coexist
for
a
while,
and
you
know
yeah
during
the
yeah.
D: Yeah, yeah, sure, sure, yeah. This is basically that solution. And then, you know, once the new one is stabilized, we're going to make, you know, a deprecation announcement for the older one. We're going to add, for example... we can add a warning message in the log; we're going to provide documentation telling customers how to do the migration; and, you know, we're going to wait for a while.
D
Then
you
know
after
a
few
months,
probably
six
months
like
we
mentioned
we're
gonna
replace
the
old
processors
in
the
ripple,
so
the
other
con
is
so
you
know
you
know
the
other
kind.
Is
you
know
we,
because
we
are
there
like
at
least
for
aws?
They
already
have
a
few.
You
know:
8
000
customers
is
already
on
board
to
open
tiny
magic
for
their
production
usage.
As
I
know
so,
the
thing
they're
going
to
make
is
first
for
to
support
them.
D
You
know
to
reduce
the
production
impact
to
their
business
so
either
way
we
we
either
need
to
keep
the
old
processor
in
the
contributor
contributor
ripple
for
a
long
longer
time.
You
know,
I
that's
that's
the
part.
I
don't
know
what
we're
just
saying
is:
after
six
months,
we
stop
we're
gonna,
remove
the
old
processor,
but
that's
a
good
chance.
Some
of
the
some
of
the
users
will
be
impact.
You
know,
they're
gonna
probably
impact
their
business.
This
is
definitely
a
lot
of
good
way,
but
you
know
yeah.
D: This is the reality, you know, if we go with this option. But with option two, like, Bogdan, you mentioned: because the new processor will be a superset of the old processor, right, at least on the functionality side, we can build a translation tool. Basically, we can, you know... because it's all built on structs...
D: When we read the old config, when we detect the old processor in the configuration, we definitely can convert it into the new configuration, and also carry over all the functionality, all the features, the requirements, the settings they added in the configuration, into the new configuration that we want. Then we can generate a new YAML file and override, or, you know, replace, the old one. We keep the old one as a backup, but we're going to use the new one, also.
D: I do agree with that. Another option, by the way, is to actually delete the old logic, so the old processor becomes just a wrapper over the new one and does the translation in there. Yeah, we can discuss that; but I do agree that at least a tool that users can use to translate their config into the new config is required for the task.
B: Yeah, I mean, Bogdan, we had discussed that, you know: perhaps having the old processor as a wrapper. But we still will have to write the transformation, you know, tool to migrate, right? So that's something that we can itemize in more detail.
I: And Amazon is pretty well known to be very customer-centric; I believe that it's a good solution.
D: Cool, great, yeah, thanks; thank you, yeah. So, Bogdan, you're saying option three... we really do... if you, you know... there's a chance. Can you...
B: Please think about it, and again, please feel free to comment on issue 3474; that's an ongoing list. I'll also be creating issues on each of the components as we finalize the list.
B: Yep. And, Bogdan, again, I really appreciate, you know, us working on this together towards getting to that stability. But, yeah, you were mentioning something?
B: I think we discussed that a bit, Punya, and I was going to again have a follow-up discussion with Bogdan to figure out, you know, what we do, so I'll definitely pull you in, especially for the OpenCensus and Prometheus work that we are doing right now, right? So, yep, we'll definitely need to look at that, because it is a breaking change for metrics especially.
B
And not yet, Punya. We want to map out the details of what this actually means, because we want to have a clear understanding of which OpenCensus dependencies we can remove first versus later, and just have a clear design for that.
B
No, Emmanuel, those are both in parallel. One is the Prometheus work — we're not rewriting anything in this process, we're just moving components into contrib — and the other is the OpenCensus work.
A
Okay, because you know, I'm tasked with basically phasing out the OpenCensus dependencies.
B
The receiver — it will definitely affect us, but again, it should not affect us other than structurally. Great.
B
All right. I think, barring any other questions folks had, we will publish a timeline. And again, we're tracking right now about two to three weeks of work. So hopefully we can complete the items in the backlog that we are targeting, and have a clear game plan for the next release, which will have the core collector stable in July.
P
B
P
E
B
The work that we are doing on metrics — so the SDKs will stabilize; the API has already been stabilized. So right now we're working on the SDK implementation, and that's tracking for August.
P
I
P
I
We are actually forced to do metrics as well, because we have the notion of a consumer, or a component, and it is the same for traces and for metrics. So every time we stabilize the tracing side, we are kind of stabilizing the metrics side as well. The only remaining item for metrics is the change from protobuf 0.7 to 0.9, which, in my opinion, with one or two — let's say two — engineers, will probably be around a month, a month and a half of work.
P
Okay, that's encouraging, thank you. And I understand this is an AWS thing, but we have a critical dependency on integrating OpenTelemetry into our container services for re:Invent in December. So in whatever ways that we or the community can help to get the collector traces and metrics to where we want before December, or by December 1st — that would be very desirable for AWS, and I'm happy to help with resources.
B
Yeah, Mark, thanks. We definitely are counting on your support.
I
Yeah, and I will ask Alolita for specific items where she can help us by adding some resources to implement those.
B
Thanks, Bogdan. Anybody else have any questions on this? Before we — we're at time, but I just wanted to do a call-out that we have a Helm chart proposal for the OpenTelemetry operator that deploys the collector.
B
So please take a look at that design proposal offline and add your feedback. We'll be discussing this in the session next week.
B
Thanks again, everyone. Thanks for joining in.
L
E
So this first one is sort of just a call for comment, if anyone's interested. It's been suggested that we should solidify semantic conventions for how we describe a file — you know, file name, path, things like this.
E
You know, we use them in our filelog receiver, but presumably there will be many other use cases across the wider project. So I basically just put forth an initial proposal, and there's been a little bit of discussion on it, but I just want to make sure everyone is aware of this in case they have an interest in it.
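As a tiny concrete illustration, the convention under discussion might give a log record attributes like the following (the `file.*` names reflect the open proposal only, not a finalized convention):

```python
import os

# Hypothetical attributes following the proposed "file" namespace.
record_attributes = {
    "file.path": "/var/log/app/app.log",  # absolute path of the source file
    "file.name": "app.log",               # base name only
}

# A sanity check a receiver could apply: the name is the path's base name.
assert record_attributes["file.name"] == os.path.basename(record_attributes["file.path"])
```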
L
I think the question there was about whether "file" makes good sense as a namespace for these things. I think it's reasonable — I don't know if there are any other suggestions. I was worried that maybe it's not very applicable to spans, but I think it is. I don't know — what do people think? Is there a better name?
E
L
Yeah, so anyway — if you think that it's a good name, support it in the issue, or if you can come up with some other suggestion, an alternate name, please suggest it there. But I think, yeah, it does make sense.
E
The other item: at some point last week, in a conversation someone brought up in a thread on Slack, there's this common spec which defines what the attributes field should be for all signal types. And I think that the logs community has sort of, I don't know, maybe operated independently of this definition, and I think they're basically in conflict to an extent. I don't know if this is really a problem.
E
I mean, this is a specification problem — not to dismiss that, but I don't think we have a technical problem here. We just have to figure out what we need to do to make sure we're playing nice with everyone and are aligned.
E
L
So I guess maybe some historical context on this. When we were figuring out what the spans should look like — the attributes for the spans — there was a discussion around whether we should allow nested key-value pairs, like maps of maps, and whether arrays should allow elements of different types. Eventually, I guess, the conclusion was that we need to keep it limited to a certain subset: only arrays whose elements are all of the same type, and no nesting for the maps.
L
The reason was that the tracing protocols other than OTLP cannot nicely represent this complicated data, so let's limit it at the API level, so that whatever is produced by OpenTelemetry can be represented in these other tracing protocols like Zipkin and Jaeger. This is where it comes from. This restriction is essentially an artifact of what the tracing world thinks attributes can be in the OpenTelemetry protocol.
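A minimal sketch of that API-level restriction — scalars only, arrays homogeneous, no maps and therefore no nested maps — could look like this (illustrative Python; each real SDK enforces this in its own way):

```python
ALLOWED_SCALARS = (bool, int, float, str)

def is_valid_api_attribute(value):
    """True if the value fits the restricted API attribute model:
    a scalar, or an array whose elements all share one scalar type."""
    if isinstance(value, ALLOWED_SCALARS):
        return True
    if isinstance(value, list):
        if not value:
            return True
        first = type(value[0])
        # Compare exact types so mixed arrays like [1, True] are rejected.
        return first in ALLOWED_SCALARS and all(type(v) is first for v in value)
    # Maps (and anything nested) are not allowed at the API level,
    # even though the OTLP data model for logs can represent them.
    return False
```

The log data model is a superset: a check like this would sit at the API boundary, while the protocol itself stays permissive.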
L
So the data model that we later proposed for logs introduced this notion of, okay, the protocol can represent this data, but the API remained with the old concept of restricting only a certain subset of data to be representable via API calls. Now, I guess the question is whether we want to try to extend the API to allow these more complex data structures for the attributes, and I think probably the answer should be no, because even if we try, that is not going to be easy to do — maybe long term.
L
What exists today applies to the traces and metrics, but for the logs we additionally allow nesting and the bytes data type. So I think that we need to fix the spec, but there is no contradiction here. What was written there was written when the logs did not exist at all, so what we need is more like a clarification there. I think the simplest is just that.
E
So do you think that just proposing a note onto that common spec file is all that we need to do, then?
L
I think so. Well, at least that can start the discussion. If the spec people think differently, then that's a good way to start that discussion. But I think that works for us — at least for the logging signal, that works. And similarly, the logging API in OpenTelemetry, whatever it becomes in the future, may have a similar restriction.
L
So I think that's fine. This was discussed, and it was considered acceptable that the protocol and the data model itself are capable of representing a superset of types compared to the API. For now, I think I would just do that: let's start with the clarification of the spec, so that it is clear there is no conflict, no contradiction there.
Q
Yeah, no, I did not discuss it during the SIG today. Actually, in the meantime I have just made some finishing touches for my last update on this thing — the consumer callbacks for the persistence. So I have updated the PR, and I'm going to continue doing some tests of how that works, and see if there are some other gaps. But the overall solution — or the idea of the solution — is there, so I think it's ready for being reviewed.
Q
It's slightly different, because having a contiguous queue makes life much simpler. What I'm doing is that we have this contiguous part that is waiting for processing, essentially, and then we have a part that might have some gaps — items that are currently being processed but not deleted yet. So I'm essentially adding a callback for consumers: when consumers finalize processing the items, whether it's a success or a failure, they clean up these items and delete them at that point.
Q
I have another key in this storage which contains an array of the currently processed items. So if there's, I don't know, a power failure, or the process gets killed, etc., on startup it can load the items that were not deleted yet and continue processing those. I tried to explain this in a diagram I have put into the readme, so I hope it will be somewhat clear. For me, this was the solution that was still not very complex and could be implemented using the storage interface as it is.
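The recovery scheme described — a queue key plus a second key holding the in-flight items, replayed on startup — can be sketched like this. A plain dict stands in for the collector's storage extension, and all names here are invented:

```python
class PersistentQueue:
    """Queue persisted to a key-value store, with a 'dispatched' key so
    items that were in flight during a crash are re-enqueued on startup."""

    def __init__(self, storage):
        self.storage = storage
        self.storage.setdefault("queue", [])
        self.storage.setdefault("dispatched", [])
        # Recovery: anything dispatched but never deleted goes back to
        # the front of the queue so it gets processed again, in order.
        self.storage["queue"] = self.storage["dispatched"] + self.storage["queue"]
        self.storage["dispatched"] = []

    def put(self, item):
        self.storage["queue"].append(item)

    def get(self):
        item = self.storage["queue"].pop(0)
        self.storage["dispatched"].append(item)  # mark as in flight
        return item

    def done(self, item):
        # Consumer callback: processing finished (success or failure),
        # so the item can finally be deleted from the store.
        self.storage["dispatched"].remove(item)

# Simulate a crash: "a" was taken but never marked done...
store = {}
q = PersistentQueue(store)
q.put("a"); q.put("b")
q.get()
# ...so a fresh process over the same store sees "a" again, in order.
q2 = PersistentQueue(store)
```

The point of the sketch is only the key layout; the real PR works against the collector's storage interface rather than a dict.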
Q
L
Okay. I think we can work on improving the actual way the queue works later. At this point we want to just prove that the storage interface is suitable for it — and if not, what needs to be modified. So I guess it's important to understand if you're doing, let's say, multiple operations per call on the storage, because you need to update multiple keys.
L
L
Q
L
We can wait; you can take care of it.
Q
Yeah, I just want to say that I decoupled it, in a way — I moved the API part from contrib to core, and I've included this comment.
J
Q
— won't work, but I wasn't sure, because this could be done in several ways. Maybe the actual implementation of this persistence could be moved to contrib, and the queued retry could be extended with an interface as well, for configuration, etc. So there are several ways this could have been accomplished, but this is what I did there.
L
I think what you have is fine. Yeah, let's not make it more complicated — I think that's reasonable. The storage is pluggable, but we consider the queued retry to be a core thing; it does not need to be pluggable. So what you have, I think, is fine. The approach that you have is good there.
L
Q
L
Q
Yeah, that's a good point. But when I was doing benchmarks, the serialization had very little impact on performance — I would say it was two orders of magnitude less than, let's say, the file storage, which was pretty decent in terms of performance too.
G
Q
My idea is that, sure, this can be optimized, but it has relatively little practical significance. Yeah.
L
Yeah, no, that's fine. I think that's fine. Again, that's something we can optimize in the future, so that if the queue is short, you just keep a certain amount in memory, and if it becomes larger, it spills to disk — obviously it has to go through the file in that case. But that's for the future. Again, I just wanted to make sure that if we want to do something like that, the storage interface is not going to prevent us from doing it.
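The future optimization mentioned — keep a short queue purely in memory, spill to the file-backed store only when it grows — might look roughly like this (again illustrative; the threshold and the list standing in for file storage are invented):

```python
class HybridQueue:
    """Hold up to memory_limit items in memory; overflow goes to the
    slower persistent store while FIFO order is preserved."""

    def __init__(self, memory_limit, disk):
        self.memory_limit = memory_limit
        self.memory = []
        self.disk = disk  # stands in for file-backed storage

    def put(self, item):
        # Once anything is on disk, new items must also go to disk,
        # otherwise they would jump the queue.
        if len(self.memory) < self.memory_limit and not self.disk:
            self.memory.append(item)  # fast in-memory path
        else:
            self.disk.append(item)    # spill to persistent storage

    def get(self):
        item = self.memory.pop(0) if self.memory else self.disk.pop(0)
        # Refill memory from disk so the fast path stays warm.
        while self.disk and len(self.memory) < self.memory_limit:
            self.memory.append(self.disk.pop(0))
        return item

q = HybridQueue(memory_limit=2, disk=[])
for i in range(5):
    q.put(i)          # 0 and 1 stay in memory, 2..4 spill to disk
drained = [q.get() for _ in range(5)]
```

Nothing in a get/set/delete key-value storage interface blocks this layering, which is the point being made in the discussion.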
L
Thank you for working on this. I think it's important, yeah.
Q
L
Yeah — sorry that it is taking so much time. I think it's important for us to make sure that we're not creating something that is a dead end, or is difficult to make faster in the future. I want to make sure that we get the interface right, especially because the interface is likely going to become part of the core that we are planning to GA soon, so it's going to be more complicated to make any changes after that. So let's make sure we get the interface portion right — the implementation, that's fine.
B
I think Tigran just — hi, I just joined in because I'm looking to get more engineers involved on our end to actually help out with the logging effort as we, you know, pick up steam. And again, I have interacted with Dan and David, so I'm looking forward to contributing to the project as we get a couple more engineers to work with you guys.
L
B
E
Yeah, that will certainly be appreciated. I would say progress on the milestone has slowed down, and as people have started using this stuff, there's more feedback and more — yeah.
B
E
B
And I really appreciate all of your work, because I think we have a good opportunity now. I'm sorry we didn't get involved earlier in more depth, but that's what we want to help out on — not only on finalizing the data model and the spec, you know, any requirements, but also on the actual implementation.
B
L
L
B
Yeah, we can relate multiple repos into a project board at the otel org level. So if we keep a project board there, then we can probably, you know, just pull in and track — just have cards that are related to multiple repos.
L
B
L
For now, in collector contrib, we just mark the issues with the "logs" label, I believe. So it is marked, and I guess the majority is still in collector contrib, but we also have the log-collection repo, and we have things in the specification repo.
L
B
Again, I'm sorry — I'll read up on the issues; I don't want to just ask newbie questions right now. It's more that I want to understand a bit more. We had done a fair bit of work on the data model — which David, and you, had also, you know, kind of specified, and this was again five months ago — but we also did an implementation, as you know, of the C++ logging API, and had evaluated —
B
— you know, what is implemented right now in Java, as well as in Python, with the native implementations of logging capabilities from the language itself. So I don't know if that's useful, but we would certainly like to apply some of that experience back into the logging effort. And as we build out the receiver, again, I wanted to understand the thinking: are we focused on the OTLP logging exporter first, and does the Stanza receiver of course become the default receiver of choice?
L
So I guess most of the focus in the recent months was just on the collector — primarily on the filelog receiver, but also the bits that are in filelog but can be used by others. The tcplog receiver, for example, can also use those bits: the parsers, the pipeline, and so on. That's where the focus of the people who are part of the logs SIG is right now. But you're right, the Python SIG started adding the logging capabilities — I don't know where they are.
L
If you look at the milestone in the contrib repository, it is primarily about some remaining basic capabilities, like the journald receiver and the Windows event log receiver. And I would say another important remaining thing depends on the config sources, which still need to be imported from Splunk's distro into the OpenTelemetry distro. We have that in our own distro, but it's not yet in OpenTelemetry — that's a separate discussion; you're aware of that.
L
Once we have config sources, for the logs we want to use them for log plugins — essentially a way to define how you collect logs from specific applications, which Stanza has. We have that knowledge base; we just need to redefine it using the new notion of config sources. I think once we do that — so those are the couple of things that I listed —
L
— we can consider the basic logs in the collector to be ready. Yeah, not production-ready — maybe it needs more testing, more validation — but feature-wise, that's what we're aiming at as the minimum. And then there is a long tail of other things that will need to be added, but that doesn't prevent us from saying we have something that is now useful.
B
I mean, that sounds like a good strategy, because I think that if we can get the basic, you know, logging features available in the collector as a pipeline, then that's a big step forward. So what about, Tigran — what about the Elasticsearch implementation?
B
You know, we had looked at and actually implemented an Elasticsearch exporter in C++.
L
B
L
B
B
Yeah, okay. Again, I will take a look and see if that's something that, you know, needs tests or needs features — whatever it needs.
L
E
I just wanted to mention two very minor things quickly here. I see Brock just joined — he fixed the data loss issue that we had with our file input operator. So I'm just calling attention to that in case anyone — Tigran, you or people on your team had run some benchmarks; I don't know if you care to run those again, but if you do, you know, you'll see better results there. We're showing 100% data delivery there right now.
L
E
And then the other related thing: Tigran pointed out a very valid point that this code base is not well documented — this operator in particular being so complicated that it took us so long to figure this out, we should document it much better. So I opened a PR today that has a design doc for the file input operator, if anyone cares to look at that; it's in the log-collection repo, still in a draft state, because I will probably push a few bits of polish onto that.