From YouTube: 2021-05-05 meeting
A
All right, okay, so the first item. As you heard, Alolita joined as the triager on the collector repositories. Thank you very much. This is great and very, very much wanted. I think we needed this help, so I'm very happy that you offered your help. Thank you.

A
I guess you are expecting to facilitate the issue triaging and PR triaging. What do you have in mind? Maybe you can tell us.

B
So again, I think that one of the things, Tigran (again, I'm thinking) is that we'll have to... I'd like to go through the full backlog and kind of get an understanding of what your methodology has been so far. And specifically in my discussion with Bogdan, and also with you earlier, the idea was to alleviate some of the workflow pressure from the work.

B
That is, for the PRs that are being filed for the exporter and receiver changes on collector core: to be able to actually decentralize, or distribute, the code review process, if you will, for those parts to other approvers. And then, as long as folks have taken a look, and we have at least two reviews on PRs related to receivers or exporters, I can tag them as ready for merging, and then comes the final review.

A
Yeah, yes, I agree, and I think it's clear that there is a pain, right. We see that the PRs are taking longer than we would want them to take. I was just chatting with Bogdan about possible ways we can solve this. I think there are a few options there. So I mean, it's clear: the pain is there, right?

A
I would like to hear a bit more from people about what pain they are experiencing: not just what I'm seeing myself, but what people see from their side, the contributors. And then let's try to find a solution to this. I want to make sure that we're not delaying people, we're not blocking people; we are enabling people to actually move at the right pace.

A
Decentralization, like we were thinking about for other repositories anyway. I think that, yeah.

B
Now, one of the other areas which is really of urgency is getting the 1.0, you know, tracing stability, in the collector. And again, Bogdan, you have been very good about creating some of the phase one and phase two backlogs that exist for getting the collector to 1.0.

B
But I think that, in my discussions with Bogdan and just stepping through all the items: we have picked up a bunch of them from AWS to work on, to help in getting to stability sooner. But again, that's a moving target too, because there are some additional issues that have been added. So I'm going to work with Bogdan to make sure that's something which has a clear list and can be communicated, for any engineers to come and pick up issues.

B
Because, I mean, again, there are some nice-to-haves that have been added. But again, let's figure out if that's a phase three that continues on after we can do 1.0. Yeah.

D
I had a question here. So we are saying we need two approvals before adding the ready-to-merge label. So are there any kind of rules there, like who will be those two approvers? Do we have a list, or kind of approved reviewers, or a public list of them we can ping to get the two approvals ahead of time?

B
Yeah, I mean, Bogdan again has flagged and shared the code owners list; it's already listed. But also, the other part that we have been following as a good practice is to have two approvers. If there are specific vendor components, or specific components where there are other experts working in the area. For example, for Prometheus.

B
A lot of the Googlers as well as AWS engineers have been involved driving the Prometheus working group, and also others, like Bogdan and Joshua, have looked at the overall work that's ongoing. So getting approvals, code reviews, specifically from people, or engineers, focused on that area is what we're going after. So if you need an explicit list, we can certainly add more to code owners, but as a rule of thumb: get the senior engineers who are working with you. For example, Anthony is reviewing.

D
Yeah, I think the next item is the same one. So our launch date is approaching, Thursday, and the PR was... I was not aware of this rule, so it was hanging for a long time. But yesterday we got one approval, and then got two, yeah.

B
I think... I'll just tag it just now. I just looked at your PR, really.

A
We have code owners defined for the areas, so they should be automatically assigned to the right people. And if they are not reviewing the PR, also ping them directly: mention the assignee, mention the people who are reviewers on the PR, in the comments, so ping them. Sometimes it's just that people miss that there is a need for them to do something, right. They don't notice, they miss the notification. It's not malicious, I'm saying; people just need to be reminded sometimes. But definitely we also need to improve this process. So, while we're thinking about how we improve it in general, the short-term solution here, the short-term advice, would be for the author to try to ping and work directly with the assignee and the reviewers to get the process moving faster.

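As a concrete sketch of the code-owners mechanism just described: a GitHub CODEOWNERS file maps component directories to default reviewers, so PRs touching those paths get those people auto-requested. The paths and handles below are made-up examples, not the repository's actual entries.

```
# Hypothetical CODEOWNERS entries (illustrative paths and handles only)
/receiver/prometheusreceiver/   @prometheus-reviewer-1 @prometheus-reviewer-2
/exporter/awsxrayexporter/      @aws-component-owner
```
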
D
Yeah, sorry, go ahead... So, thanks. I was about to say that, and I will be honest here: I tried all the possible ways. But yeah, I will keep pinging, thanks.

D
I just want to add one more comment. I don't know... I feel like some of our approvers, or code owners, are maybe getting too many PRs. That's my understanding; I might be totally wrong. But I feel like, as an example, Bogdan is getting too many things, and it's maybe becoming a single point of bottleneck sometimes. Can we just...

B
No, no. Again, it should come from AWS, okay, maintainers or engineers. So Anthony will take a look if he hasn't already here. Thank you.

A
Actually, if we look at the formal side of things, we don't have an approval from an approver, right. Anthony is an approver; he reviewed, he did not yet approve. So technically there is no way that we can merge this, right, as maintainers, yeah. Okay, the fact that it's assigned to both of them does not mean that the PR should be...

A
Well, formally then we could merge it, right. So let's make it clear: there are reviewers, there are code owners. They should be your first contact; they should be doing the review. Unless we get the approvals from the code owners, which is great, we have that here, but we also need an approver here.

D
Yeah, so here I have one question then. Sorry, I'm taking longer. So say, for example, we got two approvals from our internal team, and they are also contributing heavily and have an idea about the code base now. What about the third? We are saying the approver, say, for example, is in a different time zone and we are waiting for him, so can...

A
Of course they can. And these are strong signals to the approver, but because these people are not official approvers, it is not sufficient for merging, right. So one other approver or maintainer needs to have a look at the code and give their approval before it can be merged. But these are still very significant, strong signals.

B
Make sure that the process is clearly understood. Yeah.

E
Right, the goal of that first level of reviews, by AWS contributors who are not yet approvers, is to try to simplify things for the approvers who come along later: someone's already looked at this, they've worked through some of the issues, I can look at those and see that I agree, and hopefully streamline the review process for the approvers.

G
Also, they're gonna count as part of the requirements for becoming an approver. So another thing that we should look into next is some of the people, like Anthony, and I think there is another guy, who are doing a lot of reviews. We need to look into whether they meet the requirements and maybe make them approvers, so we get more help.

A
I already reassigned this one to myself. So, are we good with this one? Anything else to discuss here?

H
Similar story, but maybe the difference is that this one is doing severe refactoring, changing params and parameters into settings, so it touches hundreds of files, essentially. So if we want to merge it, I think it'd be great if someone could review it and decide if that's the way to go, because each other merge brings new conflicts to resolve. So, is this the PR? Yeah, this is for contrib, but there's another one for the core version.

G
I think I will take the core one. Tigran, can you assign me on that, or assign it to me?

I
So this is adding a constant-scaling thing to the metrics transform processor, and I see that it has been approved by an approver. Now, I know it can't be merged right now, because the PR needs to be updated; that's obviously on us. But I want to confirm if there's anything else that needs to be done. I think this stood for about a week, and we should probably have pinged someone to act on it. So, I understand that. So, yeah.

G
One thing, as I explained to the AWS folks, Rehan: I really don't want to add more things into this yet, so we have to put...

I
The other half of my question was going to be exactly that. Can we write down explicitly the policy on what kinds of changes we are or are not accepting to the metrics transform processor? Because I had the same discussion with the person at Google who was doing this, saying: hey, a lot of prior work... we have stalled a lot of prior work, and you can see even the approver approved this, presumably because he did not know that this policy exists. So I think we should make that, like...

G
Okay, is that fair? I will put up the PR to write that. Sorry, but this gets super messy unless... and I know the only person who's gonna do it is gonna be me, because I'm not gonna support seeing this, using open source and stuff. So I really hope others will pick this up and transform it.

A
That's a good ask. I remember seeing in the previous meeting notes for the collector that someone said they were promising to go ahead and fix the transformation.

D
Oh no, that was... I was also planning to say: this was in the metrics generation processor, which calculates a metric from two existing metrics, or scales the value of an existing metric. I think I linked this on my PR. So I was just wondering: maybe this can be part of the new metrics generation processor?

G
Okay. So please, somebody has to do one thing: we need to make an inventory of whatever functionality we have in the transform processor and whatever you want to add in the new one, because it's super confusing. Why do we need a new one? All these questions... I have no answer. And by pushing to merge things without having the clarity... maybe it's my fault, I should have said from the beginning: I really need to see where you are going with this. What's the final state?

B
Actually, Bogdan, one of the things I would recommend is that, for major refactorings such as these, there should be a design doc proposal that is done first, because it's really very unclear to other maintainers or developers coming onto the collector project to understand what the underlying assumptions are.

G
Yeah, it would be good to have a one-pager, not too much, especially when it comes to this moving of functionality from one to the other. But at least to know: okay, we're gonna have two processors, one that works on a single metric, one that works on multiple metrics, and do whatever.

G
But anyway, tell me what exactly is the final state. So, Ryan, maybe as part of your initial PR adding the new skeleton for yet another transformation processor, we should clarify that. Yep.

I
So I feel bad piling on this, so, Rehan, please accept my apologies, but there's sort of this question of what we are doing with the metrics transform processor.

G
So sorry, go ahead, yeah.

G
Here, one thing that I experimented with a bit was a kind of mini DSL to do the filtering. By filtering I mean you want to specify which metrics will be used by this transformation or whatever. So I played a bit with that. That's one language that we need, and it should be consistent across traces and metrics: like, you want to say, metrics where the resource has these resource attributes, has these labels, whatever things you want to do. And then there is the transformation language, something like PromQL.

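Purely as an illustration of the kind of mini DSL being discussed here (the syntax below is hypothetical, sketched for this conversation, and not an existing collector feature), a filter expression plus a PromQL-like transformation might look something like:

```
# Hypothetical mini-DSL sketch; none of this syntax exists in the collector
filter:    metrics where resource.attributes["service.name"] == "checkout"
               and labels["state"] == "used"
transform: new_metric("memory.utilization") =
               metric("memory.used") / metric("memory.total")
```
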
G
You say... this transformation language, probably, we can think of whether we can generalize it also for attributes, for spans, and for others. So anyway, yes, I think it should be something like that.

I
Then... so, if we make a design doc, and I think, Rehan, you have done all the work here; I'm not trying to take any of that away from you. But I'm saying: if there's a design doc that tries to cover these two things, maybe it can also, even without designing the language, say: okay, we are doing these small, primitive processors with a view to eventually having a couple of DSLs that subsume these things.

J
Okay, cool, thanks. Yeah, but can I... just a thought on this too: less than a design doc is just a GitHub issue. Should we be thinking about having a stricter rule about the need for a GitHub issue to exist, and maybe even be approved, having, like, a tag that says "approved"? Like, first we decide we would like to do this thing, and then we actually do it.

B
Agreed. I mean, in general that's the best practice, or good practice, we have followed. Again, I think not everybody follows it consistently, but there should be an issue clearly stating: this is the purpose, and these are the changes that we are proposing, and why. And then a PR to follow, right. I mean, that's right.

J
And I've seen that followed sometimes, but not always. And, I mean, I'm the person who approved this; I came in and thought: okay, this seems reasonable, the implementation looks good. But I skipped the issue step; I should have validated that. I'm not aware that we have a properly strict rule on that, or a process that strictly requires it, and I've seen it not used in many cases.

J
So maybe we need to double down on that requirement. And I know I personally will; I don't want to be approving things that shouldn't have existed in the first place.

I
None at the moment. So, I think... well, I was going to follow up with you after this to ask if you are going to write a design doc; if not, I was going to write one. In the design doc I was going to say: hey, step one, we are freezing, right, we are feature-freezing the metrics transform processor, and the implementation of that is a README change or whatever it is. Step...

I
Two: the metrics generation processor has this set of things, and I would hope that you can flesh that out, right. And in terms of what I'm saying about a query language, I'm not planning on designing any of that. I'm just planning on saying: the project as a whole has consensus that a query language is a good idea for these kinds of use cases, and so further work on that is welcome. I think this is more a guide to future contributors, so that people like, I think, you and Quentin, who have both gotten caught up in this...

I
They don't know ahead of time what kinds of contributions are expected, and so then you get stuck: you try to do something, you put in a lot of effort, you wait, because sometimes there are even delays that no one is trying to put in place, and then you find out, when it's close to a deadline, that the community didn't want that contribution anyway.

D
Yeah, so I just want to say that I cannot establish this right now. Maybe I will go forward with this, but later, if you want to improve it or write a new one, definitely we can go that route. But right now I don't have the option to stop.

D
Yeah, totally. And I also like Bogdan's idea: maybe we can document everything, which functionality we have support for, and where we want to take it.

F
I can... so, Rehan, it sounds like you are pretty busy. I can at least give you a skeleton, and you can approve it or comment on it. How does that sound?

I
Let me just quickly interrupt. I think this line of argument, for why we should not work on it... I think this has happened before in the SIG, and it makes sense; it's just not documented. That's it, yeah.

A
It's a pretty important thing to understand, right. My reading of the previous meeting notes was that... let me try to find where that was; I think I saw...

A
We are adding more to that technical debt, and we would like not to do that. That's Bogdan's point here, right. We certainly can document it, but the fact that it's not documented does not mean that we should not raise it. So the question is: the new proposal for the new processor, is it going to be a brand new one, not reusing this code, building its functionality using the new pdata types? Is that...

A
Use the pdata...

D
Metrics are...

G
What do we do about this one? So, one thing about that generation processor, which I'm very confused about, again, still, is the difference between that one and this one. Sometimes, when I read the code and the intentions there, I felt that the motivation for doing that is because I said no to new functionality here. So people were just jumping: okay, let's create another one; we can find a small reason why it should be another one, so I can get that code in.

G
I want to clarify and understand, everyone, the intention of why we need a separate one. And actually, if this was pdata-based and we would have accepted it, would you put that here or not? That's the question. I don't want to hide things and put things under the...

D
Okay, so when I said the design doc: on the GitHub issues, I have opened an issue, and it describes the high level of what we are expecting. And the main difference is... it was said in our SIG meeting that in the metrics transform processor we do not create new metrics. And as I intend to create new metrics from existing metrics, by doing some kind of calculations, it's better to have a whole new processor, which will generate new metrics from existing metrics.

D
Yeah, when I reviewed the design... I think so, maybe you said, made that one... it was said earlier that we don't modify any existing metrics, kind of; we don't create new metrics in the metrics transform processor. But yeah, anyway, we have the option there.

G
That's the first question that I would like to answer, for these two functionalities. I mean, we probably need more processors, but not for this functionality. For this functionality, do we want to have two processors, one that simply modifies things on a metric without creating new ones, as you said, and another one that creates new metrics? Or should everything be in one processor? That's the first question that I want to answer.

G
Second is, as you pointed out, there are a bunch of functionalities and stuff that are in here, and we don't even know about them. So...

B
Yeah, absolutely, because I do think that this needs to be reconciled with the overall design and thinking of the metrics processor in general. Obviously, Bogdan, you have a whole bunch of assumptions that you've made, and that's something that, again, should be reconciled with what's being proposed. The only way to clearly do that is a design doc, even if it isn't long.

D
Also... yeah, so I want to be clear here. Is this the recommendation: okay, so Punya will start working on that and we work on GitHub, that's fine. But this decision is still hanging, kind of: whether we will accept the resource... the metrics generation processor in our contrib repo, or not. Are we saying yeah?

G
I'm not necessarily saying that we don't want to have the new one, because maybe the right thing to do, to make the change from OpenCensus to pdata, is to write a new one instead of changing that in place. But I want to have a better understanding of: if we add this new processor, would it be different than the metrics transform processor once it changes to pdata?

G
Would it be very confusing for users? Why do we have two, what are the delimitations between the two? I just want that decision, which can, again, be half a page, and clarify what the long term is and where each functionality lives. And if I have that, we can unblock immediately. So he's not waiting for too much; it's just waiting for a bit of clarity on where we want to be in six months.

D
Can I go ahead? Please, yeah. So, one more thing: I have an open question here. Can the metrics transform processor, or does the metrics transform processor, change the value of a metric, or create a new metric with a new value, like data-point values? From my understanding, no. But the metrics generation processor creates a new metric with new values: every time, the value is changed, the data-point values are changed.

D
The data point, what the value... the values: like, say, for example, I am measuring CPU utilization. The value for CPU load was, like, 0.5 CPU, so it may be changed: we can scale up or scale down, or calculate the utilization metric. So the metric is being generated with new values. Whereas in the metrics transform processor, we don't change the value of the metric; we only, like, rename a metric, or add some labels, or transform the exact same data point, or same metric.

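The distinction being drawn here can be sketched in collector configuration. The metricstransform entry below follows that processor's documented shape (renaming without touching data-point values); the metricsgeneration entry is a hypothetical sketch of the proposed processor, so its field names are assumptions for illustration, not a shipped schema.

```yaml
processors:
  # Reshapes an existing metric: rename/relabel, data-point values unchanged
  metricstransform:
    transforms:
      - include: system.cpu.usage
        action: update
        new_name: system.cpu.usage_seconds
  # Hypothetical: derive a NEW metric whose values are computed from existing ones
  metricsgeneration:
    rules:
      - name: system.memory.utilization
        type: calculate
        metric1: system.memory.usage
        metric2: system.memory.limit
        operation: divide
```
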
G
So we change it, yeah. So I understand, I understand why you need to change the value, don't get me wrong. But everything that you said just now, Rehan: for you it's obvious; for me it's not obvious, and I want it written down somewhere, so I can read it and digest that this is the right delimitation between them. That's what I'm asking.

F
I think, as I mentioned earlier, I just asked Rehan to write it down: what the use cases are, this CPU-utilization example, to say where to change it and why. And I'll get you to take a look.

I
Okay, can I... I just want to ask this: to view the discussion we're having from a slightly different angle and see if that helps us. There are two competing concerns here. One is getting our product right: we are saying, let's not confuse users, let's not create multiple things that have subtle differences, differences that we understand, because we are so deep in this, but that someone who walks up to this will find unintuitive. So these are all very good product questions.

I
At the same time: Quentin from our team, Rehan from AWS, you are all trying to meet deadlines that you have. And so, as you come close to those deadlines, we become, of necessity...

I
...we care less about the principles and more about getting something done, right. And so I'm trying to understand: how, in this group, do we want to balance those concerns? Because what I'm sensing from Rehan, and what I know is the case for Quentin, is we're saying: look, we would love for this to become a great product, but if we need to ship tomorrow... this decision may take another two weeks, it may get reversed, who knows. And if I have to scrap and rewrite, I mean, the metrics generation...

I
...processor is a big thing to rewrite and retarget; this small PR is a small thing. But whatever it is: as a group, what is the escape hatch that we are giving ourselves, so that we don't take a decision made in haste? Sometimes decisions have to be made in haste. How do we make sure that those decisions made in haste don't then become a backward-compatibility burden for us forever?

E
...term, with the insights processors that they're creating, and whether we can move some of this metrics transformation inside of that surface area; so, make it an internal concern rather than something that is accomplished through configuration of other components within the collector. I think, ultimately, long term it would be great to have the collector be a set of building blocks that can be assembled to achieve these goals, but we may need to tactically try to achieve them in code, inside of a single processor.

D
Yeah, or one of the proposals was from Bogdan, I guess: maybe we can just add a tag like "experimental", like an experimental metrics generation processor. Then, once we have a better design and a long-term plan for a unified processor, and when that's ready, maybe we can just deprecate this one.

F
So, Rehan, let's just give it a short page.

B
Yeah, because I think what Bogdan said in the chat is, you know, what he said earlier also. And yeah.

F
Yeah, yeah. I think there's a balance between this one and the delivery of the standard; that's a key point.

F
I think about the discussion we could have had earlier, because Rehan has already waited on this one for two weeks, okay. And if this is something we should have done earlier, probably that's better. But anyway, let's move on; look forward.

B
Yeah, thank you again; we'll work together to make progress.

A
That's... yes. Unfortunately, these two things can be conflicting goals, right. But as a maintainer, I am not going to be lowering the bar to meet your deadline. I can do something else: I can improve the process, and that's what I think we should be doing. Not lowering the bar.

B
Agreed, then, Tigran. I mean, that's one of the reasons why we've been trying to figure out how to alleviate some of the pain points in the process, and just get more of a well-defined process in place. And design, for certain key changes, is very essential; I mean, it's just good software-engineering practice, and it has to be something we clearly call out, especially when fundamental changes are being made to the collector.

A
Yes. So, what I'm saying as a maintainer: yes, I care about your goals; no, I'm not going to lower my bar to make sure you meet your deadline. We need to address this differently. We need to make sure we have a process, a way for you to move faster, and that way is not by lowering the bar. That way is by improving what we can do for you, so you can move faster.

I
I think that's the last one, so we have time. So, okay, if I can summarize the ideas here: I'm glad that we are seeing the problem for what it is. I'm not saying that's a novel contribution, but there is a tension between these two needs, right.

I
I believe that, as a community, we all agree with the maintainer point of view: the product needs to keep getting better.

I
We cannot... if we regress the product, then there's a tragedy of the commons, and the OpenTelemetry collector will fail, right. So we don't want that to happen; it's kind of killing the golden goose. In terms of improvements...

I
One piece is process improvements that help everyone, right. So, as you mentioned: having guides, institutionalizing design docs, and giving people clear guidelines on how to avoid purely wasted time. There is a norm: ping this person and they will get on it; if they waste one week of your time, no one benefits. So those are kind of pure-benefit things. And then there are things like having a roadmap, so you know which contributions are and are not welcome.

B
All people... but, Punya, again: unless you can actually clearly see that there is a larger goal here, and that it's aligned with the long-term roadmap of the project.

I
Right, absolutely. But so, having that long-term roadmap be visible, so that people can align what they're doing to it: those are all good things. The third thing, which I think we haven't talked about here, or at least I have not heard so much, is: what is the escape hatch? And this is why I'm talking about an escape hatch. For example, can you do out-of-tree development? Should we go...

I
Should we guide people to doing out-of-tree work and using the collector builder, so that they are not polluting the commons, but they are able to move? Should we shift the norm of the community to: okay, ship. If you want to have an experimental tag in your repository, in your build of the collector, go ahead and do that, right. You are taking a certain risk, but then you are taking the risk; you are not pushing the risk onto the community.

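For reference, out-of-tree development with the collector builder is driven by a small manifest that lists the components to compile into a custom distribution. The sketch below shows its general shape; the distribution name, module path, and version are placeholders, not real components.

```yaml
# Builder manifest sketch; module path and version are placeholders
dist:
  name: custom-otelcol
  output_path: ./dist
processors:
  - gomod: github.com/example/experimentalprocessor v0.1.0
```
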
A
For the first one: I would like everybody, when they see pain points like this which are, let's say, recurring: please submit an issue, so that we can see what we can do about that. That's your first point. The second is the roadmap. I completely agree with that, and we actually have a roadmap for logs, and I think it proved to be very useful. I don't see a roadmap for metrics; I would like somebody who is interested in creating that to create one. And for the escape hatch...

G
Yeah, Punya, against your point about the escape hatch: I really do appreciate AWS trying to be open source, and trying to push us to improve the process, and working with us.

G
If you use the escape hatch for everything, then you're no longer open source. So I think it has to be a balance between doing these kinds of things, because otherwise you just become a user of this product, of this thing; you're not a contributor to this thing. So I really want to say thank you to AWS, because they are super pushing us to improve the process and do the right thing. Sometimes, again, we have these...

G
...these conflicts that we have, like today, in terms of interest and priority. But I really believe that what they are doing is the right thing to do, personally. Unless there is something that we completely say no to, "no, we don't support that", then yes, go for the escape hatch. But when we have these things where we're looking for improvements and stuff, I think it's good to have this conversation. I hope I'm not missing their deadlines too...

G
...badly, or things... I'm trying to do that. But I also don't think that going through the escape hatch... and maybe this is something that I would like, at one point, to understand from you: why do you want to move the Google thing out of the contrib repo? Because I heard that from George Stewart, and it would be interesting to hear what your motivations are for that.

I
Yeah, I'll write that up separately. I think the Google thing is a little bit different. My argument for the Google exporters is actually substantially different from what I'm saying here. So let me move that away for a little bit and focus on what I'm trying to say here.

I
I hear you, Bogdan, that there is a risk.

I
Once the escape hatch becomes institutionalized, people say: well, the OpenTelemetry collector exists, it's the core of what I'm doing, but I'm just going to build a whole bunch of stuff. Maybe it's even proprietary; it's not even open-sourced on GitHub; it's living in my private repository, and I don't care about open source. I think that's absolutely a valid thing to worry about. But...

I
I built this thing in my downstream repository; it's labeled experimental. I think it's great that users have used it; the performance is fantastic. When we have a discussion here about trade-offs, I can come with data, and then you can say: okay, I understand why this is a good idea, I'm willing to upstream it.
B
What I would say as a counter-argument — again, not an argument, just a discussion — is that you open this hatch of downstream experimental components, but the very fact that they're sitting in an AWS repo or a Google repo means that they're being validated and maintained by us, and that in itself causes fragmentation.
B
So again, we have to be very clear on the project that core has the standard and the quality and the integrity that is preserved, and is fully available to everybody to use stably. I mean, there is a downside to this, right?
I
There is a downside. I'm not pretending that this is an unalloyed good. But okay, I think I've heard — we're close to the end of this thing, for this out-of-tree question.
I
So I think for metric transform we already have the next steps. My understanding is Rehan has presented his doc covering both why to have metric generation and the design of metric generation. I am going to put together a doc going a level out: what is our roadmap for transformation? That is a narrower topic than metrics. I will see what resources I can find, but say: here is transform; transform has these functional limitations — this is the functional boundary.
I
You can add more stuff to transform once you make a certain technical change. Then here's the functional boundary of transform, and here's the functional boundary of generation; maybe we decide that means they should actually be the same thing, or not, right? And then, super long term, we want to get to this set — vaguely — of DSLs for doing this kind of stuff, and when that happens, they will subsume the existing things, or they will fit into the existing things, and here's how. Yeah.
G
A couple of things. First, I want you to address when this should be moved to core. One of the reasons why it is in contrib is because it was an experimental thing developed by James, and we said we're not going to put it in core.
G
One thing we need to address is when this is going to get into core. Second thing: I think the DSL, at least for filtering, is a much more urgent thing that we need to address as soon as possible if we want to stabilize configuration for these components, because without that we cannot stabilize configuration. That being said — FYI, Rehan — a solution for the experimental tag is not actually to put an experimental tag, but to put "experimental" in the name, in the type of the processor. So then —
G
Like that. So then at least one thing: we reserve the name — when we have the stable thing, we reserve the name of that component and don't have conflicts. At least that is a good thing. Another thing — anyway, Punya, I will wait for that vision doc and such, and we can start discussing. I think it's a good step now.
C
G
Rehan, if you really have a big deadline, just put the name "experimental" — and probably that's something we can say. One thing that Punya has suggested is maybe documenting in the README or CONTRIBUTING that things starting with "experimental" follow this process and so on.
F
G
D
F
B
I think — Bin Fang, let me suggest a couple of things. One is that Punya and I will also take a look at it today, and, as, you know, neutral readers, if we cannot figure out what the goals of what you're changing are, and the other fundamental assumptions you're making, then we'll tag it on and edit.
F
B
C
K
L
K
K
C
K
Okay, yeah. So we are sharing the screen; we can probably start the discussion. Meanwhile, let others join.
K
K
K
K
Okay. Looking at the CMake configuration — I'm just looking at CMake for the SDK — we have one sdk target, which basically includes all the header files, if I'm not wrong. Then we have one for versioning, and then we have common, trace, metrics, where basically each directory within the sdk folder is a separate target, and a separate library gets created for each of them. And the plan is to have a single SDK — probably an archive or a shared library or anything — sdk, which will include all of them.
L
I understand the reasoning for that. It seems logical to me, because for consumers it is usually easier to deal with one path to include directories and with one single consumable target.
M
Yes, but I have a concern: if we create an sdk CMake target to include every SDK component, maybe that's going to be too big for some users — like for an exporter component — or maybe the user doesn't want everything to be linked. Probably we can create something called sdk core that everyone using our SDK would link.
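The split being proposed — per-component libraries, a small core everyone links, and an optional umbrella for consumers who want everything — could be sketched in CMake roughly as follows. All target and source names here are illustrative assumptions, not the project's actual ones:

```cmake
# Hypothetical sketch: each SDK directory remains its own library.
add_library(opentelemetry_common   src/common/timestamp.cc)
add_library(opentelemetry_trace    src/trace/tracer_provider.cc)
add_library(opentelemetry_metrics  src/metrics/meter_provider.cc)

# A minimal "core" target that every consumer of the SDK would link.
add_library(opentelemetry_sdk_core INTERFACE)
target_link_libraries(opentelemetry_sdk_core INTERFACE opentelemetry_common)

# An umbrella target bundling all components, for consumers who prefer
# one thing to link; size-sensitive consumers pick components instead.
add_library(opentelemetry_sdk INTERFACE)
target_link_libraries(opentelemetry_sdk INTERFACE
  opentelemetry_sdk_core opentelemetry_trace opentelemetry_metrics)
```

Because the umbrella is an INTERFACE target, it adds no extra compiled artifact; it only aggregates the existing per-directory libraries.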
M
L
For a shared library, that's the case, but for a static library I think it's totally fine, because you're not going to be linking all of it. It depends how we hook up the exporters. You mean the concern is that even if you're only using one exporter, you end up bundling all of them because they are —
L
K
L
I'm just saying that when you statically link, there is an opportunity to optimize — unless you have a function or method which does something like "add instrumentation library" and explicitly adds all of these classes. So it's a late lookup of what exact tracer provider you need. In that case there is no opportunity for optimization, because the entire set is going to be bundled, right?
M
Right — for static linking, I think even if we combine all of this together, maybe there's no extra overhead, and the linker could eliminate the unused symbols. But I think for our build script the user can still provide some build variable, like CMake's BUILD_SHARED_LIBS, to build our SDK as a shared library. That is still an option.
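The trade-off being discussed — static by default, shared on request — is conventionally expressed in CMake with the standard BUILD_SHARED_LIBS switch. A minimal sketch (the target and source names here are illustrative assumptions, not the project's actual ones):

```cmake
# BUILD_SHARED_LIBS is a standard CMake variable: when ON, add_library()
# calls without an explicit STATIC/SHARED keyword produce shared libraries.
option(BUILD_SHARED_LIBS "Build the SDK as shared libraries" OFF)

# Static by default; becomes a shared library when the option is ON.
add_library(opentelemetry_trace src/trace/tracer_provider.cc)
```

A consumer would opt in with `cmake -DBUILD_SHARED_LIBS=ON ..`; in the static case the linker can still drop unreferenced objects, which is the optimization opportunity mentioned above.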
F
K
M
L
When we create a single SDK binary, do we still let the consumers bundle individual pieces, or not? No —
L
I am not against the idea — I am not against the idea — but I agree with what Thomas is saying about the sizing considerations, like how do we build a combination that excludes things? Because if I only need one exporter for my manually instrumented app, I don't want to bloat my app with a two-megabyte library; I want something really small. And maybe as-is it's not going to fly, but it may work with some tweaks, like —
L
L
I mean, this is probably all custom and should not be GA-blocking; it's more for when vendors cook their own thing, right?
L
L
Yes. And the other thing is, for the static library — I don't remember exactly how we do that "add instrumentation library," like when we register them. Is there a way to exclude the factory which registers all of them, and instead build a static library without that factory, when you know precisely which exporter you have — like, you only need one?
L
Then what happens is you'd have to provide that factory with just one thing, or not even use that factory at all; you just say "my OTLP tracer provider" or whatever, and then when you statically link, all the unused —
L
L
K
L
Yes — and I would even say: can we relax a little bit and say that for CMake we do this, and it's more a kind of gesture for those who need it, so that we don't impose an obligation on the Bazel build folks to do the same? Let's try it out; if it works, great, and if there's good feedback that yes, we like that single target, then we can say, oh yeah, let's add a similar approach to Bazel — without restructuring Bazel much, without unnecessary churn in the Bazel build.
L
I mean, the Bazel build right now should be structured the same as CMake, and when we add something in CMake, that imposes an obligation to add something similar to Bazel as well, because we support both build systems. I mean, cool — if somebody wants that in CMake, let's add it as an optional thing for somebody who needs it, and if there's strong feedback that yes, this is the right way to consume this, we need a single target — then let's see how to do that for Bazel afterwards. Okay — and both should not be GA-blocking, yeah.
K
K
K
Okay, that should be fine, and probably we can go to the next item, if we all agree on this. So I just wanted to talk about this: we started looking at at least a release candidate for the 1.0 release, and for that I have created a few —
K
K
— with the compliance matrix within the specs. At least we should have clarity on all the places where we are compliant and where we're not compliant. So, just to open one of the tickets — this one is basically "what is span." This complete list is coming from the compliance matrix; the only thing is that it is coming from the 1.0.0 compliance matrix.
K
This is not anything new added after 1.0.0. So for our 1.0 release we can focus on compliance with the 1.0 matrix; anything new coming in, it's good if we add it, but that's not something we should be obligated to really support for our 1.0 release. So I've just added this. I think the ask would be — I've basically split it up, at most, based on the different components, and split that out into multiple tickets.
K
There's one for span, this one for trace, the span package, resource, context propagation, exporters, and similarly. So probably we can divide it among ourselves — feel free to pick up any one of them — and let's start doing the compliance check for the 1.0 release candidate. We may not be compliant with all of it, but as long as we have some plan for the things where we are not compliant, we should be good.
L
K
L
And maybe I can first start with that — I mean, if we can divide it up. Because I think Tom has better experience with OTLP in general, but I can take a look at the in-memory ones and such — we can split it.
M
K
Okay, so this — okay, so I think this had standard output and in-memory.
M
Do we need to make sure every checkbox is checked, or completed?
K
So if we are compliant — yeah, probably what we can do — I mean, we can discuss how we should do it. If we have validated whether something is compliant or not, we can tick the box, and either quote it in the description if it is not — I'll say: if it's compliant, you can tick it, and for those that are not compliant, you can put a note in the description.
L
Maybe you can even — well, I don't know how you feel about it, but you can assign two people to one issue; you don't have to split. You can just say: Max, two of these, okay — and Thomas, OTLP.
K
K
So, let's see. But let's target to at least ensure that these are all assigned across the approvers. I'll check with Johannes and Josh also — it would be good to have them take some of those items there. Okay, so yeah, that was one of the things. Apart from that, I just wanted to talk about the release milestone — I think it's here only.
K
We — I mean, as of now we do have — I'll just go down and then click this — there is one milestone we've created for a release candidate, and I've tried my best to add all the issues which really should be fixed as part of it. But it would be good if we all can go through it and see if there is something which is missing, or something which should not be there, as part of the exercise of the compliance validation.
K
So if anything should not be here, just put a comment on it saying that it should not belong here, with some reasoning, and we can pull it out — I'll pull it out from the milestone. The target as of now is the 30th of May, which is something we have communicated to the TC — to the OpenTelemetry TC.
K
But let's see — let's figure it out. Once we have done all the compliance validation, we'll have a better idea of where we really stand. So yeah, that's something I just wanted to talk about, and —
L
Yes, so let me just elaborate. I wanted to achieve exactly what Evgeny mentioned, but I also want to overlay custom contrib exporters. My motivation here is mainly that build flags must match — build contents must match — between the main release and contrib.
L
Let's say we do some refactor that breaks the exporters. As long as we have a standard, established process for validating it — again, a refactoring in main does not have to comply; we can easily change things in main — but it would be great to see if the contrib repo is broken after a change in main. So I'd like to have an overlay to build not just the main standard set of exporters, but main with contrib's exporters. And for the contrib exporters —
L
— what I'm proposing is to maintain an exact, precise layout, as we have in the main repo — like /exporter, with CMake and Bazel both supported the same way as for anything else. So then, I'm showing in that issue — let me actually copy it into the document; I'll copy the link to my last response, just a moment, I'll paste it here. I'll add this comment at the end, yeah — well, if you can open it on your screen, yeah.
L
So again, it shouldn't be a mandatory check.
L
We can also set it up, possibly, as an optional check, because that way, when we run an optional check, we can immediately see if contrib got broken by some mainline commit. Because — like we were discussing before — somebody contributes a bunch of exporters, and as of now everything's working fine; people are happy, they went away, right, because they've done their job. Then something changed in main — like some API change; even a subtle change may sometimes require a minor corresponding change in the matching exporter.
L
So let's say we broke it at some point. Right now we don't have any process to validate and identify the moment we break it. What I want to achieve is to get a notification: hey, by the way, you actually broke contrib. Don't worry about it — you still proceed with the merge, just merge — it's not about blaming a person.
L
It's more: this was the moment when contrib got broken. And then those who care for the contrib repo — for a specific module — would go and fix it up or adjust; or maybe it's a good gesture for the main repo maintainer to also adjust that minor thing in the contrib repo accordingly.
L
So again, my very short-sighted view: I want to get fluent in, and I want to build the main SDK — with the matching API, SDK and build flags — and fluent. Fluent right now is just merely an overlay, pretty much, on top of the main build tree, and I'm showing how it's not a lot of lines of code — it's about 20 lines of CMake. And what also bothers me is that for Bazel you actually don't have to do anything to implement the same construct as for CMake.
L
I have to have an extension point; but for Bazel, you can just copy the directory on top of the build tree, and Bazel would pick it up. So I would like a similar feature where I'd say:
L
if I had already checked out the contrib repo, I can say: build with this contrib, by specifying the path to it. Or — the second part of this CMake change — if I specify "with contrib" as part of my regular CMake build, it's also going to fetch me the latest mainline contrib, overlaid in a separate directory. There's a CMake command for that, which is FetchContent; then I add that directory and I build the entire set, and in that one run I get main built and I get contrib built.
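The second variant described here — an opt-in flag that pulls contrib into the same build tree — might be sketched roughly like this in CMake (the option name is an assumption for illustration; the repository URL is the public contrib repo):

```cmake
option(WITH_CONTRIB "Also build exporters from the contrib repo" OFF)

if(WITH_CONTRIB)
  include(FetchContent)
  # Fetch the latest contrib sources into a separate directory at
  # configure time, then add them to this same build tree so they are
  # compiled with identical flags and the same compiler as main.
  FetchContent_Declare(opentelemetry_contrib
    GIT_REPOSITORY https://github.com/open-telemetry/opentelemetry-cpp-contrib.git
    GIT_TAG        main)
  FetchContent_MakeAvailable(opentelemetry_contrib)
endif()
```

Without the flag, nothing is fetched and the main build is unchanged, which matches the "not paying any extra fee" point made below.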
L
L
K
K
L
So, first of all, this option is not the default, and I'm not adding a submodule — so you're not fetching it, you're not paying any extra fee. It's plug-and-play.
L
So what I'm asking for is an extension point, and the purpose of that extension point is good — I have a good intention in mind: to validate that the baseline of main plus contrib is building, linking and working, and I can do it in one run. That's how I kind of see it. You guys know I'm working on the fluent exporter right now, and I'm trying to make my own life pleasant; while making it pleasant, I would like to share how I would have done it.
L
L
L
First of all, this has been added like this: FetchContent. FetchContent itself has certain restrictions — for example, if you use it and you add content from another main project, then you also have to specify a separate binary output directory.
L
L
So it's like you build two at once — somebody thought about this, which means somebody already uses it that way. And in general I am mostly inspired by, let's say, the Android OS build system, where they use a separate repo tool which overlays directories into a single build tree. That works such that if you have something in the vendor directory, it's also going to build all of the vendor directory and merge it into the combined build image.
L
This is main, which can build just as is; but there's main-with-contrib, where you'd also add it, and then again you get a single target. That's where I mentioned what Evgeny was proposing about the SDK target — I would even elaborate more on this. I would want a single consumable target which has the standard exporters and maybe my exporters which I need, because then, from the instrumentation standpoint, you still use the same API.
L
It still uses the same library, but in your code you'd say: by the way, I want the fluent exporter — and you use it. But then you can also say: oh, by the way, I changed my mind about fluent; now I have an OTLP agent up and running, and I want to use that instead.
L
So then, from the build perspective, you don't change much: you'd say, oh, now the class for the exporter is the OTLP exporter — you change one line, and you don't change any of the rest. That gives you freedom to migrate across the standard set of exporters plus the non-standard ones, as long as your build produces a superset: standard features plus extras. Even the Linux kernel, I'd say, has a modular build architecture where you can plug in an optional piece. That's what I'm trying to propose here.
L
Think of the hassle of maintaining a separate build with the same flags and the same options. Let's say you want to build with the STL — not with the nostd library but with the standard classes. Now you actually have to build main first; then you would have to remember what flags you used to build main; you'd also have to pass the same build flags and defines to the exporter.
L
Because the exporter uses some of these classes — like the nostd classes — you then also have to tell the exporter where to find main, which is, yes, another hassle. Whereas in this model, when you add the submodule — well, not submodules, sorry for that — when you add a subdirectory into main, it already intrinsically knows all of these properties that you passed to the main build system. So you don't have to redo that step again; it's plug and play. Again, Bazel does the same thing.
L
But since CMake cannot, I need an extension point to say: hey, add the subdirectory to the build from here if it's there; if it's not there, it's not even altering the main build at all. Compare Prometheus, for example — we have that option with Prometheus, which is not even a standard thing; I guess we would have to remove it, because the metrics API is not even approved, right? I'd say the "with Prometheus" build option —
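The extension point described above could be as small as a guarded add_subdirectory: if the directory is absent, the main build is untouched. A minimal sketch, assuming a hypothetical cache variable name:

```cmake
# Optional path to an already-checked-out contrib repo; empty by default,
# so the main build does not change unless the user opts in.
set(OPENTELEMETRY_CONTRIB_PATH "" CACHE PATH "Optional contrib checkout to overlay")

if(OPENTELEMETRY_CONTRIB_PATH AND EXISTS "${OPENTELEMETRY_CONTRIB_PATH}/CMakeLists.txt")
  # The second argument gives the overlaid project its own binary
  # directory, as CMake requires when the source dir is outside the tree.
  add_subdirectory("${OPENTELEMETRY_CONTRIB_PATH}" contrib)
endif()
```

Because the overlaid directory is configured inside the main project, it inherits the compiler, flags, and defines of the main build, which is the whole point of the overlay.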
L
— right now is even more invasive, because it forces you to use submodules to fetch a ton of Prometheus libraries, right? And you fetch them for nothing — it's a waste, because if you're building without it, you just wasted disk space recursively fetching the prometheus-cpp client with all its sub-dependencies.
L
I don't know — it's big. Now here, it's doing nothing by default: if you're not building with contrib, it's skipped; you're not even fetching it. Only if you're building with contrib does it fetch the content and add that content as a subdirectory, and then the subdirectory's CMakeLists in there has all the build flags, the compiler, and knowledge of where the other projects are.
L
It allows me to, let's say, steal your Zipkin exporter, copy it to an alternate place, modify it to my liking, and add it with this mechanism. It's just more of a developer add-on — a build-flavor simplification — and the CI to validate both the main and contrib ones. And I can help with setting up that CI — I don't know, maybe in the contrib repo itself, just so that we run it nightly.
L
For example, if you are so worried about a change in main being flagged as breaking contrib — let's not even do that as part of the main CI, but at least we can do it as part of a nightly contrib CI, and that way we can monitor when contrib got broken.
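A nightly contrib check of the kind proposed here might be sketched as a scheduled GitHub Actions workflow. Everything below — the workflow name, the flag passed to CMake, the layout — is an illustrative assumption, not an existing workflow:

```yaml
# Hypothetical nightly job in the contrib repo: build the latest main
# together with contrib, so a break in contrib is noticed within a day.
name: nightly-contrib-against-main
on:
  schedule:
    - cron: "0 3 * * *"   # once a night
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          path: contrib
      - uses: actions/checkout@v2
        with:
          repository: open-telemetry/opentelemetry-cpp
          path: main
      - run: |
          # Assumes main exposes some opt-in variable for overlaying a
          # contrib checkout; the exact name here is made up.
          cmake -S main -B build -DOPENTELEMETRY_CONTRIB_PATH="$PWD/contrib"
          cmake --build build
```

Running this on a schedule, rather than per-PR on main, matches the suggestion that the check stay out of the main CI while still pinpointing when contrib broke.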
M
L
So there's a subtle change if you want to do add_subdirectory — what you're proposing, in essence, is, from contrib, to do an add_subdirectory of the —
F
K
So there are two things here. If we are asking about convenience for the end user — that they want fluent, but they should be able to use the complete package, a single package — yeah. And we're talking about the other thing: ensuring the contrib repo does not get broken because of any change happening in the main repo. For that, having the nightly builds in the contrib repo will definitely help.
L
Just as I listed in the bullet points, I think my key point is a single build artifact which is built with matching build flags and settings for both main and contrib — because contrib cannot be built without main, whereas main may be built without contrib. Contrib components, I guess, may provide their own binaries as well, I mean.
L
For the other scenarios — such as when we provide examples, like instrumentation examples — it's not totally detached stuff. For example, our nginx instrumentation does not need to merge into the OpenTelemetry SDK library. However, exporters are the key case where we need to build them with the mainline.
L
So I don't know if you would like to rename this "with contrib exporters," because that's the only thing I am after — okay. That still leaves an option to build contrib as a separate, detached project which does not depend on main, and in that case you can have some examples or things like that; but in my case, it's that I would like to ship a single target which includes both the standard and the non-standard exporters.
L
Again, this is something I wanted to deliver by GA — there's no dependency of the OpenTelemetry v1 GA on this work, but in terms of timeline I wanted to get some of that work done in parallel.
K
I mean, I don't see this as a big change to the CMake configuration. The only thing I'd be more worried about is whether it's a standard practice followed in similar scenarios — adding a different repo which is not related, which the main repo is not directly dependent on —
L
— and we are going to build that. That's my point. My argument here is: it's maintained by the same organization — our organization — and contributions to it are presently restricted to members of the organization. And since the organization imposed some restrictions and structure — we couldn't just randomly go and create a repo; we had to follow a certain structure — as we create a structure, let's also describe a way to consume that structure, as an add-on.
L
My expectation is that maybe there's going to be another vendor exporter, and why I'm making that assumption is that I'm looking at some other projects which previously routed data to different clouds, like Google Cloud. I understand that we focus on OTLP, and hopefully OTLP is the central piece for everyone.
L
There could be other flows, and I can refer to examples here.
L
Sorry, somebody came in here. Anyway — if you guys feel like this is just not working, I'd like to hear a more concrete proposal. I'm showing 16 lines; if you are telling me that there's another way, show me your 16 lines, how you would have done it.
K
I mean, if you are asking me — probably I would not be doing it at all. I'd let the implementers of exporters decide how they want to provide the build target, without touching the main repo, and let them decide whether they want to have it. We already have the build for the main repo; let —
Let.
K
— them do it for their own exporter, and let the end user decide how to use both of these different targets in their own applications, instead of providing any convenience in the main repo. That's totally what I thought. That does not mean this thing should not be there — I'm totally up for it, if —
L
For the other languages — I'm wondering if there was a structure, because we have this contrib repo created for all the other languages as well, right? I —
K
L
To justify that, I'm looking at the Go repo right now; I see that they do impose a certain structure on exporters, for example. Which one — which repo? The opentelemetry-go one. I —
L
So my point is: this is a structured contrib repo, and if we provide some guidance on how to structure additional exporters in there so that they can be built against main — this shouldn't hurt anybody, right? How does this hurt anybody in the main repo?
L
K
L
Talking about opentelemetry-go slash contrib — oh sorry, yeah. So if you go to contrib — yeah, they do have this same structure, like exporter/metric, and propagators, and all this; so they do follow structures. For example, if we go to exporters, it has three different exporters in there, like cortex and statsd — different destinations.
L
So what I think is, for the contrib repo in our case we also have /exporters. I should send you the link — if you still have a couple of minutes, let me just quickly show you. I think I pushed something in a branch which I can share.
L
L
K
L
Yeah — so go to our — no, it should be in the chat window. But go to this one, yeah. Yes, so here I'm adding things — for example, if you go to the exporters — let's start with the CMakeLists, just real quick. The root-level CMakeLists right now adds just the exporters.
L
If you go to exporters — exporters actually enforces precisely the same structure as we have for the main repo's exporter directory, so the layout is pretty much identical. Then, if you go into this one, it has this CMakeLists.txt which includes all of these, and it refers to targets which are contributed by —
L
— see, that's the main thing here. So I still need the OpenTelemetry API; I still need OpenTelemetry trace and resources, and the entire thing must be built with the same build flags and with the same compiler. So it must be built at the same time; otherwise my binary is not going to match, not going to work with the main repo libraries.
L
So how would I inject that? Because otherwise it means that every exporter developer must set up their own custom build system on their own. Rather than going there and doing that, I'd like to propose a structure in the contrib repo, and the motivation for that is that the contrib repo is owned by the OpenTelemetry organization.
L
It's been created and authorized; individual users have been, you know, allowed to make all these contributions right now. So in essence, the CMake file you would see here in the contrib repo is absolutely identical to what you would have designed in main.
K
K
K
I mean, that's a way of consumption we suggest for any —
K
L
K
L
K
L
L
Sure, let me try that and see if that is a viable option, and if it's not, I'll give my feedback on this.