From YouTube: 2021-05-19 meeting
A
Right, welcome everyone. Let's start; let's see what we have on the agenda.
D
Good morning, everyone. Yeah, this is me; I'm from Amazon, AWS. We have our engineers here trying to contribute to the issues on the milestone list, and after looking at the issues we got a bunch of questions — probably seeking some direction or advice that would help us with the contribution. And hi.
E
Tigran, we've been going through the backlog for phase one and phase two — the items that are required for achieving 1.0 stability for the collector — and these are specifically issues from those backlogs that we would like to get some clarity on, because we're looking to pick up some of the work on them, and it would be great to get a bit more firm on the scope. The descriptions are generic enough, and it would be good to get your guidance, Tigran — I think Bogdan hasn't joined yet, right — so it would be good to get your guidance here.
A
Yeah, from what I remember, these two issues are quite different. One is about using the OTel metrics API for our own metrics instead of OpenCensus, which we cannot do yet — you're completely right — because the API is not yet ready; it's not stable. We cannot do that. The second one is about hiding the implementation details so that we can continue using OpenCensus for reporting our own metrics, but the components which use this obsreport package
A
They are not exposed to that fact, so that when we later swap OpenCensus for an OpenTelemetry implementation, we don't break the components. That's what the second one is about: we just expose our own API, the components call that, and when we later migrate to OpenTelemetry we do not break the components.
D
Yeah — thank you, Tigran, for answering. For the first question, what I hear is: are we saying the first one is blocked right now, and we cannot move it forward before GA, right? That's correct, then.
E
But is this different specifically for — so, what is 1.0 trying to achieve?
A
The API — we don't want to break that. We don't want to break components. Take a tracing component: a tracing component exposes its own metrics, and we don't want to break tracing components. I don't think we can honestly say that we are stable for tracing if every single tracing processor, exporter, or receiver can be broken down the road. So we will need to have the guarantee that if you're using our internal API for exposing your metrics, then we're not going to break you. That's what it is about.
D
Yeah, I agree. So I think the conclusion for the first one is: given the goal right now, we are going to move it out of the p2 list, and probably once we are doing the metrics GA or other work we're going to start to pick up that first item.
D
Okay — that helps. So for the second one, I actually started working on it. I see, for example, we have this obsreport component; it has three public APIs using views, so I can hide them — like, create a structure to hide those views in a new struct. Is that what we want? I just want to confirm. Yeah.
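A minimal sketch of the struct-hiding idea just discussed — components record their own metrics through a small internal interface, and the backing implementation (OpenCensus today, OpenTelemetry later) stays hidden behind it. The names and API here are hypothetical, not the collector's actual obsreport surface:

```go
package main

import "fmt"

// Recorder is the hypothetical internal API that components would call.
// Because components only see this interface, swapping the backing
// implementation from OpenCensus to OpenTelemetry cannot break them.
type Recorder interface {
	RecordAcceptedSpans(receiver string, n int64)
}

// censusRecorder stands in for an OpenCensus-backed implementation;
// here it just stores counts so the example is self-contained.
type censusRecorder struct {
	counts map[string]int64
}

func (r *censusRecorder) RecordAcceptedSpans(receiver string, n int64) {
	r.counts[receiver] += n
}

func main() {
	// A component holds only the Recorder interface, never the views.
	var rec Recorder = &censusRecorder{counts: map[string]int64{}}
	rec.RecordAcceptedSpans("otlp", 5)
	rec.RecordAcceptedSpans("otlp", 3)
	fmt.Println(rec.(*censusRecorder).counts["otlp"])
}
```

Replacing `censusRecorder` with an OTel-backed struct later would then be invisible to every component.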
D
Cool, thanks — I got it. I think I'm clear on the first item there. Thank you, okay, cool. For the second one — also, me and the team went through the list for these four items, and I think we need some decision here, so we know: should we continue to pick them up, or should we close them?
E
This is 2565, okay.
A
We changed the port. We kept the old one to have some sort of grace period for people to migrate. I think we kept it for long enough; maybe it's a good time. I don't know.
F
Clean up all the references to the old port — at least whatever we can — and then remove it; or leave it for another release or something and then remove it. Let's at least make sure we do that, because there may be other libraries — for example, the SDKs — that still use the old one. We need to clean this up.
F
There's stuff like that that still uses the old one — we need to make sure that nobody uses the old one. I really want to have that before GA, but to do it right we need to look through all the organizations, search for that port number, and if we find it, file an issue to remove it in all the repos — either by changing the defaults or otherwise.
H
There are a lot of docs and examples, so it should be fairly simple to go through and clean up, but yeah.
F
That was exactly the point just made — there is a lot of documentation that still uses the old one. Anyway, we need to find those.
F
So anyway, the cleanup is good no matter whether we end up removing it or not — the cleanup is always good, because we changed that. So we need to do the cleanup. Let's identify all the changes that we need, and where we found them, and then make a decision on how long we should wait until we remove it, based on where we find it. If we find it in the API or SDK, maybe wait a bit longer; if we find it in examples or documentation, we can just remove it.
F
All right — somebody should summarize all this discussion in the issue, for others to see what is left to do and what we need to do.
F
So the whole thing is: we have WithStart in the component helper, and then in the exporter helper we duplicate that — for good or for bad, for ease of use. I was thinking: should we remove the duplication and have just a WithComponentOptions option that accepts component options, which would include WithStart anyway? See the PR — there is a prototype PR.
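A rough sketch of the deduplication idea: instead of the exporter helper re-declaring its own copy of WithStart, a single forwarding option accepts any component-level option. The type and function names here are illustrative, not the collector's actual helper API:

```go
package main

import "fmt"

// Shared component-level settings and options (the component helper's side).
type settings struct{ onStart func() }

type ComponentOption func(*settings)

// WithStart is defined once, at the component level.
func WithStart(f func()) ComponentOption {
	return func(s *settings) { s.onStart = f }
}

// Exporter-level options. Instead of duplicating WithStart here,
// WithComponentOptions forwards any component option through.
type exporter struct{ settings }

type ExporterOption func(*exporter)

func WithComponentOptions(opts ...ComponentOption) ExporterOption {
	return func(e *exporter) {
		for _, o := range opts {
			o(&e.settings)
		}
	}
}

func newExporter(opts ...ExporterOption) *exporter {
	e := &exporter{}
	for _, o := range opts {
		o(e)
	}
	return e
}

func main() {
	e := newExporter(WithComponentOptions(WithStart(func() { fmt.Println("started") })))
	e.onStart()
}
```

The trade-off discussed in the meeting is ergonomics: callers write one extra wrapper call, but each helper no longer maintains duplicated option definitions.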
F
I also commented on the issue, but look at the PR to see how the changes look. Also, anyone — like Anthony — anyone who... yeah.
E
Yeah, absolutely — but again, I don't know if Jay is here, or others, and I don't see Carlos. I really request some of you to be more active as approvers, because there are a lot of PRs that are just sitting there because the approvers are not taking a look. I totally understand you guys are busy, but if you can be available to do it...
F
We can discuss that as the next item, if there are outstanding PRs.
F
On that default TLS one — I don't have any context.
B
So this is from one of my co-workers at New Relic. I believe he was saying that there was some part of the spec that the collector may not have been following — that's the only context I have. I think that, according to the spec, you have to specify a full URL endpoint, whereas in the collector maybe you only put the host in.
B
I'm really not sure, but that's the only context I have.
A
Scroll down here — I think I commented on this. Can you scroll down? Yeah, I remember now. So this is an inconsistency, and the spec does not say how it should be. I think some of the SDKs require the http or https scheme in the URL, so essentially they require a URL, but the collector requires an endpoint, not a URL — it takes a hostname and the port number.
A
So this really requires us to standardize on one thing and do the same thing in the collector and in the SDKs. That's what the problem is about, and we should likely fix it. Whatever we decide on, if it's different from what we do in the collector, we should likely fix it in a way that accepts both the old and new formats, because a lot of people are already using the configuration without the https prefix.
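The backwards-compatible behavior being proposed could look something like the sketch below: accept both the old collector-style `host:port` endpoint and a full URL, normalizing to a URL. The function name and the default scheme chosen here are assumptions for illustration, not anything the spec has decided:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeEndpoint accepts both an old-style "host:port" endpoint and
// a full URL, and returns a full URL. Defaulting the scheme to https
// is an illustrative assumption; the spec would have to pick one.
func normalizeEndpoint(endpoint string) string {
	if strings.Contains(endpoint, "://") {
		return endpoint // already a full URL, keep as-is
	}
	return "https://" + endpoint
}

func main() {
	fmt.Println(normalizeEndpoint("localhost:4317"))      // old format, scheme added
	fmt.Println(normalizeEndpoint("http://collector:4317")) // new format, untouched
}
```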
A
So that's what I would do. As part of the spec — the spec should define how endpoints for the OTLP protocol are supposed to be specified. Is it with or without the scheme?
B
Yeah — unfortunately, the other Alan is out on vacation today, otherwise I would have asked him to join, but I will let him know to communicate with everyone.
E
Tigran, are you going to add a comment on the spec needing to have more clarity?
F
I added a comment and assigned it to you, to summarize the issue and what the next steps are. Just that — nothing more; just summarize it for the other Alan.
E
I mean — should we be a bit more aggressive? We should move any of the issues that are not breaking changes or core changes — that are just nice to have — to phase three. I would request that because, as Anthony says, "soon" is not coming.
F
It causes anxiety, I agree. I think we did a good job moving things, but there are still a lot of issues unresolved. I know I'm responsible for creating more issues in the different phases, but my goal was to split ambiguous issues into smaller ones that people can take, so I don't think we added new work to these phases.
E
Yeah, I think it's just more clarity — and again, thanks for doing that. A couple of the issues are just more detail from the existing ones.
F
To be honest, to answer your question, I don't have any clue. I wish that somewhere in June we have a release candidate — which doesn't mean 1.0; it means a release candidate that needs to cook for another couple of weeks.
F
Stability — but yeah, let's aim for somewhere in June for a release candidate.
E
I mean, again, the issue is that — we've been looking at it from the AWS team, at least, to see where we can help, and I would request others to pick up some of the open issues too, whether you're using the collector or participating in the code base. But the thing is, Bogdan, we do need to have a clear plan, for many of our releases as well as for others who are releasing it downstream.
F
At this point I think we have a clear plan, so more engineers will definitely help. Now, there is a question, because a bunch of the things that you also need from AWS are metrics related, which takes time away from other things. More engineers working on the GA things would definitely help, I think.
A
I realize that people need these things, but either please wait with the proposals, or maybe implement that processor on your own in your own distribution — especially, Alolita, you have your AWS distro, so maybe temporarily you can have it in your own repository as a processor.
A
It takes significant time to review these changes — these other changes which are not necessary for the GA, like all the Prometheus changes and all the processor proposals. I'm not saying in any way that they are not important or are unnecessary, but if we could postpone them or do them elsewhere, it would help us move faster on the issues that are necessary.
E
Is it something you would consider — putting it into a separate repo so that we can actually test, merge, and have an experimental place with work ongoing, so that we can make progress on the PRs? And then, of course, the final reviews before it's available to be used would obviously require Bogdan's and your review, right?
E
Because I understand that for the Prometheus pipeline — we reviewed the PRs today, and there are at least 11 PRs in flight. Bogdan has been prolific in reviewing most of these and getting them in, but it takes time, and that again takes cycles away from being able to review the other changes that are needed for 1.0, as well as any new functionality.
E
Anything that's coming in also distracts from it, but the Prometheus components — which are, again, critical for metrics stability — need to be unblocked, whether there's some way to partition the code owners, which I think GitHub is not really very good at, or to give us more flexibility in being able to improve, merge, and test in an experimental repo.
E
Right — so GA is not 1.0, right? And 1.0 is specifically targeting tracing; right now it's still tracing stability. Is that your understanding, or is it something else?
F
Indeed, but we cannot delay some of the things, because we have a broken component which a lot of people are using, so we need to be mindful. What I'm recommending, Alolita: please ensure that for Prometheus we make it an Amazon team for Prometheus ownership — so probably we have Anthony and maybe Anna — and make sure that both of them are reviewing all the PRs before asking us to review.
F
Yeah, so let's create a team — maybe that's one thing we can do. Just create this team and assign all these PRs to the three people we mentioned, and then at least that will help us, because it means we have less to review and worry about.
A
It will help a lot. Just today, this morning, I reviewed two PRs — pretty small ones — but it still takes time, right? If they had already been reviewed and approved by an approver, I would most likely spend a lot less time on them.
F
We still trust them — it's still better to have the things pre-reviewed; that's different.
F
Okay, do we have other items? Yep — the stateful-for-metrics one. Ryan, hello?
K
I think at least the options — I was just thinking I want to review them with you and see which is the good option. At least I want to get a recommendation, then maybe we can decide.
F
Okay, I see — it's not a long document, so here's probably what you can do. I'd need 15 minutes for this, so ping me on Slack this afternoon and we can chat about the document.
E
Yeah — because, Bogdan, going back to your issue, 3185, on the existing phase one / phase two 1.0 backlogs, you've clearly called out the need for consolidation of the processors.
E
One for the span processor, then looking at the metrics processor and the logs processor. Right now, as you rightly pointed out, there is a proliferation of processors needed for different kinds of computations, and you've been marking some of the existing processors experimental, but we would like to see a more comprehensive design for that, and that's what we want to discuss, because there are obviously short-term approaches of marking...
K
Yeah, so the next one is related to your issue, and I was just wondering — I tried to summarize all that stuff, and it's still not finished, so I'm working on it. I just want to know: is there any other doc we are working on where I can collaborate?
F
So, as a short summary for everyone: we have a mess of processors right now in the core. To change things on a span, we have a span processor just to change the name; we have an attributes processor that is capable of changing attributes for spans and also for logs — but not other things on the span, and not the resource attributes, because we have another one called the resource processor for changing resource attributes. So they were added kind of based on need.
F
We added different processors, and I think a consolidation of those — indeed, sharing libraries and sharing code is good, but I think we should consolidate so users can see: okay, I want to change some property of a span, I understand I need to use a span processor; I need to change something on a log, I use a log processor; I need to change something on a...
F
...resource, I use a resource processor — or whatever pattern we pick, but let's be consistent and make things consistent here. That's the first problem in the issue. The second problem in the issue is: do we want to have a DSL for things like deciding the "where" statement — which spans do we want to change — and there are some proposals there. So anyway, it's a large issue, but Ryan...
F
Luckily, he volunteered to start a document and the process, so we should review and comment there.
E
Yep — so, Bogdan, per the triaging that we did yesterday, I'll also create separate issues for the span processor, metrics processor, and so on.
K
It's all in one issue right now, yeah. So I would maybe suggest something like: we should have one generic issue for this, where we can at least plan the designs — how many processors we want to see, what the high-level expectations from these processors will be. From that common generic issue, we can later break things into multiple issues.
F
3185 is Ryan's, for the overall things, and then we can split from there.
F
So let's have the overall discussion — design, decisions — in that issue, and once we agree on something we can start filing issues for the separate things we agreed on.
A
Just one more thing: let's make sure that this is for phase two, right? I would suggest we don't start implementing this right away. Let's give ourselves the time to understand what we want to do, and we will really address this in phase two. I have concerns about the suggested approach, so I will need to comment on it.
E
Right, right — because what is happening right now is that there are so many PRs coming in just for these processors, which really don't have a comprehensive design, and I'd love to see a more comprehensive one.
F
Let's start a discussion — people should comment there and we can follow up on that. Okay, thank you. Yeah, Bing, we need...
L
Hey, this is the team from AWS. We're doing some work on the Container Insights receiver, and while reviewing the PR, one of the reviewers found that there is duplicated functionality: there are two OTel components using pretty much the same Kubernetes informers to call the Kubernetes API servers.
L
I looked into the code, and it seems like the logic to call the API server and the logic for doing the processing or metric scraping is quite coupled.
L
So this is one thing. The other thing is: we are also introducing another way of calling these Kubernetes API servers, but we are kind of using a lower-level API to do it.
L
I just want to point out this issue, because I think that in the future, as more and more receivers and processors call these API servers, there is a need to develop common utils for multiple components to use. But the thing is, like I said, based on the current situation, designing an easy-to-use and extendable Kubernetes util seems like a big effort and will involve the owners of multiple components.
L
I don't think this can be done in a short time, so that's why I filed this feature request, and I hope the community, including AWS, can contribute to resolving this issue in the future. But in terms of the PR we are working on — due to the timeline of the project, I think we still want to introduce the probably duplicated functionality there, and we will put it into the AWS folders, so currently only the Container Insights receiver will be using it.
L
So I just want to bring this issue up so that the community is aware of it, and we can possibly work out a roadmap to try to resolve it in the future. I just want to seek advice or suggestions from you on what we should do for the next step.
A
There are two parts here, right? One is deduplicating the code — doing the Kubernetes-related things once, in one place. But we were also discussing the possibility of actually executing it once — actually connecting to the Kubernetes server and querying the Kubernetes API once for all components — so that if more than one component needs to get some data from Kubernetes, they don't basically retrieve the same information over and over, because they all need it.
A
This seems to be doing the first part of it — deduplicating the code — which I think is a good thing to do. I don't know how common the things the components are doing are, but if you can see the pattern there, and you see that it's extractable and can be implemented as a sort of library that other components can use...
A
I think that's a good thing to do, but I would also think about the other part I mentioned: have some sort of — maybe an extension that components rely on, or maybe some shared functionality in the core; there are many ways to think about this — that components can go through, and which then is responsible for connecting to Kubernetes itself.
A
It's related, but at the same time maybe we shouldn't be coupling these two things — still, think about them at the same time. I did not look at the exact proposal that you have, but idea-wise I think it's a good thing to do.
A
I would first of all have a look myself at what the PR is doing right now.
A
I don't know that, but I would ask you to think about whether this common functionality can be extracted as a separate runtime feature — maybe part of the core, not necessarily the core; maybe as an extension; maybe as a shared helper for components — which is a single instance in the collector, so that the components that need to talk to the Kubernetes API do it through this shared instance, and it happens once: it fetches the data, caches whatever it needs to — it's done once.
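A tiny sketch of the "single shared instance" shape being discussed — all components obtain the same watcher through one accessor, so the cluster is queried once rather than once per component. The names, and the idea of initializing via `sync.Once`, are illustrative assumptions; a real version would wrap client-go informers:

```go
package main

import (
	"fmt"
	"sync"
)

// sharedWatcher stands in for a single shared connection to the
// Kubernetes API. Components go through Get(), so informer setup
// and cache syncing happen exactly once per collector process.
type sharedWatcher struct {
	pods []string // cached data a real watcher would sync via informers
}

var (
	once     sync.Once
	instance *sharedWatcher
)

// Get lazily creates the one shared watcher and returns it thereafter.
func Get() *sharedWatcher {
	once.Do(func() {
		// A real collector would start informers here and wait for
		// the initial cache sync before serving reads.
		instance = &sharedWatcher{pods: []string{"pod-a", "pod-b"}}
	})
	return instance
}

func main() {
	// Two "components" share one watcher: same instance, one fetch.
	fmt.Println(Get() == Get())
	fmt.Println(len(Get().pods))
}
```

In the collector this would more naturally live behind an extension that components look up, rather than a package-level singleton, but the single-instance property is the same.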
F
That would be good, because if we talk to the kubelet or the Kubernetes master from every instance two or three times, it is going to be overkill for Kubernetes. So we definitely need to limit the number of connections to the master, and also limit the number of times we call the kubelet and other things.
F
I know David has a lot of experience here as well, and he always told me to limit the number of connections and watchers to Kubernetes if possible, so that would be a good goal for us.
M
Okay, sure — a couple of things. I posted a link to the Kubernetes observer, which is used for service discovery. There are a lot of similarities between the observer and the processor: they're both interested in pods running on the local node; they don't care about the whole cluster. That's in contrast to the k8s cluster receiver you linked to, which runs as a single instance per cluster and fetches resources about the entire cluster.
M
So they do things that are very different, right? It's not just "these things speak Kubernetes" — you have to think about what resources each one is actually syncing from the API server. Is it syncing everything in the cluster, or things that are local to the node? So there are different use cases there, and the k8s cluster receiver is kind of a special case, which I would probably ignore to some extent.
C
I was just going to chime in that the k8s processor is incredibly valuable to our vendor use case, and it would be really wonderful to see some of that upstreamed into core — so I'm just echoing the same sentiments. That's all.
L
Yeah, okay — it seems that there's quite some demand for having this common util. I will keep the request there, and if anything is needed from our side, you can also leave comments there — I think we're happy to contribute.
C
I guess, while I have half of AWS's entire company here — have you all thought about...
C
Yeah, very small team, that's good — it's amazing. I appreciate all the effort everyone's doing here, because I'm not doing any, except for snarky comments and bad presentations. But has there been any movement internally at AWS — it's fine if you can't talk about it — around moving the k8s processor stuff into your distribution? I know there were some roadblocks earlier around concerns about package size and perhaps resource consumption.
L
Yeah, I haven't had the time to do the deep analysis, because this came up from one of our reviewers — he noticed the duplicated functionality and pointed it out. So yeah, we didn't have time to do the deep dive.
E
...to put it into a distro without it being on the project.
C
A special case, to some extent — I don't know that that's the way it really works in practice.
E
Yep, absolutely. All right, moving on — I think David or Quentin, you had some questions.
O
Yeah — it ties into several other things that you mentioned earlier, but I've had a PR open for almost a month now trying to add a feature to the metrics transform processor. I know there's been a whole bunch of discussion about designing new processors, and there's a new doc this week, but it's kind of blocking our release to not have that feature in there.
O
Last week I tried updating the PR to rename it to experimental, since you guys said a couple of weeks ago that you wanted to merge things as experimental, and it hasn't had any reviews. I've poked on the PR in Slack and in this meeting; last week we didn't get to the agenda item. What do I have to do to get this PR merged, or replaced, or something?
E
So, Tigran, I would suggest — the metrics transform processor, we've been discussing that, and Clinton, I can certainly pick up your PR and discuss it with Bogdan when we triage the backlog.
E
But Tigran, I think that's again the convergence issue: if we could mark certain requirements experimental and make them available...
H
Yeah — and not just you; I think there's also a bit of overlap with the metrics generation processor that Ryan had proposed, because that did the same sort of thing with...
O
...scalars. But I feel like the long-term goal is to have one merged thing that replaces both of them, right? The only reason there's a proposal for a generation processor is to start using the new metric types — they want to stop using the OpenCensus metric types. Yep, okay.
O
Certainly we can fork the metrics transform processor. We'd rather not, because we'd like to stay together with upstream. The way the metrics transform processor is designed, you do a whole bunch of different operations on a single metric, so it would be pretty cumbersome to have to set up, for example, a pipeline with a metrics transform processor, then something else, and then another instance of the metrics transform processor to continue processing the metrics.
A
It's not that cumbersome — I don't quite see it that way. You have a processor which has steps, and instead of that you have two processors which each have a set of steps; the second processor has the additional step, which is the experimental one. Why is that so much more cumbersome? Where is the problem there? I don't quite see it.
O
I mean, potentially you're stacking multiple of the metrics transform processors. Maybe your point is that the metrics transform processor shouldn't have done more than one thing in a single pipeline stage, but that's not how it was designed.
E
Yeah, exactly — again, Tigran, there are multiple calculations needed for these transformations that we would like to see in a consolidated metrics processor. In the short run, we don't want this proliferation downstream of multiple processors, and Bogdan's thinking has been that we tag these as experimental on the project but make them available on the project itself, instead of having a complete proliferation of different implementations.
E
Yeah — and Tigran, what Clinton is requesting is very similar to what we've been running into. Again, Bogdan's guidance has been to tag these as experimental and make them available for now, and as we build the consolidated metrics processor all this functionality will converge. So it's short term versus long term — we don't want to really fragment.
P
So, Tigran — Joe here. One of the bigger issues — and Alolita is alluding to it — is that on the Google side we haven't been making many contributions to the collector. We now finally have it in production.
P
Our Windows unified agent is in production, and our Linux unified agent is about to go to production in June, so we're going to have impetus to make a lot of improvements.
P
We'd much prefer, if it's possible, that there be patterns for us to contribute upstream whenever possible, even for some of the edge cases — and this is, like, PR number two, you know. I'm exaggerating, obviously, but if we can't figure out patterns to deal with this, it is going to cause forking, as Alolita points out — certainly with us, and I would imagine with other teams too.
A
Yeah, I understand — I wouldn't want that to happen; I don't want forking to happen. I was merely suggesting that if you consider this a temporary solution for you, then temporarily it could be a separate processor that you implement on your own. I was not suggesting that you do the forking because we're refusing to make this happen.
A
That's a choice you can make, but is it the right thing for you if you're going to diverge from the processor? I don't know if that's going to...
E
Well, at least it'll take some weeks, for sure.
Q
Okay, cool — I'll try to be quick, although we're close enough to time.
Q
What are the downsides of the current behavior? Batching by number of metrics versus by number of data points — well, full disclosure, the Google API limits by number of data points, so we're interested in this because that's the way our API is designed. But more generally —
Q
Initially we actually came with the proposal of adding an option to batch by data points instead of by metrics, because that's something we'd like. But in general, the number of metrics isn't a good indicator of how much data you're going to send — the number of data points is probably much better. So if the idea is that you want roughly equally sized batches in terms of data, this was, at least a few weeks ago, agreed on as a better approach.
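The point about metrics being a poor size proxy can be sketched like this: grouping by data-point count gives batches of roughly equal payload even when metrics carry very different numbers of points. This is a simplified illustration, not the batch processor's actual splitting logic (which can split inside a metric); names are hypothetical:

```go
package main

import "fmt"

// metric stands in for a metric carrying some number of data points.
type metric struct {
	name       string
	dataPoints int
}

// splitByDataPoints groups metrics into batches whose total data-point
// count does not exceed limit. A metric larger than the limit still gets
// its own batch; a real implementation could split it further.
func splitByDataPoints(metrics []metric, limit int) [][]metric {
	var batches [][]metric
	var cur []metric
	count := 0
	for _, m := range metrics {
		if len(cur) > 0 && count+m.dataPoints > limit {
			batches = append(batches, cur)
			cur, count = nil, 0
		}
		cur = append(cur, m)
		count += m.dataPoints
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches
}

func main() {
	ms := []metric{{"a", 3}, {"b", 4}, {"c", 2}, {"d", 6}}
	for _, b := range splitByDataPoints(ms, 7) {
		fmt.Println(len(b)) // batch sizes in metrics: 2, 1, 1
	}
}
```

Counting by metrics instead would have produced batches whose payloads differ by the (arbitrary) per-metric data-point counts.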
H
Yeah, this is kind of related to an issue we had in the Go SDK with the Jaeger exporter, where we were batching by number of spans as opposed to the size of the exported data, and the number of spans doesn't necessarily relate to the size of the data you're shipping. So I think it's important to batch by a measurement that's as close as possible to how many bytes you're actually going to be shipping — or at least close to linearly correlated with it.
C
That
so
this
applies
a
lot
to
data
dogs,
api
ingestion
for
traces,
more
there's,
some
limits
on
size
so
based
on
fights.
But
so
I
think
it
is
a
useful.
Q: Okay. If you have other feedback, please add it to the issue; otherwise, hopefully we can get that PR through. If you are an approver and would like to review it, that would be quite helpful. And then my second topic, if there's nothing else on the first: I've been working on end-to-end processing latency metrics, so basically how long a given metric, trace, or log has spent from being received to being sent.
Q: Last week I had said that I was going to do more prototyping with something that's context-based, and in particular I had to work through how that would work for the batch processor, because having multiple input contexts and then trying to figure out how to merge those into a single output context, when we send data that's been batched together, was something I had to work through.

Q: The solution I've come up with is a merge function for two contexts. It's not the prettiest thing, and I've sort of stopped there in the hope that we can decide yay or nay on whether to move forward with it, or whether we should go back to the drawing board and look for alternatives.
Q: I'm looking for feedback. In particular, there isn't a good design process in the collector, but I'm looking for approval from Tigran and Bogdan, and potentially approvers, just to say that the design is okay.

A: Okay, thanks, David, I will have a look. We're out of time, guys, so let's move the remaining topics to next week.
R: Hi guys, I got a little confused because the meeting link leads to another call. Yeah, so many people. So I guess it's our time now.

T: Same when I joined; the previous meeting was still going on.

S: Yes, so the link which I used was the one given in our forum, which works fine. I think the other links are not working: the one in the Google calendar does not work, and the one in our meeting agenda also does not work.
R: I think we actually might need Alolita on this call. We discussed something in the Slack channel; maybe, Anthony, you can assist us with that. The topic is related to validation of the propagators.

H: Yeah, so I was hoping Alolita and maybe some of the interns would be able to join us today; I guess they haven't shown up. Let me try to ping her in Slack and see if she will be able to join us. She'll probably have more information.
R: I just want to confirm that it appears we do have the code; I just want to validate that the paths for both HTTP and gRPC are well tested. What else do we want to cover initially for the GA? From our end, we are focusing mainly on W3C Trace Context, within Azure, and for Microsoft customers we would usually endorse the W3C standard. So it would be great if other participants helped to validate the other propagators, such as B3, or, if there's a contrib propagator that needs to be supported, maybe adding it to the contrib repo as well, so that we can come up with a full, well-tested set.
H: Yeah, so I noticed that in your core repo you had implementations of, I think, W3C and B3, and there might have been one other. But I think we might also want to look at whether adding an X-Ray propagator to your contrib would be useful, because making it easy for people to use X-Ray is certainly something that's always of interest to us, and I think that reviewing and testing the propagators would be something we could probably help with.

R: And testing, yes, and I think it would be good to have some examples. I think Lalit is currently working on that with one of the Linux Foundation students, right, Alec? Yes.
S: Yes, so myself and Tom have both been working with them. It's not actually a Linux Foundation student; I think it was some university students who, as part of their coursework, have been doing the gRPC client-server instrumentation example in our repo.

R: It's a great example. In my opinion it is big and complex, though, and from the customer perspective I often hear feedback like: I don't want to know how the sausage is made; I want a one-liner.
R: It doesn't matter what the medium is, an HTTP header or gRPC metadata: inject, and then extract or create a span with remote context. It's pretty much templated, and the closer we get to that one- or two-liner, the better; it really can be a very smart template function implemented somewhere as an optional helper. The shorter and simpler it becomes, the better it is for the customers. It'd be great if we can achieve that goal: easy-to-use examples for the propagators.
H: In my experience, the propagation system is something that people seem to struggle with as a concept initially, and I don't think in Go we've found really good ways to make it clear, other than having conversations with people. So if you come across examples that really help with that, we'd be interested.
R: That's another thing: we sometimes have those templates for HTTP, but that's not exactly what my most immediate customers want. So there is a bit of a disconnect between what we deliver and how we do it, versus what the customer really wants, and that's why it'd be great to cover both bases, HTTP and gRPC. And I think in most cases gRPC is more frequently used these days than HTTP.
H: Propagating context over Kafka is another one that comes up frequently. Recently we had a question about Redis pub/sub as well, which I think is the first time I've encountered a protocol that doesn't have headers that could easily be used, so you'd have to do in-band context propagation. So perhaps we want examples of the more distinct types of communication mechanisms that you can use for propagating context.
R: Yes, makes sense. What you are saying about the map does resonate with what I've been thinking about. In both cases, the HTTP headers multi-map and the gRPC metadata map (and in gRPC there's the string_ref map), they're all similar: in a way it doesn't matter what concrete collection it is or what concrete key-value pair you add. They're kind of similar, and I am sure we can find a common template for that stuff.
H: So Brian Uribe has joined us; he's one of the interns who's working with us this summer. Brian, I haven't been able to talk to Alolita about whether anybody's going to be able to work with us on the C++ SDK, but is that why you're here? Yeah, that's it; I'm just joining in to hear and hopefully help out. I don't know if this would be directly with Alolita, but I'm definitely interested. So, Max...
H: Max, I think you had mentioned that there was an issue or a ticket that dealt with some of these questions; maybe you could link that, yeah.

R: I think there is something related to that. Well, can you hear us? Yeah, Max, I can. I think there was that issue, initially about just gRPC, from what I remember. Perhaps we should create separate issues for the concrete exporters. And I think the student who's been active and posting the example right now is working specifically on gRPC; is he working just on W3C and gRPC?
H: Sure, but it's at least a starting point, some place to look at, perhaps as a template for creating the others. Yes.

R: And examples, yes, because sometimes we can point people to our test code, and in most cases it works; in other cases customers ask for a standalone example they can incrementally build up. I work with customers who don't even want to dive deep into details like how to build with CMake.
R
It's
say:
oh,
we
have
something
we
somehow
magically
got
the
lib
file
or
that
a
file.
We
don't
want
to
study
how
to
build
your
sdk,
give
us
the
the
artifacts
and
that's
the
case
where
they
would
also
ask
oh
and
give
us
the
example.
We
don't
really
want
to
spend
time
on
learning
how
to
run
your
test
suite.
R
So
that's
why
I
think
both
validation
and
examples
are
like
required.
S: It should, I mean, it should work perfectly: if we set up a global propagator, say W3C, and create a gRPC client, for example, and then I set anything like B3 or Jaeger instead, the same example should work out of the box. So it should be plug and play, just pick one up.
R: Yes, absolutely, and that makes it super easy then to work with customers: we just tell them to use any generic example and then plug in their concrete propagator type, similar to how we deal with exporters today, where the instrumentation code is about the same and the only templated part that needs a configurable piece is the concrete exporter type. It would be great to get to that level for the propagators as well.
R: I added the link to the original W3C gRPC issue and the PR associated with it. It's a super bulky PR that needs further work, but at least it shows a good direction.

R: Maybe we can discuss this on Monday, right? Our next meeting is on Monday; if you can, join us then, and we can explore whether we can get any reinforcement or help from the interns on that.

R: Guys, can we try to spend a minute on that other PR?
R: So my goal here, on the next item, is to build the main repo plus contrib in one go. You can already build main with the standard set of things, but I also want to build the main repo with a non-standard, additional, vendor-contributed set of things, and verify that all of my changes in the contrib repo (all of my exporters that follow the same template, the same API) are still sane, given that there's still some churn before GA.

R: In the last two weeks I had to address something with SetResource because we refactored the API. So the sooner we can get to the point where we can run CI continuously on contrib, the better, because then I can actually propose my changes (to the fluentd exporter, for example) to the contrib repo and set up the nightly build loop that builds contrib.
R
Now
again,
my
feedback
here
is:
we
do
not
impose
strict
requirement
on
the
main
repo,
our
contributors,
to
make
sure
that
they're
not
breaking
and
trip.
No,
that's
not.
The
point
contributor
is
still
secondary.
R
The
goal
is
we
keep
nightly
on
country
and
if
something
gets
broken
by
some
change
in
the
main,
then
at
least
we
know
when
and
the
owner
of
a
specific
module
can
say.
Oh,
I
now
need
to
refactor
my
code
because
main
repo
developers
decided
to
refactor
the
api
or
introduce
a
new
method.
Now
I
also
need
to
add
that,
and
it's
still
the
responsibility
of
the
country
module
maintainer.
R
But
what
I'm
trying
to
give
is
a
tooling
for
this
sort
of
ci
task.
I
personally
use
it
myself
right
now
for
the
fluent
exporter
and
I
think
evgeny
had
a
comment
about
the
naming
of
this
build
option
and
by
default
this
build
option
is
off.
He
suggested
that
we
go
instead
of
with
country
to
build
country.
R
I
changed
that
so
if
there
are
any
other
like
reasons
why
we
should
not
be
doing
it.
That
way.
Please
comment
on
the
pr.
That's
it.
T: So even with the contrib PR and the changes added to the main repo, I think the main repo CI will not enable it, no?

R: No. I think maybe in the main repo it makes sense to document that by default this option is always off; it's not enforced in main, but it is enforced in contrib. So with the nightlies you can see that a nightly failed, and then it's an action item for the contrib owners to say: hey, my exporter no longer works because main decided to refactor; I need to fix it, at least in contrib.
R
I
can
tell
my
customers,
okay,
you
have
to
use
open
challenge
with
this
with
with
this
fluent
exporter
and
I'm
sorry,
but
it's
broken
now
with
0.7.
I
haven't
fixed
it.
Yet
so
customers
stay
back
and
they
still
run
with
open
telemetry
before
the
refactor
and
then
meanwhile,
I'm
working
urgently
to
fix
it
up
on
latest
contrib
and
then
I
say
cool
now
my
ci
passes.
I
got
it
working
with
0.7
and
I
can
sleep
tight
until
the
next
time
it
breaks.
R
So
only
the
contrib
itself
should
have
that
kind
of
optional
validation,
which
shouldn't
block
the
merger
of
any
other
unrelated
changes
more
like
an
indicator
daily
tracker,
whether
my
module
is
still
compatible
with
the
man
or
not.
R
That's
the
intent,
and
I
highly
respect
opencv
project
and
I
almost
precisely
followed
what
they
are
doing
and
in
in
a
certain
sense
the
way
how
linux
kernel
builds
modules
with
kernel
3,
specifying
the
path
to
module
that's
being
built.
This
is
structurally
the
same
thing.
What
I'm
proposing
here
again
it's
off
by
default!
S
Yeah
yeah
thanks
thanks
so
much
I
just
from
I
mean
as
a
maintainer.
I
just
want
to
ensure
that
I
mean
the
deployers
agree
on
having
a
configurations
within
the
main
repo
which
to
make
the
external
repo
compile.
So
as
long
as
I
think
we
have
another
approval
for
that,
I
can
definitely
I
mean
I
don't
see
any
issue
in
merging
this.
R: So I think it is acceptable to impose certain rules on contrib: certain build rules, certain consistency. That way, if you need to create your own custom exporter for your own custom flow, what you do is copy a Zipkin exporter; I stole the one that Ali did, right? I took the existing exporter from the main repo as a template.
H: Yeah. As an external observer (I'm not familiar with Bazel or CMake; it's been years since I've done C++), I think it really does make sense to say that contrib is of a piece with the core repo, and everything in there should follow the same standards and be integrated in the same build process. On the Go side it's a little easier for us, since Go has its own build toolchain: everything has to use that, everything uses modules. But we still say: here's the hierarchy of packages that we expect you to have.

H: If you're instrumenting an application, we expect you to have the instrumentation package name aligned with the package name of the thing you're instrumenting, and we've got some standards worked out that help it look like a cohesive product. I think that's a good and reasonable thing to have.
R: And also, the main repo contains certain required targets, like opentelemetry_api and opentelemetry_common. So if you are implementing your own exporter, you kind of already depend on those targets described in the main repo, and what I'm offering is: build main with an overlay, and the overlay already has all these targets. So it's very simple, and it's also somewhat aligned with how Bazel does it.

R: With Bazel you can just drop in a directory and it's going to build everything in that directory, as long as there's a BUILD or BUILD.bazel file. Same with CMake, except that for CMake we have to explicitly say: by the way, add this directory for me, please, because I want this directory to be added as part of the build.

R: So again, please let me know, because I depend on it a little bit for the fluentd exporter build for one of my customers.
T: And I have one more question about this BUILD_CONTRIB: will this be the only way to build the contrib? Does it still support, say, third-party packaging?

R: If you don't want to, you don't have to: you may maintain your own build scripts in your own build system, and if you don't hook it up to the top-level CMakeLists, it's not going to be built. So, answering your question: no, that's not the only way. For certain parts of it you can build using whatever alternate preferred build system the vendor would like to use.
R
So
it's
more
like,
I
would
like
to
impose
certain
structure
on
things
like
exporters,
for
example,
because
exporters
always
have
direct
dependency
on
a
structure
in
the
main-
and
this
is
fragile
thing-
it
can
be
easily
broken
if
main
gets
refactored.
That's
why
I
want
to
have
a
structure
and
see
how
I
run
it.
R
In
my
example,
I've
been
broken
two
times
so
good
like
when
I
pull.
I
see
what
changed
I
I
I
changed
things
in
my
branch,
but
when
my
branch
gets
merged
to
the
main
branch
of
the
country,
I
want
to
make
sure
that
it's
not
what
I
want
to
track
when
it
breaks
right
away.
G: I have one question; I may not have followed the entire discussion properly, but do we assume then that all of the exporters in contrib are the same class of citizen? What I mean by that is: what happens if one of them gets broken?

R: The CI loop would fail. I can tell you how we treated it in some other projects: the entire CI is going to break, yeah, and then we can take an administrative decision on the next call. We say: oh, somebody's foobar exporter got broken because of the refactor.
R
In
the
main
you
can
do
a
gentle
paying
of
the
co-donor
if
code
owners
offline
forever.
I
would
say:
well
if
zero
don't
build
that
stuff,
because
he's
offline
or
her
like
she
is
a
fine
and
maybe
a
few
polite
things.
Maybe
an
issue
opened
on
contribute
to
track
and
turning
that
specific
module
off
there
are
multiple
ways
to
do
it.
G
Okay,
thank
you
very
much.
I
have
to
drop,
unfortunately
from
this
poll
as
I
have
another
meeting,
but
I
will
see
you
on
monday.
Thank
you.
H: I have to drop as well, but I'll note that in the Go SDK, when we have a breaking change in our core, we try to update all of the modules that we've added to contrib to deal with that breaking change, so we kind of take it upon ourselves as maintainers to keep that in mind. I don't know if that's something you can or can't do, but it's another option to consider, so that you don't have to disable components.

R: I'm all for that approach; in most cases it's the practical, most reasonable one. Sometimes there's a bit of a different attitude, like: why do I have to care about foo? Let the owner of bar catch up. So it depends.
R: I'm just thinking here that a nightly ensures you can verify it irrespective of whether a release is published or not, and you'd have better responsiveness, better response time. So if on Monday there's some breaking change in main but no new release, contrib is still going to build against the latest main.

R: We build contrib and trigger a build failure and a notification to the contrib maintainers: hey, this thing got broken just now, in this nightly; and you can easily tell which change in main triggered that break.

R: So that's pretty much my thinking on this. And the other thing, again from the consumer's perspective: if I run my build pipeline somewhere, I still clone just the main OpenTelemetry repository in my build, or I depend only on main, but then I tell the build of main: hey, fetch this for me. And the main difference, how this is better than a submodule...
R: I mean, here's the thing. If we run a build validation in contrib, let's say we set up two build loops: for example, ubuntu-latest and windows-latest, just the two of them.

R: I realize that's not a comprehensive list; it's maybe not covering GCC 4.8 or prehistoric builds, right. But it still ensures that I would definitely get a modern CMake, 3.20 or something, which means that I can use FetchContent, which means that I cover at least something. And in most cases the major refactorings are going to be caught: if there's a refactor or the introduction of a new method, a basic build failure is going to be discovered by those two loops. We don't have to cover 100 percent.
R: And again, I can only comment on what my customer needs, and this is covering my customer's needs.

R: In that PR I have two options, and the first one I kept exactly for that reason. One option is an environment variable that specifies the path to an already-checked-out tree, and the other one goes through the FetchContent path. The first one applies if the environment variable OPENTELEMETRY_CONTRIB_PATH is defined: you can clone contrib manually, and if you're running with an older CMake, define that variable and do not define BUILD_CONTRIB.
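The two wiring options described here could look roughly like the fragment below. This is a hedged sketch assembled from the discussion, not the PR itself: the option name `BUILD_CONTRIB`, the `OPENTELEMETRY_CONTRIB_PATH` environment variable, and the repository URL are as heard in the meeting, and the exact spellings are defined by the PR.

```cmake
# Sketch of the two contrib-overlay options discussed above (names assumed).
option(BUILD_CONTRIB "Overlay the contrib repo into the main build" OFF)

if(DEFINED ENV{OPENTELEMETRY_CONTRIB_PATH})
  # Option 1: a pre-cloned contrib tree, usable even with older CMake
  # versions that lack FetchContent. Clone manually, set the variable,
  # and leave BUILD_CONTRIB undefined.
  add_subdirectory($ENV{OPENTELEMETRY_CONTRIB_PATH} contrib)
elseif(BUILD_CONTRIB)
  # Option 2: let CMake fetch contrib itself (requires FetchContent,
  # i.e. a reasonably modern CMake).
  include(FetchContent)
  FetchContent_Declare(opentelemetry-cpp-contrib
    GIT_REPOSITORY https://github.com/open-telemetry/opentelemetry-cpp-contrib)
  FetchContent_MakeAvailable(opentelemetry-cpp-contrib)
endif()
```

This mirrors the OpenCV pattern mentioned later in the call, where an extra-modules path overlays contrib modules onto the core build.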
R: So you can still bypass it and build even with a legacy CMake, but with the legacy CMake it means you'd have to manually do a bit of extra build setup on your own. So that's the bypass fallback, for the case where you're running with an old CMake.

R: And the last piece is just the configuration file used for the build combos in the IDE. I verified this in the Visual Studio IDE with CMake; at the very bottom of it, it's built with contrib through BUILD_CONTRIB.

R: And the build configuration combo for that is x64-Release-contrib, so I can actually have separate build trees: one standard build without anything, and then a separate build config with contrib that builds all of my extra overlaid stuff on top of it.
R
And
it's
it's
really
convenient
because
then
you
so
let's
say
you
had
190
test
cases.
Then
you
build
with
contrib.
You've
got
228
test
cases.
You
can
still
do
the
whole
thing.
In
one
run,
you
can
identify
all
the
the
custom
contrib
test
breakages
in
one
run,
and
you
don't
have
to
worry
about
how
to
get
it
set
up
because
there's
already
a
proposed
setup
for
for
that
sort
of
scenario.
R
Again,
my
strong
reason
for
that
for
doing
that
is
I
highly
respect:
opencv
vc,
plus
plus
project,
it's
a
very
prominent
project,
mainly
driven
by
intel,
and
they
follow
the
same
pattern.
So
why
not?
Let's
just
can
replicate-
let's
not
reinvent
the
wheel
and
do
similar
to
what
they
are
doing.
R: I know I'll need a separate PR, because if you try this right now, what's going to happen is that there's no CMakeLists.txt in contrib, so it will fail, exactly, right?

R: So what I'm going to do is submit either a dummy CMakeLists, or submit one as part of my fluentd PR, because I am doing this as part of the fluentd exporter PR. There is an issue, but I haven't started the PR yet; I can try.
R: For the exporter, I can tell you that it's going to be less than a second, less than a few seconds, for a single exporter, because a single exporter depends on opentelemetry_common and opentelemetry_api, and it's usually two .cpp files, right? Not a lot. I think the only exotic scenario is if I start building, say, a Docker image, which I would probably avoid doing. What could add ten minutes to the build? A Docker image build; what else, a stress test?

R: That's going to take forever. But perhaps there's some tolerance, and that's a question that should be discussed in the contrib repo; to your main repo it adds nothing, it's zero, yeah.
R: Options: again, in contrib we can use exclusions to skip certain things that do not have to be built as part of that process. I can elaborate on this in the contrib repo.

T: I think I saw that for the Zipkin exporter we only enable it in our CMake CI, and I'd like to do the same: just add the Jaeger exporter next to Zipkin. The build time should be fine; there aren't many sources, just a few files, and the tests only need a few extra tests introduced. So I think from a time perspective that should be fine. Or do you think we should create...

S: No, I think as of now we only have a single one... just a second.
R: Maybe I just have a good machine, but it is optimized for speed and the build parallelizes well enough. So I think it'd be great to add Jaeger and Zipkin to the default, and we can speed it up, because right now on Windows, for example, we still use MSBuild, and it's very sequential.

R: For Windows, I think we have libcurl on Windows; it's not ideal, but it's a first step.
S: So probably you can add it in this Windows CMake test, yes, the exporter.

R: So here's what I'm trying right now: I just enabled the Zipkin build on my local machine, and I'm using a build with Ninja to validate how much time it would take to build from clean. There are some dependencies, like gRPC, for example (for other things, for OTLP), and those take a long time; right now we are outsourcing them on Windows to vcpkg, so that vcpkg builds all these external dependencies, while our internal stuff we can turn on and off.

R: So: a minute, maybe two minutes, and that's with Zipkin enabled. The cost of enabling Zipkin is near zero; let's enable it.
R: I'm just asking: do you think our final metrics story is going to be similar to what we have? Do we need to care? I mean, perhaps let's keep it sane, that's fine; it's going to be easier for us to refactor it later, yeah.

S: Yeah, I mean, to be honest, I'm not sure how much of a deviation...

T: Okay, I will submit a PR with this, yeah. And Max mentioned for Windows, right? We should also enable both for Windows, right? Yeah.
R: So for Windows, take a look at my PR; I can add the CI around it. It uses Ninja on Windows, so it's fast. It's still CMake, but instead of CMake generating an MSBuild solution file, it generates Ninja build files, and then we build with Ninja; the executables and DLLs and all the targets are still the same, and all the CTest tests are the same, but the build is about five times faster. Really interesting; and it's now the default builder for CMake projects in Visual Studio 2019.

R: I mean, as much as we all love MSBuild, we have to admit that Ninja is good.
R: And you can use it on Linux too: you can build with Ninja on Linux as well. I spoke to...

R: Yeah, it's just good. Chromium uses Google's flavor of Ninja as well; many big projects are switching over to it right now, and even on Windows, in Visual Studio, we prefer Ninja over MSBuild these days.

T: Yeah, I'm using Ninja locally; in my dev environment, I mean, I'm not so careful about speed, but the good thing is that the output Ninja emits is much cleaner than MSBuild's. MSBuild emits a lot of log messages to the console, so it's harder to spot an error message.
S: Since we are talking about speed, I'm just thinking about how we should handle it, because right now, in the CI environment, I think the CMake builds are taking a lot of time, at least online.

R: That's a good point as well, and I think the other option is either Docker images with all these things prebuilt, or the GitHub Actions cache; you can actually use that, although the limit on it is five gigabytes per repo.

R: So if we cannot share it across OSes, or across more than two OSes, then the GitHub cache may not be a viable option, and a prebuilt Docker image may be a better option.
R
Okay,
yes
you're
right,
because
it's
like
all
these
dependencies
build,
takes
five
to
ten
minutes
and
then
our
own
repo
takes
like
a
minute
or
two.
Yes,
exactly
so.
In
order
to
speed
it
up,
we
need
to
improve
the
first
part
build
of
the
dependencies
and
the
grpc
is
the
most
busy
bulkhead
dependency
right
now.
S
R
R
S
R
We
should
switch
to
ninja
everywhere,
where
we
do
cmake.
We
should
prefer
a
ninja
over
make
and
ninja
over
ms
build.
S: People who just want to instrument their libraries using OpenTelemetry semantics may not need the SDK; they just need the header part. So I think it would make sense to add a CMake option in our CMake configuration which will only build the opentelemetry-api project. I'm not sure how it will work with Bazel, but...

R: It also now exports the build flags, like the definitions for whether it's the standard library or nostd for the API surface classes, but I don't know how to do that for Bazel. So maybe we should ask the customer: if they're using CMake, then we can partially satisfy their ask. Yeah, I think they are using CMake.
R: By the way, about this opentelemetry_api target: we had a good chat last night, and we pretty much need to update the document. I have that document about building with STL versus nostd, with the different sets of classes; now we need to mention that exporter developers should do target_link_libraries against either opentelemetry_common or opentelemetry_api in order to bring in those definitions.

R: The issue I had is that before, we had those build flags, like an add_definitions for the standard-library mode, in the top-level CMakeLists, so they applied by default to all targets. Now we export them only through the opentelemetry_api target, and in order to make sure that you build with the same flags, with the same classes, you now have to take a dependency on opentelemetry_common or opentelemetry_api.
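In practice this is the one-line fix being described: link the exporter target against the API target so the STL/nostd compile definitions are inherited instead of relying on global add_definitions. The exporter target name and source file below are hypothetical placeholders; `opentelemetry_api` is the main-repo target named in the discussion.

```cmake
# Minimal sketch: a vendor exporter inherits the STL/nostd compile
# definitions from the API target's PUBLIC/INTERFACE properties, so it
# builds against the same set of API surface classes as the main repo.
add_library(my_vendor_exporter my_exporter.cc)   # hypothetical target
target_link_libraries(my_vendor_exporter PUBLIC opentelemetry_api)
```

Without this link, the exporter can compile against one set of class definitions while main compiled against the other, and the mismatch only surfaces at link time, which is exactly the failure described next.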
R: I need to add a paragraph about it in the markdown document so that exporter devs follow this practice, because I got hit by this with the fluentd exporter: when things got refactored in main, my code stopped linking, because main was building with one set of classes and my exporter was building with a different set, and it didn't link because the definition signatures were different. So that's something we need to mention in the markdown.

S: Let me just quickly share; these were the tickets which I created. Okay.
S: Talk to you later, I have another meeting. Yeah, sure. Okay, so, quickly; you can drop whenever you want to. I just want to show how I am doing the Jaeger exporter, and this is what Zipkin had. So basically these are all the bullet points coming from the spec compliance matrix, and which of them we support in each case.

S: And since changing the not-supported things is not going to break backward compatibility, I think we should be good. Yes, makes sense. So there should not be any API surface we are going to change; that's the plan. I think we should know where the gaps are, and yeah, we should have a plan in hand. Sounds good, yeah.

S: So that's what I just wanted to talk about, how I'm doing it; and probably you and Tom can just do this for the in-memory and console exporters, and I...
R: So I can thank him on this issue, I guess, because he asked me if we need help with this, and I said: yes, sure, why not, please help us.

S: Okay, he's from the foundation; he's the same guy who did some contributions before.