From YouTube: 2021-06-30 meeting
Tigran
I can give an update again. This is just something that we are going to continue to maintain: it covers our open PRs, as well as any other PRs that have been triaged and are ready to merge. It's just an update so folks can look at it. If anything is missing, or people are looking for reviews, we'll continue to update this every week.

D
Hi, yeah, thank you. I had the skeletal PR for delta-to-cumulative conversion. It's only the skeleton, but there were some comments regarding the configuration and how you want to filter this. Bogdan and Josh MacDonald actually made a good number of comments and explained everything, so I updated the PR, removing all the config options based on their comments. Maybe they just need to have a look and get it merged, so that I can send other PRs; I brought it up to get some attention from them.
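The delta-to-cumulative idea being sketched in that PR boils down to keeping a running sum per series and adding each incoming delta to it. Below is a minimal, hypothetical Go sketch of that accumulation; the data shapes are invented simplifications, not the actual PR's code or the collector's pdata API.

```go
package main

import "fmt"

// point is a hypothetical simplification of a metric data point.
type point struct {
	series string  // identity of the time series (metric name plus attributes)
	delta  float64 // delta value reported for this interval
}

// converter accumulates deltas into running cumulative sums:
// cumulative(t) = cumulative(t-1) + delta(t).
type converter struct {
	sums map[string]float64
}

func (c *converter) convert(p point) float64 {
	c.sums[p.series] += p.delta
	return c.sums[p.series]
}

func main() {
	c := converter{sums: map[string]float64{}}
	for _, p := range []point{
		{"requests{code=200}", 5},
		{"requests{code=200}", 3},
		{"requests{code=500}", 1},
	} {
		fmt.Printf("%s -> cumulative %v\n", p.series, c.convert(p))
	}
}
```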
B
Yeah, not really a PR; it's only an issue where I'm recommending Vineeth to be an approver for the operator, and I would like attention from the maintainers, even just a plus one or a minus one.

Yes, so Vineeth has made a few contributions to the operator and I'm very confident in having him as an approver for the operator. I cannot judge what he's doing on the collector side; I haven't seen many PRs from him, or reviewed any at all that I can remember. So this request here is scoped to the operator.

Yeah, we still have this bootstrapping problem right now. So it's mostly me. I do see some contributions from the community, and Vineeth is one of them; I see some folks from AWS also helping me here and there.
B
Okay, so whenever that happens... I just cannot do it all alone right now, because I cannot approve my own PRs. So when I open a PR, I need you to approve it here.

C
Yeah, sorry, I was on mute. So again, I just wanted to go through the backlogs. Bogdan and Tigran, thank you for triaging with me; I just wanted to give an update. I can share. Tigran, are you sharing, or shall I show you?

Yes, please share. Okay, I can share.
C
So, just going through both of the phases for what we are targeting.

Trace stability. Can you see my screen? Yes? Okay, cool. So we have two phases, based on detailed discussions with Bogdan and Tigran on what needs to be done to achieve stability for the tracing components in the core collector. As many of you may already be tracking, all of these items are in flight right now; there are two very clear phases that have been carved out. The pdata work, obviously, Bogdan has been driving, and I think it's almost done. I think you added another one of these issues yesterday, but again, it's related to the pdata changes that are being made. Other than that, the semantic conventions work is being done, with reviews by Anthony, and he also submitted a versioning doc.
C
For those of you who have not looked at it, please take a look at 2543. We're almost done with phase one, which is great, because it's related to the changes we're also proposing and making in phase two, which we are anticipating will all complete in the next three weeks (two weeks hopefully, but say three weeks, sorry).

Let me go through our phase 2 for trace GA. As you can see, we are triaging all the items here regularly. There are some tracking issues which are not factored in, but other than that, I think there are two major issues that are in progress, and Juraci is here, so those would be a good thing to discuss.
C
One is the auth issue. Juraci, I think you've been responding on the issue itself, and some PRs have been completed, but then there was a question from Tigran on the auth propagation: its impact on stability, and whether there are any tests we need to add or not. That's still an open item. I think everything else is in pretty good shape.

B
I can make a couple of comments here as well. For the most part it is done, at least from the GA perspective: authentication for incoming requests and outgoing requests is done, and it's in an architecture that I think we all see as scalable and ready for the future. The only thing that is pending is the propagation of authentication data through the pipeline.
B
We have a couple of use cases where we would need that: one of them is pass-through authentication, and the second one is multi-tenancy, for instance, and so on and so forth. So we need to think about the auth propagation. I have that in my queue; I have quite a lot on my plate right now, but I do plan on working on this one.

One thing that we can do right now, and I plan on doing that by the end of this week, is at least making a PR to change the interfaces for the authenticator, so that we make that part future-proof: so that we can do the auth propagation in the future, post-GA, but we stabilize the API already right now.
B
So we fix what the API should look like based on the information we have right now. It is not a guarantee that we are not going to break it, or that we are not going to need something different in the future when we actually have the implementation, but it is based on what we've talked about over the past, I don't know, three or four months already.
B
We know what we're going to need, and concretely, I think the authenticate method has to receive a context and return a context as well. I'd have to go over my notes, but I think that's mostly the change that needs to be done.
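A minimal sketch of the interface change being described, with hypothetical names (the collector's actual authentication API may differ): Authenticate takes a context and returns a derived context, so that authentication data can later be propagated through the pipeline.

```go
package auth

import "context"

// Authenticator sketches the shape discussed above: rather than only
// validating the request, Authenticate returns a derived context so the
// authentication data can flow down the pipeline.
type Authenticator interface {
	// Authenticate inspects the incoming request headers and, on success,
	// returns a context enriched with the authentication data.
	Authenticate(ctx context.Context, headers map[string][]string) (context.Context, error)
}
```

Stabilizing this signature before GA is the "future-proofing" mentioned above: the propagation work can land later as an additive change.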
A
Yeah, you're completely right: we don't need to implement it right now. I would very much prefer us to have full clarity on the design and what the APIs would look like, so that we know that whatever changes we need to make in the API are additive and won't be breaking. I think they are, but until we have that full clarity, it's hard to tell. So again, there is absolutely no need to have a full implementation, but some sketch should be there, some design document updated with the decisions.
B
Yeah, and I think that part is actually also kind of settled. We discussed a couple of times already using the context for that, and whatever approach we decide on for actually doing the propagation, we need a context, because that's the only data structure we have at that point. Whether we are doing it through functions or something else, the only thing we have there is the context. Perhaps the only thing that could be problematic is streams for gRPC.
B
They do not return contexts; I think they don't even accept a context either. So it is one edge case that we have to think about, but I think we did discuss it in the past, and we know what we need to have in the API.
E
There may be one problem that may force us to fix this sooner rather than later, which is that we have the package called client in the collector, which deals with a very similar problem: we need to actually propagate the peer information, the peer IP, through the pipeline. So that's yet another thing we need to propagate from the receiver through the pipeline.

I think it will be very important to have a similar solution, because it's kind of a similar problem. Also, to your point, Juraci, with streaming gRPC: they do have a context that is created and available when the first message comes.
B
Right, but those are different semantics from HTTP, because HTTP is per RPC: for each call there is a context, and there is associated authentication data. For streaming, if it is at the beginning of the connection, is it not per RPC, or is it per RPC?
E
It's per RPC, but the RPC may include multiple messages, so it depends on what protocol you implement on top of the streaming. For example, if you have a bi-directional stream that just streams multiple messages... we can discuss it. Indeed, on bi-directional streaming you can implement your own protocol on top and say that every message is actually a different RPC; then that's different. But then you cannot send metadata.
B
Okay, so now I got to what the actual problem was: the handler does not have access to the context. I'm just pasting the code here.
E
I was actually considering making some connection-level hook. For example, for the peer IP, we can make connection interceptors and put the IP in the context when the connection is established, because that's when we can know the peer IP. But for auth, it may be every time we receive some headers, because we expect it to be header-based. So let's discuss offline, but we need to do this.
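A hedged sketch of the peer-IP-into-context idea: the peer package and interceptor signature below are grpc-go's real APIs, but the context key and wiring are hypothetical; the collector's own client package does this with its own types.

```go
package example

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/peer"
)

// peerIPKey is a hypothetical private context key.
type peerIPKey struct{}

// withPeerIP copies the caller's address from gRPC's peer info into the
// context, which is the kind of propagation being discussed above.
func withPeerIP(ctx context.Context) context.Context {
	if p, ok := peer.FromContext(ctx); ok {
		return context.WithValue(ctx, peerIPKey{}, p.Addr.String())
	}
	return ctx
}

// unaryPeerInterceptor shows one place this could hook in: a unary server
// interceptor that enriches the context before the handler runs.
func unaryPeerInterceptor(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	return handler(withPeerIP(ctx), req)
}
```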
C
Okay, cool. Thanks, Juraci; thanks, Tigran. So this is one of the areas where we do need to design and implement, and figure out what needs to be completed. There's one more issue here in phase two, which Alan is working on. I don't know if Alan is here, but just to call attention to it: this is the OTLP endpoint.

It is something that I think we've gone back and forth on. Tigran, you've been reviewing this issue.
F
I should have time to get on top of it next week. I do still have the outstanding spec PR, which I know is mostly SDK-focused; that has not landed yet, but I think there's some good consensus on it. So yeah, if folks feel that it's good to move forward with this, I'm happy to do that.
C
Good, great, because these are the two items we didn't actually have an update on. So thanks, Alan, thanks for joining in; I really appreciate your help there. Other than that, I think everything else is on track. There are no major areas... I mean, there are areas, but everybody's working on them; there's nothing that's yellow here. Hopefully, in two weeks' time we should have most of these issues done, and we are hoping to do a release at the end of July, again on the normal cadence. Bogdan, I think you had said that we will do a release earlier in July, next week, and then another one which will hopefully have all these items complete, for tracing stability for core, by the end of July. That's what we are tracking towards.
C
Okay, great. And just to call attention to 3474, where there is an active, ongoing discussion about moving the components which are not required for collector core to be stable out to contrib, or, for the Prometheus components, to the Prometheus working group.
C
Again, we have a list as of yesterday, which folks can please take a look at. Bogdan and I went through each component in the collector. Tigran, this is slightly different from what you had proposed, but please take a look at it again.

By and large it is very similar, but I think Bogdan had a very good point about moving OpenCensus to contrib for now, because we do want to make sure those hooks are removed over time, and that's something he's working on. Also, as you can see, there's more work required on testbed and testutil, as well as some of the other processors and exporters.
C
So again, I would strongly propose that the Prometheus group of components be moved to the Prometheus working group. We'll build out the tooling to make sure those are buildable, as well as fully implemented in terms of the build, deploy and test pipelines, and that's something we can maintain a cadence on. We have a lot of folks working on it: David, some of our folks, Anthony, et cetera. So again, this is for consideration, and folks, feel free to comment on it. Bogdan?
A
I'll have a look. I didn't see that final version that you posted, I think. Yeah, it looks good from what I see right now; I will have a more thorough read of it and will post if there's anything. Okay.
B
Yeah, one question that I had, and I left it as a comment here: we talked in the past about having different ideas, different concepts, of splitting the definitions of the core repository, the code repository, and a distribution that we give to users.
B
So we provide users with a build that contains Jaeger and OTLP and OpenCensus, for instance, but it doesn't mean that that code would reside in the core repository. The main consumers of the core repository are downstream collector builders: people who are building distributions of the collector.
B
It doesn't necessarily mean that we are switching the code repository itself. We can still have components in core, but we just need to explicitly mark them as not part of the API for core, so people should not rely on them. Even though the Jaeger exporter is in core, people should not build code that depends on that specific exporter being in core.
B
So we can use the collector builder to build the distribution. Building the distribution is something that is external to the core repository, so it can be part of another repository called, I don't know, collector-distributions or whatever, which contains the manifests that are then built by the collector builder. And perhaps a separate repository could exist for the official components that are part of the main distribution.
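For illustration, a collector-builder manifest generally looks like the following. The dist and gomod keys follow the builder's manifest style, but the module paths and versions here are invented placeholders, not real components:

```yaml
dist:
  name: my-otelcol  # hypothetical name of the resulting distribution binary
exporters:
  - gomod: example.com/otelcol/exporter/fooexporter v0.1.0
receivers:
  - gomod: example.com/otelcol/receiver/barreceiver v0.1.0
```

A "collector-distributions" repository, as suggested above, would then be little more than a collection of such manifests plus CI to build them.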
C
Yeah, again, Juraci, to your point: I've been taking a look at the collector builder, and we also had a discussion around how we can separate out the source location, meaning the code repositories, versus the actual distribution builds. Usually, as you know, with distributions there are components that you can pick and choose and include in a build; that's pretty standard in Linux distributions, and I'd like to move towards that kind of automation and configuration, or installer tool if you will, which maybe can be based on top of the collector builder, extending it. There are some pretty nice examples that exist today, even in the Go world, and we've been looking at those. So I will submit a proposal with some of my thinking around how we can do that, and that's based on your point too, Juraci, that downstream distributions can very much be plug, pick and build. That's something we should totally move towards, but I don't know if that's something that's... I think it's an ongoing effort; it's not necessarily tied to GA or tracing.
C
Yeah, and I think that's the idea with this list: anything that is not guaranteed to be stable will move to contrib, and that's kind of the separation being proposed right now. But to your point, this is ideally something the Go SDK has also been looking at. Anthony, maybe you can chip in here.

The idea is really centralizing and reusing what the Go SDK is also doing: marking as stable the components you are able to build with, and tagging the other ones as experimental.
H
We've been building some tooling in the Go API and SDK to help wrangle that complexity, so that we can say: okay, here's a set of modules that are version 1.0, and we ensure that nothing in that set depends on any of the modules that are not yet 1.0, and that every module is in one of these sets. There's a set of validations that we go through so that we can then increment the versions together, and do all the tagging and those sorts of things that are necessary to create a release.
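A toy sketch of that kind of validation (not the actual release tooling): parse a module's go.mod and flag same-project dependencies that are still pre-1.0. The module paths and the stable set below are hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"golang.org/x/mod/modfile"
	"golang.org/x/mod/semver"
)

func main() {
	// Hypothetical set of modules declared stable (>= v1.0.0).
	stable := map[string]bool{"example.com/project/api": true}

	data, err := os.ReadFile("go.mod")
	if err != nil {
		panic(err)
	}
	f, err := modfile.Parse("go.mod", data, nil)
	if err != nil {
		panic(err)
	}
	for _, r := range f.Require {
		// A stable set must not depend on unstable same-project modules.
		if strings.HasPrefix(r.Mod.Path, "example.com/project/") &&
			!stable[r.Mod.Path] && semver.Major(r.Mod.Version) == "v0" {
			fmt.Printf("unstable dependency: %s %s\n", r.Mod.Path, r.Mod.Version)
		}
	}
}
```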
H
So we can remove some of that complexity, but it is definitely additional overhead that has to be considered, and a single-module approach is far simpler. I think that only works for core, though, because in contrib we obviously want to keep everything in separate modules, and so some of that tooling may still be useful for the tagging, versioning and all of that maintenance overhead. But in core, if we can keep a single module, then we don't have to worry about interdependencies between the modules, and about ensuring that stable modules don't depend on unstable modules, which was a big part of why the Go SDK only recently released its 1.0 RC. It took us a long time to untangle some of the interactions between the core of our API and the metrics API, which is still unstable, for instance. Let's see.

Yeah, there are a couple of modules: there are commands that are separate modules, and internal tools, but those, I think, are very clearly one-way dependencies upwards. The core module includes all of the things it depends on as a self-contained unit, as far as I could tell.
E
Except the PR that was merged which extracted the pdata, but yeah.
B
No, no, well, I think my point is that there are two audiences for the OpenTelemetry Collector. One audience is users: people who are downloading the binaries and expecting certain components to be there, and those components to be stable, and one of those components is certainly Jaeger or Zipkin. So, things that we defined a long time ago as first-class citizens.
B
Yeah, or Prometheus, yeah. And the second audience for the collector is the collector builders: people building downstream distributions based on the OpenTelemetry core. For those folks we do not want to leak exporters or components as part of the API. We don't want them to use the Jaeger exporter in their components, because we might just remove that component in the future; we don't want that to be part of the public API.
B
Basically, what I'm asking, or suggesting, is to decouple the definitions of what is the core API and what is a distribution. If we do that, then we have a distribution of the OpenTelemetry Collector that is suitable for the general case, then we have another distribution that is suitable for sidecars, then one that is suitable for tracing only, and so on and so forth. I think we talked about the idea of having stamps, so we'd have a stamp here saying "OpenTelemetry Collector, certified for sidecars".

I don't know, yeah.
C
That's true, Juraci, and I think that's a really good long-term, even short-term, vision to shoot for. I do think that ideally collector core should be OTLP end-to-end, and everything else should actually be addable and distributable, as distributions.
E
Or maybe split into different modules, then: Jaeger, Zipkin, Prometheus. Even though you keep them in the core repo, you have different modules, so at least the core module would include the interfaces plus the OTLP receiver and exporter, and maybe the debug or logging exporter, whatever its name is, and that's about it. I kind of like this. But can somebody summarize all the changes that we would have to do to achieve that? Because, yeah, we are talking a lot about this and nobody has put together some...
C
...the source structure, as well as the repo structures, that are required, or that we are proposing, to be able to build a scalable distribution model. So I do think we have to do a design and propose it for review to the maintainers as well as the community.

No, no, I'd be happy to help you, Juraci, because we've been thinking about it, and given that we do build the distribution for AWS downstream, we have a lot of work that we've already done on that. So, happy to help. Okay, and...
E
I have a small question, if you don't mind. There's some effort to basically reorganize the processors for attributes, resources, spans and all that, and now that will be in the collector-contrib repo. Is that something that this group will track, or will it be tracked in a different milestone? Does it change anything in that regard?
C
Carlos, we do plan to track each and every item; I just haven't opened up the issues yet, but we'll track each one of these items that are being moved to contrib. That's completely temporary for now, because we just want to make sure that the core collector is stable, and then we'll move each one of these components back as they are refactored or stabilized. Perfect, thanks so much.
C
All right, cool. That's all I had on the backlogs. As a call-out, I also wanted to call attention to a couple of backlogs that I've been working on with Bogdan, as you can see here. These are for metrics: we are starting to plan what components will be required for metrics to stabilize, and as you can see, these are some of the high-level umbrella issues.
C
Bogdan has very clearly itemized these: we need to be at OTLP 0.9.0 support, list the collector components which are not affected by the metrics changes, etc. So again, please take a look; this is work in progress, and we will have a more detailed roadmap and backlog as we build this out. The other areas, which are dependent on this or are kind of phase two in the metrics GA roadmap, are, for example, the Prometheus receiver.
C
We're thinking about the redesign; this has been discussed, and a design proposal is in the works, which Manuel has been working on. The focus is definitely on having the receiver do what it does best, scraping metrics, and perhaps taking the service discovery, for example, out into a more general-purpose module for the collector, building that out and designing it, as well as redesigning and re-implementing the collector's metrics processor, amongst the other processor design discussions that are ongoing right now.

There is a tracking issue for the overall processor redesign, and the metrics processor is just one of those. Again, this is just the initial cut of the major areas that need to be worked on, and of course, along with this is all the Prometheus work.
E
Yes, so 3137 is a bit higher priority than these. The reason is that right now there is a dependency between the OpenCensus conversion to and from pdata and Prometheus, which means that the OpenCensus receiver and exporter have to be in the same repo as Prometheus, and we may not want to have this requirement. So, Alolita, if you can prioritize that more, especially because I know Manuel is working on that, that would be great.
C
Good. And that's all I had in terms of GA, just sharing the updates on that.

I think... with the components...

A
Right, right. So, the multiple config file feature support: is Aditya Rayhan here?
G
Yeah, this shouldn't be too long. I just know that we discussed a lot of issues last week, and I wanted to confirm that we have a go-ahead to start implementing. In terms of your question, Tigran, which I believe I answered in the Google doc as well: the config file submitted at the command line will optionally have an include section at the top. If it's not there, it'll just be a normal config file, so it shouldn't be breaking.
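To make the shape concrete, here is a hypothetical illustration of such a config; the file names and exact syntax are invented, and the authoritative version is in the design doc:

```yaml
# Optional include section at the top; if absent, this is a normal config file.
include:
  - /etc/otelcol/receivers.yaml
  - /etc/otelcol/exporters.yaml

# The rest remains an ordinary collector configuration.
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging]
```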
A
So, can you say one more time: how do you intend to have the multiple config files provided to the collector?
A
Yes, yes, please add that to the design doc. Okay. I'm not completely sure that that's a very nice way to specify multiple files. It is backwards compatible, which I think is the most important thing, but I'd like to think a bit more about it.

Okay. And what happens if both the full sections are specified and the include section is there?
D
Thank you. One thing here: so the decision is that we are waiting for, as just mentioned, Paulo from Splunk, I guess, and after the discussion we can come to a decision. Is it something like this, or, for the core functionality...?

Our requirement seems quite a bit different from what Splunk has done, so this is also another concern. If, for the core functionality, we are okay, with an end-to-end consensus, maybe we can start working on that part; then for the config part, since Tigran has some concerns, I think we can also address those later. That should be easier. How you want to configure them: is that something we are taking as a decision, or...?
A
It's not completely clear to me that that's the case with this proposal; that's what I would like to clarify. Once we are sure about that, yes, let's move forward and start the implementation. That is also the reason why I'd like Paulo's opinion: he spent a lot of time thinking about it earlier, and this was one of the topics that we considered as well. So I would like to understand if he has any additional thoughts on the proposal that you made; maybe he has some improvement suggestions. Once we do...
A
Let's say it this way: in principle, I'm absolutely not opposed to having a feature like this; it's very welcome. I just want to make sure that the user-facing design of it, what it looks like to the user, is extremely user-friendly, that the usability is great. That's what I would like to ensure.

D
Yeah, thank you. Then, yeah, we will discuss. Take it away.
C
And Rayhan, just to note also: even as you implement this feature, things can change, because the metrics work is still in flight. Again, it would be good to get Paulo's feedback earlier rather than later.
J
Yeah, so we proposed the Helm chart for the operator last week, and it got reviewed by several maintainers, thank you. One thing we added this week: we are planning to add an alternative, another option to the Kubernetes-native cert-manager, to generate the signed certs which the operator requires.

So here is the design doc; please take a look, and I will also ask for all your comments. We are not going to replace the native cert-manager; I just want to provide another option for users, in case they don't want to install cert-manager as a prerequisite of the operator.
B
Yeah, and in general we are in favor of this Helm chart, as long as we have people committed to supporting it, because one problem we have on Jaeger right now is that we have a Helm chart, but it very frequently lags behind the operator itself, and it creates conflicts: people try to use the operator with the Helm chart, things don't work for some reason, and when we look, it's because the Helm chart is lagging behind. Now, for this specific part on the cert-manager: I don't know, I would not introduce yet another solution for certificate management.
B
In my experience, people in production don't even use cert-manager. Or some people do, of course, but pretty much every single customer, every single person, would use a different solution for managing certificates.

So we shouldn't worry about auto-provisioning of the certs. Most of the time, people have the tooling to auto-provision certificates based on their infra and their requirements, and the case that you have here for self-signed certs mostly affects local dev environments; for that I would just recommend cert-manager.
B
The cert-manager, so Jetstack's cert-manager. And just tell people very clearly in the documentation that they need a cert, a secret named xyz, with the certificate information in this format, in those keys, and so on and so forth.
C
Juraci, that's a very good point. You're saying that we should just provide sample examples and documentation for different users to be able to configure the different cert managers, right? That makes sense; it makes it more generic. And we will be maintaining the Helm chart long-term, so no worries about that. Cool.
B
Nice, yeah. So I guess let's start with that: just documentation. Let's keep the Helm chart touching only the OpenTelemetry and OpenTelemetry Operator pieces and leave the cert management part out of it, covering it only as part of the documentation. And if people do need auto-provisioning of the certificates, then sure, we can improve the Helm chart later.

C
Okay, yeah, that sounds good. That's good advice, Juraci, thanks.
B
What I would suggest is having something similar. Because you don't have Makefiles here, you'd have to instruct users to install Jetstack's Helm chart before this one here: install cert-manager first, and then install one of the charts here, either the operator or the collector.
H
Okay, so we would still keep the cert-manager CR for generating a cert, and guard it, like we see here, with an enabled flag: if you have cert-manager installed and you want us to generate a cert using it, set these values; if you don't, set this to false and provide your own cert. Yeah.
B
Yeah, and I'm trying to remember how it works for OpenShift, because we have the same situation there. I'm not quite sure that we have the CRDs.
B
Well, yeah. Right now I don't know if we don't have the CRDs and OLM would just strip them out of the manifest of the bundle, in operator terms, or if it has another CRD that is then implemented by something OpenShift-specific, which would be my guess actually, because cert-manager is kind of the default; it's the effective standard for cert management for Kubernetes.
C
Thanks, I think that was the feedback we were looking for. Super helpful, thanks.
K
Yes, I believe this is my first meeting in this session, or this SIG. So hi, I'm Jonathan; I usually work on the metrics and spec meetings around that. My question: this is quite a long story on the Spring Framework side, but the short version is that we are going to support endpoints that will be able to publish OTLP- and Zipkin-format tracing data, and the question is, does the OpenTelemetry collector want to support that, or is it willing to support scraping this endpoint?
E
I don't know what protocol we would want to do there, so that's the first step, I think. Does Zipkin define a pull protocol?
K
Well, Zipkin only defines the format, for example the JSON format; the protocol is basically HTTP, and it's a push. That's all the Zipkin server itself has. The protocol and the format on the Zipkin side are two separate things, and the Zipkin server is not able to pull at the moment; it is only able to get the data via push.
K
I mean, right now we do have a draft for this, but that would be a change. So yes, basically the first step is defining how you can pull this data. Is it WebSockets? Is it HTTP, where via the headers you can tell which was the last piece of data that you received, and so on? So the question from our side, the Spring side, is: does OpenTelemetry want to do it, so that we can...?
E
However it goes, you can then come to us as a community and say, hey, do you want to support this protocol, and most likely we will help you with that support. But again, we have to know what to support first; spec first. Okay, cool, thank you.
E
For the OTLP protocol, yes. For the one with WebSockets or whatever we choose to do, how do you follow the checkpoints? And so on. Anyway, it's an entire protocol that has to be defined there for us to know what to implement. If you want to have it on our side, the benefit of having it on our side is that the collector will be forced to support it: if that gets approved, there is going to be no question, and we have to maintain it.

If you have your own protocol defined, you can still implement your own receiver for it, but most likely the community will not help that much with maintaining it. So, benefits and downsides.
C
All right then, Bogdan, last but not least, you're up.
E
Oh yeah, my item is: I am annoyed with the Jaeger thrift exporter, and I want to hear who uses it, if anyone still uses it. We do support the gRPC one, that is in core and everything, but there is this Jaeger thrift exporter, and if anyone uses it, we would like to know.
B
From the Jaeger perspective, I can tell you that we've been supporting gRPC for quite a long time already, and that's the recommended way of sending data to Jaeger. We changed the Jaeger agent to use gRPC also quite a long time ago, I think a couple of years ago now, and we do recommend people use that instead. So if you're asking about the receiver: you should accept Jaeger thrift on the receiver side. But on the exporter...
B
...I really don't see why. I mean, you can just use the exporter with gRPC. The only case that I can think of is someone who implemented a Jaeger thrift endpoint somewhere that is not Jaeger, just compatible with it, and they do not support gRPC yet; and I don't think that's the case. All the Jaeger-compatible endpoints that I know of provide at least gRPC as well, in addition to thrift.
E
When is Jaeger going to consider removing the support from their side? Because right now, from the agent, you can talk thrift, correct? You can still talk thrift to the next hop.
B
I think the flag was deprecated, we switched the default to gRPC, and then we removed it, but I can confirm that for next week. Oh yeah, so I think we still have it on the receiving side: the agent can still receive a multitude of formats, like UDP Zipkin thrift and UDP thrift, and so on, compact and binary and so on. But for sending data, it's only gRPC. I can confirm that information for next week, but I think that's the situation.
M
That was actually quite good and interesting information about Jaeger thrift versus gRPC, yeah. I had heard that conversation about dropping thrift support, and that they're going mostly with gRPC for sending; I don't know if it affects us. Hopefully Jaeger itself still accepts the same thrift protocol.
M
I think it was the agent/collector SIG, and I think they were discussing the topic of forwarding data from Jaeger to the OTel collector, and I think the assumption they made is that it's likely a gRPC channel from the Jaeger agent to the OTLP side.
M
I was wondering if they're moving ahead with that, and if maybe they support gRPC already, and whether it would be more compact, let's say, if we used gRPC on the client side. Probably not right now; right now it's all done in thrift, which I assume works well. It's more of a longer-term thing: if they are shifting towards gRPC, it would probably also be consistent for us to use gRPC where possible. That way we would use gRPC for OTLP, and perhaps the Jaeger implementation can also shift to gRPC as well.
M
That way we can kind of drop the dependency on thrift. So, Roma, I remember thrift itself is fairly bulky on its own; it has all these Boost library dependencies and all that. So I think it's as bulky as gRPC.
N
Hello, good morning. Tom, hey, am I audible? I think there is some issue.
N
Yes, so I wanted to talk about... yeah. Basically, I think it's related to Bazel, and I think you have written it up. Right now, the current issue is that we cannot upgrade gRPC from 1.34 to 1.38.
N
The latest version is 1.38 and we are kind of stuck at 1.34, and the old gRPC does not work with Bazel 4, so we cannot upgrade to the latest Bazel version, because that won't compile the old gRPC. The problem right now is that our OpenTelemetry C++ supports GCC 4.8, but the latest gRPC version supports GCC 4.9 and above.
N
The problem is that it works fine with CMake, because we can split out our CIs and our builds separately for the OpenTelemetry OTLP exporter and for core OpenTelemetry C++, but for Bazel we don't have that kind of split.
M
I would like to add some facts here. I was looking at the matrix of enterprise Linux versions, what compilers they come with, and their official enterprise end-of-life support dates. For example, if I look at Red Hat Enterprise Linux 7, which is already out of support (officially, Red Hat is not going to patch it for you), that one had 4.8.
M
However, the currently supported Red Hat version, Red Hat Enterprise Linux 8.1, already has GCC 8 and GCC 9, which are fairly recent versions that would work well for us. So maybe we should make that call and say that the minimum we now require is 4.9, and we can substantiate it: basically, we can say that there are presently no officially supported major enterprise Linux versions that are still on 4.8.

That is why we'd make that tough call and require 4.9, and if somebody still needs this, I guess we can say it is not officially supported.
M
If you have custom patches for your own build, somehow documented to make it work, fine, but we should not accept those patches back, because then it is our ongoing hassle to keep them running and operational, despite the fact that most enterprises have already moved on. If we had this conversation seven months ago, fine: old Red Hat Enterprise Linux was maybe still supported seven months ago. But right now, most versions have already been declared end-of-life, and the minimum, from what I can tell from the support matrix, is now not even 4.9.

It's already 8 or 9, a much fresher version of GCC. I think we can just say: let's move the baseline to 4.9.
M
And even for Red Hat Enterprise Linux 7, from what I remember, I personally used devtoolset-7: you can actually install a package and set up the build environment to use GCC 7 instead of the stock 4.8. In the other project I'm working on, for Microsoft telemetry, we do exactly that.
M
Okay. So basically, the only scenario that we're breaking right now: let's say a customer takes vanilla, clean Red Hat Enterprise Linux 7, just updated, with no custom toolchains installed and no custom patches. That's the environment where they won't be able to compile, because the latest gRPC is not going to compile for them. But I guess my point is that this environment itself is not even supported by Red Hat. Why do we have to support it? I don't think we should. Okay.
M
But yeah, that's okay. I mean, I think I agree with you. I realize that the rest of the stuff is going to work, but it's probably going to be easier to say that in order to satisfy the full set of requirements, and the OTLP gRPC exporter is part of those requirements, you need at least 4.9 to build the full set. And we can do the right thing: I can do the write-up on how to set it up. How about this:
M
I can borrow the script I already had for how to install GCC 7 on Red Hat Enterprise Linux 7 and above, and we can also say that Red Hat Enterprise Linux 8 already comes with GCC 8 or GCC 9, supported out of the box. And we don't want to go back like 10 years, because, I think, a year ago somebody asked for GCC 4.8. Cool, but that was nearly a year ago; time moves on, and I haven't heard again from any of the contributors or maintainers or customers or active participants that they still need it.
M
Maybe that's the moment when we say we don't support it anymore, and that would be easier to support. We just say: no, the minimum bar is 4.9 right now, just a tiny step forward. I'm going to post a link to that article on the support lifecycle; it lists the current Red Hat Enterprise Linux versions and the default compiler that comes with each, and the latest officially supported Enterprise Linux is already 8.1, which already has GCC 8 and GCC 9. So we should be good with all officially supported versions.
L
Could
we
add
some
some
message
to
our
build
system
to
if
we
detect
the
compiler
is
4.8?
We
say
this
will
be
duplicated
for
some
time
a
few
months,
but
give
people
just
a
notice
instead
of
just
to
retire.
It
now.
M
We can probably print a message with a link that shows how to install a more recent compiler. Something like: "Your compiler is too old. Please review this article, which describes a few ways you may be able to install a more recent toolchain." That would be a good gesture, right?
L
Yeah, but I still have a concern with this one, I think, with Bazel: don't ask the user to upgrade the compiler just for consuming OpenTelemetry C++, because we could be just a very small part of the whole build system, right?
M
Tom, I think the way this is managed, at least in a few environments, is update-alternatives. You can actually have multiple toolchains side by side, and I think in Red Hat what you do is just set up the build environment with a newer compiler. It doesn't mean that you replaced the old compiler; the old compiler may still be installed, it's just that you set up the environment to point to the newer one.
M
I mean, I realize maybe we must give stronger guarantees than just "should work", but at least initially, I think if they have a newer compiler and set up the build environment to build just OpenTelemetry with that compiler, they should be able to load the library and run things, even if their main host executable is compiled, prebuilt, with GCC 4.8 or the like. It should work.
N
That may vary. I was thinking that it may not work, because we don't guarantee ABI stability for SDKs; so if they compile both the API and the SDK with GCC 4.8, but they compile... yeah, I get what you're saying, the exporter part, yeah.
M
I just don't want this to hold us back; I mean, we are stuck with something that others don't support.
M
All the components, yes. Let me also put in another link. This link is about the mapping between Red Hat Enterprise versions and their default GCC versions, the Linux versions support matrix, and right now the only officially supported one, based on their lifecycle, is Red Hat 8.1, which already has GCC 8 or GCC 9.
M
That's way newer, and with that one we shouldn't have any problems. So I'd say: we don't support things that are 10 years old; we have to move on.
M
Ubuntu 18.04 is at 7.4, so I think the long-term-support version has a newer compiler already. It would be good to have the matrix written down, to see exactly what we are losing, say, if a system is unpatched. Well, anyway, for anything unpatched I would say that they should patch: install the latest available updates, and then we look at the latest available updates as a baseline.
M
I can share a few tips on how to install a newer toolchain on an old system, and that's as far as we can go with respect to helping others. You see, we've started getting a lot of support questions right now; you would probably agree that when we look at GitHub issues recently, many of them are build-related questions, not necessarily functional questions. So it's going to be more efficient...
M
...if we have a very constrained support matrix. Rather than starting to support more, we should try to shrink it to a reasonable set, and I think moving from 4.8 to 4.9, at least, is a reasonable step.
N
That makes sense. The only thing is that if we talk about anything more than 4.8, it is not going to be 4.9 or 5.1; I mean, in none of the current distributions... 4.9 was never released as a long-term-support compiler, and even 5.9 was never released. Probably it would be something like 5.6 or so that would be the next version after 4.8 which people will mostly be using.
M
A good article that I used before is actually by Red Hat, from Red Hat for Developers; I'll link it in our discussion. It's about how to install GCC 7 on Red Hat Enterprise Linux 7. You guys can see it in my stream, right?
M
So basically, even the enterprise distro that originally came out with GCC 4.8 has tips and instructions for installing a newer GCC. So 4.9 is probably a tricky version to name, because it was never massively adopted, but GCC 7 was adopted. So we can say that we require at least 4.9.
M
However, you can use these instructions and refer to the link; not our own instructions, we don't want to write our own instructions, but refer to the existing link, which suggests how to install GCC 7. Done. Then we are out of it; we just say: oh, your compiler is too old.
N
The only scenario would probably be an existing application which may be dependent on 4.8, where they may be reluctant to move to 7 or higher versions, but I think it's fair enough. I mean, our dependency here is on gRPC; if they don't support something, I think we definitely have to move forward with those versions.
M
I personally think it's a bit of a chicken-and-egg problem. Say somebody does not want to invest their time to upgrade: why do we have to stretch and support a way wider range, and pay the extra cost of supporting that, because someone cannot move on? I'd rather say that the window cannot be very long; we have to keep moving on, and maybe at least once, or two times, a year...
M
...we should review where we are. And I think here we have a few good answers: first, Red Hat doesn't support it; second, even if it does, there's an article on how to use a more recent compiler. Please use the more recent compiler. And yeah, you'll probably get some dislikes or frustration from users who report those issues, but it's still a valid answer, whether they like it or not: please use a more recent compiler.
N
Okay, so I think to start with, we'll change our documentation to say that we support 4.9, and we'll change our CI: we'll remove the 4.8 compiler from it. Let's see what's the next available compiler for Ubuntu. As I said, I don't see 4.9 or even 5.1 available for Ubuntu 20.04 or 18.04, so we probably need to find the next available compiler for Ubuntu and then use that in our CI.
M
Actually, you know what, I was wrong about the version of GCC: it's even GCC 8 that is available for the old Red Hat Linux 7, and they do have an article. I remember there was that devtoolset-7, devtoolset-8; you just run that command to install an additional toolset side by side with the default toolset. It does not remove GCC 4.8; you just install an additional compiler, and before you build, you set up your shell with that environment.
M
I'm just thinking: someone went out and invested some time in maintaining the patches for the older version, and he was offering some links to his own repository, which kept it compiling even with the older compiler. I think that's great, but accepting that into our repository is a maintenance hassle for us.
M
Yes, yes. And maybe in that article we can also say: if you need the newer compiler, these are the instructions you may follow. I posted the link on how to install a newer compiler.
N
So we do document that we support both of them. But I agree that, for Bazel support, not all the components are currently compilable using Bazel. I have been raising this point for the past two maintainers' meetings: we don't have Bazel expertise in the current C++ community; we need somebody to really help us with that.
N
As I said, Josh has done some work, but I think his priority has moved elsewhere, and we don't have anybody expert at that level. So somebody from Google or elsewhere, who has expertise with C++ and Bazel and can help us out, would definitely be helpful, but I haven't found somebody as of now.
M
I have a secondary concern related to that. We do have TSan and ASan configurations for Bazel, and I noticed that in a few cases even Google Test, a rather standard, mature product, hits false positives with those; especially the memory leak checker there may trigger on something that it believes may be a leak when it's not a guaranteed leak. It's a good thing we run all these checks with Bazel, but then it's a maintenance overhead. For example...
M
...I was modifying the TCP socket library, and I only had some issues with the test server part, which is not even used in production, under the Bazel TSan or Bazel ASan configs, whereas it passes the other tests fully, with CMake on Windows and with CMake on Linux. So there's another dimension: it's not only maintaining the Bazel build files, it's also maintaining the sanity of the Bazel runs, because Bazel does more checks than our CMake tests do. And it's definitely going to be helpful if we get some assistance from a person who's more familiar with that Google-internal tooling, yeah.
M
I don't know if we should for GA; I think it's nice to have, and I don't want this to be a blocking item for GA. Obviously, there are other good libraries, like tcmalloc; tcmalloc can also be used for catching memory leaks in most cases. My experience was that sometimes we hit false positives, and as part of CI it is hard to determine whether it's a real issue or a false positive; so, manual runs.
N
But definitely, these two CIs have helped me out a couple of times; I did have some valid memory corruption issues that these caught, so yeah.
M
Maybe we should highlight that in the Slack chat, to say: hey, we might need a bit of ongoing assistance with Bazel, at least once every two weeks, from somebody who's familiar with it, to join us and make sure that things are running.
M
Yes, we are actively looking for committers in that space. Okay.
M
Here's the thing: I think for the mandatory components, as long as things are there and running fine... and for things like examples, or even some, let's say, optional tests for things like TraceZ, zPages...
M
I think it's a great feature, but I'm not sure how widely adopted it is amongst all the customers, so not having that portion of the build covered by Bazel should be fine. And I think we need to track a roster of "good first issues" for someone who is already familiar with Bazel; so, a good first issue, but one that requires somebody who knows Bazel.
M
For the Fluentd draft stuff that I did, I had to turn off some tests for the socket library because of the issues I mentioned; I had some interesting issue with TSan. But I actually added the build file for the Fluentd library itself, so I can cover that; I sorted out how to use Bazel in that sense. But again, yes, for longer-term maintenance of the whole setup for all components, we need some committer who is more familiar with this, exactly.
N
I think it was reported a couple of days back on Slack, and I think this was a valid problem. We do maintain the context shared_ptr in a stack, but whenever we remove it from the stack, we actually just decrement the stack-top variable; we don't really remove it from the stack. So the shared_ptr stays in there, that memory would only be reused later, and only when it is reused would the shared_ptr be released.
N
So that was the issue. I just put up a fix: whenever we are releasing the span from the current context, we basically rewrite that memory with an empty context, so that the shared_ptr gets released right away. Please go through it; I think it's a minor fix.
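A minimal C++ sketch of the fix being described, with hypothetical types (not the actual SDK code): the stack keeps a fixed buffer plus a top index, and popping now clears the vacated slot so the shared_ptr is released immediately instead of lingering until the slot is overwritten later.

```cpp
#include <array>
#include <cstddef>
#include <memory>

struct Context {};  // hypothetical stand-in for the SDK's context type

// Sketch of a thread-local context stack with a fixed buffer and a top index,
// mirroring the structure described above.
class ContextStack {
 public:
  bool Push(std::shared_ptr<Context> ctx) {
    if (top_ >= kCapacity) return false;
    slots_[top_++] = std::move(ctx);
    return true;
  }

  void Pop() {
    if (top_ == 0) return;
    --top_;
    // The fix discussed above: without this line, the shared_ptr would stay
    // alive in the vacated slot until it happened to be overwritten later.
    slots_[top_].reset();  // rewrite the slot with an empty context
  }

 private:
  static constexpr std::size_t kCapacity = 64;
  std::array<std::shared_ptr<Context>, kCapacity> slots_;
  std::size_t top_ = 0;
};
```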
M
It's a simple solution and a good change. I do have a rather unrelated question; I'm not going to put it in this PR, because it's unrelated to the essence of the PR: this entire structure, what are its thread-safety guarantees, by the way? Because right now we don't do any locking on it, right? I mean the pop method, for example: is that possible?
M
Each thread gets its own structure, yeah. So we...

N
...have to ensure that a given span cannot be active in multiple threads; it has to be active in one given thread only. "Active" means that it's the current active span on the running thread, so any child span that comes along will become a child of that span. So we cannot make a given span active in multiple threads. We can definitely pass that span to multiple threads, but not make it active, because the thread-local storage is only specific to the thread where it is created.
M
Would you mind if I take some time? I need to try this code. (Yeah, yeah, please.) And yeah, I had a not similar, but somewhat related, PR in terms of multi-threading...
M
...for my customer, because they are using the ETW exporter and they asked for that, and I wanted to try and give them an example with that exporter. It is going to work with practically any other exporter, because only a small portion, setting up the provider and tracer, is unique; the rest of the code should be applicable to other exporters as well. But I wanted to give them something like a sausage that's already cooked, so that they can just take it, consume it, and run it.
M
The test that you mentioned: I added it with that thought in mind. Actually, it again failed the same circular buffer test simulation. So the other issue we have in the repo, unrelated to my change: we do have some ongoing issue with the trace circular buffer test simulation. I'm going to re-run, but it is tripping occasionally in our CI.
N
The problem here is that, even though that span is going to be dropped, you can still create a child of the dropped span, and that creates a different trace hierarchy, because that child again becomes a root, as there is no parent for that child if you try to create it. So there is no single trace being maintained, just because any span dropped in between does not really get recorded; that's a valid explanation to recite. So the good...
N
The good thing was that we were able to maintain one static span instance, and we used to return that span instance for all dropped spans; but now we have to create a new span for every dropped span, to have the right span context ready.
M
I don't know if it's significant, though. Obviously we have to generate some buffer, and it's going to be span IDs and trace IDs: how many bytes, 16 plus 8 bytes of randomness, for every...
M
I'd say maybe we should merge it and discuss; we just need to remember about it. I don't know if it deserves an issue.
M
We need to remember that there's this perf penalty associated with dropped/no-op spans: we actually populate 16 plus 8 bytes. And if somebody says, "why did I set up my system to drop everything, but I still see overhead in the instrumentation?"...
M
...that's going to be the time when we need to decide. Maybe there might be something better; I don't know, just a random guess, but there could be an ifdef that alters this behavior and optimizes it instead, so that we don't generate it: we'd go away from the spec, but we'd optimize that path to avoid generating the unnecessary 24 bytes. But other than that, that's a good point.
N
Sure. And then there was the global log handler; I mean, I don't really want to merge this yet, yeah. I'm okay if the suggestion is that we should just provide printf-style formatting for the log macros; totally fine with me if we agree on that. I think we can have that in one header file and use it everywhere; that's simpler. We cannot change it at runtime, though; probably the only thing is that the handlers can only change at compile time. I was thinking...
M
I was thinking about this, and I personally have a preference for a C-style logging method, because of an old-school attachment to C code, and because I can use the same logic across C and C++. My other argument would be that for libraries, I looked at how OpenCV does it and how Unreal Engine C++ code does it, and they both follow the variable-number-of-arguments approach. But, I mean, I agree: it depends on how we treat things. If we start thinking about C++20 and above, and if we start from that point, perhaps everything being C++20 and more modern is the right thing to do, and in that model we should probably avoid macros and variable arguments.
M
But if we look back at the historic experience of what other frameworks provide, and perhaps what other engineers used in the last 10 to 20 years, I'd say printf-style, or at least variable-argument (not necessarily printf-style, but variable-argument) logging is a good thing to have. So how about this:
M
Perhaps we can have something that provides variable-argument, printf-style handling, as well as something that accepts a structured C++ object which accumulates the detail: some sort of context object that accumulates the file, line number, the location where the trace is emitted, as well as the message, and maybe optional attributes for additional arguments, like a pointer to a structure that was involved in the failure, or some other extras that are relevant to debugging that code. I can share some thoughts on that.
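As a rough illustration of the two styles being weighed (hypothetical names, not the actual otel-cpp internal logging API; the ##__VA_ARGS__ form is a common compiler extension):

```cpp
#include <cstdio>
#include <string>

// Style 1: printf-style variadic macro that captures the call site.
#define OTEL_INTERNAL_LOG(fmt, ...) \
  std::fprintf(stderr, "[%s:%d] " fmt "\n", __FILE__, __LINE__, ##__VA_ARGS__)

// Style 2: a structured context object accumulating the same detail.
struct LogContext {
  const char* file;
  int line;
  std::string message;
  // Optional extras relevant to debugging could be added here.
};

inline void Log(const LogContext& ctx) {
  std::fprintf(stderr, "[%s:%d] %s\n", ctx.file, ctx.line, ctx.message.c_str());
}

int main() {
  OTEL_INTERNAL_LOG("exporter failed after %d retries", 3);
  Log(LogContext{__FILE__, __LINE__, "exporter failed"});
  return 0;
}
```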
M
The issue is that I am trying to solve similar issues, the same issues, in the Fluentd exporter, and I refreshed it with some logging-macro stuff as well. It is needed, because there are error conditions, cases where we need to do something about it: we need to log it somehow, and it would be great if we normalized that and had a standard mechanism for doing it across all exporters.
M
Yeah, yes, absolutely. It's for SDK-internal use, and perhaps for exporter developers that use the SDK headers, yeah.
M
Because I invested in similar thoughts on a few occasions and over a few iterations, I can try to summarize some of the patterns. Maybe, as the working process, we show alternatives and we can discuss what you guys think is better. Okay, sure.
M
Please don't look today; I will refresh it again. I will remove some unnecessary stuff from it to keep it smaller.
N
I haven't looked into that, but I may have one comment. Once I go through it I'll have a better understanding, but if it is some socket library that we want to expose externally, I think it probably won't be a good idea to bring in just any library and provide it as an API to external customers, even if it's for exporters; it won't be a very good idea. But let me go through it and I'll have more understanding.
M
What I did is pretty much a TCP, UDP and Unix domain socket client library, and I also added a server library, but the server library is not intended for production use in any way, because I think I have some issues on the server side. The reason I needed the server library is to emulate end-to-end tests: if I run, for example, the Fluentd exporter, I need to run a Fluentd server that accepts my data over TCP, UDP or Unix domain sockets, then decodes it and, for example, echoes it back, so that I can cross-reference and check that the data is well-formed and properly received by the server.
M
Most of the network socket libraries are Boost-licensed, and I couldn't find a good cross-platform socket library; and I don't want to deal with the raw socket APIs directly.
M
That's why I wrote that abstraction layer, and I placed it mostly on top of existing HTTP server code that we already had: we had a test HTTP server, and I enhanced this code with support for Unix domain sockets. Previously it was only HTTP, but I made it so that now it's also either TCP, UDP or Unix domain, and it allows you to build test mock echo servers and such, with callbacks and all that.
M
That way we don't share it with the other exporter developers; I can keep it private to my exporter only. That's where I'm looking for some feedback. Anyway, if you can take a look at the code: the code won't change much irrespective of where it lives, and I do not have a strong preference for where exactly it lives.
M
Yeah, if you can take a look, look at the biggest files in the diff, because I actually wrote those; just see what kind of classes I'm adding. It's more like a socket abstraction for various socket types: send, receive, connect, bind, all these primitives are made in a cross-platform way.
N
So probably somebody can just review that. Then there's the build-scripts PR for Windows, I think; what's happening with that? Last time we thought that you were going to do some changes in it.
M
I was going to add it to CI, I'm thinking. These are only build script changes right now; obviously they pass all the tests, and I don't think it conflicts with anything we have. The main motivation was to make it agnostic of Visual Studio: it does work now with Visual Studio, but it also works with LLVM Clang, if I install LLVM Clang from the LLVM website. Okay, I think it's...
M
...ready to go, and I'd say I'll send a separate GitHub Actions setup: I will add the YAML for calling into these scripts, so that we have a loop that uses those scripts. Other than that, I'd say it should be safe to merge. It doesn't change any of the code; it just fixes a few things, and it allows using Ninja as the build tool, which gives a significant speed-up for local builds.
M
Thanks, thanks for your time. Maybe one thing on the Jaeger-on-Windows thrift issue: we agree it's a documentation issue and not a build issue, and I think the problem was that the customer used two different build systems. They used their own CMake tools, and that way they don't use the vcpkg-provided dependencies. So it's more of a user error, and I think we need to document it. Yes, exactly.