From YouTube: 2021-06-16 meeting
A: Let's start with the processor's design. If the author is here, can we do that? Who is the author?
B: Yep, I'm sharing my screen. Can you guys see my screen? (Yes, yeah.) Thank you. So yeah, before I start to walk through the design, let me give a little bit of background introduction. I think he raised this issue in our backlog; he foresaw some messiness in the current processor design, and he had some general ideas and some requirements we want to address.
B: Basically, I'm just trying to make things more concrete in this proposal by following his description. It's a few pages, so probably I can just start from the proposed design. I see Tigran, Punya, and Jay already gave some comments here; we can talk about those comments a little bit later, but let me just quickly introduce the idea.
B: What do we have here? So basically the target: the processors we want to properly refactor or redesign a little bit. It's all in this diagram. From what I see in the current core repo, all the processors we have are basically feature-specific; each one does something based on a particular feature. So I can see the problem in the future:
B: if, for example, we have some common use cases we need to implement, we probably either need to add a new processor or try to squeeze that feature into one of the processors here. So the extensibility is lacking; it's not that flexible. So the idea is that we're trying to refactor those processors to make each processor data-type specific: for each data type there would be, for example, a trace processor, a metrics processor, or a logs processor.
B: We're going to put everything we want for a specific signal into that data type's processor. Also, pieces like attribute and resource handling could be common across these processors, and we already have existing implementations and common configuration modules being used in the current processors.
B: We can directly migrate those implementations into the new processors, so a lot of code can be reused, and they would share the same configuration syntax when they do the same thing for different data types. Also, if a new request or a new common use case needs to be introduced for a data type,
B: then we can easily know where we should put it, and we can also limit the number of core processors in the core repo. That's basically the idea of this proposal. And yeah, I see, Tigran, you also mentioned that we probably should have some alternative or backup plans. Sorry, I didn't do that at the time; we can discuss a little bit if anyone has more ideas about a better way for us to do it.
B: Yeah, this is probably a good chance to flesh it out. So yeah, this is the idea. What I'm trying to propose: first of all, our goal in this proposal is to list the new processors we want to have in core. As I just discussed, for each new data type we're going to introduce a new processor. And also, about the current processors:
B: it's not that they don't work, right? They do all the necessary work, and we have pretty good implementations there. A lot of the configuration has been implemented as modules, so we can reuse them. We are trying our best to reuse the current configuration modules in the new processors, which means we don't change too much; customers can easily understand how to configure the new processors, and that reduces the adoption curve as well. At the same time, another goal:
B: we don't want any functionality regression, right? Anything we have done in the core processors for processing the data, we want to keep all those functionalities, and then we can discuss what else we need to add into those core processors. And of course performance: we want to keep it, even enhance it, or at least keep the same performance.
B: That's the goal. As for the long-term goal: I also see a lot of conversation in the issue, from AWS or Google or New Relic, wanting to add special requests for how to handle metrics and all those things. That's probably a lot, and it will not be covered in this doc.
B: We can talk about that later. The security side will not be covered either, nor vendor-specific new endpoints or other stuff. So yep, that's the goal and the long-term goal. For the requirements: I looked at the current attributes processor, the resource processor, and the span processor, and tried to summarize the things we are doing in those processors.
B: I think one of the big things is the key/action/value actions we have there. First of all, we can do the filtering: when the data comes into the pipeline, we have a matcher, a way to filter which data points we want to apply the data mutation to. Then, on those data, for the attributes or the resource,
B: what kinds of actions do we want to perform on those key-value pairs? So we have all of that here. And then the filtering: we also have a filter processor, and our filter processor currently only filters metrics, I think, for now.
B: So do we need to expand this filter processor to all three data types? At the same time, we also have the span processor, where we modify the span name or span body and all that. For all those special requests, for example the things we've done in the span processor, should we move them, and where should we put them?
B: I think in the proposal we're going to move all those features into the trace processor. At the same time, if we have a new request or requirement that needs to be handled for metrics, we move it into the metrics processor. I also list some common things in each processor section; we can discuss those later. Any questions so far before I move on?
B: Okay, then let's talk about the configuration design. If you take a look at all the requirements we have here: like I said, all those requirements are currently implemented, and we already have those configuration implementations as modules, either in our internal folder or in our config folder. So what we need to do is keep those configurations; we probably just rearrange or recombine them into each of the new processors.
B: That's the idea. For example, for the trace processor, we still want to do the attributes or resource value insert, update, and all that. We also want the match rules so we can include or exclude data points when they come into the pipeline and we want to do the mutation. And the filter is the same: if you want to drop any data points, the filter will heavily rely on the matcher
B: configuration as well. So basically, any common configuration implementation should in the future exist in one place and be shared by the real processors. That's how the new processors are going to work, based on the current things we already have, so I think the new implementation could be really simple, easy to implement. So, for the trace processor: okay, I think this is the one.
B: Basically we're trying to combine all the functionalities of these three existing processors into one processor. What it does: first, we should have a filter configuration, right, to select the data points; then we're allowed to apply some data processing actions, with rules to mutate the resource or the attributes, like we've done in each of these processors. And in the span processor there's also special logic to handle the span data.
B: We'd also move that into the trace processor. So those are the three key functionalities we already have in the current core processors. To put them into an example configuration: here is the example, right. We have the filter configuration, so customers can do the filtering with include and exclude, and we currently have the action/key/value section where we do the insert and update. I did a little bit of modification to this struct.
B: I added a new type field here, because from what we have today, resource and attributes are pretty similar, right: they're both key-value pairs. So I just added a data type into this new struct. In the configuration, customers can specify that they want to apply one action to the resource and another action to the attributes. So this has been extended a little bit. Then the rename feature is specific to traces; that's also an existing feature.
B: Basically, we can either derive the span name from attributes or derive new attributes from the span name. So this is how the trace processor configuration would look in the future. I also put an example here. So basically, yeah, you have the trace processor; you can do an attribute update or insert, you can do a resource insert, and you can also do attribute extraction based on the regex
B: you define here, plus all the other actions: upsert, delete, hash. Then we can also do the filtering. If you want to match something, you can apply a strict pattern, do the key-value match, or do the regex match, all of that, or exclude. I tried to put in examples for everything based on what we discussed in the doc.
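For reference, a rough sketch of what the combined trace processor configuration described here might look like. The field names are illustrative only, pieced together from the existing attributes, resource, and span processor options; they are not a final schema:

```yaml
processors:
  trace:
    # Matcher reused from the existing processors: selects which
    # spans the actions below apply to.
    include:
      match_type: regexp
      services: ["checkout.*"]
    # Key/value actions, extended with a `type` field so the same
    # list can target either the span attributes or the resource.
    actions:
      - key: environment
        value: production
        action: insert
        type: resource
      - key: db.statement
        action: hash
        type: attributes
      # Derive new attributes from an existing value via regex.
      - key: http.url
        pattern: '^(?P<http_scheme>.*):\/\/(?P<http_host>.*)'
        action: extract
        type: attributes
    # Trace-specific renaming: build the span name from attributes,
    # mirroring what the span processor does today.
    name:
      from_attributes: ["db.system", "db.name"]
```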
B: Then the metrics processor. I put a note here that it's really debatable, because I know a lot of teams want to do special things in this processor, but in this discussion I'm just trying to cover whatever we've already done, plus a few common cases, for this new metrics processor. I don't want to go too deep, because that would take a long time.
B: I think the requirements I put here are: first, the same thing, we need to support the updates on the resource and attributes and also do the filtering. Then I think one common use case is the unit transformation, at least; I see Bogdan also put it there, so I tried to put some design into the configuration for how we can do the unit transformation. I also see some common use cases in the metrics transform processor, which is in the contrib repo, and it can aggregate the metrics by label.
B
All
those
stuff
it
looks
pretty
good
use
case
I
haven't
put-
I
haven't
moved
here.
If
we
can
decide
it,
I
can
move
those
configuration
from
the
country
into
the
corporate
this
new
corporate
processor
yeah.
This
is
also
the
thing
we
can
discuss
so
yeah.
So
basically
the
same
thing
I
for
the
for
the
unit
transformation.
B: I put an example struct here. We can select the metric by name and say which unit; for example, if you want to convert your metric unit from milliseconds to seconds, you put seconds here, and then you have a multiplier factor, because going from milliseconds to seconds you need to scale the value, so you can also put a multiplier factor here. Yeah, the configuration is similar; we can take a look at that later.
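A sketch of that unit-conversion configuration, with hypothetical field names. Note that going from milliseconds to seconds means scaling each value down, so the multiplier would be 0.001 (the reverse conversion would use 1000):

```yaml
processors:
  metrics:
    unit_conversions:
      - metric_name: http.server.duration   # select the metric by name
        new_unit: s                          # target unit (here: ms -> s)
        multiplier: 0.001                    # factor applied to every value
```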
B: And then the log processor, same thing. I don't have too much here yet; we can keep adding if there are new use cases we need to put into the log processor. And the last one, right, the filter processor: right now in the core repo we only filter by metric names, for metrics, so do we also want to extend it to all three data types? I know for traces it's probably a little tricky:
B
It
may
be
a
little
bit
conflict
with
the
spat
sampling,
but
I
think
you
know
if
we
have
this
thing,
maybe
we
we
should
apply
to
three
data
types.
Basically,
what
a
customer
needs
yeah
yeah.
This
is
pretty
straightforward.
It's
a
basic
idea.
I
I
think
for
the
trace
part.
I
I
I
I
think
it's
pretty,
you
know
straightforward.
You
can
start
the
indentation
and
the
metrics.
The
logs
does
need
a
little
bit
more
discussion.
B: Yeah, I don't have any backup plan in this doc yet, but it's open, so anyone can throw ideas into this doc. Okay, I think I'm done. Any questions?
A: Thank you for writing the document and presenting it. I like the simplicity of the idea, the fact that you can clearly tell that if you want to do anything with your metrics, you use the metrics processor, right, and anything with your spans, you use the spans processor. I think that removes a lot of confusion.
A: One typical thing that people usually do is apply service.namespace to all the telemetry that passes through a particular collector, because it handles the telemetry for a particular team, for example. Today you can do that with the resource processor: you define that configuration once, you put that processor in all of your pipelines, and it applies that modification to logs, traces, and metrics.
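As a concrete illustration of that pattern (abridged; receivers and exporters omitted, attribute values illustrative), the same resource processor instance can be listed in every pipeline, roughly like this:

```yaml
processors:
  resource:
    attributes:
      - key: service.namespace
        value: team-a
        action: upsert

service:
  pipelines:
    traces:
      processors: [resource]
    metrics:
      processors: [resource]
    logs:
      processors: [resource]
```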
C: Yeah, so it's a double-edged sword, and I don't know which one is the right answer. But also, in terms of configuration, with the config source and such, you may be able to use a kind of template: you put the config for mutating the service somewhere, and you just import it into two or three processors if you want. That's a partial solution to this problem, but I think it's something that we indeed need to discuss.
B: And also, the configuration is mostly a one-time thing; people don't keep changing the configuration every day, right? They just declare the things they want at the beginning, and then they probably run for half a year or a few months without changing anything. I think it duplicates things a little bit, but it gives a clear picture to customers or users: they know that for metrics they've made this kind of change, and for traces that kind of change. That's much clearer, even if it duplicates the configuration a little bit. That's just my personal opinion, yeah.
A: One other way to prevent the literal copy-pasting of config sections is to use anchors and aliases in YAML, right? That mitigates it somewhat.
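For example, a matcher fragment can be defined once with an anchor and reused with aliases (the processor layout here is illustrative):

```yaml
processors:
  trace:
    include: &team_match        # anchor: define the matcher once
      match_type: strict
      attributes:
        - key: service.namespace
          value: team-a
  metrics:
    include: *team_match        # alias: reuse the same fragment
  logs:
    include: *team_match
```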
C: The other thing, one of the things that I was looking for, and probably a follow-up after this discussion, is whether there is a way to improve the configuration for this. I know that you focused more or less on the overall structure, but another point that I would like to discuss, maybe in a follow-up design, is how we can improve the matching configuration, for example the include and exclude.
D: Yeah, and then I think the other aspect, once we have a better handle on what functionality goes where: as Tigran said, there's common functionality, so can we actually identify that? And, Bogdan, again to your thinking: how do we actually make sure that the functionality for processing traces, metrics, and logs is clearly defined, and then of course have common functionality and templating for configurations also addressed, plus compliance tests for each of these?
C
So
so,
personally
personally,
I
don't
see
a
the
problem
that
tigran
raised
as
very
big.
So
I
would
say
personally
that
I
like
more
this
simplicity,
and
I
think
this
is
probably
the
first
thing
that
we
should
agree
on.
E: I think also, for the repeating configurations, assuming we upstream Splunk's code to insert YAML fragments into the configuration, you could essentially use the same configuration for each of the separate processors.
C: I see this point, right, but I think there is a point, in the goals or somewhere, where we need to define all the functionality that we're going to support in core versus extra functionality that can be implemented elsewhere. For example, if I understood correctly, we have two or three main parts: the matching part, which is a config plus an implementation of it; the mutation of attributes; the mutation of the resource; and so on and so forth.
A: Does what I'm saying make sense? But does that mean that you have some of your span-related processing capabilities in core and some maybe in contrib, and then the boundary becomes unclear, right? How do you decide what goes in core and what goes into contrib? And why do you really have two processors now that both claim to work with spans?
D: Yeah, Tigran, I mean, we can take a look at that, but I think that, looking at signals, the advantage is... again, we need to look at the use cases where, at any given point in time, both traces and metrics, or metrics and logs, are being handled, and what the common functionality is there. Because I don't think that, for most use cases,
D: there is so much overlap where, if a particular signal is being processed, you would have similar functionality that you're leveraging for spans or for...
B: Yeah, and also, to your concern: if we are going to put a lot of functionality for, say, traces in core, does that mean, if we go with the current approach, that we'd also need to add a lot of processors, one for each of those features? And as for how customers configure them: I think we already have too many processors.
B
It
really
can't
be
really
hard
for
them
to
understand
each
of
them
and
and
the
things
they
need
to
focus
if
this
force
trace
only
if
these
four
metrics
in
the
function,
it's
it's
hard
configuration
curve.
It's
it's
going
to
be
high,
and
I
also
doubt
if
for
the
core,
should
we
put
that
many
things
there?
B
You
know
into
the
to
to
have
the
core
sorry
core
repo
have
that
much
functionality
to
handle
those
data,
so
I
would
say
either
way,
even
though,
if,
if
you're
worried
about
putting
200
straightforward,
if
we
go
with
the
new
proposal,
it's
going
to
be
also
a
problem
to
the
current
current
design.
Right,
that's
just
what
I'm
I
I'm
thinking
right
now.
I.
D: I mean, one of the key areas in our discussions, again in discussions with Bogdan, has been how we manage the proliferation of processor rules, right, for any data signal. We see that pretty rampant with metrics right now, as we have been adding different kinds of rules. But does that really scale? It doesn't; it's actually very hard on maintainability.
F: We wrote those to configure ourselves, like the custom stuff in contrib, Container Insights and things like that. This seems like it's more for the end users, so they can make sure that their attributes are consistent all around, rename things, and so on. I think that's just not in the doc; maybe that's why we are not on the same page, because we don't know who the audience is here. Yeah.
C: To Jana's point, it would be interesting to get ten or twelve examples of functionalities and see who the audience is, as Jana pointed out. Is it intended for the end user to configure, for consistency of attributes? Because it happens that in Ruby they call the attribute foo, in Go they call it bar, and they want to end up with buzz in the backend. Or is it something that is very backend-specific, because they are using this particular backend and
C: they need this attribute to be called bar even though OpenTelemetry names it foo, for example. So I think there are a bunch of use cases like this, where we may need to have a set of use cases defined, so we can better understand the trade-offs and see which approach better matches those use cases.
F: Fine, yeah, I saw those other things. There's one more thing that I'm curious about: what if I want to extend it? For example, if I want to provide some custom logic, I'd still want to write, let's say, a processor. How is that going to work with this common processor? Should I be following the same configuration style?
F: Maybe the current model of having multiple processors is nice, because if we want to add any span-processing type of logic, we can go ahead and invent our new rules and so on. I just want to understand your overall thinking: this is going to be the core processor, but what if we want to add something extra?
F: Should we follow the naming conventions? Or should we rename, for example, a span processor to be called a trace processor, for consistency?
D: I mean, again, I think, Mia, that's a very good point, and I think we've discussed it but not really itemized it in this doc. It would be totally worth itemizing some of the conventions that we'd like to standardize, in terms of not only naming but other aspects, right? What is the data? Are we talking about configuration templates? What exactly are we doing? Yeah.
D: And then that's one of the issues that we've also run into, as we've had multiple metrics processor requests and issues that we've filed: we ourselves are not able to understand, okay, what does this processor actually do today, and do we need to build another processor for a specific configuration? So, good point. Yeah, I think a semantic conventions section, or just a conventions section, would be useful.
H: I want to respond specifically to the open question that you posed in the doc around metrics. Under metrics, you said aggregation is an optional thing you're thinking about, and I want to highlight that metrics are fundamentally different from all of the other data types here, in that metrics already represent aggregate information, whereas for all of the other data types...
C: Because that may drive us towards one solution or the other, in terms of having a unified thing versus splitting by functionality. I...
H: All right, I wanted to add one more thing before we move on, which is that, on the flip side, it's also common to want to turn one metric into two metrics, and I think that's also incredibly unlikely for traces and logs.
C: That's true. Jay, you are raising your hand, yeah.
K: One more implementation question. I asked this in a comment, but I do want to get the group's feedback as well. It sounds like the metrics aspect of this design will potentially reuse some of the implementation of the existing metrics transform processor, and, as we know, that implementation isn't something we want to support long term, because it's using OpenCensus data types.
D: Yep, absolutely, that goes without saying. I think that's the section that Jay also called out in terms of deprecation, right? We don't want those dependencies.
D: I think we will go and add the sections that have been suggested. I took a bunch of notes, so we can work
on it, and then, once we have an updated version, we can get feedback from folks and discuss.
C: So, in my opinion, my personal opinion, the main question that we need to answer is: do we go with a model of per-signal processors, or do we go with a model of per-functionality processors? Both are valid models, and that's why one of the main things we ask of you is to do the comparison and the trade-off analysis. Once we have that, we need to start discussing the configuration.
D: Okay, yeah, that sounds good. So next time we'll come back with a trade-off evaluation of signal versus functionality, and then, you know, let's go through that first.
C: Sounds good, thanks. And I'm also happy to review some of these things offline; I don't think it always needs to be online. But now that we saw the initial thing and came up with the goals, the main questions that we need to answer, let's do as much as we can offline, and if we get blocked, we call for another meeting.
C: I mean, this was an experiment, and it doesn't seem that we can fit two of these in one hour and still have discussions of other things, and I don't think we should drop the other questions. So unless anyone has an objection, I would call it done for this design, because we reserved 40 minutes, and just move to the questions. Alolita, are you okay with that?
L: Yeah, I'm here, all right. Okay, hi everyone, my name is Mario. I brought up this issue that I opened very recently. To give brief context: the application's telemetry is built on the obsreport and obsmetrics packages, and the metrics are being exported with a Prometheus exporter by default, which is from OpenCensus.
L: The problem that we're facing is that this particular exporter is not very configurable, and for us this is of particular interest when we are running multiple applications within the same process; being able to configure this exporter is, we found, a good way of differentiating between the applications' metrics.
L
But
as
we
also
thought
that
as
a
medium-term
possible
solution
would
be
exposing
this
prometheus
export
configuration
in
the
application
settings
yeah
until
we
can
use
the
generic
metrics
exporter
so
yeah,
I
guess
my
question
is:
if
there
are
any
thoughts
already
on
this
and
yeah,
we
will
like
to
contribute
with
this
and
yeah.
So
sorry,
okay,
very
good.
F: Are you able to, you know, add custom attributes or names? Do you have a list of things that you care about in the configuration?
L: Right. So for us, the registry of these Prometheus exporters is of particular interest, so we can attach labels and use custom registries. That would be the number one item.
C: I think this touches a bigger problem, a bigger hole in our design, which is all of our own telemetry: the logger (the zap logger), the exporters for metrics, and even the exporter for the logger, because you configure the file where to put the logs and so on. Some of them are flags; some of them are hardcoded in code. I think we don't have a good story overall.
C: So I was thinking about this, and, in my opinion, a reasonable solution would be the following. First of all, we should aim to have this in the configuration, in the YAML file, not as flags, if possible. Secondly, even if we switch to OpenTelemetry, even for zap, there are probably going to be custom-built exporters that people want to install instead of the ones that we provide.
C: So let's assume we provide Prometheus; somebody may ask, what if I want the AWS metrics exporter directly, or something like that, for these metrics, and I don't want to expose a Prometheus endpoint and such? So there will be a lot of questions.
C: What I'm saying here is that there are two problems. One is the configuration: we should come up with a story for how you configure all of these things. The second is the creation of the components, the exporters: somebody has to create them, or there needs to be a factory or something that we pass the config to and that gives us the exporter instance we can install and apply to the configuration that we have. That being said, I think these are the two problems that we need to address, and we need to come up with generic solutions for both of them.
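A minimal sketch of the factory shape being suggested; every name here is hypothetical, not an existing collector API:

```go
package selftelemetry

import "context"

// ExporterConfig is a hypothetical config block that would be decoded
// from an own-telemetry section of the collector's YAML file.
type ExporterConfig struct {
	Type     string            `mapstructure:"type"`
	Settings map[string]string `mapstructure:"settings"`
}

// MetricsExporter stands in for whatever exporter interface the chosen
// SDK ends up exposing for the collector's own metrics.
type MetricsExporter interface {
	Shutdown(ctx context.Context) error
}

// Factory turns a config block into a concrete exporter instance; a
// Prometheus factory, an AWS factory, and so on could each implement it.
type Factory interface {
	CreateMetricsExporter(ctx context.Context, cfg ExporterConfig) (MetricsExporter, error)
}
```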
C: No, they don't. So this is an interesting question: should we put our own telemetry into the pipelines? And probably...
F
Know
in
terms
of
like,
if
it
breaks
you
know,
we
will
still
want
to
expose
some
metrics.
C
Correct
correct,
correct,
so
that's
that's
that's
a
good
point.
The
other
point
is
maybe
maybe
one
option
for
us
would
be
to
to
have
to
be
able
to
configure
one
of
the
exporters,
not
the
entire
pipelines,
but
one
of
the
exporters
or
specific
pipelines
for
own
telemetry.
I
don't
know,
I
don't
have
a
good
answer
jana,
but
what
I'm
trying
to
say
is
these
are
the
problems
that
I'm
seeing
and
I
think
we
need
to
have
a
discussion
in
the
end-to-end
story
and
to
discuss
this.
A
Okay,
I
think,
to
summarize
this,
the
hiding
of
the
of
the
ops
report
to
the
internal
is
deliberate,
because
we
don't
think
that
that's
the
final
api
and
we
don't
want
to
make
it
public
before
we
go
to
the
ga.
The
this
needs
a
full
endpoint
design
and
until
we
have
that,
I
don't
think
we
can
move
forward
right
with
implementing
anything.
That's
that's!
That's
my
take
on
this.
C
Yeah,
so
I
think
I
think
I
think
what
I'm
trying
to
say
is
we
need
asean
appointed.
So
there
are
three
things
configuration:
how
do
we
create
the
the
custom
components
that
need
to
be
set
to
the
meter
or
to
the
blogger
or
whatever,
and
the
third
one
is?
How
do
we
send
them
via
pipeline
or
do
we
send
them
directly,
and
how
do
we
do
that?
So
these
are
the
three
questions
that
if
we
have
an
answer
to
that,
I
think
we
can
start
implementing.
L: Right, yeah, it makes sense. So should we open some issues or take some steps to move this forward?
D: I think the initial issue was opened by Bogdan just as a placeholder, but, again, Anthony has been working on this for the Go library, and the idea was to reuse some of that and pull it in for the collector also.
M: Yeah, so I think the question that we currently have is: is the collector okay to rely on the semantic convention constants and variables that are generated for the Go SDK, or the Go API, which use the attribute key-value data types from the API?
C: Right now we don't depend on anything; we just use strings. So I don't know exactly how you'd depend on the attribute key that you mentioned, but, that being said, as long as it's a standalone package, I don't see a problem with depending on it.
A: The release cadence is now clearer, because it's tied to specification version numbers; the semantic conventions correspond to a specification version number. Whenever there is a new release of the specification, the Go SDK is going to regenerate the semantic conventions and publish a new package for that particular version. So I think that's sorted, then. The exception is that if there is something in flight that is not yet released, it's probably going to be difficult to start using it, but maybe you temporarily just hardcode the thing in your collector code.
C: Anthony, can you comment on the issue with the problem that we just discussed? In the meantime, I need to take a look at what is in this attribute key structure and what dependencies it brings with it. Also, the last question that I have: is this a standalone module, or does it bring the entire API with it?
M
It's
not
currently
a
stand-alone
module,
but
I
think
if
we
needed
to
make
it
separate,
we
could
separate
out
the
the
semcoms
and
attribute
packages
baggage
is
not
we
want
to
be
looking
at.
You
want
to
be
looking
at
that
attribute
and
attribute
should
be
actually
should
be
entirely
standalone.
It
should
have
no
dependencies
outside
of
standard
library.
M: Right. So in the Go API we generate them as attribute keys, and the enum values as attribute key-values, because those are used in the API when you want to provide attributes to a span or as labels on a metric. I don't know how useful that is in the collector. It's certainly possible to take the tooling that we've built and use a different template that doesn't wrap the names in attribute keys.
C
No,
no,
what
I'm
trying
to
say
is
if
you
make
a
function,
call
with
either
string
or
attribute
key.
You
don't
have
to
do
a
conversion.
M: I see. If you find a var block in one of those semconv files, you should be able to see an example where we generate the actual enum values, something like this. Yeah, here, you can see it there.
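For reference, the generated blocks in the Go semconv package look roughly like this (abridged):

```go
package semconv

import "go.opentelemetry.io/otel/attribute"

// The key is generated as a typed attribute.Key constant...
const CloudProviderKey = attribute.Key("cloud.provider")

// ...and the enum members as attribute.KeyValue values bound to it.
var (
	CloudProviderAWS   = CloudProviderKey.String("aws")
	CloudProviderAzure = CloudProviderKey.String("azure")
	CloudProviderGCP   = CloudProviderKey.String("gcp")
)
```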
M: The semconv packages and attribute are in the Go OTel API module. If you really wanted them separate, we could look at doing that too; they'd then become two separate modules. Six of one, half a dozen of the other, from our perspective, I think.
C
So
different,
there
is
another
problem
we
do
have
in
the
p
data
in
in
our
data.
We
do
have
another
attribute,
which
is
not
this
attribute
and
that
may
confuse
users.
C
The
other
option
is,
can
you
make
it
the
tool
to
generate
twice
the
the
things
one
that
is
called
common
foo
and
then
the
other
one
is
common
for
key
and
we
just
depend
in
that
one.
The
kamafu
key
depend
is
so
cloud
provider
key.
Is
this
and
you
have
another
cloud
provider
which
is
just
a
string.
M
I
I
think
I
would
prefer
to
generate
two
separate
packages
that
could
be
used
one
this
way
and
one
in
the
other
way,
if
you
could
bring
up
the
semconf
gen
template
and
just
show
that
I
think
that'll
make
clear
how
easy
it
would
be
to
just
generate
multiple
times.
It's
internal
tools.
M: So we could just adjust this template to whatever types you wanted generated, and that would allow us to build attributes with the same names. This tool handles things like ensuring that capitalization is appropriate for Go conventions and the like, so everybody would have the same names to remember, but they would point to different value types.
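So, for instance, a second rendering of the same template could emit plain strings for the collector's use while keeping the generated names identical (purely illustrative):

```go
// Hypothetical string-only rendering of the same semconv template.
package semconv

const (
	CloudProviderKey = "cloud.provider"
	CloudProviderAWS = "aws"
	CloudProviderGCP = "gcp"
)
```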
C: Okay, let me look at this. I now understand the problems better, so let me look a bit more into it and I'll come back to the issue with an answer. But I think it's a great tool, so at the very least we can definitely reuse the tool. And do you mind, by the way, Anthony, if we make it generic, putting this tool in the build-tools repo? So then...
M
Yeah
we
can
so.
I
think
we
skipped
over
eddie's
design
proposal,
which
was
also
for
another
maintenance
tool
that
we're
using
to
handle
package
versioning.
But
I
think
if
we
wanted
to
reuse
that
and
wanted
to
reuse
this,
we
would
probably
just
take
all
of
the
internal
tools
that
we
have
in
the
go,
repo
and
move
them
out
into
a
separate
repo,
whether
that's
build
tools
or
another
one
just
for
go
tools
so
that
they
could
be
more
easily
reshared
yeah.
D: So, Bogdan, just to add on to what Anthony said: can you please take a look, Tigran and Bogdan, at the build-tools proposal and the release proposal? This is again based on the work that we've been doing for the release tooling automation on the Go repo, so we'd reuse that for the collector as well. And it is a...
C: Thank you. Thank you, Jana. You know my two cents on this: if you have time to add tests that run in GitHub Actions on ARM, feel free to add them.
D
Yeah,
I
don't,
I
don't
think
it
does,
but
we
could
you
know
kind
of.
C
Okay
thanks,
but
but
the
answer
the
general
answer
is.
We
would
like
to
support
as
many
platforms
as
we
can
if
possible,
but
but
we
don't
have
time
to
add
all
these
steps
and
stuff.
D: I think, again, Bogdan and Tigran, I know you two have been reviewing and merging the PRs; I think there are five PRs in flight that need to be reviewed and completed, but we are pretty close. Can we discuss the release? Maybe we can just do it out of band, but...
C: Yeah, so, unfortunately, I think I discovered a couple more issues; I will file them tomorrow. But I think we should be done with phase one, though not with the entire RC. The discussion was about being done with phase one, because in phase two we need to understand better the processor structures and things like that, so it's not a real RC; it's an RC of the core packages only, yeah.
C
So
something
like
that,
but
yeah,
it's
the
first.
The
first
packages
will
be
declare
stable.
N: Okay, got it, yep, thank you. So, one more thing, Bogdan and Tigran: for the second design doc, since you didn't get time, can you please make some comments offline, if possible, so that we can start the final design and implementation for the second design, for multiple config file support? Okay, yeah.
O: Welcome. Welcome to the meeting.
O: I think we can start. I just wanted to talk about this, because this was something Bogdan raised: the API changes for span and baggage. We return a pointer whenever we create a new span, and the reason we return a shared pointer is, first, an advantage for the end user:
O: they can pass that shared pointer across different functions and across different threads, and they don't need to worry about the lifetime of that shared pointer. And secondly, most importantly, for us internally, we store that span instance, as required, inside the context. So if there is a thread-local context, whenever a span gets created and the user makes it the active span...
O: The user can store it manually in the context and pass it to the child thread or child span, and we even do it automatically whenever the user wants to make the current span active. So in both scenarios it gets stored in the context, and that's the scenario where the same instance is held by the user and also stored
O: in the current context. So I tried some scenarios to remove that shared pointer: returning a unique pointer to the user and storing a raw pointer in the context, and whenever the unique pointer for that span gets destroyed, somehow trying to remove that raw pointer from the context. That adds lots of coupling between the context and the span, which we don't have as of now, and it needs lots of raw pointer management. So...
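For context, this is roughly the API shape being defended: StartSpan hands back a nostd::shared_ptr, and activating the span stores another reference in the thread-local context until the Scope is destroyed:

```cpp
#include "opentelemetry/trace/provider.h"

namespace trace = opentelemetry::trace;
namespace nostd = opentelemetry::nostd;

void HandleRequest()
{
  auto tracer = trace::Provider::GetTracerProvider()->GetTracer("demo");

  // StartSpan returns a nostd::shared_ptr<Span>: the caller holds one
  // reference and may pass it freely across functions and threads.
  nostd::shared_ptr<trace::Span> span = tracer->StartSpan("HandleRequest");

  // Making the span active stores a second reference in the
  // thread-local context; the Scope drops it on destruction.
  auto scope = tracer->WithActiveSpan(span);

  // ... user code and the context now share ownership of the span ...

  span->End();
}
```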
Q: What I'm saying is I'm strongly opposed to that; I'd say no, and I would have tagged it changes-requested, because I don't see the value, especially right now, before we release version 1 GA. We should consider this enhancement afterwards; right after, we can get together and see what other, better ways there are to do it.
Q: And we'd be losing the partial ownership semantics, yeah. So there are two reasons. Initially, at least, the ask, or the idea, was that we were going to get better perf. Well, we didn't get better perf, and we'd also lose the shared ownership semantics. So, I don't know; let's park it and get back to it later, because we tried and, sorry, we couldn't get there.
O: Yeah, I mean, I totally hear you, and I'm thinking the same thing. The only concern I have is that if there is any API change, we should be doing it now. It should not happen that after GA, when some valid reason comes up, we find we have to remove it; at that point we may not be in a position to really change the API.
O
I
know
we
you
can
try
to
support
both
the
apis,
but
I
think
that
will
I
mean
the
back
end
system
will
become
supporting
more
will
not
be
easy
one.
I
mean
in
the
backend
sdk
to
support
the
api
to
really
support
both
the
interfaces.
So
I
mean
if
any
change
is
required,
it
should
be
now
or
otherwise.
We
should
say
that
now
we
are
not
going
to
change
anything,
we
don't
see
any
improvement
for
us.
Stability
is
more
important
than
losing
the
stability
just
because
of
using
raw
pointers
or
really
unique
pointers.
O: I mean, I checked with them; I don't think they have any numbers in mind. They want it to be as fast as possible, if that's something we can do. Definitely, if we could do it using unique pointers and raw pointers, it should be fast, but we are not able to achieve that, because overhead gets inserted somewhere else as a result of removing the shared pointers. So if we won't get that, then I think we should say that we can't really achieve the goal we were expecting, okay.
O: And the last one: of course, I mean, no, it won't; in fact, memory usage may increase, because to use the raw pointer we have to add different data members in multiple places to ensure that we can do cleanup. We have to keep the pointers: say, the context pointer we have to keep in the span, and the span pointer we have to keep in the context, so that we can clean up both ways. So actually we're going to add more pointers.
O: We can check on our CI machine as well, but I think it should be comparable, because I made sure that we had the same load when I was testing both scenarios.
P: I haven't seen your screen.
O: Okay, yeah, I'm sharing it now, sorry.
O: I'm okay, Max, if you're saying let's park it as of now; we can revisit it afterwards, and I think that's totally fine. As of now we're not going to make any change, because we don't have the statistics to really prove that anything improves. Going forward, if something comes up, we can definitely do it, maybe in a separate namespace or maybe some other way; we can definitely try to have...
Q: ...a good, objective way to take a look. And it is invaluable to have these; we use them as a reference later on. I was thinking that perhaps, if customers would like to use some simpler type, like a value-object type, maybe we can have an assignment operator or something, or an additional method, that allows them to obtain whatever object type they are expecting to get, like a proper value-object type, instead of a shared pointer.
Q
But
in
most
of
these
scenarios
I
think
shared
ptr
adds
value
for
that
shared
ownership,
semantics,
partial
ownership,
semantics,
and
I
I
don't.
I
struggle
to
see
how
we
can
address
that
later
on,
because
even
if
we
provide
some
neat
easy
to
use
class,
but
then
we
still
need
to
support
scenarios.
For
example,
context
propagation
across
threads
or
propagation
across
thread
pools,
and
I
already
got
questions
from
our
current
customers
about
doing
that,
and
I've
been
showing
them
the
existing
api.
Q
So
the
issue
here
is:
if
we
refactor
that
first
of
all
that
invalidates
all
of
my
prior
answers
and
then
the
next
question
is:
if
we
do
it
differently.
In
the
new
model,
we
really
need
to
take
that
scenario
into
consideration.
Q
O
O: Okay, I think that matches the observation I found even in the baggage API. I think the recommendation was to use the pointer-to-implementation approach; I didn't see how it solves the problem, because you still have a shared pointer, the implementation shared pointer, which points to the actual implementation. So I was not expecting the benchmark to improve, and I didn't see any improvement either.
Q
Valid
I'm
thinking,
maybe
if
it
helps
I
can
try
to
add
some
benchmarks
or
test
code
or
examples
for
the
point
explanation
across
thread
as
an
example.
So
then,
as
if
we
ever
need
to
refactor
that
we
will
then
take
that
example
and
test
into
consideration.
Q
That
would
also
help
us
to
kind
of
stay
on
track,
without
forgetting
that
scenario,
but
thank
you
for
the
benchmark.
It's
it's
great.
Q: Right, especially when we start a thread in a thread pool and we need to parent the child span to the parent span; this is a common scenario, for instance in Azure native code services.
Q: We can put some of these in the open source and keep them running, and at least when we get to the refactor, we'd consider those examples as well.
Q: Tom, I think we can; they have to be in a separate executable. I noticed that; that was a comment from George: we cannot mix tests and benchmarks in one compilation unit, just because Bazel prefers them not to be in a single compilation unit. But I think we can place them in the same folder, as long as CMake produces separate executables, some for tests and some for benchmarks.
P
Yes,
the
currently
for
test,
we
can
run
a
single
command
or,
like
c
test,
to
run
all
the
tests.
I
just
want
to
have
a
single
command
to
run
all
the
benchmarks.
Q
I
think,
maybe
if
we
make
sure
that
the
naming
of
it
is
adequate,
I'm
pretty
sure
you
can
filter
you
can
also
use
target.
Let
me
take
a
look
because
usually
I've
been
just
running
the
full
set
as
well.
Q
Just
a
single
executable
right:
it's
just
this
test
is
more.
Q: I think the only feedback from Josh was that we keep tests and benchmarks in separate executables, because in their structure that's the preferred way of doing things, and I'm okay with that.
Q: Yeah. It's just that, in some other environments, we had many compilation units producing one single executable, and that single test executable had everything, including tests and benchmarks. Obviously that is not best practice for a Bazel-structured project; that's why we go with separate compilation units and separate executables, with separate executables specifically for the benchmarks.
O: Yeah, so there are these two PRs which I opened today. One is the Jaeger one; Tom, I think I just assigned it to you. I mean, I approved it; it's a simple change, so I think you can just approve it and it should be okay. Okay, yeah, and I raised the PR for the benchmark tests; I think that's something you can look at.
Q: This one was addressing the feedback from that discussion; I think it's minor.
Q: I had a few quirks with C++20. Yeah, so let me explain this. For this one: CMake's VERSION_GREATER_EQUAL comparison was only added in CMake 3.7.
Q
So
that's
why,
since
we
don't
know
exactly
what
cmake
we're
running,
I
cannot
use
cmake
version
greater
or
equal.
I
would
have
put
it
version
greater
or
equal
312,
but
I
cannot
use
greater
or
equal,
so
I
put
version
greater
than
311
999
with
an
anticipation
that
the
next
one
is
any
of
312.
That's
a
quirk
that
I
unfortunately
had
to
use
to
be
compatible,
potentially
with
c
mic
lower
than
3.7.
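In other words, the workaround looks something like this:

```cmake
# VERSION_GREATER_EQUAL only exists on CMake >= 3.7, so approximate
# "CMAKE_VERSION >= 3.12" with a strict greater-than against 3.11.999,
# which still parses on older CMake releases.
if(CMAKE_VERSION VERSION_GREATER "3.11.999")
  # ... behavior that needs CMake 3.12 or newer ...
endif()
```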
Q: Yeah, and it's mostly the old Ubuntu 14, I think, or Ubuntu 16; they still ship the old CMake. I'm not sure we have anything else in our CMake rules that currently requires CMake newer than 3.7.
P: So the eight-five I mentioned here, that was a...
Q: ...failure; there was a failure for the CI build, which I think may now be uncommented if we merge this and rebase the other one.
O: I think that's fine. If we merge your PR first, then I can make the changes and then I'll merge mine.
Q: I was adding extra tests and stuff; that's why. It wasn't that bad, not a big PR; it was just three lines, or two lines, of code. Yeah, thanks for commenting. And interestingly, on Visual Studio 2019, which is C++20, it all compiles successfully. So it's a matter of including or not including the header, and somehow it just happens that Visual Studio is more relaxed.
Q
I
didn't
even
include
that
header,
but
it's
still
somewhere
else,
maybe
in
the
run
time
or
something
somehow
already
had
this
header
included,
but
gcc
is
being
more
strict,
yeah,
okay,
okay,
this
one,
I
think
I'll,
do
the
changes,
yeah
and
then
probably-
and
now
I
already
approved
the
looks
good.
So
you
can
do
it
for.
Q
I'll
send
a
separate
follow-up,
so
I
did
the
scripts
explained
how
to
build
for
140
141
windows
42.
I
was
trying
to
also
update
the
tr
to
add
it
to
the
actual
ci.
Q
Okay
right
now
I
only
changed
the
build
script.
I
ran
into
some
issues
with
the
installation
of
the
old
visual
studio
2015
on
on
github
image.
I'm
trying,
like
I
tried
a
few
options.
I'm
kind
of
struggling
with
that
right
now.
It
all
builds
when
you
manually
install
it
on
machine,
and
I
know
that
the
scripts
work
I'm
trying
to
add
the
ci
part
that
automates
it
and
that's
where
I
had
some
issues
where
I
installed
the
package
on
server
edition.
It
was
complaining
about
some
service
packs
missing.
Q
All
this
like
other
additional
stuff,
I'll
refresh
it
later
this
week,
is
that
additional.
Q
It,
but
I
will
tell
you
yes,
take
a
look
at
this.
These
scripts
and
I'll,
send
the
follow-up
that
adds
them
to
ci.
Q: Yes, or maybe you can keep it open, like, forever, if you plan this for after GA.
O: Sure, definitely; anything which improves things is welcome, yeah. I have started looking, because right now not just the API but also our internal data structures may have some room for improvement. We have the linked-list structure for context management, we have the stack (both the stack length and the stack itself) that we use for context management, and there is an array of key-value pairs which we use internally for baggage.
O
So
I've
started
looking
if
we
can
have
some
improvement
on
those
areas,
but
I
think
that
something
is
not
going
to
that.
That's
not
going
to
affect
the
api,
but
that's
more
internal.
That
is
but
that
that's
a
separate
for
the
discussion
but
yeah.
I
think
I'll
raise
the
issue
for
that
and
probably
investigate
it
separately.