From YouTube: 2020-11-04 meeting
A
I think we can start. So, I haven't put an agenda item about the update on the logging; we can probably cover that after we finish the other topics. We'll cover the first topic from the pad: it is coming from the maintainers meeting and the spec meeting. In general, as we're close to GA, we want to do a usability study to see if the SDK is easy to use and also if the documentation is ready.
A
So it started by putting together this dogfooding guide, and it is very scoped, only focusing on the tracing part. The idea is: as a customer, if you want to use the tracing part, how are you going to build, say, an HTTP client and server, communicate, and make sure that they can?
A
The ask is: when we're ready, we ought to first update this document. I know in .NET, people have discussed they're going to put .NET here in two weeks. I think for C++ we're probably a little bit too early; we're not ready for people outside this group to try it, but I still find it helpful. So potentially, within this team, we can try it, and once we've got reasonable confidence, we can discuss.
D
I do have some concerns about the lack of a C API, because it has happened on two occasions so far that people are asking about some popular products that are written in C rather than C++. For example, MySQL and NGINX are both pure C, and right now we do not have an answer for that.
D
Or is it a separate story? I think it is a fundamental issue, though, for which we don't have a good answer. It distorts the onboarding and user experience, because people confuse C and C++ because of the "C/C++" naming, and this is the group where they come for answers.
A
So I think what we should probably clarify in this document: we already clarified everything is C++; you won't be able to find anything about C. What we probably need to do is just remove the "C" from the SIG, and I can update all the calendars to bring clarity. Then there's whether we want to support C, and how we are going to support C, like whether we should create another SIG, given C++ is still a little bit behind.
B
To call it out as a fact, right, like a frequently asked question: does this OpenTelemetry C++ also support C? Just right near the top, like a frequently asked questions section: no, we don't support C. Right? Especially if it's getting asked all the time.
D
Yeah, so there were at least two questions in the last two weeks in our chat on Gitter, and I think we had this discussion way back, but right now, yeah, I agree we should just list it, like: no, we don't, as an FAQ.
B
One meta point to make: if you look at the number of contributors per OpenTelemetry project, C++ is the least. It's still not unhealthy; there are 30 total contributors in the repo in the past three months or so, but it's about a third of all the other repositories.
B
So in terms of increasing the scope of what the group is able to do, I would be wary, especially since I consider C and C++ the hardest languages to write in. I don't know if I would increase your scope there. I think that's a risk to delivering GA. It might be, post-GA, something to look into, but.
F
I'm thinking about, even if we want to support C: our code base, our library, is based on C++, and we're going to release a source distribution. So any C project that relies on us would have to be able to compile us, right? Maybe before that we talk about providing some shim layer to do an API conversion. But I think this may not work for a real C project, which will only require a C compiler, not C++.
D
Tom, I think there are two different options there. One is if we provide the pre-built, that is, we provide an API-stable pre-built at some point: for some projects which are themselves C, it's possible to build a C API extern over the C++, and they would dynamically load it and link themselves that way. They can still use us, whether it's a good way or not.
D
That's another question. Maybe it's bad due to bloating, because they end up loading a C++ library, which brings the whole C++ runtime into the address space. Right, but still a viable option. Whereas the others would say: no, we don't even want any C++ stuff. Right, and this is a way longer path altogether; we don't even have anything in that direction.
D
It is in the SDK scope, not for the exporters, but we do have, like, a plugin example. So then imagine if we have a C API on top of that plugin example, and then another C API on top of GetTracer and all the spans, StartSpan: a C projection on top of C++. It is a bit ugly, but it is one viable option.
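The C-projection idea described above can be sketched as an opaque handle plus `extern "C"` functions wrapping a C++ object, so a pure-C program only needs the C header while the implementation compiles as C++. This is only a sketch of the pattern; none of these names come from the actual opentelemetry-cpp API.

```cpp
// Hypothetical sketch of a C projection over a C++ tracing class.
// The names (sketch::Span, otel_span_*) are illustrative only.
#include <cassert>
#include <string>

namespace sketch {
// Stand-in for a C++ span type (the real one would come from the SDK).
class Span {
 public:
  explicit Span(std::string name) : name_(std::move(name)) {}
  void End() { ended_ = true; }
  bool ended() const { return ended_; }
  const std::string& name() const { return name_; }

 private:
  std::string name_;
  bool ended_ = false;
};
}  // namespace sketch

// --- C surface: only this part would appear in the C header. ---
extern "C" {
typedef struct otel_span otel_span;  // opaque handle for C callers

otel_span* otel_span_start(const char* name) {
  return reinterpret_cast<otel_span*>(new sketch::Span(name));
}
void otel_span_end(otel_span* handle) {
  reinterpret_cast<sketch::Span*>(handle)->End();
}
int otel_span_is_ended(const otel_span* handle) {
  return reinterpret_cast<const sketch::Span*>(handle)->ended() ? 1 : 0;
}
void otel_span_free(otel_span* handle) {
  delete reinterpret_cast<sketch::Span*>(handle);
}
}  // extern "C"
```

A C program would dynamically load the resulting shared library and call only the `otel_span_*` symbols; the C++ runtime still gets pulled into the process, which is exactly the bloat concern raised above.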
D
It's definitely not covering the entire scope and the entire set of possible use cases.
A
So, given the situation we are in, I think we're probably one of the slowest SIGs, and we have a small number of folks here. I think a C SIG is probably a pipe dream for now, so my proposal would be to just clarify C++, and I think supporting C might be a good topic after we GA the C++ SDK.
D
Sorry for bringing this up, but I thought that we should mention it, because there will be this question and we should be prepared to answer.
D
Alright, I have a question. So, guys, I have an internal customer which improperly uses the existing tracing API to emit log events, because we didn't have a log API when they were starting to try things out.
D
So should we list them as early adopters? We'll get some feedback from them and say: hey, these are the folks that tried it.
D
Okay, I got it, I got it, sure. And no, that was a totally different case, because I was helping them individually; that doesn't apply here. Okay.
A
Yeah, so the document is here: we expect people to follow the steps and see how far they can go, and after that, instead of submitting the final result (because we're not ready, it's not listed here; you don't see C++ listed), we expect people to follow this document and tell us how far they can go, where we are. And are we in general okay to put C++ here for a wider experiment?
D
I don't think we are ready. I think we need to provide an easier onboarding experience, like either a vcpkg install, or a ready-to-use pre-built, or some build scripts that facilitate it. Right now the best we have is CI loops, and this is not the perfect onboarding.
A
Yeah, I totally agree. I'm not saying we're ready; I think we're not ready, but I think we need to have a good understanding of which parts we should improve in order to get to the ready state.
B
I signed up to do Java, because there was a recommendation that you do something you're not familiar with at all, so I was actually going to do theirs. I can also look at C++, because I'm still not super familiar, but I can do that as well, depending on when we're ready and how much time I have. With all that said, Java is also not ready, so I've been waiting for them to make their 0.10 release.
D
Guys, you would see two totally different kinds of feedback. From a person familiar with C++ and build systems like CMake, for example, you'd usually get: oh, it was all easy, in one day I got it up and running. And then there's another set of customers who would prefer a pre-built package, as in: install package, run sample, done. They're not getting that, and that's where you'd get one-star out of five-star feedback. So it's gonna be polarized.
B
I'm muted. Yes, so basically, what I want to understand is: when we release, what does it look like? What does a release of the C++ API look like? Is it a tag on the repo? Is it a distribution artifact? When it comes to CMake and Bazel, when we make a change, are we trying to make sure that, as often as possible, it's available in both those build systems, especially with the nostd versus std versus Abseil changes?
B
The ones that Max is going through, like: what will Bazel support? What are the expected use cases of users? I'm just trying to get an understanding of how to consume the C++ repository on my end, and then, when I make contributions or do code review, understand when the Bazel build should do something and the CMake build should do something, and when they are allowed to diverge, right?
A
Yeah, so one thing I can share: previously we decided that the release will be in source form. We're not going to do the binary form, because of the support for different compilation flags, different flavors of compiler, target operating system. So it will be source form with a tag on main (I know we were asked to move away from the master branch, so just to be correct), so it will be a tag, and we follow the GitHub release process.
A
So, every time we do a release, we tag it and we put the release note, and eventually we'll have a changelog. Basically, I'm trying to follow what I've learned from what Python and .NET are doing; it shouldn't be a big difference. The only difference here is Python releases the pip package and you download the released pip package; here we only release the source tarball.
B
Yeah, no, I was curious if there was, what do you call them, not a real release but a snapshot release, like a "here's an alpha" or something, like v1-alpha. I just didn't see anything like that, so I wasn't sure what to expect when we do release.
B
So, if I were to craft something internal to Google to go consume OpenTelemetry C++ and make use of it internally, how should I be looking at pulling in these releases? How do I do that effectively? And then my question around Bazel and CMake is: should I reasonably expect Bazel and CMake to be completely on par, like tests written for one also work on the other?
D
Right now we committed to support both, and the effort is to keep them totally on par. It is not always immediately the case; sometimes one is lagging behind the other, but we are striving to patch it up. And my understanding is, for Google, Bazel is going to be the preferred choice. For many other customers, I assume CMake, which provides the ability to generate build files for MSBuild, for Ninja, for whatever else; since it's a popular build system, we committed to support that as well.
B
Yeah, I mean, as a Googler, obviously I want Bazel, but as a C++ person I think CMake is the right choice too. So, where this question is coming from: I was looking at your CL, Max, around the nostd, and I was looking to figure out how to configure Bazel to have different targets, so that we could pull in one with the standard library or without it, and my recommendation there is:
B
We should pick one configuration that Bazel supports and not support the other ones, and use CMake to do multi-configuration-based builds. The reason why is just that Bazel's not designed for it, and it makes it really, really painful. Now, I don't know if you were looking into it already.
D
I didn't; I was focusing on CMake. And in fact, for some of my funny build failures that I see with GCC 4.8, I figured out exactly the root cause. It's very interesting: it's about compiling everything with GCC 4.8, including Google Benchmark, including Google Test, and not depending on any of the system-provided libraries for this.
D
So that's a corner case: when we want to build with the legacy compiler, it seems like we have to build the whole world with the matching legacy compiler for the tests to pass and for the benchmarks to be used. I'll update the PR with that. So there are some measures like this in CMake land as well.
D
So it's like we have to specify external variables before we kick off the CMake build, to make sure that we build the full set of deps within the matching environment.
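As a rough command-line sketch of the idea above: pin the entire tree, including the third-party deps built from source (GTest, Google Benchmark), to one legacy toolchain, so the STL the deps use matches the one the SDK uses. Only `CMAKE_C_COMPILER`/`CMAKE_CXX_COMPILER` are standard CMake cache variables here; treat the rest as an illustration, not the project's actual option names.

```sh
# Configure an out-of-tree build that forces every target, deps included,
# through the same legacy toolchain.
cmake -S . -B build-gcc48 \
  -DCMAKE_C_COMPILER=gcc-4.8 \
  -DCMAKE_CXX_COMPILER=g++-4.8 \
  -DBUILD_TESTING=ON
cmake --build build-gcc48
```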
B
That would be awesome. I don't think that's the case, though. Bazel is philosophically different from CMake, and those philosophical differences cause issues. In Bazel there's meant to be one instance of a build that you use. That whole "if you see this, include this; if you see that, include that" thing: Bazel doesn't want to get in the business of that. They don't want that at all.
B
That's antithetical to how it's designed. And so, when it comes to Bazel, we want to have Bazel support one configuration and only one configuration, and if CMake is supporting multiple configurations, Bazel has to pick one, and anything related to those other configurations just wouldn't be in Bazel; they would only be in CMake. The other thing around Bazel is:
B
It allows multiple toolchains to be configured, but the assumption is, if I'm using, say, the MSVC toolchain or the GCC toolchain, I have the same set of binaries all the way through my whole dependency graph. And it's also relatively optimized for static linking over dynamic linking. But that's kind of my short take on Bazel.
B
I think Bazel's the limiting factor and CMake is a lot more flexible, and so, if we're going to support both (which, you know, I really want us to), I think you pick a limited set of things that Bazel does and start with that, and then CMake can be more flexible. But we'll have to tease that out. I'm just curious how things were going, ahead of time, before I looked at the project, to get an understanding of what people expect.
D
So I'm just wondering. Let's say we assume that there's some standard set of build flags for Bazel, and there's a standard nostd library (a no-standard-library build, you understand what I mean), and we have a reproducible build loop for that with CMake, and this is the default, and this is the thing that would provide you pre-builts or build artifacts.
D
If you rebuild from a given tag, that is what we are recommending for wide consumption by anybody who runs everywhere; they would use that. So we'd still keep Bazel locked to that build flavor. Whereas if you are building from source, or building with CMake, or building with vcpkg (which itself uses CMake), you can specify various triplets or a wide set of build configurations, and cook that steak to your own liking.
D
But when you cook that steak to your own liking, you are on your own to support it and the build artifacts for it. So, for example, if I have a vendor, Microsoft Bing, and they want to cook it to their liking, they can; but if they found some fundamental issue, like a bug in core parts, they have to prove that it's a bug in there. They will debug, they will troubleshoot, they will own anything related to crashes.
B
Okay, yeah, I mean, I think that's exactly what I'm looking for. Just so, when I write code and when I review a pull request (let's say, especially this nostd pull request, right, there's big differences between CMake and Bazel now): how much do we want to try to unify? How much are they allowed to diverge? When is it acceptable to diverge? That's the meta question here, I think.
B
What I'm hearing, if I reiterate it, is: Bazel will be the canonical way we want users to consume; CMake will support the canonical way with no configuration, but CMake will also be flexible and allow custom configuration. So if you want to do anything custom, it's CMake, and Bazel and default CMake should be exactly equivalent.
D
Yeah, yes, default builds are identical, and CMake gives the flexibility. As well, vcpkg provides a higher-order wrapper which can have recipes for certain pre-configured things, and then it's going to simplify the consumption experience even for those who want to build custom. So it's like custom profile one, custom profile two, custom profile three, whatever they like. I'm gonna do the write-up that explains how, for vcpkg, for example.
D
Can we produce both? Like, I know that for CMake we can certainly provide both builds at once.
B
You end up with two API libraries in Bazel, one with the standard library and one without. And then, when you build the SDK, the SDK that depends on the API library will have one flavor that depends on the with-standard-library API and one that depends on the without. So it just takes every single build target and forks it into two. You can hide this with Skylark-like macros to make it simpler for us to maintain, but more confusing for other people to understand.
B
Why there's two. But effectively what you end up with is: you either fork every single target all the way down, or you can do funny things by playing with toolchains and say I have a GCC-with-standard-library toolchain and a GCC-without-standard-library toolchain. That might work, and it's really ugly and not really conventional, but there's not a great option here. I think the best option for us is in terms of ease of maintenance, which I think we need to target.
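For reference, the "fork every target" shape described above would look roughly like this in a Bazel BUILD file: one user-settable flag, two flavors of the API target, and a `select()` that picks the dependency. `config_setting` and `select()` are real Bazel constructs; the target names and the `OTEL_STL` define are made up for illustration.

```python
# BUILD sketch (Starlark). Illustrative names, not the repo's actual targets.
config_setting(
    name = "with_stl",
    values = {"define": "OTEL_STL=on"},  # enabled via --define=OTEL_STL=on
)

cc_library(name = "api_nostd", hdrs = ["api.h"], defines = ["HAVE_NOSTD"])
cc_library(name = "api_std", hdrs = ["api.h"], defines = ["HAVE_STL"])

cc_library(
    name = "sdk",
    srcs = ["sdk.cc"],
    deps = select({
        ":with_stl": [":api_std"],
        "//conditions:default": [":api_nostd"],
    }),
)
```

This is the pattern B calls painful: every target that depends on the API either carries the `select()` or gets duplicated per flavor.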
F
And I have two questions. The first one is: why do we choose Bazel as the default instead of CMake? I think maybe for non-Google contributors, CMake is easier or more common knowledge. The second question is: if we're going to take two build systems, how do we make sure both are equivalent?
F
Two build systems, I mean. Which one, I mean, we just want to say that Bazel is the default, right? I think we will have two builds, and that's fine.
D
Yes, we will have two. I don't think we are saying that Bazel is the thing to build or CMake is the thing to build; we provide both, and both, with default settings, should provide equivalent output, functionally equivalent output. Tests somewhat cover the existing functionality (regression tests that everything still compiles), but we do not verify that, for example, I added something only to CMake but didn't add it to Bazel, so the new functionality showed up in one set of builds but didn't show up in the other.
B
So it's theoretically easy to keep these things up to date if we compare the number of test cases. I guess the question I would have is: is there a need for them to ever diverge? Your CL around no-standard-library versus standard library is where I think there is a divergence, but there should always be that default configuration on both sides that's equivalent, so maybe we could just check those two.
D
Functionality-wise, my expectation, though, is that all tests would pass, so it's not a functional divergence per se. I would like to see the core Bazel tests match: the number of tests being the same, the test results being the same. It's just the way I cook it, and that should be totally opaque to the customer of the SDK, because the API remains the same.
D
And in fact, what I am seeing with GCC 4.8 right now: I'm using one box and one source share where I produce an output directory, and then I can split GCC 9, GCC 6/7, GCC 4.8, then whatever. I end up needing at least a full set of build artifacts for GCC 4.8, and it's an existing issue related to my change.
D
What I found is that I end up needing to compile benchmark with the matching compiler. We were just not mixing and matching GTest and benchmark before, and when I added a few benchmark tests, I ran into the issue only with GCC 4.8, not the others. And it's similar to what you're saying with Bazel.
D
I have to make sure that my entire build environment is well defined and pre-compiled with the exact same toolchain, and that wasn't exactly the case for the benchmark dependency yet. And again, for CMake, how it's managed, and for vcpkg, how it's managed, I can entirely streamline the set of target architecture, compiler version and all this: specify it all and then build the full tree, including dependencies, with that configuration.
B
Right, so that's like its philosophy, so it does sound like there's going to be some good things there. Okay, so if anybody needs help with Bazel in this project, I can help out there. What I was trying to figure out was this nostd thing.
B
Because we've already taken a lot of time with this discussion, maybe we'll take this offline and just leave Bazel as is for now. But I do want to talk about it, because, specifically, I want to consume the standard-library version of OpenTelemetry C++ in Bazel, and I don't want to make that cause a huge amount of maintenance burden on people, which, from what I can tell right now, it might.
B
Let's have that discussion in the next one, after that PR is ready, or rather when you feel more comfortable with it and when I have a chance to maybe send you some of my ideas that I've been toying with for it locally.
D
Honestly, I also need to catch up and learn a little bit about Bazel, because we have certain users at Microsoft on that, or a very similar, build system.
D
I'm saying that most of us are CMake, but yeah, I know a customer who would also need that Bazel build. Okay, okay, cool. Microsoft, yes.
D
Google GN? No, CMake can't produce files for Google GN. Ninja is fine; Ninja is kind of covered by CMake. And our version of the Chromium browser uses it.
D
We can dramatically speed up our build if we produce GN files instead of Make and use that; it's gonna save a ton of compute time. We should switch to that; we should switch to Ninja for the CMake build. For Bazel, my understanding is that Android uses a very similar Bazel-based build system, and, as you guys might be aware, Microsoft does produce certain Android-based products, and I see them potentially needing Bazel as well.
B
Okay, so it sounds like I'll follow up with you offline and try to help out with that pull request on the Bazel side, and we'll get a proposal together of what Bazel configurations will look like, right?
D
And my apologies for breaking the GCC 4.8 tests on my end with that big commit. I figured it out; I'll show how I want to do that: an entire separate vertical pillar build of everything with GCC 4.8. That'll fix it. Okay, and I think Michelle, who's working on the ETW exporter, had a minor question about Bazel. He needs to exclude a portion for Windows only. I think he might be able to sort it out.
D
He's working on an ETW (Event Tracing for Windows) exporter as part of the OpenTelemetry project, and his PR only applies to Windows; right now it doesn't apply to Linux. So he needs to add: if Windows, then include this, but if Linux, don't include it and don't build any of that.
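In Bazel, the "if Windows, include this" exclusion described above is usually done with a platform-conditional `select()`. The construct is standard Bazel; the target and file names below are illustrative, not the actual exporter's.

```python
# BUILD sketch: compile the ETW exporter sources only on Windows;
# on every other platform the target builds empty.
cc_library(
    name = "etw_exporter",
    srcs = select({
        "@platforms//os:windows": ["etw_exporter.cc"],
        "//conditions:default": [],
    }),
    hdrs = select({
        "@platforms//os:windows": ["etw_exporter.h"],
        "//conditions:default": [],
    }),
)
```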
A
Okay, so I wonder if we can do a quick update on the logging workstream. The initial logging API PR has been merged. There are a couple of questions which might result in changing some of the API, just for performance, and also the question about: should we allocate the log record object on the stack and then do the copy if we're doing the asynchronous exporting, or do we just allocate it initially on the heap, so we just move the pointers around and change the ownership?
A
Besides that, I think Mark and Karen are working on the SDK implementation and also the plumbing work to send initial data to the Elastic backend, so that should give us a good end-to-end demo. And there's one dependency: in order to send to the Elasticsearch backend, you have two options. One is you take some existing library from Elastic, but it looks like there's no good library ready, and also different versions of the Elastic backend are very different; they break the protocol from time to time.
A
So there's one thing I want to explain. In order to get those PRs into the OpenTelemetry repo, there's a process for AWS: they want to do an internal review, just to make sure they cover the basic things and help the interns fix some of the obvious problems before they start to put the pressure on this community.
A
So the initial PR will be AWS-internal, and the reviewers will all be coming from the Amazon side. I'm not going to do any code review there; I might help by joining some meetings to answer questions regarding the general direction. But after they finish the internal review, the PRs will come to the OpenTelemetry repo, and that's where we start to engage. Just to make sure people are clear on the process.
A
Okay, Karen, Mark, do you have something to update?
C
Right, yes. So our next PR contains the logger and logger provider implementation in the SDK, and that's just about ready. We're gonna file the PR internally today, and we're gonna have a code review internally at Amazon, and then, hopefully, by the end of the week we can have that actually pushed upstream, or the PR upstream. Okay, cool, thank you.
F
Okay, but then why do links need a span context? I mean, it seems we don't need that, right?
A
The difference between a span and a span context is that the span context might be something you got from the remote side. Like, if there's a client calling your service, you got the incoming span ID and trace ID; that's the span context. You don't have the span attributes or other stuff, because that belongs to the caller side. Makes sense?
A
Even in your application, you might have no interest in creating yet another span, but you want to create a link. You're saying this incoming client span context is a trigger for my particular span: you create a span and you're saying this incoming span context triggered my activity, but they don't have a parent-child relationship. So I'm going to add a link and put some attributes there.
A
Yeah, so think about it like this: you have an HTTP request where you do a POST, and you got a list of items from the queue, and each item has its own trace ID and span ID. So for that HTTP request, which trace ID are you going to put? You've got to put something different, right? You're not supposed to pick just any trace ID from the items. And of course you don't know before you send the request.
G
It encapsulates the minimum necessary information to uniquely identify a span; that's how it helps me to understand it. So passing around a span context is much more efficient than passing around whole spans; there's always a span context corresponding to one span, and it's just a condensed minimum of information.
D
This one is for the build tools. It does include submodules, which would be needed for building all dependencies from source, and I will require this even for GCC 4.8 support, where it just happens that we cannot link against the OS-installed benchmark, for example. I hit this only with GCC 4.8; it's what happens when we install a foreign legacy compiler in a distro.
D
We then have to rebuild our deps with that foreign legacy compiler, because the STL may not necessarily match. I focused mainly on Windows things, mainly on vcpkg and the CMake parts of the build, and less so on Bazel, and I don't exactly know yet how we are going to organize the builds for Bazel with different Visual Studio versions.
D
But my ultimate goal was to make sure we have all the tools which may be used either in the CI loop or by developers for their local build. For example, if you already have the necessary tooling installed, you can say "tools build" and specify which compiler you want to use for that build. It greatly helps on Windows.
D
If you have all compilers installed, the goal is to produce the output in separate parallel build directories, where you'd have an out directory for Visual Studio 2015, one for 2017, one for 2019, and then you can run and compare the test runs as well. There's no code change in here, but if we merge it, it's going to dramatically reduce the scope of my other standard-library PR.
D
So it's like 28 files; none of them are actually changing the code.
D
My feedback about the current CI infra: it's not exactly developer friendly if you have multiple compilers, and definitely not as friendly if you're working on Windows. I was trying to provide a bit better Windows-focused developer experience with this tooling, plus some Linux and Mac build and test. I will have to update it, for example, because I was thinking about adding "build GTest" and "build benchmark".
D
This is not for Bazel; this is for CMake, yeah.
D
So here's my take on this: we have a tools directory and we have a ci directory. The ci directory describes the rules for how we run it in continuous integration; the tools directory may contain build scripts or helper scripts that developers would use locally as part of their local environment, for whatever reasons. So there's some semantic difference between what goes into ci and what goes into tools.
D
Don't mess up the script? Well, we still run CI, right? And if CI depends on any of this and would be broken, we would expect that CI catches it before it passes.
D
So if somebody messes up the script and CI breaks, we catch it. And I do plan to use some of this in CI, in my subsequent PR.
A
My worry is anything not covered by CI: people may make a change, and once the CI passes, it's merged, and eventually this one will be outdated.
D
I have a bigger worry right now about Windows in general, because it seems like, in order to be widely API compatible, we actually have to use the lowest of all possible compilers, which we don't even do right now; we are building with some latest one, which means that we are not even formally ready to say that we support Visual Studio 2015, even though I have customers lined up asking for that. And this is parallel to how we support GCC 4.8, for example, because we say we still support legacy GCC.
D
For that, I need the tools that would figure out if that compiler is installed, set up the right environment variables, and then kick off the build. That's what I'm adding here. I see; if there are questions, guys, can you formalize exactly what you don't like about it, and I'll try to answer. For this one, Michelle, maybe he's on the call. Michelle, are you there? Yeah, so, guys, on this one: this is based on some of the early work that I did before, which didn't have the proper structure.
D
Now Michelle is adding structure to it; he is formalizing this as a proper exporter, and there are some comments that I added so far. If you guys can take a look, that would be great. I think we are targeting to have this done by mid-November.
D
So the goal here is this: the Linux Foundation project background means that for any open-source products that run on Windows, such as MySQL proxy on Windows, or Docker on Windows, or any other Linux-related initiatives on Windows:
D
We can instrument with the same OpenTelemetry SDK, with the same OpenTelemetry API surface, but route the outcome (the traces and logs) to an out-of-proc ETW listener, which means that this listener may subsequently utilize, for example, the OpenTelemetry .NET SDK and then forward the logs and traces to whatever compatible destination.
D
It's like we maintain a generic SDK, right; the instrumentation is still the same, so your code remains correct cross-platform. Any of the prominent Linux projects (Docker, Kubernetes, whatever), we can instrument that, and we can route the logs differently depending on where these are hosted: either in-proc to Logstash, or out-of-proc via the ETW channel to some out-of-proc listener, and then subsequently forward using, for example, C# and OpenTelemetry .NET.
D
So right now we do simple: it's not batched, it's individual traces and events as they appear. There's no batching.
B
Yeah, the simple processor should lock certain calls to the exporter, but I think making a trace (sorry, span) is not locked; everything else is.