From YouTube: 2020-12-22 meeting
E: Let's add any questions or agenda items for today to the document, then we can start with those, and after that, as we talked about in the last meeting, we can start triaging the active issues. I think there are more than 80 issues, so maybe we'll need two meetings to triage them all; we'll see.
C: First, Mark, I just wanted to ask you: can you do a demo as well?
F: All right, so I guess I'll start with my item on the agenda. I was wondering, for the batch log processor PR: since it is essentially the same as the span processor, could we maybe merge it first and file an issue to have the concurrency iffiness in both of them solved later, just to unblock the PR?
A: I think that's fine. You saw my comment about the while loop; I think there's something wrong with it, but let's merge and just update it in a separate PR. I'm not blocking it.
F: Okay, sounds good. About the while loop: I debated back and forth on that as well. I removed it, but no one really approved that change, so I put it back for the time being.
G: Perfect. It's definitely odd, but I did re-review. I went and looked at the trace batch processor (I spent way too much time on this, I apologize), then I looked at this while loop, because it does look really funky, but it's correct. It's just weird. I think we could come up with a better way to do it, but it works.
E: Then could we file a tracking issue to optimize this, or to simplify it? An issue to follow up. Yeah, I agree, we need to clean it up.
A: If it's functioning, maybe it's a lower priority; we should see how to code it in a nicer way, if possible. I don't think it's blocking anything. My immediate feedback is: what if the other thread doesn't set that flag? Are we going to be just spinning and burning CPU cycles here? That's what looks a bit scary to me.
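The spinning concern above is usually avoided with a condition variable, so the waiting thread blocks instead of burning CPU. A minimal sketch; the flag and function names are illustrative, not taken from the actual batch-processor code:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Hypothetical flag set by another thread.
static std::mutex mu;
static std::condition_variable cv;
static bool ready = false;

// The other thread sets the flag under the lock and wakes the waiter.
void set_ready() {
  {
    std::lock_guard<std::mutex> lk(mu);
    ready = true;
  }
  cv.notify_one();
}

// The waiting thread blocks on the condition variable instead of spinning,
// so no CPU cycles are burned while the flag is unset.
bool wait_ready() {
  std::unique_lock<std::mutex> lk(mu);
  cv.wait(lk, [] { return ready; });
  return ready;
}
```

The predicate overload of `wait` also handles spurious wakeups and the case where the flag was set before the waiter arrived.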
F: So, a follow-up on that issue: would you like me to file it? It seems like you might have some more detailed information on it.

A: Let me sign up for it and I'll organize it. How about that?

F: Perfect, thank you.
E: Yes, so can we mark those comments as resolved, so everyone knows the status?
B: Yep. So last week Joshua made a comment about concurrency in the response handler part, and I made a little change: the two private members now have a guard around them. So if one is being changed while it's being returned in another place, the reader will block until the write has finished. I don't think the concurrency is an issue anymore.
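A minimal sketch of the guard described above, assuming a single mutex protecting both members; the class, member, and method names are illustrative, not the ones in the actual response-handler code:

```cpp
#include <cassert>
#include <mutex>
#include <string>

// One mutex guards both private members, so a read cannot observe a
// half-finished write.
class ResponseHandler {
 public:
  void SetResponse(std::string body) {
    std::lock_guard<std::mutex> lk(mutex_);  // writer holds the guard
    body_ = std::move(body);
    done_ = true;
  }

  bool IsDone() {
    std::lock_guard<std::mutex> lk(mutex_);  // reader blocks until the write finishes
    return done_;
  }

 private:
  std::mutex mutex_;  // guards the two members below
  std::string body_;
  bool done_ = false;
};
```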
B: So if you could take a look at that, it would be great if we could get it merged. Also, Karen and I did a presentation earlier today for Amazon, our final going-away presentation, where we went over all the components we made. I made a little demo showing off the API and SDK: we write to Elasticsearch and visualize it in Kibana. I'll show you guys, because I think it's pretty cool.
B: So if we go here and take a look: now we have Kibana, but we have no data to visualize. So I wrote this basic application where, inside the main method, it initializes the logger, and in here we do the basics: we create an exporter, and then we create a processor connected to that exporter.
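The wiring described above (an exporter, a processor connected to it, a logger on top) can be sketched roughly as follows. These are simplified stand-in types, not the actual opentelemetry-cpp logs classes, whose names and signatures may differ (the logs SDK was still in progress at the time):

```cpp
#include <cassert>
#include <string>

// Stand-in for the Elasticsearch exporter; counts records for illustration.
struct ElasticsearchExporter {
  int exported = 0;
  void Export(const std::string & /*record*/) { ++exported; }  // would send to Elasticsearch
};

// Stand-in for a simple log processor: forwards each record to its exporter.
struct SimpleLogProcessor {
  explicit SimpleLogProcessor(ElasticsearchExporter *e) : exporter(e) {}
  void OnEmit(const std::string &record) { exporter->Export(record); }
  ElasticsearchExporter *exporter;
};

// Stand-in for the logger, initialized with the processor.
struct Logger {
  explicit Logger(SimpleLogProcessor *p) : processor(p) {}
  void Log(const std::string &msg) { processor->OnEmit(msg); }
  SimpleLogProcessor *processor;
};
```

Usage mirrors the demo's main method: create the exporter, connect the processor, then log through the logger.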
B: Oh, okay, the timestamp: it's the time since epoch, I think in nanoseconds. I'm not sure if there's a better way to represent time in Elasticsearch; there's not really a standard for it in OpenTelemetry yet, so I think leaving it like this is fine for now.
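Computing that timestamp, nanoseconds since the Unix epoch, is a one-liner with `<chrono>`; a small sketch (the function name is illustrative):

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>

// Current time as nanoseconds since the Unix epoch, as described above.
int64_t NowUnixNanos() {
  using namespace std::chrono;
  return duration_cast<nanoseconds>(system_clock::now().time_since_epoch()).count();
}
```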
G: Yeah, great work; both of you should feel proud. It's fun, right? Yeah, that's pretty fun!
B: All right, cool. I think that's it for the Elasticsearch agenda. Josh, are you going to take a look at it again?
A: Yes, so this is based off a few issues that, I think, Bogdan added something to. Here's my logical thinking: I think we can all converge on the need to document the set of dependencies, for Bazel because Bazel fetches by a given artifact ID, and for CMake with its modules.
A
So
my
next
question
is:
let's
say:
even
if
we
document
everything
and
we
got
predictable,
build
environments
for
both.
If
I
need
to
build
like
with
atw
with
elastic
with
my
custom
exporter,
how
do
we
describe
the
process
for
both
build
environments?
A: I can contribute the process for CMake, and for CMake I think we already have a few options. For Bazel, I don't know; maybe Google, as the primary user of the Bazel build system, can contribute something in this regard. I think we will need this anyway for the vendor distros, like opentelemetry-cpp-google-cloud or opentelemetry-cpp-aws or -azure.
G: Yeah, I've been working on our distro for Google as well, the C++ one. The code is not anywhere near ready for customers; I was just trying to make sure we can consume the initial release. So I'm more than happy to help with Bazel there.
A: For some stars to align on my end, I will need the ETW PR from Michel merged. It's not final; we still need to apply a few patches there, and I would need a tag newer than the current one, because the current tag, 0.1.0, doesn't really include the stuff that we would need.
A
Meanwhile,
I
can
probably
just
start
like
with
the
process
clone
code
from
that
tag:
overlay
these
modules,
compile
with
these
options
enabled
and
for
our
flow.
We
would
probably
need
to
also
package
into
nougat
package,
because
nougat
packaging
is
what
microsoft
widely
uses
elsewhere,
how
to
cook
a
vendor
distro
into
a
nougat
package
for
consumption
elsewhere.
So
this
is
kinda
out
of
scope
of
what
we
do
here,
but
this
is
part
of
the
other
process
and
I
think
it'd
be
great.
A: If we democratize this and show how we do it, then someone else who would like to follow a similar model can do so; great, we'll share that. I can contribute the CMake part, if you guys are fine with this.
C: So, Max, just to be clear: what do you mean by a vendor distro?
A: We only provide ABI compatibility on the API surface, right? So you can load the full set; you can say get-tracer or get-logger-provider from this shared library, and we have some examples for the plugin loading and so on. But the entire library has to be built with certain features inside of it, because we do not provide a plug-in model for exporters.
A
It's
like,
I
cannot
dynamically
add
a
yet
another
exporter
into
pre-built
sdk,
we're
missing
it
entirely
and,
and
I
checked
how
java
is
doing
it
and
they
actually
have
that
feature
which
allows
you
to
load
the
pre-built
sdk
and
add
just
one
exporter
into
it,
which
is
awesome
like
I
love
it
yep,
but
we
don't
have
it
and
I
think
historically,
we
said
well.
We
only
provide
stability
stability
on
api,
not
for
the
exporters,
not
within
the
sdk.
A
Maybe
that
was
a
wrong
call.
I'm
just
saying
that
we
are
now
in
a
state
where,
if
I
need
to
support,
for
example,
just
a
tw
and
no
prometheus
and
no
elastic-
and
maybe
let's
say
no
otlp,
so
imagine
that
I
have
open
elementary,
my
own
sdk,
that
I
need
to
cook
with
just
etw,
which
means
that
now
I
have
to
fall
through
the
entire
hassle
of
deciding
describing
my
own
build
system
recipe
with
this.
A
You
still
have
to
do
this
whole
mumbo
jumbo
of
building
your
own
custom
sdk,
and
I
don't
know-
maybe
guys
we
should
really
strive
to
see
how
we
can
make
plugable
exporters
in
say
later
version
1.2
sometime
in
future
next
year,
but
right
now
we're
in
a
state
where
we
have
to
cook
the
full
set
once
and
I
see
that
other
vendors
are
gonna,
be
having
the
same
issue.
C: Yeah, I mean, you bring up a valid point. But again, I think that's language dependent, first of all, because, as you know, Go cannot even handle that, and these are very language-specific decisions. And second, of course, a baseline with OTLP is integral to OpenTelemetry, right? So you shouldn't propose a solution where you don't have OTLP at all. But additionally, dynamic exporters are a nice-to-have; I mean, agreed.
G: Yeah, I do want to throw out: I think we need to divide the problem. The reason we have API stability is for someone who wants to depend on our API as a library. So if, say, the logging library log4cpp (sorry, I was about to say log4j and somehow messed it up all the wrong ways) wants to take a dependency on OpenTelemetry and provide API-level support...
G
You
know
direct
right
to
it,
cool
they
should
be
able
to
do
so
by
just
grabbing
in
those
headers
and
that's
a
different
problem
than
somebody
who
wants
to
have
multiple
exporters
right
so
yeah.
I
I
totally
support
having
an
exporter
api
and
trying
to
figure
out
how
to
do
that.
I
will
say
from
a
bazel
standpoint,
given
the
way
bazel
works
right,
it's
all
these
little
dot,
lib
files
or
dot
a
files
that
it's
generating
by
default.
G
So
when
you
consume
basil,
you're
kind
of
just
by
nature
of
how
basil
forces
you
to
view
the
world,
you
are
composing
little
pieces
when
you
consume
it
in
right.
It's
not
like
work,
we're
we're
like
and
bazel
will
rebuild
the
world
from
scratch
every
time.
Just
because
that's
what
you
should
do
right,
it's
the
immutable
view
of
of
code
anyway.
G
So
from
a
basal
standpoint,
there's
a
bit
of
nuance
here,
so
I
think
we
should
focus
a
on
this
notion
of
distributions,
like
what
cmake
looks
like
what
like
db
and
artifacts
look
like
and
secondarily
just
to
acknowledge.
This
is
a
different
problem
than
what
the
api
sdk
split,
solves.
Yep.
A: Then there is a near-total lack of contract guarantees (not a total lack, but no strong guarantees) internally within the SDK for exporters. These are only described by some API guarantee; there is currently no notion of an ABI guarantee or runtime pluggability of exporters. I would have a hard time explaining to my Azure customer, for example, if we use an out-of-proc agent which uses some protocol, whatever, ETW-based...
A
Why
do
they
need
to
take
a
dependency
on
grpc,
lib,
curl
or
any
other
library
which
they
are
not
even
using
in
their
full?
Then
I
think
similar
to
elastic
it's
like
for
elastic.
I
need
http
client,
but
for
the
otlp.
What
if
I
have
otlp
over
some
unix
domain
socket
in
that
scenario?
It's
still
probably
fully
supported
scenario
like
otlp
protocol,
but
no
remote
endpoint.
How
does
that
work?
Like
tom
may
probably
know
more
details
about
this?
H: Yeah, I mean, I think there are two points. There's the modularity, that you can have exporters and the SDK in different binaries with different dependencies, and there is the actual API compatibility between those packages. Regarding the API compatibility: we had this discussion some time ago, and currently our API policy says that we will not provide API compatibility on the exporter or SDK level.
A: So should we just log a feature ask for this? It's probably going to be a lower priority. What do you guys think?
H
Yeah-
but
I
think
here
just
one
thing
to
keep
reminded
that
that
the
longer
we
wait
is
that
the
harder
it's
going
to
get
because
our
current
exporter
interface
by
itself,
it's
not
api
compatible.
So
when
we
changed
it
later
on,
that
means
that
everybody
who
implements
an
exporter
will
have
to
change
their
export
interface.
G
I
mean
it's
easier
fee
table
right
now
on
the
exporter
interface
right,
the
exporter
exposes
the
v
table
of
the
recordables
that
it
generates
and
so
like
how
is
c,
plus
abi
around
v
tables,
because
that's
the
thing
that
terrifies
me
the
most
because
back
when
I
was
really
a
c
plus
plus
developer,
it
was
a
terrible
place
around
v
tables,
and
so
you
would
always
just
make
a
you
know
a
c
struct
with
a
bunch
of
pointers
to
methods
and
that's
how
you
would
expose
any
kind
of
abstract
interface
to
make
sure
it's
stable,
because
otherwise
it's
like
chaos,
trying
to
add
a
method
in
a
safe
way.
G
Java,
specifically,
the
jvm
will
freaking
link
when
the
methods
don't
exist
or
line
up.
Like
you
can
say,
I
extend
a
method
that
doesn't
exist
and
the
jvm
will
allow
it
and
say
this
is
fine
everything's
cool
right,
it's
only
when
you
call
it
that
it'll
say.
Oh
linkage
error,
you
can
even
catch
the
stupid
exception
and
do
something
different
right,
so
they
have
a
whole
slew
of
things
they
can
do
that
are
not
available
in
cpus,
plus
that.
H: Regarding the vtable thing, we also had discussions about that, and basically it is a hard requirement that all the libraries used are compiled with the same vtable layout; otherwise our API/SDK separation also breaks down. And I think the compatibility that we're talking about here is mostly the standard-library dependence.
A
It's
mostly
my
my
ask
is
not
about
the
abi
api
on
like
api
surface,
it's
a
with
for
within
the
sdk
for
the
exporters
like
plug-in
exporters
as
plugins,
and
since
we
cannot
commit
to
this
before
ga.
A: My understanding is that I need to contribute documents such as "building a custom SDK", and I can contribute the document for building a custom SDK with CMake. But we support both systems, right? So we still need to see, maybe at a lower priority. Does Google have a cloud exporter that is unique?
G: First-class-citizen exporters, you mean? Our exporter: under the Google Cloud Platform org there's an opentelemetry-operations-cpp repo that has a Stackdriver exporter in it. It's actually just a prototype and it no longer compiles, because it hasn't stayed up to date with the mainline C++ API. That's something I was trying to update. Also, just so you know, with Bazel, because this is a Bazel-deployed thing, you're actually always rebuilding everything.
G
Okay,
so
the
notion
that
you
have
to
build
a
custom
is
kind
of
like
baked
into
the
notion
of
bazel,
and
the
way
that
I
was
working
on
documentation
for
how
to
consume
in
bazel
is
actually
like
how
to
set
up
all
your
source
distros
to
rebuild
the
whole
entire
sdk
and
then
consume
the
components
you
want
so
like
this
will
be
baked
in
with
bazel.
G
It
does
mean
that
bazel's,
you
know
not
good
for
say,
making
a
debian
file
and
an
ecosystem
unless
you
want
to
make
one,
that's
like
just
for
you
that
doesn't
interact
with
anyone
else,
because
you
know
it
has
this
assumption.
It
can
build
everything
from
scratch.
A: Right, yes. For us I see a strong need for static linking and no extra deps; size matters, because I already had early prototypes with a header-only OpenTelemetry SDK implementation, and it's going to be hard for me to justify the upgrade to GA if the custom build of that SDK is going to include stuff that my customers don't need. They'd say: oh, you just loaded another bunch of DLLs that aren't even needed, or the workspace increased by a megabyte for nothing.
A
For
no
obvious
reason,
that's
something
that
I
would
like
to
avoid.
That's
where
I
think
in
cmake,
we're
going
to
come
up
with
like
build
recipes,
build
configurations
and
how
to
build
the
sdk
package
based
on
custom,
build
configuration.
A: And, longer term, the ability to have pluggable exporters. With pluggable exporters we kind of alleviate the need to build the custom SDK.
E: Yeah, okay. It seems we're running out of time. Do you want to triage some issues, or we can do that in the next meeting?
C: I mean, I was definitely hoping to use some of the down time next week to triage through the issues we have right now, but we can take a look at it right now if you want.
C: We'll go top down. "Rework batch processor to elaborate comments, while loop." Max, this one?
A: That's one I just reported; we discussed it in the current PR.
A: This one is a P2. Our current code triggers a compiler warning that we use a deprecated feature.
C: Yes, so does there need to be a more rigorous rule that we use?
A: I can tell you how I found it. We currently build in C++11 compatibility mode, mostly because most of our classes backport the features from the latest standard to old compilers with the nostd classes. But I added a build configuration that allows rebuilding with the latest C++ standard, which already has all of the classes that we backported. That's why I hit those issues when I built the custom SKU with the latest C++; maybe that's something that we didn't really catch elsewhere.
A: Also, our CI doesn't enforce warnings-as-errors at the moment. That's why we still have a few warnings, and I think it's more of a manual process to catch and fix them.
E: I think we can't enable warnings-as-errors, because I think there are many such warnings coming from the gRPC build, about which we can't do anything.
A: We can try to apply the rules and exclude them just for those places.
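One hedged way to do that exclusion at the source level is to relax diagnostics only around third-party includes, keeping warnings-as-errors everywhere else. GCC/Clang pragmas are shown; MSVC would use `#pragma warning(push/pop)`. The gRPC include below is commented out and purely illustrative:

```cpp
#include <cassert>

// Push the diagnostic state, relax it around the noisy third-party
// include, then pop so our own code stays strict under -Werror.
#if defined(__GNUC__) || defined(__clang__)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"
#endif
// #include <grpcpp/grpcpp.h>  // the exempted third-party header would go here
#if defined(__GNUC__) || defined(__clang__)
#pragma GCC diagnostic pop
#endif

int OurCode() { return 42; }  // our code still compiles with warnings-as-errors
```

The same idea can be applied at the build-system level by marking third-party include directories as system includes, which most compilers exempt from warnings.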
A: Yeah, so I think in a few build environments we're pretty sure to treat our warnings as errors, and elsewhere too; I think it's a good practice in general. Maybe it's too strict!
A: Yes, there are other examples: UTF-16 to UTF-8 conversion is exactly in that scope. In C++11 and 14 it was there, and I think it was deprecated and marked as do-not-use in 17. So there could be cases like this; indeed, conditional compilation.
A: Then we can actually detect the language standard: use the C++11 feature where we must, and for a build that actually uses those C++20 features, rework it the way it should be done. That is probably the cleanest way.
C: Okay, cool. I've just added some comments based on the discussion, and I guess we can re-assess this later again. Makes sense? All right, good.
A: Yes, so this is based on a comment that Josh asked me about. I need to take a look; I see how it could have been done with a C++11 anonymous union in an API-safe manner, but not with std::variant and std::visit.
A
But
it's
like
see,
I
really
really
want
to
get
by
buffers
added
to
spec
and
if
it
and
and
I'm
gonna
like
apply
my
effort
to
try
to
drive
it
in
this
spec
later
on
after
ga,
then
let's
say
it
gets
added
to
spec.
A
Then
what
are
we
gonna
do
about
this
issue,
because
our
typing
system
will
then
need
another
like
how
do
we?
I
I
don't
have
an
answer.
H: I think, regarding the question you're posing there, and I tried to hint at this in my comment (I didn't think it through all the way): I'm pretty sure this will break API compatibility if we add a new thing, particularly because of this nostd::visit function that we have. The nostd::visit function works in a way that relies on the visitor you, the caller, pass having overloads for all variations.

A: Oh, I got it, got it.
H
Yes,
it
doesn't
even
compile,
and
so
in
some
cases
it
will
like
fail
at
compilation
time
when
we
add
a
new
thing
and
the
user
of
no
standard
visit
doesn't
have
that
edit,
in
other
cases
like
when,
basically
just
separate
binaries
and
let's
say
just
an
exporter,
uses
no
standard
visit
and
we
add
a
new
variants
to
the
attribute
value
in
the
sdk
sdk
api
and
the
export
is
not
recompiled.
I
think
that
possibly
fails
in
that
phase.
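The break described here can be illustrated with `std::variant`, since `nostd::visit` follows the same overload rules; the type and visitor names below are made up for the example:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <variant>

// Illustrative stand-in for the SDK's attribute-value variant.
using AttributeValue = std::variant<int64_t, std::string>;

// A caller-supplied visitor must have an overload for every alternative.
struct TypeName {
  std::string operator()(int64_t) const { return "int"; }
  std::string operator()(const std::string &) const { return "string"; }
  // If a new alternative (say, a byte-buffer type) were added to
  // AttributeValue, this visitor would stop compiling until it gained
  // the matching operator() overload: that is the API break discussed.
};

std::string Classify(const AttributeValue &v) {
  return std::visit(TypeName{}, v);
}
```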
A: If I need a blob type, for example: there's a similar type in protocol buffers, I think it's the type Any or whatever, and MessagePack also supports binary blobs, byte buffers. If I need to migrate somebody who already had this type in their previous SDK and upgrade them to OpenTelemetry, which doesn't have that type, then I can hack it and build a custom SKU, and I don't run into ABI compat issues because it's just my vendor SDK. That doesn't sound right.
A
Like
I
mean,
ideally,
if
we
get
it
added,
then
how
do
we
solve
it?
I
just
don't
know
an
answer.
Maybe
if
you
guys
can
give
me
some
time
I'll,
take
a
look.
What
options
we
have.
A
But
then
does
it
mean
that
we
have
common
v2
attribute
value?
A: I'm almost thinking about something that is COM-style, like a type unknown, or a type reserved for future use, and then there's pretty much a void pointer which you go and inspect inside of to see what's in there.
G
Is
like
we,
we
already
have
string
right,
I
believe
as
a
value,
and
I
think
the
spec
allows
every
value
to
be
converted
to
string
from
attribute
value.
If
I
recall
correctly
so
worst
case
scenario,
is
you
just
convert
directly
to
string
in
the
api,
which
is
really
awkward
and
ugly,
but
that
could
be
a
way
that
you
get
around
this
as
like
a
kind
of
bad
interim
solution.
A
Yes,
it's
like
you
can
actually
do
this,
and
even
base64
or
whatever,
and
put
that
base64
into
string.
Then
you
will
need
another
attribute
on
your
event
to
tell
that
this
field
is
actually
not
string,
but
this
field
is
byte
buffer
and
for
byte
buffer.
Specifically,
this
is
messed
up
because
you
increase
the
size
of
the
field.
By
about
30
percent,
you
waste
cpu
cycles
on
the
transform.
A
Then
you
weigh
cpu
cycles
and
converting
it
back
from
dbase
64
into
binary
it's
a
huge
waste
and
for
something
that
is
close
to
hardware
such
as
c
plus
plus.
I
would
actually
expect
it
to
have
its
built-in
ability
to
express
a
void
buffer,
especially
for
mobile,
embedded
driver
scenarios
and
always
so
we're
clearly
missing
something
that
other
specs
and
protocols
describe
quite
nicely,
and
I
see
this
eventually
happening,
and
this
will
become
a
bit
of
an
issue
for
us
if
it
happens
after
v1,
which
will
happen
after
we
won.
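The roughly 30 percent figure follows from base64 mapping every 3 input bytes to 4 output characters; a quick sketch of the size math:

```cpp
#include <cassert>
#include <cstddef>

// Base64 output size for n input bytes: each 3-byte group becomes
// 4 characters (with '=' padding), i.e. about a 33% size increase.
std::size_t Base64Size(std::size_t n) { return 4 * ((n + 2) / 3); }
```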
G: So variant is a union type, right, and there are two ways to encode a union type in any language: an actual union type, like std::variant, or just inheritance. Now, I am nervous about vtables in C++ in an API, but it could be that a vtable would actually be more stable here, because we could just, you know, give it stability in that sense. I don't know. Anyway, let's find another way to...
A: Let's not waste time on this right now. I think I opened this to address the issue that you raised. I agree with it; I'll take a look.
E: Not much yet, so keep it open. I think once it is merged, then we can close it.
A: So this one I fixed in my standard-library PR; it's going to get closed when we merge the "using std library" change. Sorry, I didn't send a separate PR; I fixed it in the big PR.
C: Did you fix this already? Yeah. And is this awaiting...? Yes. Can this be closed?
A: We can merge 374, and then this can be closed. Yes.
E: You can apply the label for it, I think.
A: That's one I opened because right now Michel's PR is missing this, and we will handle it. This is specific to the ETW exporter; I don't think it's blocking anybody else.
C: Okay, so is it a PR in progress, or...?
A: There is a pending PR, 376, into the way it encodes things. ETW is a key-value-pairs protocol: you can have whatever value and whatever key. Now the question is what the exact name for each field in the ETW protocol should be. For Microsoft we would prefer certain names, and this is outside of the OTLP protocol reality.
A
But
what
if
somebody
clones,
this
code
builds
their
own
htw
listener
and
says
my
trace.
Id
field
has
to
be
named:
trace,
dot
id,
for
example,
so
something
that
will
add
extensibility
to
etw
protocol
implementation.
My
thinking
right
now
we
have
to
keep
the
tw
fields
as
close
to
otl
p
spec
as
possible.
A: Right, there's this exporter for OTLP, but this one: I'd like to merge ETW first in the main repo, then maybe we will move it to contrib once the contrib repo is set up. Yep.
A: Right, so anyway: I thought that CMake through the IDE is only supported with Visual Studio 2019; seems like I was mistaken, it's actually also supported by the older 2017 version.
A
I
need
to
install
and
check
it
for
myself,
because
if
they
use
either
command
line
build
or
if
they
use
a
latest
id,
this
would
work
I
need
to
try.
Maybe
it
could
be
a
blog
in
just
the
way
how
the
ide
is
implemented.
A
Need
info
right
now,
yeah
and
I
think
it's
more
like
non-blocking,
it's
just
a
custom
use
case,
but
I
believe
we
should
fix
it
because
there's
some
early
adopter
who's
trying
the
sdk
and
they
are
in
the
professional
environment.
They
would
like
to
use
an
older
compiler
and
all
their
id
which
we
do
support.
A: So I think we should get to it.
E: Yeah, and for this issue I think I have a PR attached. Josh, could you please take a look at it, the CMake build for gRPC?
A: No, it's actually 467.
B: Yeah, I saw most cpp files, oh sorry, I think most header files, have #pragma once at the very top, so if a header is included by multiple files it doesn't cause any issue. But then I found one file, the empty attributes.h, that didn't have it, and I was having a build issue because of it; as soon as I added it, the issue went away. So I think we just have to add #pragma once to the top of this file.
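The fix described is just the include guard at the top of the header; a sketch (the struct body is a stand-in, not the real contents of the attributes header):

```cpp
#include <cassert>

// First line of the header: prevents redefinition errors when the file
// is included from multiple translation units' headers.
#pragma once

struct EmptyAttributes {  // illustrative contents only
  int size = 0;
};
```

Classic `#ifndef`/`#define`/`#endif` guards are the portable alternative, but `#pragma once` is supported by every compiler the project targets.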
C: Another Bazel bug, from Bogdan. I have this one.
G: He submitted a PR for at least one of these. There is a question about the way he wants to consume it: the Bazel build is not submodule friendly.
G: So any time you try to consume the tar.gz file from GitHub, it does not check out submodules, ever, and there are a whole bunch of comments about this: there's a "dear GitHub" on it, there's a Stack Overflow about it.
G
I
don't
know
what
github's
going
to
do
eventually,
but
there's
a
question
of
if,
if
this
is
a
expected
consumption
mechanism,
because
we
haven't
documented
how
to
consume
this
library
yet
from
bazel,
should
we
support
http
archive
directly
and
if
so,
that
means
we're
going
to
have
to
remove
the
usage
of
sub-modules
from
the
bazel
build
completely,
because
I
had
just
changed
it
to
use
the
sub
modules,
so
cmake
and
bazel
were
aligned,
so
bogdan
actually
submitted
a
pr
for
most
of
these.
G
The
thing
is,
though,
he
only
did,
I
think
two,
and
we
will
the
reason
I
change
the
issue
name
is
we'll
have
to
do
it
for
every
sub-module,
so
I
can
take
on
fixing
this,
but
this
is
kind
of
a
question
around
distribution
and
how
we
want
to
distribute
open,
telemetry
cpp
for
bazel.
Do
we
want
the
http
archive
thing
to
work?
Is
that
the
best
is
that
the
expectation
of
the
community
I'd
assume
if
bogdan
thought
it
would
work,
then
that
probably,
is
the
expectation
of
reasonable
people.
E
So
my
side,
I
have
one
concern
that
at
least
in
azure
build
pipeline.
We
have
separate
tasks
for
for
git,
clone
and
other
tasks
for
build
and
during
build
time,
this
there's
a
security
configuration
which
don't
allow
any
traffic
like
say,
make
configure
time
or
so,
which
should
we
should
not
download
anything
from
network.
So
you,
so
that's
a
that's
a
reason
reason.
I
think
I
prefer
some
module.
We
can
do
other
clone
in
one
time
and
in
further
steps
attacks.
We
don't
need
any
network
traffic
to
complete
the
build.
G: Yeah, I'm not saying we change CMake at all; I'm saying we change Bazel. And actually, I would prefer it if Bazel had the same thing, where we could do a git submodule checkout with some sort of version hash of a commit, and then we could use the same thing for our GitHub pull-request checker that doesn't allow network access, so that we could do performance tests in a safe environment that can't touch the network after checkout. That'd be pretty nice.
G
We
can't
if
we
do
this
with
bazel,
but
that's
fine
like
we'll
we'll
work
around
it.
I
there
are.
There
are
enough
people
in
the
community
complaining
about
this
issue
in
general
that
hopefully,
a
better
solution
shows
up
at
some
point:
bazel
doesn't
support.
You
know
git
directly
as
a
repo
you
pull
from
with
sub
modules.
G
It
only
supports
these
http,
zip
archives,
the
shots
so
yeah
anyway,
point
b
and
I
can
I
can
take
the
action
item
to
clean
up
our
basal-
build
to
align
with
this,
and
I
don't
know
where
the
basal
documentation
for
release
bug
is.
If
we
have
one,
if
we
don't
it's
kind
of
part
of
that
work
which
I've
been
trying
to
work
on,
how
we
consume.
H: Mainly to you, Josh, because I'm not familiar with Bazel: I wonder how this distribution mechanism would work for a use case where people only want to use the API, for example a library whose author wants to instrument the library and pull in opentelemetry-cpp via Bazel.
G: You could use http_archive for that, yeah. So what will happen is they'll pull in the entire source code and then they'll depend on just the API piece in the Bazel build; you actually have to depend on actual components of the Bazel build, those build targets. So they'll just depend on that piece: they'll pull down the entire zip file and then just get the headers out of it.
G: Yeah, I think if you were just consuming the API, that would not be an issue. The other thing is: apparently all of my submodules were doing the thing that Bogdan didn't want to do, by accident, because I had worked around a different Bazel bug and had never deleted that code once we fixed it. So anyway, there are a lot of complications around consuming Bazel builds with downstream dependencies and trying to change those downstream dependencies.
G
So
I've
been
working
on
documentation
for
our
users
for
how
they
can
do
this,
like.
How
do
I
use
a
different
version
of
curl
and
the
one
that's
baked
into
our
build
right?
That
could
be
impossible
if
we
didn't
set
things
up
appropriately.
That
was
what
one
of
the
fixes
to
the
bazel
build.
That
I
made
was.
G
The
other
bit
was
to
make
it
use
sub
modules,
which
we'll
undo
for
now
to
make
it
easier.
But
if
anyone
has
any
concerns
about
bazel,
let
me
know
otherwise
I'll
just
put
what
I
think
is
a
good
proposal
out.
C: Sounds like we're at time, and I think we just had a good discussion. So thanks, everyone, happy holidays, and we'll touch base again after that. Thank you. Thank you.