From YouTube: 2021-05-10 meeting
B
A
I don't really have a lot of items for the agenda today, because I'm trying to wrap up the pull request and related work. Hi, how are you?
C
Yeah, I had a couple of small questions. We just wanted to clarify a couple of things.
C
So I had an agenda item last time, but then I couldn't make the meeting in the end. Regarding the SDK simplification: I read the follow-up notes from the last discussion, and I guess it kind of made sense, but I wanted to pause and prompt again on whether it's worthwhile doing this transition completely, and doing it before the release.
C
That follows a normal library layout, with both header artifacts and compiled artifacts available as a single unit, released as a whole; and we'd do that transition before we do GA. I would also suggest, if we are going down this option, again with the motivation of simplicity, to actually remove the composable individual libraries in favor of the one SDK mega-lib at that point.
A
It makes sense to me. I wonder how extensive this change is going to be, and, if you're making it, are you going to scan through all the other existing make targets to make sure that it's not breaking anything?
C
Yeah, I mean, I've already started looking into this, and there are obviously internal dependencies. The change would actually be trivial in many cases: wherever something asks for trace or logs or anything, we'll just replace that with sdk as a dependency. So for cleaning up the internal repo, I'm not too worried about the build.
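A minimal sketch of what that trivial replacement could look like in CMake. The target names here (`opentelemetry_sdk`, `opentelemetry_trace`, `my_component`) and file names are illustrative, not the project's actual ones:

```cmake
# Before: an internal component links against individual per-signal targets.
# target_link_libraries(my_component PUBLIC opentelemetry_trace opentelemetry_logs)

# After: the per-signal targets are gone; everything links the one merged target.
add_library(opentelemetry_sdk src/trace.cc src/logs.cc src/common.cc)
target_include_directories(opentelemetry_sdk PUBLIC include)

target_link_libraries(my_component PUBLIC opentelemetry_sdk)
```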
C
Now, it's debatable whether there's really too much code in the sdk. In terms of, I mean, we can discuss legacy platforms and how long it takes to compile something for big iron. But realistically, I think the size of this library is small enough that the overhead in build time is probably matched by the benefit of simplicity of design.
C
I don't have concrete numbers (we can go and measure that), but just as a quick reference point: on my laptop, building the whole thing takes about a minute, give or take, and that's to build everything. The sdk is a part of that; my gut feel is approximately 20 seconds on a Linux laptop. So that's the kind of scale we're talking about in terms of build times, I would imagine.
C
I would suggest keeping exporters as a separate library, just because the use cases are different: if you're using an exporter, you're instantiating your application and you want to export, whereas if you're using the sdk, you might be writing an exporter yourself and using some of the utilities from there. So I would still ship those as separate units. And we keep the api/sdk split; that structure, I think, is mandated by the spec already, so that's a pretty trivial one.
C
I would also separate the exporters, and I would probably go as far as keeping ext separate too: as I understand, some exporters depend on ext, so I'd have ext as a separate target. At the end of the day we'd have four targets: api; sdk; ext, which users won't really use but which is a dependency of the exporters; and the exporters, which users would use. Whether we combine all the exporters into one lib (otel exporters) or not is also debatable, and we could go through the same transition for that one.
C
I can see the more selective, composable usage being more likely there, because you're not going to drag in all of them. At the same time, it really depends on how expensive they are to build; it's the same simplicity-versus-overhead argument again. To me it's more obvious for the sdk than it is for the exporters, so I'm not sure whether making a mega target for the exporters makes sense or not.
A
Actually, since you mentioned the build times: cmake builds with Ninja are super fast. Why am I mentioning it? I mostly prefer using the Visual Studio IDE with cmake right now, and by default, when I build with Visual Studio, it prefers to build with Ninja. It seems like it's running many of the jobs concurrently, and it's rather intelligent about scanning when it's an incremental build. It's also managed neatly: usually, even if I change a header, incrementals are blazing fast.
A
We can give it a try, and I'm just thinking that maybe we should also switch CI over to that. Because Bazel is snappy; I'm, well, I like Bazel, I tried it out, it's fast. I don't use it, but I like it because it's fast. We can make the regular cmake builds also faster if, instead of generating msbuild files, we generate Ninja files.
A
That would only need ninja.exe installed, and I can give it a try in terms of speeding up build times overall. We can also make it conditional: if ninja.exe is not installed, generate msbuild files as before, but if ninja.exe is available, then generate Ninja files. That may alleviate some of the build time concerns, especially.
C
I mean, from personal experience, the Ninja-versus-make switch would generally be something I pass externally. I'd just say, whenever I instantiate cmake, I'll set the build system it should use under the hood. So, first of all, I agree with using Ninja, just for general speed.
A
As well: these are mostly developer-focused scripts, not for the official CI, but under our tools I had that set up, build tools and a build script which builds with various Visual Studio versions. I can probably modify it a little bit so that, if we have ninja.exe in the path, it prefers Ninja; if not, it falls back to the existing flow with msbuild.
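The fallback described above could be sketched roughly like this in a build script. The variable name and the printed `cmake` invocation are illustrative, not the project's actual tooling:

```shell
# Prefer the Ninja generator when ninja is on PATH; otherwise fall back to
# the platform default generator (msbuild on Windows, make elsewhere).
if command -v ninja >/dev/null 2>&1; then
  GENERATOR="Ninja"
else
  GENERATOR=""  # empty: let cmake pick its default generator
fi

# Print the cmake configure command this script would run.
if [ -n "$GENERATOR" ]; then
  echo cmake -G "$GENERATOR" -S . -B build
else
  echo cmake -S . -B build
fi
```

Passing the generator externally like this keeps the CMakeLists themselves generator-agnostic, which matches the preference voiced above.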
A
Okay, sure, yes. I think I have that build shell script for Unix as well, so we can do this cross-platform.
B
Okay, ninja is released on GitHub, right? So we can just download cmake and download ninja anyway.
C
Yeah, very much like make is today: it's independent of cmake.
A
Yeah, we would rather download a ready-to-use binary. I mean, it's available from Google's download site, if that's a concern, as a pre-built executable. At least that should not be a concern for our usual open source CI.
B
C
Yeah, apt has it too, though its version is a few releases behind. Okay, yeah.
A
C
Definitely. So, just to go back for a second to the topic of the sdk merge, the library merging: how does everyone else feel about going through that transition, basically going down the path of merging the sdk into one mega target?
C
Yeah, so let's discuss it in a bit more detail. My suggestion would be to remove all existing targets under the sdk to start with. Currently those are sdk itself, which is the headers, and then one sub-package for each of the subfolders: common, version, trace, metrics, logs.
C
This will also change the publicly facing cmake exported artifacts and the configs that we provide, because it will eliminate the extra targets and the dependencies there. I would then (sorry, yeah) I would then suggest doing the same for ext, just because it's two things and I think they follow a similar pattern. As for exporters:
C
I don't have a strong view as to whether we keep them separate or merge them together, based on the likely usage. My gut feel right now suggests that merging them together has a motivation similar to the sdk's, just because of the relatively small size. Obviously the size grows if you go and add all the -DWITH options, but in practice the stuff that's built without the -DWITH options is actually pretty lightweight.
C
You can definitely have a composable workload; there's going to be a little bit of redundancy, and I was actually writing that code before, kind of to keep them together. So you can do that now.
C
That would allow users both to, let's say, use the combined opentelemetry_sdk target, and also to use the individual ones: trace, whatever.
B
Well, I think an advanced user may choose to link to a specific component, right? So we'd have the option. So, in this way...
C
I mean, I see the benefit, but having used a lot of these platforms myself: generally, if I'm statically linking, then just by using the code I'll rely on the linker to do the selective choosing for me; and if I'm using a .so, I'm designing something to be more portable, and therefore more feature-full by default, anyway.
C
Do I need to do that? Because now: which sdk do I need to pull in, especially if there's redundancy between one being the composable individual layers and one being the mega package?
A
I think, for instrumented apps, apps that are instrumenting their code themselves, static linking, and sometimes even header-only, is appropriate.
A
It's more for the case where you're consuming a binary library that's already been instrumented with OpenTelemetry, and you would like to switch dynamically between different exporters, like different vendor exporters. Mainly, if it all ends up being OTLP as the main exporter, then we have less of a moving part; it's mostly the destination endpoint configuration. My customer feedback has generally been that they want as few moving parts as possible, because of security considerations.
A
They would even prefer not to have a DLL. They would rather have one static blob, where you cannot DLL-hijack or inject any other functionality into the process. That's been the feedback I've had on at least a few occasions.
A
So that's related to my feedback on Slack, where I'm saying that perhaps we should carefully consider dynamic loading for, like, 1.1 or 1.2, including whatever security reviews of that approach are needed.
C
Apologies, Max, your audio cut out for the last two minutes; I'm not sure if you saw. I was trying to follow the doc, but unfortunately, I guess Tom's taking notes, and it probably doesn't capture what you were saying.
A
No. A quick summary is that dynamic DLL loading is great, but some customers are quite reluctant to have it, due to the potential security implications.
C
I think that makes sense. I think the chat in Slack where we started talking about that topic, that is, regarding the question that Lalit posted earlier today, yeah.
C
It's not complete, is what it means. It's in the code and it does work, but I wouldn't feel comfortable releasing it, just because I feel the feature set, as per the chat, isn't complete enough for the intended usage, so it might cause more confusion there. But it does seem to work, basically. We have one quick thing I just wanted to wrap up with, regarding the composable build, so let me do this.
C
The only thing I wanted to add there is that for Bazel, I think there is a preference for composable components rather than mega packages, based on its selective build-what-you-need approach. I guess the target specification is much more specific on that one.
C
So, regardless of whether we do the mega package for cmake, which I think is going to be the more common distribution path, I think for Bazel it probably makes sense to keep it more unit-based, more component-based, just because that's how Bazel users prefer to reference the specific components they want to link against.
A
My question about a possible addition of contrib was not necessarily about adding it to the one single big fat library, but more about simplifying the CI and ensuring that whenever main changes, I can also validate that the contrib work is still sane and compiles. Just recently, about a week ago, I had an issue with the multiprocessor refactor.
A
My other branch in contrib stopped working. That's why I really wanted to have at least an optional build option to build with contrib: not WITH_CONTRIB, as you mentioned, but maybe BUILD_CONTRIB, or whatever the appropriate name is. The reference I'm providing is OpenCV, which follows pretty much the exact same pattern, and I highly respect them: they are probably the project running in the most computer vision C++ applications right now.
C
One thing, Max, I do want to go back for a second; one thing I wanted to pick up on, and the question I had to ask: I noticed that the contrib repo doesn't actually have any build system inside that repo today, so it is not buildable by itself. My interpretation of this is that it's only meant to be used as an extension, literally an overlay of folders onto the base repo.
A
That's not necessarily correct, because I think we had a few discussions before about what to do with custom exporters. My immediate interest is the Fluentd exporter. I'm almost done, and I'm going to send it for review, like, tonight. Right now I'm using the exact same pattern, pretty much templating it over some existing exporter.
A
Let's say I picked Zipkin and refactored it; I got the Fluentd one done, and for the build system I'd like to keep it buildable in the same or a similar way, with both cmake and Bazel, at least for that component.
C
Sorry, if I can just clarify what the implications are: there is no way I can do a git clone of contrib and then cmake, or build, or anything. It's not designed to be buildable by itself, with a dependency on opentelemetry underneath it, at least from the current repository, which doesn't seem to have...
A
...any build artifacts, because we don't have contrib exporters yet. I was trying to do the first step, the preliminary infra for building that, and then introduce the full contrib exporter.
A
I mean, I can do both things in parallel, if that makes sense, to illustrate how that works in a combined, unified build environment.
C
I guess the question I had is about the end goal: once we have, let's say, a few exporters in contrib, do we expect that users can check out the contrib repo by itself and just build the extended things on top of an opentelemetry base that they depend on, from, I don't know, maybe the system install, for that matter?
A
Yeah, so here's my main motivation: I do have customers for that immediately, and I want to onboard them to the OpenTelemetry API surface, and potentially I want them to move to the OTLP exporter. But right now they need Fluentd, because Fluentd is out there: it's everywhere, and it's prominent.
A
So what do I do? I can give them a set of instructions: do this, do this, do that. Whatever I'm doing, I'm trying to simplify that set of instructions, so that I can tell them: well, you know, you actually only need to check out the main repo, and the magic is going to happen; it's going to be built with contrib.
C
That's where I see it going once it's got more things in it: contrib gets built as part of that package, separately. But for now, I guess, we're not quite there yet.
C
Yeah, it's fine, yeah. That makes sense. I think the overlay approach makes sense to formalize for now, but I guess once the library is more established, you may want to revisit with your client whether you want to say: actually, you know what, just check out and install these two dependent packages instead.
A
That adds an extra strain on me right now, so I was going to propose a structure, plus an example of a prominently used exporter. Someone else can just follow that structure; they don't have to reinvent the wheel for how to divide it, and this is largely copy-paste right now: you can just drop in and replace, like take the ostream exporter, copy the directory, rename it, and you've already got it plugged into the build, in a way. So, anyway.
A
I agree that it's kind of secondary to you guys; it's more important to me right now. So I'll back off, and I'll send more stuff for review, so we can integrate it next week.
B
So I think the original intention of contrib is to host third-party exporter code, so that the user doesn't need to build our main repo: he or she can just install OpenTelemetry and build everything else.
A
The problem is that contrib depends on main right now, for all the headers: for the api at least, and in most cases for the sdk, not just the api. So then you'd need to describe the other process: how you build contrib with main, rather than main with contrib.
A
So again, my take on this: a mature project describes, in the contrib repo, in a markdown document, how to build main with contrib, and main has an extension point. That's why I want to add the extension point in main and follow precisely what the OpenCV folks did, so it's not like I'm really inventing much; I'm pretty much following what they did before. I mean, I would welcome any alternate suggestions that would help me unblock myself to release a custom exporter.
A
I haven't seen much, and I think if I need it, I need to propose something for how I would handle it. I'm not opposed to refactoring that process going forward if someone else comes in and proposes a different structure, as long as they also offer how to migrate the prior-art exporter, for example the Fluentd one. I'm not opposed to having that alternative.
A
And as we captured, it's not really impacting the flat library, because the exporters are built separately anyway. It's just yet another target, yet another library, built in a subdirectory that doesn't mix with the main repo. cmake actually forces you to ensure that the build artifacts from contrib go into a separate output folder, precisely, I guess, so that you cannot hijack and overlay the main library target binary.
B
What do you mean? Now we need at least two build output folders, yeah?
A
No. What I did: I took the main output directory and created a contrib folder under it, and then cmake is happy about it. It was not happy when I set the output directory for that contrib project to be the same as the main repo's; it just halted with an error, explicitly complaining that I cannot do that.
A
So what I did, with the change that I'm making, is build it under /out/contrib. All binaries go in there, under that same subdirectory, a contrib subdirectory of the main build artifacts tree, and CTest and whatever other projects actually respect that extra content has been added.
A
I get the test results, test the contrib, and immediately see if there's a regression I need to take care of, for refactoring my exporter, for example. That's pretty much the process I'd like to keep running, maybe nightly, on the contrib repo, so that we can say: hey, contrib got broken on May 15th because of the refactoring in main; whoever owns that contrib module, fix it. We don't impose this duty on the main repo maintainers.
A
They don't care; they can refactor whatever. It's the contrib owners that have to make sure their stuff is still building against the latest main refactoring. But I need a process to enforce that on contrib, and since both repos belong to the same organization, I don't see why not. Why don't we impose some structure? It's the same organization, and I think we do have some right to impose structure.
C
Can I just ask a quick question about the install scripts? I just wanted to ask; maybe there's some story behind this. I was looking at the structure of the api, and actually the sdk as well, and the way the package is installed: there is a separate install command that just installs all the headers, which doesn't use the header target.
C
So, sorry, let me rephrase that. If you look at the repo, there is an interface, a cmake interface target, which is the headers for the api: a specific build target which does nothing, because it's just an interface, but it's something that makes those includes available to you if you use that target. Now, that's not the thing that gets installed. As I understand it, the thing that gets installed is a separate declaration, which just says: copy all the headers.
C
When you want to install them, that is. I was wondering whether there was any specific rationale for that design pattern, because normally one would have the target, the header-only library itself, as the installable target, and that would install the headers at the same time.
A
I think the main issue is that we were early in development, and in many examples we actually require some sdk headers, in tests and all that. But what you're saying is that we need to describe just the api headers as public, and see what else, like the set-resource call: what other sdk-level apis should also be in there? Is there something that is sdk-developer-focused versus something that is end-user, consumer-focused?
A
We really need to clean that up. It's not designed well right now.
C
Maybe my point is slightly different from that. Maybe I can just quickly paste an example; I'll add it to the doc, because I think it explains the problem a bit. It's not really a problem, I just wanted to understand it. So, one second, let me just put it into the link, not the best thing. So the middle point consists of two links: the first one there, and the second one, yeah.
C
All right, the sdk follows the same path; I think those links are for the api. Yeah, they're for the api. Where have you shared the links? I shared them inside the document, sorry, the running-commentary agenda document, under the "cmake library install target" section.
C
So basically, as you can see, there is a build target called opentelemetry_api, which is actually defined on line one in this file. It's something that's installable, but it does nothing; it almost only exists so that it can be referenced and have the correct headers passed to it.
C
And the headers aren't exclusively linked to that target; it's just that whenever somebody runs install, it fully installs all the headers into a specific location. I'm just wondering, because this is not normally how I would have done it: I would have just bundled the installation of the headers into the main target, so they're installed when the main target is installed; you can kind of extend line nine to also do the copy. I was wondering whether there's any reason for this split that I was missing.
C
You'd have to change that line a little bit; I can't remember the specific syntax, but you'd have to add those headers to the library so that they are published as part of it. But yeah, I mean, the point is that you would install the opentelemetry_api target, and it would come with a copy of those files.
C
cmake isn't the easiest tool for finding these examples, but yes: if you google some header-only library examples, they're basically bundled; the install step installs everything as part of that target.
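A minimal sketch of the bundled approach being described: an interface target whose headers install together with the target itself. The target and directory names are illustrative, and modern CMake (3.23+) can express this with `FILE_SET`:

```cmake
cmake_minimum_required(VERSION 3.23)
project(example_api CXX)

# Header-only interface target; FILE_SET attaches the headers to the target.
add_library(example_api INTERFACE)
target_sources(example_api INTERFACE
  FILE_SET HEADERS
  BASE_DIRS include
  FILES include/example/api.h)

# Installing the target now installs its headers too; no separate
# install(DIRECTORY ...) copy step is needed.
install(TARGETS example_api
  EXPORT example_api-targets
  FILE_SET HEADERS DESTINATION include)
```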
A
That change: he's actually quite active on GitHub; there's been some recent MinGW change from the same place.
C
I'll ask around; it just kind of threw me off a little bit, and I didn't understand why it's there. So let me ask around, and if I see an opportunity to simplify that, I will. All right.
B
Yeah, thank you, thanks for that. Okay, I think we can move to the next issue, and the next issue is from me. I am working on something which changes nostd to the STL, so, as Max mentioned, we also need to update the type aliases for std and absl. And, yeah, it seems, I even found that we don't have a CI build for it.
A
That's how I found that issue. And there was that issue with the temporary; we discussed it. There's an issue with the variant and a temporary string, like a literal: when you pass it, it prefers boolean instead of the actual string type, and it was actually breaking with the STL.
A
It's not breaking with the nostd library; it's breaking only with the STL. If you explicitly pass the object as a std::string, everything works, but you cannot pass a string literal, like in the set-resource api. And I think, at the current state, out of 360 tests about 15 are broken, and all breakages seem to be related to the set-resource api.
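The failure mode being described is classic C++ overload resolution: a string literal converts to `bool` via a standard conversion, which beats the user-defined conversion to `std::string`, so a `bool` overload wins. A minimal sketch, with function names that are illustrative rather than the SDK's actual API:

```cpp
#include <string>

// A value-setter overloaded the way an attribute/variant API might be.
// For a string literal, const char* -> bool is a standard conversion while
// const char* -> std::string is user-defined, so the bool overload wins.
std::string attribute_kind(bool) { return "bool"; }
std::string attribute_kind(const std::string&) { return "string"; }

// One common fix: add an exact-match overload for C strings that forwards
// to the std::string version, so literals no longer fall into bool.
std::string attribute_kind_fixed(bool) { return "bool"; }
std::string attribute_kind_fixed(const std::string&) { return "string"; }
std::string attribute_kind_fixed(const char* s) {
  return attribute_kind_fixed(std::string(s));
}
```

With the first pair, `attribute_kind("frontend")` resolves to the `bool` overload; the `const char*` overload in the second pair restores the intuitive behavior for literals.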
A
I mean, we should add it to CI. We should fix the broken things first, then we add it to CI. Yes, yeah. I was wondering...
B
Why do we, what's the history of these two options? Like, we provided nostd, and that should work everywhere, as far as I can tell.
A
A bit more about this, okay: I've been trying to push this initiative because, for my customers, I would prefer a recent compiler, preferably 2017 or 2019, in C++20 mode, and in that case they do not need our creative reimplementation of the STL under nostd.
A
As much as I love it, I would feel more comfortable if we used the standard library implementation of these classes, for various reasons, starting with security, debuggability and all that. The other part is that I'm not actually immediately blocked by the 15 broken cases, because it just so happens that for the ETW exporter I can work around it elsewhere, copying the object from a temporary to a string, and it's unaffected.
A
So we do have some regressions in the ostream exporter. But going forward, the other main reason here is that, starting with Visual Studio 2015, all the runtimes, 2015, 2017 and 2019, are ABI compatible; there's actually an article about this. So the whole purpose, within the confines of one OS, Windows...
A
So I would rather prefer building with the STL on Windows by default. It's still not going to regress you in terms of ABI compatibility in any way, and you can still use debug iterators and whatever handy Visual Studio debugging features, because the classes are standard. I see a whole slew of benefits and pros rather than cons. And how about this: I can volunteer to fix up the 15 cases and add it to CI.
A
Now, I need to describe it in the markdown real quick: there's a beautiful feature in Visual Studio, and I think it's coming to Visual Studio Code, cross-platform, which allows you to have build combos. You can have a JSON file which describes how you want to cook the build with cmake, what build options you set.
A
So
you
can
have
like
16
build
combos
with
this
with
that
and
all
that
and
I
do
have
locally
the
build
combos
for
no
std,
as
well
as
stl,
build
with
the
standard
library,
debugging
release,
both
flavors,
and
so
what
I
do
is
in
one
ux.
I
switch
between
build
combo.
I
build
all
and
run
all,
and
I
see
the
test
results
right
away
in
visual
studio.
It's
very
neat:
I'm
not
advocating
visual
studio
here
as
a
microsoft
employee.
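The "build combos" JSON being described matches the shape of Visual Studio's CMakeSettings.json. A minimal sketch with two illustrative configurations; the option name `WITH_STL` is an assumption about the project's actual cmake flag:

```json
{
  "configurations": [
    {
      "name": "x64-Debug-nostd",
      "generator": "Ninja",
      "configurationType": "Debug",
      "buildRoot": "${projectDir}\\out\\build\\${name}",
      "cmakeCommandArgs": "-DWITH_STL=OFF"
    },
    {
      "name": "x64-Release-stl",
      "generator": "Ninja",
      "configurationType": "Release",
      "buildRoot": "${projectDir}\\out\\build\\${name}",
      "cmakeCommandArgs": "-DWITH_STL=ON"
    }
  ]
}
```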
A
Like, I would strongly stand for promoting standard C++ classes. Time flies; tomorrow it's going to be 2024, and that's when all these classes that we backported are going to be entirely irrelevant.
B
Nice, yeah, I agree. I also think I prefer using the standard library. Like, for our nostd shared_ptr we use some character buffer to store the pointers, right? Every time I debug the shared_ptr, I click into it, and then I see that it's just a character buffer, not a proper pointer I could have followed.
A
And definitely, I would prefer the absl variant, as I mentioned. I do have a work item on me; I can at least fix that up permanently, because the other variant class that we had is not compiling with 2015, whereas Google's absl one does compile. So I think, at least for Windows, I can definitely switch to that. And within the confines of each pillar, each concrete OS, you're looking only at the ABI compatibility for that concrete OS, right? So you can say: this, this...
A
These are the classes for Windows, and it's all ABI compatible; now for Windows it's this, this, this, and it's all, again, ABI compat. So as long as you maintain the builds of different vendor binaries with the same settings for each concrete OS, it's all going to line up nicely. And I think the default settings, at least for now: we should probably switch to the absl variant for the older compiler, in case we still want to maintain it going forward.
A
For the binaries built with 2015, and for the static libraries and the header-only libraries, I'd advocate people just go with the 2019 standard runtime classes build, and it all works. And I know it works; I've already had positive feedback on this, something along those lines. Let me cover this topic, if you guys...
C
I'm not familiar with Windows toolkits, but when you build them, the library version that you use, the library standard: is that still C++20, or earlier versions than 20?
A
And then what happens is the string_view... yeah, I think string_view and variant were already in 17, and span, I think...
A
So then it goes through that smart code logic of what to use for the implementation. Obviously, if you want to build one binary that is compatible everywhere, it seems like you would then have to build it with Visual Studio 2015, assuming nothing is there, so that becomes an issue when we get to the topic of the DLL flavor. That's why I intentionally wanted to postpone that until at least version 1.1 or 1.2.
A
And I can have the CI... Tom, how about this: I'll take care of all the regressions with the STL, I'll fix them, and I'll add this.
B
I'm not sure, I think no, yeah, because I only added... yeah, let me, we can do a quick check. Yeah, that should be in the main CMakeLists file.
A
And Jaeger, Jaeger is one of our standard exporters, right? So yeah, for now we depend on libcurl, unfortunately, and it seems like we would rather always depend on it and install it; you can clean it up.
B
We should probably switch it on, because that's probably the future, right? Yeah, probably, I think, yes. Then our CI script will also need to be updated, because I think only two CI runs build with OTLP; the others don't, so we need to pass WITH_OTLP=OFF for them.
B
Okay, yeah, sounds great, and thanks, Max, for taking on fixing the STL build, yeah.
B
Yeah, and there's also one fix in my PR: I think I had adjusted the header order, right? Because the opentelemetry version header should always be included, I think, as the first one, because the others using the opentelemetry namespace rely on it.
B
I think I tried the build with absl and with the STL, and it feels fine; but yeah, on the nostd side the resource unit test has a failure, so I think you're going to fix...