From YouTube: 2021-07-07 meeting
A: Okay, so the first one is around the log emitter. I hope that I could ask this question to David; he's not here, and he's probably the most knowledgeable in this area. The log emitter spec specifies that we have a flush function, but it's unclear if we even need it at all. Three count, you're here, right? You posted that question originally. I don't know if you have any more context on this.
C: So my assumption was that it seems like maybe an appender has to implement the flush. So there needs to be a way to flush through the... like, the handler will have access to the emitter, and then they should be aware to flush through the emitter. So that was my assumption, but I do not know why we have... yeah.
A: But for that, I guess you can use the log provider, right? So it's even more convenient to do it once, in one place, instead of per-emitter, if you have multiple emitters or whatever. So it seems superfluous to me. I don't know if there is a use case where you want to do that specifically for a single emitter.
C: Correct, so that was the comment from the reviewer as well. We also call the flush, like, the SDK makes the call when there's a shutdown; otherwise there's usually no place where we use it.
A: Yeah, let's maybe do this: let's wait for David's response on that, because I really don't remember any rationale for adding that, but for some reason it's there. I am guessing maybe there is something there, a use case that requires it. If not, then let's remove it from the specification; when we don't need it, there's no need to implement it.
A: Yeah, no, that's good! That's good! You should anyway treat the current spec as a draft. It's a specification for prototyping; it's not the specification for the final product. So if you uncover things like this, that's great, right? Let's make sure that you provide that feedback so that we can fix the specs, so that the final version is the cleaner and the right spec for implementing the rest of the logging libraries for other languages.
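As a minimal sketch of the flush question above: if the provider can flush every emitter it owns, a per-emitter flush in the spec may indeed be redundant. The names here (LogEmitter, LoggerProvider, ForceFlush) are assumptions for illustration, not taken from the actual specification:

```go
package main

import "fmt"

// LogEmitter is a hypothetical emitter with its own buffer.
type LogEmitter struct {
	buffer []string
}

func (e *LogEmitter) Emit(msg string) {
	e.buffer = append(e.buffer, msg)
}

// Flush drains this emitter's buffer and returns what was exported.
// This is the spec-level function whose necessity is being questioned.
func (e *LogEmitter) Flush() []string {
	out := e.buffer
	e.buffer = nil
	return out
}

// LoggerProvider owns all emitters; flushing here covers every emitter
// in one place, which is the "more convenient" option A describes.
type LoggerProvider struct {
	emitters []*LogEmitter
}

func (p *LoggerProvider) ForceFlush() []string {
	var exported []string
	for _, e := range p.emitters {
		exported = append(exported, e.Flush()...)
	}
	return exported
}

// Shutdown is where the SDK would invoke the flush, per the discussion.
func (p *LoggerProvider) Shutdown() []string {
	return p.ForceFlush()
}

func main() {
	e1 := &LogEmitter{}
	e2 := &LogEmitter{}
	p := &LoggerProvider{emitters: []*LogEmitter{e1, e2}}
	e1.Emit("hello")
	e2.Emit("world")
	fmt.Println(p.Shutdown())
}
```

With this shape, user code never needs to call Flush on an individual emitter; the provider-level call is the only entry point the SDK exercises.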
A: The second one is the follow-up on the PR that you created for the file attributes, and I think Armin is asking an interesting question there. If you read it... I don't know if you had a chance to.
A: The thing is, I think it's not a situation unique to the logs; it's the same for any operations. Even if we record it in a span, you can talk about, let's say, the concept of a hostname or an IP address there, right? You can talk about the hostname where, let's say, the operation happened, or it can be where the record was generated, or it can be about an operation somewhere else, on a different host.
A: But the span is recorded on this one, right? It happens all the time. I think that is a question that I do remember us discussing at some point, and we don't have a good answer for it. In OpenTelemetry, we have the notion of an attribute, which specifies a thing, but we don't have a way to specify what the relationship of that thing is to the telemetry that we're recording. It's a generic problem that we have throughout OpenTelemetry, so it's not unique to the logs in any way, I believe.
B: Yeah, my thought on this was basically that we should be able to define what the structure of a file is, in terms of how we're going to describe it, right? That much is clear, I think. But then, to your point here, I think it is a question of: where does that go, and does it always have to be namespaced, or is there an implicit understanding?
B: You know, just from the context of what we're talking about here. But in terms of just, like, a file has a name and a path; these things aren't really going to change. So we could say we should be able to just data-model that idea, and then have a separate discussion about how to apply it.
D: I'll have to take a look; frankly, I'm a little behind. I'll go and look at the notes. Is there a link to this issue in the notes?
A: Yeah, yeah, it is in the notes. So, very briefly: it's about the fact that, let's say, we want to record the file name in the log record, and where this is coming from is the file name from which we collected the logs, right? The source of the logs, the log file itself.
A: But the question is: okay, what if we want to record a different file name there? Not where we collected the logs from, but, let's say, we did some processing of files, and which file we processed is important, and we want to put that as an attribute in the log record: the name of the file which we did process, which has nothing to do with the name of the file in which we recorded the logs.
A: And I was saying that that's a very interesting question, but that's a generic problem in OpenTelemetry. We have no way of distinguishing these things in spans either, even recording traces on which you operated, or which you want to refer to. So we know how to record data about things; we don't know how to record the relationship to that thing in any of the telemetry that OpenTelemetry supports. So it's not a unique problem for logs, really; it's primarily driven today by just the context.
D: Let's make it concrete: I have a test.csv, and I have a little process that then crunches through that thing, right? And the process happens to also log into, you know, process.log. Okay, so, yeah: so now I need to basically capture the fact that my record comes from source... from, what did we say, records.log? No, process.log.
D: Exact same problem elsewhere, for spans, for traces, yeah. So I think, just putting the Sumo lens on, putting my Sumo glasses on real quick: I think what we did there, really early on, is we basically reserved underscore-source-name; it's called source name. I think Splunk has something that's basically the same, or very similar, yeah.
D: Which is already kind of really weird, because then, if it's a syslog collector, we just put "syslog" there; like, we literally put the term "syslog" there, yeah, the six characters, right? Which is already odd, but anyway, we always call that "source". So again, I guess where I'm going here is, like, a prefix kind of thing, right? Maybe that's what we should have done.
D: You know, kind of encode the source log file in source.file.name or something, right? And then, if you want to capture, you know, the test.csv in this particular case, it would be input.file.name or something, right? Yeah, so you're...
A
Saying
it
actually
does
make
sense,
and
it
is
actually
a
practice
to
have
this
recorded
as
separate
attributes
and,
and
so
source.file.name
would
make
more
sense
actually
as
the
name
of
the
attribute
in
this
case,
and
it
would
allow
it
to
be
separated
from
whatever
other
file
name.
We
want
to
record.
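To make the prefix idea concrete, here is a tiny sketch of one log record's attributes using the two hypothetical names from the discussion. The "source." and "input." prefixes are illustrative only; they are not established OpenTelemetry semantic conventions:

```go
package main

import "fmt"

func main() {
	// Attributes on a single, hypothetical log record. The prefix carries
	// the *relationship*: which file the record was collected from, versus
	// which file the logged operation actually processed.
	attrs := map[string]string{
		"source.file.name": "process.log", // log file the record was scraped from
		"input.file.name":  "test.csv",    // file the process crunched through
	}
	fmt.Println("collected from:", attrs["source.file.name"])
	fmt.Println("operated on:  ", attrs["input.file.name"])
}
```

The same key suffix (`file.name`) can then follow one file semantic convention in both places, while the prefix disambiguates the role.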
B: Yeah, exactly; I think it makes sense, and then it leads to some more questions. Like, presumably, you would want to do the same type of thing with, for example, a TCP input, right? We're sending logs over TCP; we have a way to capture the IP address. We do that today, yeah, and...
B: Understood that these attribute names are not meant to be within another scope. So could we insert, can we wrap these in "source." and then still be conforming to the expectation here, or is that violating it, because we're not starting with "net."?
A: No, I think it would be... well, there is no generic notion of having a prefix on an existing attribute to mean that this newly invented attribute is related to the other one because they share some suffix. I don't think there is a notion like that at all. But yeah, I don't know what the solution is.
D: It encodes, in this case, the purpose and the reference. Adding that to the semantic conventions is just going to send a bunch of heads spinning, but that could be at least an ad hoc... that would be one way I could think of it, yeah.
D: What are our friends at Elastic doing? How are they solving this problem? Do they have a solution for it?
A: We have something for that, right? I think I even had a link to this, yeah. If you look at the Elastic data model, they have "file."... I think they also have...
B: So, just to take a step back: I think there was some discussion earlier on about whether these file attributes should be part of the attributes, or... I think initially we were talking about having them as part of the resource, yeah. I think we had some good reasons for making this part of the attributes, but are we overcomplicating it by going in that direction?
D: It is special, actually. So attributes is supposed to be something custom that folks can club whatever they want into, right? I think the resource wrapper has a clear meaning, right? Like, resource is the purpose; it describes the thing from which the thing came, in whichever way we can, all right? And by that I don't mean the log file, but the entity, like a resource description.
D: But why don't we then say: within attributes, the recommendation is to follow the semantic conventions from the perspective of having the tail, or the suffix, basically. Like, if it's a file, prefix it with whatever you want, and then follow the file semantic convention to describe it, if it happens to be a file.
D: And then, of course, the next question would be: okay, but what are we supposed to call the purpose things? We already have one semantic convention, which is that the entity is called "resource", right? And then the next one would be, say, we want to call the source "source", but then people have all kinds of opinions on what a source is: you know, what is it, like a source of water, or some lights or stuff, etcetera, etcetera.
B: You know, just to sort of reframe this one more time: is there a distinction here between the telemetry itself and the metadata about the telemetry? Are we basically trying to describe how the telemetry was gathered, and is that what's causing the collision, because we're trying to force it into the user's regular data?
A
Outside
yeah,
but
that's
that's
a
good
perspective
right,
so
this
is
you're
saying
this
is
quite
different
from
the
rest
of
the
telemetry.
This
is
more
meta
information
about
how
the
telemetry
was
collected
about
where
it
was
recorded.
But
it's
not
it's
not
about
the
actual
processing
that
happened
and
that's
why
it's
different
from
the
rest
of
the
attributes
or
whatever
is
recorded
there.
D: The attributes thing is anyway funky, right? Because I think we keep going a little bit in circles on that one, again and again, in a way where, you know, I would solve the problem by simply asking my developer to... or if I was the dev, I would basically just emit a log as a JSON thing, or whatever structure I like, right? And if I really wanted to capture the fact that it was a test.csv...
D
I
would
just
manually
put
it
into
my
into
my
log
right,
because
it's
not
100
clear
to
me,
you
know,
to
which
degree
downstream
systems
would
even
know
what
that
means
right,
because
you
know
now
I'm
in
a
world
where
any
program
can
basically
do
anything
and
I'm
just
starting
to
get
super
hard
to
capture
all
of
these
things
right
and
then
that
leaves
us.
You
know
we're
just
using
file
in
the
envelope
to
sort
of
describe
the
source
file
right.
The
recommendation
will
then
be
well.
D: Sequence, exactly; exactly, for reconstruction, and then, you know, multi-line detection, all of these things, and...
D: ...allowing the user to basically go and find a particular log record and then say "show surrounding logs". That's a very common feature, right? Yeah.
D
Which
which
they
expect
to
see
that
same
log
file
and
not
some
sort
of
intermingling
of
records
or
messages
coming
back
from
a
search
from
potentially
different
log
files,
because
you
know
they
all
match
the
term
again,
you
know
sort
of
the
sort
of
back
and
forth
is
just
like
you
really
like.
This
is
the
continuous
unease
of
of
you
know
the
sort
of
fact
that
you
know
the
log
body
can
potentially
be
structured.
D
If
you
want
to
you
know,
you
cannot
put
garbage
if
you
want
to
you
know,
and
you
know
my
opinion,
I
think
that
needs
to
be
open-ended,
but
the
more
you
know
people
think
about
putting
structured
stuff
in
there
then
they're
like
oh,
it
looks
almost
like
attributes,
so
why
should
we
don't
put
it
in
attributes
instead?
D
So
and
then,
when
you
put
it
in
attributes
that
that
triggers
the
discussion
about,
but
then
we
need
to
have
conventions
for
it
and
then
like
now,
we
now
here
we
are,
which
is
all
very
reasonable.
You
know
it's
just.
I
think
you
know
we
continue
to
kind
of
struggle
with
like
figuring
out.
You
know
where
we
want
the
line
right
that
we
don't
cross
in
terms
of
you
know,
you
know
trying
to
make
too
many
assumptions
or
recommendations
as
to
what
you
should
put
in
the
actual
vlog.
D: Yeah, I would stick with the semantic convention for how to describe a file; you just have to put it under, like, a source key or something. I don't know if that's another key or a prefix; sometimes I don't really know what the difference is. Those things are sort of the same, actually, in many ways, right? It's just a hierarchy.
A: Maybe, yeah. Maybe take your time; you can comment.
J: I have just a quick question on the persistent buffering PR. I was a bit out of the loop for the past two weeks because of my PTO, but I just came back and I saw a bunch of good comments there. So I think that one of the items is extending the storage API with those batch operations.
J
So,
if
you
would
like
to,
I
can
prepare
some
ideas.
What
could
be
put
here
unless
then,
you
would
like
to
to
do
that,
as
as
the
offer
of
the
of
the
storage
extension
well.
B
Please
do
I
think,
you've
got
your
you've
got
more
details
on
what's
necessary
that
I'd
use.
J
Okay
got
it
other
than
that,
I'm
not
sure
if
they
were
like.
I
wanted
to
ask
you
to
grant.
What
do
you
think
should
be
the
next
steps
with
with
this
pr,
because
I
think
that
the
suggestion
was
to
have
this
like
ex
batching
capability
of
storage
extension
as
something
separate
so.
A
It
depends
right
depends
on
if
you're
adding
the
batching
operations,
so
if
they
are
going
to
be
in
addition
to
current
set,
get
and
delete,
then
the
current
pr
you
can.
You
can
just
merge
it
as
this,
but
if
you
want
to
replace
those
by
a
different
version
which
always
accepts
a
batch
like
like,
let's
say
you
have
set,
which
accepts
a
slice
of
keys
and
a
slice
of
values
and
that's
a
breaking
change
right.
A
So
if
that's
the
intent,
then
maybe
it's
better
to
wait
until
that
is
done,
and
then
you
I
mean
you
can
merge
it
as
is,
but
knowing
that
you
will
have
to
rework
it.
I
don't
know
it's
up
to
you
and
if
you
do
that
in
that
case
we
need
some
sort
of
note
which
says
that
this
is
unstable,
that
this
this
is
going
to
be
changed.
There
is
an
intent
to
change.
Either
way
works
for
me,
whatever
you
prefer
to
do.
A: I don't know what is going to happen as a result of that. If that happens, then essentially all of the components just live in a separate repository, you build from that repository, and the core becomes just an API, right, nothing else. In that case, no changes are needed: you put the storage interface as an API in the core, and the actual implementations will be in contrib, as they are right now.
A: Yeah, in the meantime. But it works, right? If you use the contrib build of the collector, there is going to be this file storage that you can actually use. So it's usable, yeah. It makes the development more complicated, I understand.
A
Okay,
okay
sounds
good,
so
yeah
up
to
you.
If
you
want
to
merge
the
current
pr
as
this
or
actually
you
said,
you
will
make
a
proposal.
Is
that
what
you
said
in
the
end
before
okay?
So,
let's
see
the
proposal
and
we'll
take
it
from
there.
A: Good. Paulo, I see you; I think you just joined. Did you have anything you wanted to talk about related to logs?
K
No,
not
collector
logs
is
that
we
switched
the
meeting
idea
with
the
cpp
group
and-
and
now
I
think,
it's
the
time
that
we
start
the
dot-net
stuff,
and
I
was
going
to
ask
if
it's
usual.
You
guys
stand
that
long,
because
if
it
is,
I
think
I'm
gonna
ask
to
get
a
new
meeting
id.
You
know
the
ones
that
we
have
are
not
enough.
That
is
a
lot
of
questions.
A: There's definitely an overlap, because we still have, I guess, 20 minutes to go. But I don't see... which one did you say? C++?
K
Yes,
c
plus
plus
asked
us
to
switch
with
them,
and
this
one
is.
K: All right, yeah, we need to sort that out. You don't have sound, Robert, if you're trying to... okay. So we need to sort that out; I'll sort that out with the guy that asked for the C++ group to switch. I don't know, because in my mind we are switching with the agent meeting, but the agent meeting seems to continue right after, on the log side. So yeah, okay, anyway, we have to figure it out. Welcome back, Zach.
K: Okay, so, from last week we had talked about the next steps for the POC. I did very little on that; I was pulled off for some other stuff, but I did do some trials with the applications we have, trying to consider the case that we're calling DevOps, I think, for the framework.
K
We
can
do
this
stuff
with
the
binding
redirects.
Even
if
you
don't
have
access
at
build
time,
because
binding
redirects
is
just
a
config
file
that
we
can
add
and
change
and
do
any
redirection
that
we
need.
The
question
then
becomes
kind
of
even
let's
say
a
more
specific
case
that
that
may
not
work
is
that
it's
possible
to
lock
down
windows
in
a
way,
and
I'm
talking
about
windows,
because
this
framework
that
it
doesn't
allow
for
that.
But
I
I'm
assuming
kind
of
that's-
is
kind
of
the
20
case.
K: We have a kind of workaround that perhaps is manual for now, but even if you are not building, even if you are getting the package with the exes and the DLLs, we still have ways to make the versions consistent, using just the binding redirect.
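The assembly binding redirect described here is an app.config fragment along these lines; the assembly name, public key token, and version numbers below are illustrative, not taken from the meeting:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Diagnostics.DiagnosticSource"
                          publicKeyToken="cc7b13ffcd2ddd51" culture="neutral" />
        <!-- Redirect any older version the app was built against
             to the single version actually shipped alongside it. -->
        <bindingRedirect oldVersion="0.0.0.0-5.0.0.0" newVersion="5.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

Because this is plain configuration, it can be dropped next to an already-built app, which is the point made above about not needing build-time access.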
K
I
also
asked
a
bit
around
about
the
case
about
devops
because
perhaps,
as
I
was
suspecting
by
the
comments
from
greg
and
chris
that
perhaps
I
had
a
a
bias
on
this,
perhaps
would
you
be
in
contact
more
with
devs?
So
then
people
were
willing
to
change
projects
and
not
being
that
involved
with
devops
people
doing
the
instrumentation.
K
So
it
seems
that
yeah
that
there
is
a
considerable
chunk
of
people
that
work
on
the
devops
scenario.
So
it's
something
that
we
should
be
covering
anyway,
and
I
just
thinking
that
the
initial
case
is
that,
let's
start
on
the
work
of
the
sdk
with
the
solutions
that
we
have
and
then
enhance
the
things
to
solve,
to
solve
the
case
of
the
devops.
I: With that, Paulo, as far as OpenTelemetry, I'm seeing more interest coming from people that have access to the source and can potentially modify builds and deploys, as opposed to somebody that just has access to the server.
K: I think, if we go to a more general approach, there are really a lot of people that just have access to the server. They can configure the server to do stuff, but they receive basically the binaries; actually, they receive the image with this stuff, and they can tweak there, but basically they have the image, you know.
I
Yeah
so
there's
another
interesting
case,
and
I
don't
think
we
have
to
deal
with
it
right
away,
but
for
azure
web
apps
there
is
a
deployment
mode
where
you
basically
run
your
app
from
a
zip
file,
and
so
basically
you
get
a
read,
only
mount
of
your
app,
and
so
that's
where
that
would
be
another
case
where
we
would
have
to
get
into
their
build
process.
K
To
add
redirects
we
we
do
have,
but
I
I
I'm
kind
of
confused
about
the
products
right
now,
because
we
do
have
azure
web
extensions
that
can
install
a
profiler.
K: I just don't want to stop to solve all of them before we start really doing the work, you know. So I think we should start to pick up the POC and start to make it a real product. We removed a bunch of things; we don't have tests, we have manual tests there, and we are getting to the point of: hey, let's make this the real thing. And we had the question that Greg raised about how to do the integrations.
K
I
didn't
have
time
to
dig
deep
into
that.
My
main
concern.
There
is
still
the
same
thing
about
okay.
We
can
do
the
instrumentations
using
the
the
techniques
described
there,
but
then
my
question
is
how
the
reuse
of
the
sdk
comes
into
that
picture.
K
You
know
in
the
same
kind
of
line
that
I
I
think
I've
been
mentioning
last
week,
that
kind
of
until
we
have
this
operational,
we
keep
getting
resources
diverted
to
other
projects,
because
this
is
the
nature
of
the
the
people
that
are
financed
are
working
here
right.
K
So
if
we
want
to
move
this,
we
need
really
to
start
to
get
people
using
and
the
way
that
I
see
for
that
is
for
us
to
move
to
this
scenario
that
we
can
reuse
the
sdk,
because
it's
then
much
less
working
on
our
side
and
we
can
reuse
a
bunch
of
code
so
in
that
line
kind
of
I'm
very
concerned
about
anything
that
prevents
us
from
using
the
sdk.
K
If
that
makes
sense,
that's
kind
of
so
I
want
to
dig
deep
in
the
stuff
that
greg
pointed
I
had
done
that
kind
of
six
months
ago,
but
I
want
to
see
if
something
changed
there,
but
as
far
as
I
remember
that
didn't
allow
reuse
of
the
sdk
directly,
you
have
to
do
something
to
kind
of
allow
that
to
be
plugged.
K
You
know
so,
as
I
said,
I
want
to
go
back
and
and
look
again,
but
I
want
to
be
sure
that
we
really
can
use
the
sdk
with
minimal
changes
and
changes,
and
I
mean
to
the
sdk
itself.
I
think
we
look
at
then
I
identified
a
bunch
of
things
that,
like
support
your
environment
variables
and
this
kind
of
thing
that
needs
to
be
on
the
sdk,
and
I
would
rather
have
us
work
on
implementing
that
on
sdk.
K
So
we
can
reuse
here
then
kind
of
not
be
able
to
reuse
the
sdk.
I
Yeah,
so
I
think
the
environment
variable
stuff
is
a
good
idea.
The
other
thing
that
we'll
need
to
consider
is
how
we're
packaging
everything
together,
because
right
now
in
the
proof
of
concept
you're,
just
you
have
a
hard
dependency
on
the
existing
assemblies
themselves.
I
Is
that
correct,
as
as
opposed
to
the
source-
and
I
I
think
that's
perfectly
fine
for
now,
especially
if
we're
just
in
the
early
alpha
stage
but
there'll,
be
some
other
complications,
especially
when
you
try
to
enable
otlp,
especially
for
framework
apps,
because
of
those
native
components,
because
that's
also
going
to
affect
the
packaging
strategy.
K
We
can
look
at
different
avenues.
One
of
the
avenues
that
I
thought
in
the
past
is
having
the
sdk
having
a
a
new
get
source
version,
so
we
could
still
have
the
versioning,
but
then
we
build
this
stuff
on
our
side.
You
know,
but
that
still
requires
us
to
kind
of
do
the
work
on
the
sdk
repo
itself.
K
You
know,
so
I
think
we
start.
I
think
we
should
keep
at
least
for
the
time
being
the
same
model
that
we
have
of
using
the
nougat
package,
not
source
form,
but
to
deal
with
these
problems.
We
can,
then
we
work
with
the
sdk
team
to
perhaps
have
a
a
source
version
of
the
nuget
package
that
we
can
import
building
package
under
different
kind
of
organization.
Let's
say.
K
And
and
also
perhaps
this
specific
problem
of
otlp-
I
don't
know
if
that's
the
case,
but
perhaps
we
can
carefully
organize
the
nougat
packaging
and
how
we
consume
it
to
bring
all
the
dependencies
correctly.
You
know
I'm
I'm
not
sure
about
that.
Just
a
kind
of
high
level,
perhaps
alternative
to
that.
I
Yeah
and
the
reason
I'm
calling
that
one
out
specifically
is,
I
want
to
say,
for
certain
versions
of
net
core,
even
and.net
framework.
It
is
using
the
it
does,
have
a
native
dependency,
and
so
you
got
to
deal
with
fitness
as
well
as.
K
Yeah,
so
perhaps
that's
interesting
problem
for
us
to
listen
to
investigate
you
know
otlp
native
dependence.
You
know
yeah.
K: Okay, I will add the link here later, and I think we start investigating that from there. But initially, I think we keep it like this: we keep the POC working and keep growing from there. We keep it as it is right now, but we investigate this OTLP issue.
I
Yeah,
I
don't
know
if
the
sdk
has
an
implementation
for
otlp
over
http,
yet
yeah.
K
Actually,
the
performance
is
better
on
the
http
yeah,
because
the
handshake
and
the
extreme
stuff
it
for,
if
you
think,
like
for
the
stuff
that
observability
telemetry
send
usually
is
kind
of
sending
package
and
just
hack
pack
that
doesn't
fit
well
with
the
grpc
streaming
things.
That's
for
real
communication.
You
know
yeah.
I: Is... oh.

K: Yeah, you are right, they changed it, because initially it was a stream and they changed it. But the thing is, underlying the gRPC thing it's basically the same API. Although at the high level you are doing RPCs, the implementation is basically the same. And this can change, but in general the performance for the telemetry stuff is actually better with HTTP, and it also makes the life of load balancers much, much easier.
K: But getting back: so, actually, we should investigate that, because this is one of the things that can change stuff, and in a sense I don't know what is required for OTLP. But, for instance...
K: Yeah, I think, if that's accepted by the spec, it's a solution that perhaps for us is better, you know.
I
Yeah,
I
think
the
gotcha
there
is
that
the
spec
for
otlp
over
http
is
still
flagged
as
experimental,
whereas
the
grpc
version
is
flagged
as
stable.
I
see
I
see.
K
Yeah
but
okay,
that's
a
thing
that
you
need
to
to
dig
into
is
this
otop
and
this
native
dependence,
and
actually
it
serves
as
a
case
for
anything
related
with
native
dependency
right
so
because
we
should
encounter
this
in
other
new
get
packages
right.
So
it's
not
uncommon
to
have
dot-net
nuget
package
that
ships
some
asset,
that
is
native
dependence,
you
know.
So
if
you,
if
we
are
using
something
that
any
other
package,
so
this
will
provide
actually
the
story
for
all
of
them
in
a
sense.
K: So, besides that, I'm not sure if Erasmus wants to bring that up today, but he was thinking about some stuff about the tests, the integration tests, and about how to do some changes. I don't know if you keep up with what is happening upstream; there was a bunch of changes on the tests there. But do you want to talk about that today, Erasmus, or...
K
So
so,
basically
bring
up
the
the
dark
and
the
stuff
needed
for
each
test
individually.
Yeah.
K
Yeah
in
principle,
it
makes
sense
to
me.
I
know
that
there
was
a
lot
of
work
on
upstream,
about
that.
K
I
I'm
not
sure
about
the
current
state,
but
in
principles
makes
sense,
and
I
think
that,
typically,
when
you
are
working
with
the
instrumentation,
you
are
going
to
be
testing
a
lot
one
we
want
to
do
have
the
ci
runs
that
runs
everything
but
having
something
that
is,
is
kind
of
target
for
a
single
instrumentation
seems
to
make
a
lot
of
sense
to
me
that
that
that
seems
the
the
the
kind
of
general
take
that
from
my
part,
we'll
have
on
that.
K
So
if
we,
if
you
think
that
you
can
come
up
with
some
proposal
or
example,
on
top
of
that,
I
think
robert
did
something
in
the
past.
He
wrote
some
tests
when
we
basically
start
showing
kind
of
a
path
that
we
could
take.
There
perhaps
build
on
top
of
that.
L: On that: I think that you might have to build different sorts of bring-up based on what platform you're on. At least I know that in Azure Pipelines, if we're running some tests on Windows, we actually run a very small subset of our integration tests on Windows, because we can't run Linux containers there unless we had our own agent; right now we're just using a provided, hosted Windows agent.
L
So
in
that
scenario,
like
you,
wouldn't
be
able
to
use
the
same
docker
compose
commands
because
likely
the
postgres
container
or
kafka
as
a
linux
image.
I'd
see
you
soon.
K: So, just so I understand, because I'm perhaps not up to date on that: you are saying that there are Windows images of a lot of these Docker containers?
K: Sorry, say that one more time... So what came to my mind when you said the limitation, what I understood, is: because we are running a Windows VM, you are limited to running Docker Windows containers in the test, right? And in my mind, a lot of those things that we use don't have a Windows image ready to run, I think.
K: Yeah, no, it's because, in that sense, what you are trying to have is a nice, easy environment to run the integrations and debug on your machine. On Windows, if you run a Docker container with a Linux image on your box, it's fine and it's easy; but doing the same in CI is kind of hard.
K: Yeah, or we can create a network for the Windows run, with a VM for Linux, that...
K
Yeah
yeah,
but
but
I
I
think,
exploring
that
initial
work
that
robert
did.
I
think
it's
a
good
idea
for
us
to
to
have
some.
M
In
general,
I
haven't
done
much.
I
just
wanted
to
explore
if
it's
good
to
divide
and
dancers.
Yes,
it's
good
to
have
smaller
images.
I
wouldn't
use
the
same
like
techniques
and
technologies
I
use
there.
It
was
reciprocity
to
check
if
it
will
be
faster
and
more
stable
if
you
will
have
just
them
separated.
You
know
this,
this
docker
stuff
per
test
and
he
was
faster
and
more
stable.
K: So yeah, what I do usually is kind of: what test do I need to run? And then I just...
M: All right, I...
L: Yes, so we now have it set up so that we're running that in all of our CI. Basically, besides publishing Azure DevOps artifacts or downloading them, or maybe installing the .NET Core SDK, everything else in our build is just running Nuke commands.
M: Nuke versus Bullseye, whose API is a lot smaller.
L: Yeah, my experience with it day to day is still pretty limited, because I am usually in Visual Studio or Rider, so I don't need to access it too much. But if I need to produce an MSI quickly, then I just have to put in the right commands, and it works just fine. It also helps understanding, since we still want to do a lot of trimming down, of making the build steps easier to understand. But Nuke already does that, because you have different targets, and it's very easy; it prints out the different dependencies very easily.
K: Yeah, so yeah, that's good. I think a lot of people say, for this kind of build system, that the advantage is that you don't need to switch your mind over, but on the other hand, I think you still need to understand the build pretty well. Okay, the syntax stays the same, but, as far as I saw from, I think, FAKE in F# and others, you still have the same concepts, targets, builds, dependencies, and you have to understand that.
K
But
the
syntax
is
the
same
kind
of
of
the
project
of
the
main
language
that
you
have
in
the
project
which,
which
is
is
good.
You
know,
especially
for
I
would
say,
for
things
that
are
like
f
sharp
that
are
very
different
from
the
usual.
You
know.
L
Yeah
and
some
of
the
shell
scripts
that
we
would
write
where
they'd
just
be
like
you
know
five
lines.
Those
are
able
to
consolidate
into
just
a
c
sharp,
like
oh
just
copy.
Some
files
do
that,
and
so
it's
actually
a
little
bit
consolidating
that
into
c
sharp,
which
we're
all
writing.
K: So, I don't have anything else crossing my mind. I want to give you guys time, if you want to bring something up, and I will have to figure out the meeting IDs to avoid the overlap. It's good to know what the agent log is doing, but I don't think everybody is interested in that. So...
K
All
right,
then,
we
are
going
to
try
to
do
some
more
stuff
on
the
plc
branch
these
days.
Hopefully,
we
have
more
stuff
to
discuss
about
the
using
the
sdk
with
in
our
branch
in
our
repo,
alright,
everyone
nice
to
see
everyone
welcome
back
zach
thanks
goodbye
all
right.