From YouTube: 2021-05-26 meeting
D
Hey Tigran, good morning. Good morning. Let's see, is Tianji here? Tianji is my partner.
D
Okay, can we... I think Rahan was after me on that list? Can you let him go first? I want to give my guy a chance to ask the question himself.
E
Yeah, thank you. Hello everybody, good morning. So I had two issues to discuss today, and I also had a discussion offline after our last meeting with Bogdan and Josh MacDonald. We discussed our previous design and came to a consensus on designing two different processors for calculating rate from cumulative metrics.
E
Okay, so I believe you can see our meeting notes page in Google Chrome, right? Yes, okay, thank you. So this is the small design doc. I can share the high-level idea: we are getting the CPU usage metric, which comes as cumulative sum data points, and we need to calculate the rate from it, which is, I think, a common requirement. We had a similar ask earlier too, and we had a discussion on this, and then discussed it offline.
E
So we came up with the idea to decouple the pieces of the calculation. The first approach is: maybe we can design a processor which converts cumulative to delta, a cumulative-to-delta processor. Just to mention one more thing here: Josh is working on another thing, a delta-to-cumulative processor, and from my understanding of our discussion these are two different things, so we can separate them and move ahead with the work needed for my calculation.
E
So we can design it as a separate, independent cumulative-to-delta processor, and then we can design another processor which will calculate the rate from the delta metrics produced by the cumulative-to-delta processor. I tried to explain the scenario with some examples. One is: okay, so how will we calculate that?
E
Okay, so say for example we have three different data points. They all have the same start time, as they come as a cumulative sum, but they have different current times. What we can do to convert them, from my understanding...
E
This is the value, and this value is only for that specific cycle, which is the time difference between the first one and the second one, which is like 7 minus... so, okay, no: the current time would stay the same, but the start time will be set to the previous data point's current time.
E
So this way it will tell us that this value is only for the time window between the first data point and the second data point, and similarly for the third one: the current timestamp will stay the same, but the start timestamp I would update with the previous data point's current timestamp. This way it will tell us that this 400 value was recorded between timestamps like seven and nine.
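The timestamp bookkeeping described above can be sketched in Go. This is only an illustrative sketch under stated assumptions: the `DataPoint` struct and `toDelta` helper are hypothetical, not the real collector pdata API.

```go
package main

import "fmt"

// DataPoint is a simplified stand-in for an OTLP number data point
// (hypothetical struct, not the real pdata API).
type DataPoint struct {
	StartTimeUnixNano uint64
	TimeUnixNano      uint64
	Value             float64
}

// toDelta converts a cumulative series to delta points: each output
// point's value is the difference from the previous point, and its
// start time becomes the previous point's timestamp.
func toDelta(cumulative []DataPoint) []DataPoint {
	deltas := make([]DataPoint, 0, len(cumulative))
	var prev *DataPoint
	for i := range cumulative {
		out := cumulative[i]
		if prev != nil {
			out.Value = cumulative[i].Value - prev.Value
			out.StartTimeUnixNano = prev.TimeUnixNano
		}
		deltas = append(deltas, out)
		prev = &cumulative[i]
	}
	return deltas
}

func main() {
	// Three cumulative points sharing one start time, as in the example.
	cum := []DataPoint{
		{StartTimeUnixNano: 5, TimeUnixNano: 7, Value: 100},
		{StartTimeUnixNano: 5, TimeUnixNano: 9, Value: 300},
		{StartTimeUnixNano: 5, TimeUnixNano: 12, Value: 700},
	}
	for _, d := range toDelta(cum) {
		fmt.Printf("start=%d end=%d value=%g\n", d.StartTimeUnixNano, d.TimeUnixNano, d.Value)
	}
}
```

With the example values, the second point becomes a delta of 200 over the window (7, 9) and the third a delta of 400 over (9, 12), matching the "value recorded between two timestamps" idea above.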
E
So that was my understanding. I just want to make sure the way I'm thinking is correct, and that I'm not doing anything wrong.
E
And for the configuration part, I just came up with some high-level ideas. For configuring it, we need to identify the metric. In the beginning I prefer to start with strict matching; maybe later we can introduce regex matching too, if you want. And we need to uniquely identify the metric, because we need to store, or need to know, the previous data point.
E
So for storing the data point we need to uniquely identify a metric, and for this we also need to match the resource attributes and metric labels.
E
So my plan is to maintain only one map. Say, for example, the key is the metric name plus a resource attribute given by the customer, say job name, plus metric labels like pod or container ID, just to uniquely identify a specific metric. Then I will maintain a map, and I will maintain only one copy of the data, so I am not maintaining any queue, just a map which stores...
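The map-key idea (metric name plus resource attributes plus metric labels) could be sketched as follows. The `seriesKey` helper is hypothetical; it only illustrates that the key must be order-independent so the same series always maps to the same entry.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// seriesKey builds a unique identity for one series from the metric
// name, selected resource attributes (e.g. job name), and metric
// labels (e.g. pod, container ID). Keys are sorted so that attribute
// ordering does not change the identity. Hypothetical helper, not
// the real collector API.
func seriesKey(metric string, resourceAttrs, labels map[string]string) string {
	parts := []string{"metric=" + metric}
	for _, kv := range []map[string]string{resourceAttrs, labels} {
		keys := make([]string, 0, len(kv))
		for k := range kv {
			keys = append(keys, k)
		}
		sort.Strings(keys)
		for _, k := range keys {
			parts = append(parts, k+"="+kv[k])
		}
	}
	return strings.Join(parts, ";")
}

func main() {
	key := seriesKey("cpu_usage",
		map[string]string{"job": "node-exporter"},
		map[string]string{"pod": "web-1", "container": "app"})
	fmt.Println(key)
}
```

A map from this key to the single most recent data point is then enough state for the cumulative-to-delta conversion; no queue is required.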
E
...the previous data point, and this way I can calculate it. So here I tried to write a short doc. I would expect some comments on this; that would be good. If not now, offline is also fine, and if you see any big problem here, maybe you can just share it with me so I can update it. And for the second processor, the one which will calculate the rate from the delta metric: for this we don't need to know the previous state or previous data point.
E
So what we can do: we have the timestamp fixed, but the start timestamp was updated with the previous data point's current timestamp. So technically we can divide the value we are getting for that specific chunk of time by the difference between the current timestamp and the start timestamp of that data point alone. So technically we don't need to store any previous data point.
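Since each delta point carries its own window, the rate calculation is stateless. A minimal sketch, assuming nanosecond timestamps and a per-second rate (hypothetical `rate` helper, not the real processor code):

```go
package main

import "fmt"

// rate divides a delta value by its own time window, so no previous
// data point needs to be stored. Times are in nanoseconds, and the
// result is per second.
func rate(value float64, startNano, endNano uint64) float64 {
	seconds := float64(endNano-startNano) / 1e9
	if seconds <= 0 {
		return 0 // degenerate window: no meaningful rate
	}
	return value / seconds
}

func main() {
	// A delta of 400 recorded over a 2-second window.
	fmt.Println(rate(400, 7e9, 9e9)) // prints 200
}
```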
E
We can just calculate the value. And here I did something simple, like: only calculate the rate for this metric, delta to rate. I was also preferring to give the customer the option to update the metric name and the unit in the same run, so I think it's a good idea to have that here. But I also have two open questions. One is: should this delta-to-rate processor be part of the metrics transform processor? Because it seems like a very simple kind of calculation.
E
So that's kind of an open question: whether we want to put it into the metrics transform processor, or the experimental metrics generation processor, or whether we should design it as a new processor. And another thing: after calculating the rate, do we want to delete the previous data point, or do we want to keep both of them? The new rate metric would be this type of metric, which is not a monotonic sum. So do we want to keep both of them, or do we want to delete one of them?
C
Thank you very much. This is very useful, actually, to have a document like this. I guess, in the interest of time, I would suggest that people review this offline, asynchronously, and comment on it. You have a very good document put together here. If you have anything that you would like to clarify now, like if you have any questions that maybe are blockers...
E
Okay, thank you. So I will just ask only the two important questions; maybe any of you can help me with the calculation I am thinking of for the cumulative to delta. Is this the right concept: for all the data points I am keeping the timestamp unix nano the same as what comes with the data point, and for the start time unix nano...
E
...I am updating it with the previous data point's current time. And for the value I am setting the difference. So say, for example, for the first value everything stays the same, but for the second one the value I am getting is 300 minus 100, which is 200, while the start time unix nano is always updated with the previous data point's current time. Is this concept right?
F
That stands out to me as a question I have; I don't know if it's right or if it's wrong. I would really like to hear what Josh MacDonald has to say on the subject, because I think he's probably thought about this particular question.
G
I think overall, at a high level, it looks good, but I need to double-check the numbers there. Overall, at a high level, it looks good. You need to update the start time to become the previous time. Okay, because...
G
Delta means the values since the previous... the change in the metric since the previous observation.
E
Yeah, so that's one important thing. And I think, for making one single map, from my high-level investigation it seems okay; one single map can be useful for me. But these are implementation details; maybe we can discuss them later or take a look offline. And for the delta-to-rate processor, this is the question: should I think about a whole new processor, or...? So this is kind of a concluding, generic question.
E
Yeah, thank you. So I think I will share it offline and wait for comments too. Okay, yeah. So I have one more item on the agenda.
So, okay: multiple config files support in OpenTelemetry. I know we have an open issue and we've got some comments from Martin and Bob too. Here the suggested solution was a config source. I looked into it, but my requirement was something like this: technically, we are looking for options where we can embed one config file into another, or combine multiple config files into one.
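The "combine multiple config files into one" requirement boils down to a deep merge of configuration maps. A minimal sketch of that idea, assuming nothing about the actual config-source implementation (the `merge` helper below is hypothetical):

```go
package main

import "fmt"

// merge deep-merges src into dst: nested maps are merged recursively,
// and scalar values from src override dst. This only illustrates the
// "embed one config file into another" idea; real config sources do
// more (file loading, variable expansion, etc.).
func merge(dst, src map[string]any) map[string]any {
	for k, v := range src {
		if sv, ok := v.(map[string]any); ok {
			if dv, ok := dst[k].(map[string]any); ok {
				dst[k] = merge(dv, sv)
				continue
			}
		}
		dst[k] = v
	}
	return dst
}

func main() {
	// Two fragments, as if loaded from two YAML files.
	base := map[string]any{
		"receivers": map[string]any{"otlp": map[string]any{}},
		"exporters": map[string]any{"logging": map[string]any{}},
	}
	overlay := map[string]any{
		"exporters": map[string]any{"otlp": map[string]any{"endpoint": "collector:4317"}},
	}
	fmt.Println(merge(base, overlay))
}
```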
E
So I was not super sure about this, so I was just wondering: does the config source support the same functionality? I am expecting some solution or documentation, I mean.
C
We are planning to upstream it from our own version of the collector, but I think it's exactly what you're looking for here.
C
I wouldn't want to duplicate this work, right? This was already done. We didn't want to rush it; we wanted to first do it in our own distro, to be sure that this is the right thing to do. It seems like it is, so I think at this point we are planning to upstream it. We just need to settle things internally, and it will be there.
E
Okay, so here's the thing: this is kind of one of our core requirements for one of our projects this year, and we are planning to start working on this on our side too, and we have one developer blocked on this. So I just want to check: is there any way we can just take that code and move it forward faster, or contribute, or... What's your timeline, Tigran? What's...
C
So, let's do this. I don't have a timeline... is he following the call? No, he's not. Let's do this: I will speak with the developer who did that on our distro, and we'll get back to you guys with the timeline on when we're planning to do that. If it's not in the near future, then I guess you can go with your own solution, but I think there is not much point in doing that; you're just going to duplicate what is already done.
C
So,
let's,
let's
aim
to
actually,
maybe
if
you
want
to
help
you
can
help
actually
with
upstream
it's
still
open
source
right.
It's
not
some
sort
of
proprietary
private
repository,
so
we
can
make
it
work.
Even
even
you
can
help
with
the
apps.
I
Definitely, we can definitely help. I mean, is it in the Splunk open source repos right now?
J
Thank you. Is it better? Yeah, okay, yeah, thanks. Hi everyone, I'm a new developer from the adolescence insight team, and we currently want to use OpenTelemetry for our new health check system. There is an open issue from before, based on Wesley's discussion with some other team members, and there may be two current extensions that may help with our new feature.
J
So I'm just here to ask some questions and make sure our understanding of the current extensions is correct. The first one is zPages, which is one of the extensions. We want to confirm something: does anyone know this extension, or has anyone used it before? I just want to make sure what this extension does.
J
I just checked; I ran this extension on our local side, and it just shows some basic information, like it shows the pipelines and the names, which receivers are running and which exporter is running. Will this extension help show information like the current status of each component?
G
No, it's not... this is different. What you see here is just your config. This is the way you configured it: a statsd receiver with no processors and an awsemf exporter.
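The setup being discussed might resemble the following collector configuration: a hypothetical minimal example with the zpages extension enabled alongside the statsd-to-awsemf pipeline mentioned above (the endpoint shown is the extension's documented default).

```yaml
extensions:
  zpages:
    endpoint: localhost:55679

receivers:
  statsd:

exporters:
  awsemf:

service:
  extensions: [zpages]
  pipelines:
    metrics:
      receivers: [statsd]
      processors: []
      exporters: [awsemf]
```

With this in place, the zPages served at the extension endpoint list the configured pipeline components by name, which is the behavior described in the discussion.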
J
This configuration, yes, I see, okay, yeah. Now we just want to know... I know that it will basically show which receivers are configured here, and we just want to know if some more information will show up, like the exporter status. Yeah.
G
Yeah, right now we don't show anything else except the name there, but the plan was to add more things to this page.
G
No, we do not export that. There was an issue Wesley filed, and I think there was a discussion about what we want to do. I think we can start thinking about adding to that page: we can add debug information like status information, we can add metrics about that component, we can add more things if we want.
G
Somebody needs to define what is important and how, and then define the interfaces that we need to implement, and implement those.
B
Yeah, perhaps what is worth mentioning is that this one here is for people looking to debug one specific instance of the collector. In production you are likely running multiple instances of the collector, so you should probably rely on metrics instead of the zPages, right? The zPages are very useful when you're debugging, when you're trying to come up with a good configuration for your instance or for your cluster of instances, but not to actually monitor your cluster.
G
It depends, Juraci, because from metrics you sometimes figure out that one instance has a problem, and you may go there to dig into more information. So it may be useful, even if you have a fleet, to have extra information about one instance if you identify that instance as being bad for whatever reason.
C
Another caveat here is that the zPages are primarily intended to be human-readable rather than machine-readable. We do not really give any guarantees about the actual HTML that is produced by the zPages. So if you want to use the zPages as an endpoint that you hit from your monitoring tool, that can be very fragile; we may change the format of the output.
C
So if you want to do that, maybe it needs to be some sort of JSON output from the zPages, or something like that, which is more machine-readable.
D
Okay, yeah, we're not really trying to modify this; we're just trying to understand what it does, how it works, and what its purpose was. Because in the last meeting, when I first came up with the health check proposal, you said that this would do some of the things that we wanted, and then, when we were looking into it, we didn't really think that it actually did that. Anyway, I think we got all the information we want; we can move on to the next item.
J
Yeah, yeah... and for the next one: the obsreport extension. This is also an extension that could help monitor, I think, the number of outputs from each component.
J
I think this might be more helpful for us to monitor whether the current component works well, like whether it could send the right number of metrics or traces to the destinations. So there are a few questions here. Currently every component will use this obsreport, right? But there's a question here; let me go to this page.
J
Yeah, I think for each component you will call this function with the number of items, to define the number sent and the number that failed to send. But it shows that if an error shows up, it will show that zero items were sent, right? So this means... I just want to know...
J
If there's whatever error in the component, it will show that no items were sent out to the destination, right? Is that so even if the component is still working, or does this mean the component has functionally failed?
G
I don't know how much you know about this or the metrics, but this code does the following: if there was an error, we mark the number of items as errored; if there was no error in that call, we mark the items as sent successfully. So it's just a simple logic that we have, and maybe it's not the perfect one: if there is an error, we're going to count all the spans in the current request as errors; otherwise we count them as succeeded.
G
Okay, okay, I see... but we do a sum. Every request we call this, so it's based on every request. If the current request returns an error, it means we haven't sent the data, so we will count the items as failed. If we succeeded, and we know we don't have an error, then we count them as sent.
J
Okay, okay, I see, yeah. It turns out I misunderstood its code. Thanks. Let me see... yeah, another question: what we currently want to build is a new health check extension. So can we reuse the current obsreport in the new extension?
J
Yeah, just like I said previously, we want to monitor the component health status. So if we want to use this obsreport, we want to define something like: if the obsreport says the component could send data to the destination, it will be marked as healthy, and if the obsreport says it failed to send out the data, we will mark it as unhealthy.
G
Sure. So, behind the scenes, obsreport uses the old OpenCensus library. That library right now has a bunch of exporters, one of them being Prometheus, which is right now hardcoded into our code. I think if you want, you can use a different exporter for OpenCensus.
D
That's not what we want, though. Sorry, I think there's a little bit of confusion. We want to build an extension, a new health check extension, or maybe we can make an extension of the existing health check one, and basically the health status is based upon the metrics.
G
Okay, then you still need an exporter for the OpenCensus data. The exporter in OpenCensus is not the exporter in the collector; it's an exporter in the metrics library that allows you to get the data out of the library and do whatever you want with it. Does that make sense?
G
Okay, okay, so a diagram with the flows of things would help a lot. But definitely you can extract the data out of the obsreport library; I mean, OpenCensus is behind it. FYI, that will change, because we are planning to not depend on OpenCensus; we are planning to depend on OpenTelemetry when it's ready, but right now it's OpenCensus. So, FYI, the values will not change and the names will not change, just the library behind the scenes.
G
And by the way, if you find bugs about us recording wrong values or the like, feel free to test it or debug it, or put up a PR or file an issue if we have a bug. But I don't think there is a bug; we right now use the obsreport data on our side, and we get good metrics out of it.
I
Jamaica, I have a question there. Is this something that's required for 1.0 trace? Because, I mean, that's kind of what we're focusing in on. Are these changes...? Because I saw your PRs there; one of them, at least, is pretty huge.
G
Alolita, indeed, but I would like to ask the same question about the first design talk, about cumulative to delta.
G
I think if we allow that discussion, we should allow this discussion, and we should be collaborative.
C
So, Jimmy, I think it's the right thing to move it, right? The approach is moving the storage extension to the core, and the way you're approaching this is very useful: you're proving that it can be used for the particular use case that we have for the exporter retry queue, and it is already used by the filelog receiver. So it's two quite different use cases being served by the single extension, which is a strong signal that it is doing something right.
C
I think I did a pass already reviewing this one, but more eyes are very welcome on it. I think we definitely want to move forward with this. There is no particular urgency with it, you're right, Alolita, but I think we can continue working on it, because it does not interact much with other things that we do. So it's not going to create problems with merge conflicts or become a blocking issue for other things.
C
So
it's
something
that
can
be
worked
on
parallel,
which
I
believe
it's
a
good
thing
to
do.
You
know.
L
Yeah, I think that's a good aim. And also, let's say from this persistent buffering perspective, as for how much it's needed for us at Sumo: if we want to make our users, let's say, replace the current solution with the OpenTelemetry Collector, this persistent buffering thing is quite critical. So eventually it's something that we definitely want to have.
C
We saw that the Prometheus exporter recently added a similar capability, right?
G
And it's not only for metrics, to be honest. I think a case can be made that this can be used for tracing as well, for people that want to deploy the collector with better guarantees of not losing even tracing data. So we can make a case even for traces, if you want. But, that being said, Przemek... I don't know how to pronounce your name.
G
Yeah, so I think, about that file extension: it's very interesting for me, because it comes with the new interface, which is more important than the file extension itself. So I think that's the critical part: if I were to move the code, I would start by moving just the interface first, review that, and make sure that it is the right thing we want to do, and then we can move the other code.
G
But I'm curious about this, because for Sumo, if you want to use this, you can even depend on contrib components, because you have your exporter there, correct? So why is it important for it to be here, besides the interface? The interface, I think, is needed if you want to have this implementation in core, because you need to call into the interface and depend on the interface for your whole thing. But besides the interface, I don't know why you need the real implementation.
L
Yes, several reasons actually, and probably the most important one is the way I implemented it: I'm changing the queued-retry helper, so I'm providing the capability in queued retry, which is used by pretty much all exporters, almost all of them, so that instead of having a memory-backed queue, you can have this disk-backed queue. So everything will be persisted, for all signals, for all exporters. It's a very generic solution, and we want...
G
Correct, correct, but keep in mind that you just need the interface. I don't think you need the file implementation, because that storage can be backed by a file, by etcd, by Bigtable, by S3, or by whatever that "file" is. So the code should not rely on a specific implementation of that interface; it should rely on the storage interface to provide the persistency, correct?
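The interface-versus-implementation split being argued for might look roughly like this. The names below are illustrative only, not the actual collector storage API: callers such as a queued-retry helper depend only on the interface, while the backing store can be a file, etcd, S3, or anything else.

```go
package main

import "fmt"

// Storage is a minimal sketch of the key/value interface being
// discussed (hypothetical names, not the real collector API).
type Storage interface {
	Get(key string) ([]byte, bool)
	Set(key string, value []byte)
	Delete(key string)
}

// memStorage is one possible implementation, backed by memory;
// a file-backed or etcd-backed one would satisfy the same interface.
type memStorage struct{ m map[string][]byte }

func newMemStorage() *memStorage { return &memStorage{m: map[string][]byte{}} }

func (s *memStorage) Get(key string) ([]byte, bool) { v, ok := s.m[key]; return v, ok }
func (s *memStorage) Set(key string, value []byte)  { s.m[key] = value }
func (s *memStorage) Delete(key string)             { delete(s.m, key) }

// persistItem stands in for a caller (e.g. a queued-retry helper)
// that only knows the interface, never the concrete backing.
func persistItem(st Storage, key string, payload []byte) {
	st.Set(key, payload)
}

func main() {
	var st Storage = newMemStorage()
	persistItem(st, "queue/0", []byte("span batch"))
	v, ok := st.Get("queue/0")
	fmt.Println(ok, string(v))
}
```

The design point is that only the interface needs to live in core; any concrete implementation can stay in contrib.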
C
That's exactly as it is; you're right. There is a separate interface which extensions can implement, and this is one of the possible implementations. Przemek, what Bogdan is saying is that we can move the interface to the core, and queued retry in that case can start using this interface, while the actual implementation, the extension itself, remains in contrib, maybe for now; we do not necessarily have to move it. That's a possible option.
G
My point being: it allows you to make progress, and then, in three weeks or one month, when we are ready with this implementation and confident in it, and if we really need that thing in the core, we can re-discuss it. But I think it gives you a way to move forward without being blocked. That's what I'm looking for.
G
Okay, so I think we are done with this topic. Do we have anything else?
N
Yeah, hey, yeah. I opened an issue a while back, and I apologize: I hadn't realized that there was some renewed conversation on it until Alolita...
N
...pinged me. It sounds like maybe there was some agreement that the OTLP exporter should require a scheme for its endpoint configuration, and Alolita requested that I submit a PR, which I'd be happy to do. But I just wanted to clarify that this was in fact decided. And I guess that's my first question: is this gRPC or HTTP?
G
...one, but the gRPC one does not have a schema. The gRPC one is always using HTTPS; that's the protocol. You cannot use HTTP.
N
...familiar with, but I looked at Java as well, and they both require a scheme, based off of the specification here, which doesn't seem to be specific to just HTTP. And I think, Tigran, you noted at one point that you believed the wording here was not accurate, in which case maybe this is just a spec clarification that needs to be done, versus a bug fix.
G
...changing anything. If the gRPC... so this is an environment variable, okay. So when you use this environment variable in gRPC, we can still use it without a scheme. It's fine; it's an implementation...
C
We have two possibilities here: we either require Java to support both with and without a scheme in the URL (the URL being just the endpoint), or we do that in the collector. Right now Java is considered stable, so I don't think we can completely remove that, that is, make it invalid to specify the scheme in the environment variable; we should continue allowing that.
B
This property is also supposed to support things like DNS names and URLs here, so that the gRPC client can load-balance between the endpoints.
B
What's that? DNS... so the gRPC clients allow you to specify a target using an HTTP schema or a DNS schema. When you use a DNS schema, it will go to the DNS and resolve the IP addresses for that name, and use those addresses when doing the calls in a load-balancing fashion.
G
I'm happy to support "https:" something; I don't know whether the Go gRPC code works with this, or whether we have to strip the scheme in the gRPC case, because we use it as-is in the collector, if I remember correctly. But I have another question for Alan: right now, if you put "https://" in front of the endpoint, does it not work at all, or is it just that we suggest not to do it?
G
We call the Dial function, and the Dial function does not support this. We need to resolve the address and then call Dial; I think we need to call a net resolve before that, and I think both will work after that. So, Alan, one thing I would like to try out is whether we can make both work.
C
Okay, yeah. The question is: do we make it work in the collector, or in the SDK case? I...
F
So what we've done in the Go SDK (we recently had a PR to address this) is that we take the environment variable with the schema or without. Obviously, if it's not targeted at a gRPC exporter, it won't work without the schema, because it won't know if it's HTTP or HTTPS. But then, when we're setting up the gRPC exporter, we strip that schema off, and we use the indication of HTTP or HTTPS to determine whether we also set the insecure flag.
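The scheme-stripping behavior just described can be sketched in a few lines. This is only an illustration of the idea under stated assumptions (the `endpointFor` helper is hypothetical; the real Go SDK code differs, and the "secure by default" choice for a bare endpoint is one possible policy):

```go
package main

import (
	"fmt"
	"strings"
)

// endpointFor strips an optional http/https scheme from an OTLP
// endpoint and reports whether the connection should be insecure:
// "http" implies plaintext, "https" implies TLS.
func endpointFor(raw string) (target string, insecure bool) {
	switch {
	case strings.HasPrefix(raw, "https://"):
		return strings.TrimPrefix(raw, "https://"), false
	case strings.HasPrefix(raw, "http://"):
		return strings.TrimPrefix(raw, "http://"), true
	default:
		// No scheme: keep as-is and default to secure (one possible policy).
		return raw, false
	}
}

func main() {
	for _, e := range []string{
		"https://collector:4317",
		"http://localhost:4317",
		"collector:4317",
	} {
		t, ins := endpointFor(e)
		fmt.Printf("%s -> target=%s insecure=%v\n", e, t, ins)
	}
}
```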
N
We do in .NET as well, maybe.
G
Isn't that confusing for the users? Because that means it will work on port 80, but actually it needs to open port 443, because it's always going to be...
F
HTTPS. The port's normally included anyway, because it's 4317 instead of either 80 or 443. But yes, it's a question of whether you try to do the TLS initiation when you connect; the gRPC client is already always going to do that. The question...
N
Yeah, I'd have to verify this to be 100% certain, but, just as a point of comparison, I'm fairly confident that the .NET gRPC library itself requires that a scheme be present; otherwise it throws an exception on initialization. And I'm fairly confident that if you don't include a port, HTTP or HTTPS would default to 80 or 443, respectively.
G
That would be... that would be a good educational thing. I think, Alan, to resolve this... I think it has implications, and, as Anthony pointed out, there are some behaviors that people derive from this, like setting insecure or secure. It would be super good to clarify this in the specification as well, if possible, to craft a PR there, and we will definitely implement whatever comes out of that PR.
G
So we will accept any change that comes out of that PR, but I think it will be good to clarify this. So, first: HTTP versus HTTPS doesn't mean insecure versus secure in this case. Second: what is the default port, if not present, in the case of HTTP and HTTPS?
B
Yeah, on the collector side, when using TLS we just use the same port that we expect the gRPC listener to be listening on.
B
So if people don't specify the port, we just default to 14250. It doesn't matter if it's HTTP or... secure or not secure.
G
...points. Okay, let's... I think we need to clarify a bit more on this. I'm happy to support the scheme everywhere, now that all the others already support it, but I would like to be consistent and have a spec.
N
Okay, yeah, I'd be happy to draft up a PR with the spec first and continue the discussion there. I agree; I think it would be good to have some consistency between this and the SDKs, and it may mean work on both sides to make sure that we support it.
I
Yeah, I just wanted to call out, you know: this is based on the discussion we had last week, and I did file an issue requesting the Prometheus approvers list, just for, you know, code reviews on the collector.
I
I just wanted to call out: I think there is a whole influx of PRs based on our 1.0 backlog; phase one and phase two are in progress. So Bogdan and I have been working closely on tracking the progress. So thanks to Bogdan for doing some of the reviews, but there's a whole slew of PRs as a result which are queued up on the collector. Oh, there are not too many.
G
There are a lot of Prometheus things which I'm ignoring for the moment, because I'm focusing on other things, but for the Prometheus ones, as we discussed, I'm waiting for you to put "ready for merge" when that group has reviewed the PRs; just put "ready for merge".
D
Yeah, hey Tigran, I'm back. Okay, very quickly, sorry: we realized, thinking about it, that for our use case we don't really want the raw number of failed metrics or logs or traces that failed to send. Just the number of times that the exporter returned that it succeeded or failed to send a chunk is really more what we would be after. Would it be okay if we added that to the obsreport code, like the other metrics?
G
Ping me on Slack: you need only 10 lines of configuration of the OpenCensus library. You need to add another view that counts that for you; you don't need to change anything in the code. It's just configuration.
G
So, I'm flying back to Europe, so I will not have internet, but I will try to give you the hints right away now, at least. Okay.

D
Thank you, Bogdan. We appreciate it. Thank...
C
Right, let's start. The first item I put there is the body versus attributes issue, the issue that we've discussed for quite some time now. I tried to put together a set of kind-of litmus tests that allow you to decide where you put things. I'd like to know what people think about what I suggested there: kind of, instead of trying to radically change things in the body versus attributes question, just keep them as they are, but bring more clarity around it.
C
Okay, so do I take it as a signal that we should probably go with just that proposal? Any objections? Anybody who thinks we can do better, maybe?
C
All right, so the next one: it's been quite a while now that we have a log data model, and we have had the log data representation in the OpenTelemetry protocol for many months now, if not longer, with very few changes. I would say it has been quite stable. The only thing that I can remember that we changed is the recent addition of bytes as a data type. So it seems like it's serving well so far, and we have implementations of many components in the collector.
C
I don't think we uncovered any missing things in the log data model; I can't remember anything, at least. So I guess I wanted to ask: what do people think about the idea of moving to the next stability level, beta, which still allows us to make changes, but is a signal to people that this is now considered more stable than it was? Experimental means that we can change it at any time; beta means we can change it only once every three months, which historically we didn't even do.
L
Yeah, I think this should also encourage vendors to support logs coming in over OTLP, yeah.
C
Yeah, it's kind of... we now believe this is more ready for implementation by vendors, because internally we have already used it for quite some time, and it's an indication of our confidence, to some degree, that this is good. So maybe start using it, and then let's give it some more months; anyway, this is the path to declaring it fully stable sometime this year. I believe it should happen; I don't see why not.
C
Right, this is good; we're done with the items I added. So we have one more item, regarding the prototype in Python. Am I pronouncing the name correctly? Yes.
M
Yes, hey everyone. I work with the Python SIG, and we're planning to do an experimental release, so I'm working on prototyping. We are not expecting any drastic changes to the OTEP, so I have read through it, and the current suggestion of using handlers makes it very easy for users to use this SDK with very minimal changes, and the prototype follows it very closely.
M
It mimics the tracing SDK part, so we are also not expecting any radical changes. So basically, we want to do an experimental release and gather feedback. So yeah, I wanted to join this SIG and share that we're doing this.
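As a very rough illustration of the handler approach being described: a standard library `logging.Handler` can map each `LogRecord` onto log-data-model-like fields and hand it to an exporter. The class names and field names below are made up for this sketch and are not the actual opentelemetry-python API.

```python
import logging


class InMemoryExporter:
    """Toy exporter that just collects exported log data (illustrative only)."""

    def __init__(self):
        self.exported = []

    def export(self, record_data: dict) -> None:
        self.exported.append(record_data)


class SketchHandler(logging.Handler):
    """Stdlib handler that converts LogRecords into log-data-model-like dicts."""

    def __init__(self, exporter):
        super().__init__()
        self.exporter = exporter

    def emit(self, record: logging.LogRecord) -> None:
        # Map stdlib record fields onto names resembling the log data model.
        self.exporter.export({
            "body": record.getMessage(),
            "severity_text": record.levelname,
            "timestamp": record.created,
            "attributes": {"logger.name": record.name},
        })


exporter = InMemoryExporter()
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(SketchHandler(exporter))

logger.info("hello logs")
```

The "minimal changes" property mentioned above follows from this design: existing applications keep calling the stdlib `logging` API unchanged, and only the one `addHandler` line is new.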
C
This is great. You're currently an approver in the Python SIG, right?
M
Yes.
C
Okay, okay, that's great. Did you have a chance to talk with Owais about what you plan to do?
M
Yeah, he knows this. I posted on the Python channel, and he also agreed that we want to do this so we can gather the feedback early, and then, if there is something that comes up at the log SIG level, we can share it from the Python SIG to the logs group. Okay. Okay, that's great.
C
This is excellent, thank you. So I guess the OTEP is there; you can consider it to be the first attempt at describing how the SDKs should look. If you find that something is missing, or you would like something to be changed in that OTEP which specifies the SDKs, let us know.
M
I already did a minimal prototype last weekend, and I posted it and asked on the Python SIG to give it a look. I'm planning to create a PR later this week, most probably by the end of the week. So I pretty much did the prototype part; I'll have to write the documentation and more tests. So yeah, that's where I am right now.
C
That's great. When you have something that you would like to show, it would be great if you can come and show it in this SIG; I think everybody would be very interested to see what you have implemented.
M
Not right now; I'm still going through the OTEP, reading it again and trying to implement it. If there is something that I feel is missing, or something that I need clarity on, I'll post on the channel. Okay, sounds good.