From YouTube: 2021-05-18 meeting
A
So I was saying: can anybody help with triaging, or maybe give me the link to that free item? Because I don't... oh, what do we use as a victory? Okay, let me share my screen. Okay.
B
I'll give you one example: normally I would expect them to be stateless. However, you might want to keep some resource, and when people finish using the tracer provider, they might decide, okay, the tracer provider owns all the underlying exporters, processors, and samplers, and call, say, dispose or garbage collection on all of them. If we now share them, that means the implementation has to take care of the reference counting.
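For illustration, here is a minimal sketch of the bookkeeping that sharing would force on an implementation; the `Closable` interface and `RefCounted` wrapper are hypothetical names, not anything from the spec:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for an exporter/processor/sampler.
interface Closable {
    void shutdown();
}

/** Wraps a component shared by several tracer providers and only shuts it
 *  down when the last owner releases it. */
final class RefCounted<T extends Closable> {
    private final T inner;
    private final AtomicInteger refs = new AtomicInteger(1);

    RefCounted(T inner) { this.inner = inner; }

    T get() { return inner; }

    /** Called when another tracer provider starts sharing this component. */
    void retain() { refs.incrementAndGet(); }

    /** Called from a tracer provider's shutdown; the underlying component
     *  is only disposed once every owner has released it. */
    void release() {
        if (refs.decrementAndGet() == 0) {
            inner.shutdown();
        }
    }
}
```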
C
I think we should do at least a recommendation about that. I agree that every SDK probably has different expectations already, but you know there is this PR by Christian Neumueller regarding passing resource and instrumentation library to the sampler, and both of them mention a potential optimization of just relying on whatever the tracer provider already has, and this is how it came up, you know, around whether it should be implemented one way or another. So, for those cases, so we don't end up talking about that again in the future.
B
Okay, so if we recommend that, what does it mean for languages that chose not to follow it? It seems the spec is trying to create a bias. I think the spec normally should mention what should be done or must be done, and a recommendation can give some implementation detail, but for this one it seems more like a contract.
C
Yesterday it was mentioned that, in general, some implementations expose their processors. Yes.
D
My impression was that this has come up a bunch because people want to run different resources inside of a process, but I know Bogdan doesn't like that practice and has argued against it. So I feel like the data model, more than the actual support for multiple resources, is usually the sticking point.
C
Sounds good, sounds great, yes. Let's wait for Bogdan's and anybody else's opinion on it.
C
For the release, I would say yes, because it's going to impact this PR that I already mentioned a couple of times. I forgot the number, but it's going to impact that.
A
Yeah, that's about publishing the schema files in the website repository, so I guess I'm going to take it myself, most likely. I already submitted a small PR which checks that the schema files in this repository are correct, and then there will be an action, most likely manual at least initially, which publishes from this repo to the website.
C
Yeah, go ahead.
F
Yeah, so this came about when Leiden and I were looking through the spec and how the latest changes were implemented, and we noticed that there's a mention of the dropped attributes and dropped events counts, and now there's a mention of dropped links counts as well, in the non-OTLP exporter specification, but there was no mention anywhere else as to where these counts were coming from.
C
I wonder... I mean, it does sound right. I wonder if we need an entry in the compliance matrix for this one.
F
Yeah, maybe; we definitely should.
A
Yeah, I think I checked the Java implementation and maybe some others. They do have these limits for the number of attributes, but I don't think they are populating the dropped attributes counts. I think I checked that, if I'm not wrong; I may be wrong. So maintainers, please correct me.
F
Although reading Java is a bit of a stretch for me. But yeah, I don't know whether specific attributes are being recorded, as opposed to just the count that gets generated at export time, which I thought was a little bit confusing, because then it would in a sense mean that we're carrying all of these extra attributes in memory until we export them.
A
John, you're saying we actually populate the dropped counter in the exporters? Did we do that? No?
E
Well, I mean, we calculate the difference between the total number that were attempted to be recorded and the total number that were actually recorded, but we just do the subtraction in the exporter. We do keep track of that: the number of calls that were made to set attribute, even if we don't keep all of them.
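A minimal sketch of the bookkeeping John describes, with hypothetical names rather than the actual SDK code; it also shows the repeated-key effect raised next:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: count every setAttribute call, keep up to `limit` attributes,
// and compute the dropped count by subtraction at export time.
final class AttributeRecorder {
    private final int limit;
    private final Map<String, Object> attributes = new LinkedHashMap<>();
    private int setAttributeCalls = 0; // every call counts, kept or not

    AttributeRecorder(int limit) { this.limit = limit; }

    void setAttribute(String key, Object value) {
        setAttributeCalls++;
        // Keep the attribute only if we are under the limit, or the key is
        // already present (an overwrite).
        if (attributes.containsKey(key) || attributes.size() < limit) {
            attributes.put(key, value);
        }
    }

    /** Computed at export as (calls attempted) - (attributes kept). Note the
     *  effect mentioned below: setting the same key repeatedly inflates the
     *  call count while the map keeps a single entry, so the reported dropped
     *  count need not line up with intuition. */
    int droppedAttributesCount() {
        return setAttributeCalls - attributes.size();
    }
}
```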
E
I mean, this does have a little bit of a weird effect. If someone calls set attribute with the same key over and over and over again, you can get some strange results; the numbers won't necessarily line up precisely the way you might expect them to. So that is, you know, a minor consideration.
C
Yeah, so the change sounds good. As I said, maybe we should include an entry in the matrix, probably.
H
Yeah, I hear what you're saying about exposing this as a metric, and I think that should be an option, but I think this should actually be part and parcel of the protocol, just from a raw observability standpoint of: I want to know right where the failure is. I think if we turn this into a metric, it divorces it a little bit too much.
H
I like where it is right now, personally, because we've leveraged this to diagnose a bunch of weird bugs that we found. As a meta note, I found it really hard to understand what fails in OpenTelemetry when things go wrong, and this is one of the few pieces of observability we have for that. I think we should maintain it a little bit as-is and kind of improve it, but I think metrics will also have something similar.
H
Dropped attributes count is... that's an attribute of the span, yeah. You could also have it be an attribute of a metric or an attribute of a log, and it would have the same meaning. I see.
B
Let's see, I guess this one might need some additional clarification. I can see some murky areas: for example, if we're already at the maximum number of attributes and now we're adding another attribute, what should we do? Should we take one existing attribute and remove it?
H
So one thing I want to call out here: this is the exporter to non-OTLP formats, right? What's interesting is it says that it must be reported as a key-value pair associated with the span. However, in Google Cloud Trace there actually is a dropped attribute count that we report directly against, which is not a key-value pair; it's actually part of the protocol.
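As a sketch of the two strategies being contrasted here, assuming hypothetical types and an illustrative attribute key:

```java
import java.util.Map;

// Hypothetical stand-in for a vendor protocol's span message.
interface ProtoSpan {
    void setDroppedAttributesCount(int count);
}

final class DroppedCountMapping {
    /** Target protocol has a first-class field (what H describes for
     *  Google Cloud Trace): set it directly. */
    static void exportNative(ProtoSpan out, int droppedAttributes) {
        out.setDroppedAttributesCount(droppedAttributes);
    }

    /** Target protocol has no such field: fall back to a key-value pair on
     *  the span, as the non-OTLP exporter text requires. The key here is
     *  illustrative, not the spec's actual definition. */
    static void exportAsAttribute(Map<String, Object> outAttributes,
                                  int droppedAttributes) {
        if (droppedAttributes > 0) {
            outAttributes.put("otel.dropped_attributes_count", droppedAttributes);
        }
    }
}
```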
H
So to comment on this issue, I might open a thing about, you know, vendor-specific protocols having non-attribute counts, but that's not a big deal here. I really think it does need to be called out in the SDK spec that the SDK is doing the dropping.
C
Yeah, I suggest we merge this one because it's correct, and then, George, if you could open an issue, somebody can add further clarification on that, because at this moment we have nothing around this. So this is a good start, and it's not incorrect; it's just incomplete.
D
When we do report these as key-value pairs, it would be nice if we had a schema that would say what they mean. So I don't think of these as key-value pairs that you use to characterize the span; you think of them as key-value pairs that characterize the collection that happened. This is not part of the identity of the span that has the dropped attributes.
H
So there's a bug open to add it to metrics, open against the data model SIG, that hasn't come up yet; so it's only there for traces and events, logs, and resources.
D
It's interesting that you're thinking about dropping attributes for metrics; usually we intentionally aggregate them away. So I'm curious how we'll interpret such a field, but it's not too important.
C
Yeah, to me, trying to force the user to do that is too hard. Honestly, I don't think it's worth the effort unless you find an easy way to really make sure that they don't do that.
E
I mean, I don't think there's any way. This is going to happen even if users don't intend to do it, because one library might just be appending attributes to a span while another library created the span, and that library's instrumentation could be using a different version of the schema, and they could have both of those libraries in their application.
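A hypothetical illustration of that situation; the `Span` interface and attribute names are made up for the example, not the real API or semantic conventions:

```java
// Two libraries in the same application, built against different schema
// versions, touch the same span.
interface Span {
    void setAttribute(String key, String value);
}

class SchemaMixExample {
    static void run(Span span) {
        // Library A, built against schema v1.0, created the span and set:
        span.setAttribute("http.url", "https://example.com/");
        // Library B, built against schema v1.1 where the attribute was
        // renamed, appends to the same span:
        span.setAttribute("url.full", "https://example.com/");
        // Neither library did anything wrong in isolation, yet the span
        // now mixes two schema versions.
    }
}
```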
A
I mean, there are probably ways to try to stop it; the question is whether we should. You could declare the schema of a span and refuse to accept attributes which are not of that schema, but that requires revamping the API, right, so that you actually explicitly specify the intent there: what are you doing with the attributes, from the schema perspective? I wouldn't say it's impossible, but I would say it's probably such a big stretch that you should probably not even try to do that.
A
Yeah, that's it, right; probably very undesirable should we truly do that. Which means this is probably limited to some recommendations in the documentation and stuff like that. And also, in the semantic conventions and the language libraries (working on Go particularly right now), we're going to make it easy to do the right thing, but it's unlikely that we can prevent doing the wrong thing entirely, and we probably shouldn't even try.
C
Sorry, I was muted. I think we should wait for Christian to reply, but tell him that we are really thinking of closing it.
B
Okay, I'll stop sharing and finish this part. We can move forward. Sorry for taking more time.
C
It's great; it's actually very useful, in my opinion. Okay, the rest of the agenda is what was brought up by John Watson: metrics stuff, right?
E
Even if we don't have a solid, strict SDK definition in place, it will also allow us to start using the metrics API inside the tracing SDK to do meta recordings, like we were talking about, about dropped attributes or dropped spans, etc., without having the tracing SDK depend on non-stable artifacts, non-stable libraries.
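A minimal sketch of such a meta recording, assuming a hypothetical stable metrics API surface (these interfaces are illustrative, not the actual OpenTelemetry API):

```java
// Just enough API surface to show the tracing SDK recording its own
// "meta" telemetry through a stable metrics API.
interface Counter {
    void add(long delta);
}

interface Meter {
    Counter newCounter(String name);
}

/** Span processor sketch: when the queue is full it drops the span and
 *  records the drop through the metrics API instead of through any
 *  non-stable SDK internals. */
final class QueueingProcessor {
    private final Counter droppedSpans;
    private final java.util.Queue<Object> queue = new java.util.ArrayDeque<>();
    private final int maxQueueSize;

    QueueingProcessor(Meter meter, int maxQueueSize) {
        this.droppedSpans = meter.newCounter("tracing.sdk.dropped_spans");
        this.maxQueueSize = maxQueueSize;
    }

    void onEnd(Object span) {
        if (queue.size() >= maxQueueSize) {
            droppedSpans.add(1); // meta recording: the SDK observing itself
            return;
        }
        queue.add(span);
    }
}
```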
E
So it's more of a question and a hope, obviously not a requirement, but I wanted to get other people's thoughts.
B
Yeah, so in the Thursday meeting last week, I think everyone agreed that it would be great, and I'm just bringing this to the spec meeting to see if this is something we want. I can update the schedule so we can try to push the API to move faster than the SDK. I'm thinking probably move the API a month ahead of our previous schedule and move the SDK one month behind the previous schedule.
B
Did Bogdan say no, or do you think it's not going to work? I don't see any issue.
D
Okay, thanks. Okay, do you have an opinion? So, to clarify, this is: we're going to try to accelerate the API spec so that we can get the basic API for metrics into a place that the spec says is... I don't know what the right word to use is. I'm worried about the goal, though, which, as John mentioned, is to do something with SDKs, and I'm not sure.
D
I just worry that that's going to raise more questions than it addresses for us; I don't know, maybe that's a vague feeling. And then my other concern is that we haven't talked about batch recordings or batch instruments, which to me feels like a pretty important feature that I'm not going to be able to get on without. So that makes me wonder: are we going to do a basic API and then wait to do a batch API?
H
Yeah, so can I rephrase the question: do we think we can write instrumentation against just the API, with no SDK behavior? Is that a useful thing to be able to do? Do we think we can? Do we think that would be a good idea? And I'm going to come back to, I think, the whole notion around views and the weird kind of bucketing things that we want to see.
H
The other end of metrics is still not quite as stable as I would like to see, but I think it's worth a shot. I think it's worth trying, to the extent that we can: give instrumentation authors a stable set of simple instruments to go write some instrumentation against. This is worth it, and if they hit any difficult questions, we say stop working on that instrumentation.
H
If that's the expectation you set up, I think this can work. But if they run into a problem with instrumentation, they can't do some weird advanced metric, that's just going to cause the same level of churn and cycling that we've had in the past, which I think we need to avoid. So if the expectation is: you run into something that's not supported today in the API, you stop, then it's okay.
H
I just want to call that out, and to outline Josh's concerns as well; I think it's directly related to that of whether we support batching. In my mind: can we do some of the advanced metrics people will want? Well, no; what we're defining right now is simple. So when you get beyond the boundaries of what it does, don't go further.
B
Yeah, so here's my thinking. One is: if we want to push the API before the SDK, what we should do is accelerate the SDK view part, because we need to leverage the learning there and come back to work on the hint API, and this is the required thing in our original scope. So we should have the hint API before we release the spec as recommended for experimental, or even feature freeze or stable.
B
The batch thing was not in the scope for this phase, so what we should do is try to release the API without batch, and batch can be added as the next step; we already figured it can be added without having to break the existing APIs. So we should just control the current scope instead of trying to increase the scope dynamically. And the second thing is, as I mentioned, the SDK part: so we try to accelerate the API part.
B
Given we have limited energy, we might slow down on the SDK part. So the only thing where we need to make progress this month is the view part; then we come back to the hint part. That means I'm not going to spend a lot of time on the SDK. I'll just polish the API spec, make sure we have the hint API, and close all the remaining issues on GitHub, but not do anything additional. And with that, I think the outcome comes back to John's original proposal.
B
I think that would give many libraries good confidence; they can start to instrument without having to worry that later we're going to break them again. But, as Josh mentioned, if there's a certain library where they realize, oh, we want some additional feature, for example reporting things in a batch, we don't have that now; the recommendation is then you wait until we have that, but for now you don't just go do it randomly. And regarding whether it makes sense to have the API spec without the SDK spec:
B
I think yes, because the SDK can remain experimental. Take Java, for example: you can have a relatively stable API that you give to the instrumentation libraries so they can do the work, and then give them an experimental version of the SDK. You can just tell them: go and try if the end-to-end works; you can get the data. The SDK part, I have no problem with it being experimental. As long as you can get the data, that means your instrumentation is fine.
D
And we're just going to say something like: we expect this to work. You know, you'll get cumulative data; you won't have the option of deltas or something that we've talked about for the SDK. It's just: something will work; we're not going to specify very carefully what you're going to get, though.
C
Very good, thank you for that. Next one: Tigran, please review the schema file check PR.
A
So, what's the plan here? The plan is that the next release, 1.4 (I think it's 1.4, right?), will be the first release with a schema file for OpenTelemetry, which is a tiny file. There's nothing in it; it's really just the very first version. But I want to have some sort of automation so that we make sure we do not forget to release schemas as we make specification releases.
A
Typically, it's going to be just a copy of the previous schema. So this one just adds the checks that make sure that every version number specified in the changelog also has a corresponding schema file, and I will also be adding a bit more automation to make sure that the schema files are actually published on the website, as I was talking about earlier. So this one is just the beginning; I'm just introducing this one.
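A sketch of what that check could look like, assuming a hypothetical changelog heading format and `schemas/` directory layout (not the actual repository's script):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Every version heading found in the changelog must have a matching
// schema file; exit non-zero otherwise so CI fails.
public final class SchemaFileCheck {
    private static final Pattern VERSION =
        Pattern.compile("^## v(\\d+\\.\\d+\\.\\d+)", Pattern.MULTILINE);

    public static void main(String[] args) throws IOException {
        String changelog = Files.readString(Path.of("CHANGELOG.md"));
        Matcher m = VERSION.matcher(changelog);
        boolean ok = true;
        while (m.find()) {
            String version = m.group(1);
            Path schema = Path.of("schemas", version); // e.g. schemas/1.4.0
            if (!Files.exists(schema)) {
                System.err.println("Missing schema file for version " + version);
                ok = false;
            }
        }
        System.exit(ok ? 0 : 1);
    }
}
```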
C
Really, yeah; I really encourage people to review that. The effort that Tigran has been putting in is very important, and we want to have all of that as part of the next release.
C
The next items are small items from previous meetings. The first one is regarding span status. I think we have more or less agreement on that; Nikita, thank you for putting that together. I think we only have some last questions from Sergey, so anybody else who wants to check that out, please do it. I think it's a small thing, but it's important enough that it would be good to have it in the specification sooner rather than later. And the final one is the service name one; Nikita also created that PR.
C
There were some reservations regarding this, and Nikita updated the PR to mention that these specific values are special; that's why they also get their own environment variables. Bogdan, you had some reservations against that, so maybe you want to mention them. If you didn't read the latest iteration, you may want to do that after the call.
C
But please don't forget. It has been there for two weeks now, so it would be really good to decide yes or no.