From YouTube: 2020-11-05 meeting
B: Yes. Since I see in the agenda we have the triaging at the beginning: do we do that at the beginning of this meeting, or do we want to do the third item first? What do you prefer?
E: I'd like to continue, if that's okay. I can time-box it to, say, seven minutes; if we get through one, two, three, four, five items and go "oh geez, we've still got a bunch of agenda items", then let's stop there. Okay. Now I'll give a quick recap of where we left off. We just had a first triaging session, where we set GitHub labels for the Collector SIG, and we're going to be prioritizing and categorizing issues in the Collector repos. I will share my screen.
E: So we can all look at the same thing: the labels we decided upon most recently were priority labels. So triaging will mean deciding on the priority, and also on the related OpenTelemetry spec area an issue belongs to. These other labels I think we will punt on for now, as far as whether we apply them or not.
E: Yeah, all right. For detailed notes on what we covered in the triage session, I made a section down here for what we did in the past half hour, so we can track what happens. And just as an FYI, there's another three-hour session tomorrow at eight o'clock a.m. PST. So I'll go up to seven minutes on this; again, there's one tomorrow at 8 a.m.
E
I
haven't
created
a
calendar
item
for
it
because,
but
I
could
use
some
help
in
doing
that.
If
you
I.
H: Well, this is a P2 at least, because Amazon really, really needs this, and there are a bunch of efforts related to it, including a proto change that was…
E: If I can request: if there is a PR available that fixes this, just put the "fixes" keyword in the description of the PR, followed by the link to this issue, and GitHub will cross-link it here.
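For instance, a PR description using GitHub's closing keyword might look like this (the issue number is purely illustrative):

```
Handle empty endpoint in the exporter config.

Fixes https://github.com/open-telemetry/opentelemetry-collector/issues/1234
```

When the PR merges, GitHub cross-links it on the issue and closes the issue automatically.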
B: Okay, probably important for JavaScript people, so I'd say P2 too.
H: Perfect. This is not a spec issue; it's really our build process.
E: Okay, great. So we've arrived at that, and we've got others which we'll take care of tomorrow. So that is the triage section.
E: And since the last release 16 days ago, there have been 54 commits to master. I've only skimmed them, but at least two of them look like bug fixes that people might care about and might want to pick up; the other ones are mostly spelling fixes, README changes, and whatnot.
E: But what I'd like to ask is: what are the scope and the due date for the next upcoming milestone? Is there something we can shoot for?
B: The question here is also: do we want to use milestones for this at all? We tried at some point to have one for every release, but it just didn't give us anything other than the maintenance burden of dealing with GitHub, so we dropped that. Now we just have those two, as you saw: the GA one and the backlog. I don't think that's really necessary, because we're releasing fairly regularly on a bi-weekly cadence.
B: I don't know if people see any reason for using milestones for every single minor version. Maybe we can do that, but I don't really see it for now.
E: But what it does help communicate is the due date, if you use it to set one.
B: But again, I don't think we necessarily need to use milestones for that. We do the releases regardless; if we have milestones, we then have to go ahead and add stuff to them, marking issues for milestones, and I don't know if anybody has the time and resources to do that right now.
J: It's also the reverse, right? If we release every two weeks and there are issues assigned to a milestone, unless something is a major blocker we're going to ship anyway. So then you have to go in and remove issues from the milestone because you're going to ship regardless.
B: Sorry guys, I have to drop off, so I won't be able to stay for the rest of the meeting.
F: Yeah, if we want to skip to that, then okay. I don't actually have a really strong opinion about this. The question is: do we want to stick with the port we're using, which is in what I think IANA calls the user-assigned space, where there are no reservations and there's always a risk of a conflict? Or do we want to ask IANA for a reserved port for us to use, the downside being that we would have to switch to it?
H: As for the port: we discussed it, and we'll see the process here.
B: Yeah, so we did discuss it. We think it's likely worth doing, for a couple of reasons: not just because it's the port we already support, but because the port number we use now is a dynamic port number. There's no guarantee it's not occupied exactly when you're running the Collector, which may be the bigger issue; we probably don't really care that much about it being registered or not.
B
So
we're
going
to
do
these
two
things
that
is
listed
here,
I'm
already
looking
into
the
possibility
to
have
transmission
period
where
both
port
numbers
are
supported
and
bogdan
is
going
to
look
into
whether
we
want
to
have
one
shared
port
for
grpc
and
http.
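A transition period like that could be sketched as running the OTLP receiver on both ports at once. The snippet below is purely illustrative, not a decision from this meeting: 55680 was the Collector's default at the time, and 4317 is the port IANA eventually registered for OTLP/gRPC; the exact config schema varies by Collector version.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317    # reserved port
  otlp/legacy:
    protocols:
      grpc:
        endpoint: 0.0.0.0:55680   # old default, in the dynamic port range

service:
  pipelines:
    metrics:
      receivers: [otlp, otlp/legacy]
      exporters: [logging]
```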
J: Hey Andrew, so for the thing about "no milestone", or whether we need a backlog milestone: the original reason we did that was that we were basically treating it as triage. If an issue wasn't assigned to a milestone, we considered it untriaged.
E: Since I'm helping with the triage, I can stick to whatever process you guys desire. If you'd like to have the milestone, then once we've touched an issue and we know we've made a decision, I can put it in there. The only wrinkle is if we have one of the release labels, like "after GA", where we don't put a priority.

E: That's the only time where it's like: oh well, it doesn't have a priority, so did anyone actually triage it?
H: Okay, we discussed the next item, correct, Bogdan?
H: Okay, perfect. Next one is Aman.
I: So, okay, some context on metrics aggregation. There was this issue opened a while ago by a past AWS intern. The issue, pretty much, is with Prometheus as a backend.
I: As far as I'm aware, when you export metrics to Prometheus, it expects them to have a cumulative aggregation temporality. Because of that, at the moment, if any metrics coming into the Prometheus remote write exporter have a delta temporality, it just drops them; it pretty much only supports cumulative metrics. This also causes issues in the Prometheus exporter as well: the metrics pretty much just don't show up the way Prometheus expects, and there are a couple of open issues regarding that.
I: So this processor was a proposal made a while back to fix that issue: pretty much, in the processor stage, it converts delta metrics to cumulative metrics. I'm wondering if this is something there's still interest in proceeding with, and whether this is the correct design, according to the maintainers, for how this issue should be resolved, because right now the Prometheus receiver, as far as I'm aware, only creates cumulative metrics.
I: So if you use the Prometheus receiver with the Prometheus remote write exporter, that works as expected; no metrics are dropped. But let's say, for example, you have the OTLP receiver, which can produce delta metrics: the Prometheus remote write exporter would just drop those, so you would be losing some metrics there. This processor is pretty much designed to solve that issue. And I know there was some similar discussion with the statsd receiver as well, about how metrics aggregation should be handled in general.
I: I'm just curious about opinions on this.
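For reference, the delta-to-cumulative conversion being proposed can be sketched minimally as follows. This is an illustration only, not the Collector's actual code: it keeps a running sum per metric identity, which is exactly the state that becomes a problem when one source's points are spread across multiple collectors.

```python
# Minimal sketch (not the Collector's implementation): convert delta data
# points to cumulative by keeping a running sum per metric identity.
from collections import defaultdict

class DeltaToCumulative:
    def __init__(self):
        # Keyed by metric name plus sorted label set; this only works if
        # this one process sees every point for a given series.
        self._running = defaultdict(float)

    def process(self, name, labels, delta_value):
        key = (name, tuple(sorted(labels.items())))
        self._running[key] += delta_value
        return self._running[key]

conv = DeltaToCumulative()
print(conv.process("requests", {"path": "/a"}, 3))  # 3.0
print(conv.process("requests", {"path": "/a"}, 2))  # 5.0
print(conv.process("requests", {"path": "/b"}, 1))  # 1.0
```

A real processor would also have to track start timestamps and handle restarts (sum resets), which is where most of the complexity lives.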
H: By the way, do you want to be the one to do it? So, there are a bunch of problems, and I think that solution is very naive: it works only if the source is connected to only one collector.
H
If,
if
you
have,
if
you
have
a
deployment
of
the
collector
behind
the
load,
balancer
is
going
to
be
a
way
harder
problem
for
you
to
solve.
H: Unless I'm wrong, but that's my understanding. Do we aim to solve that problem? We probably don't have the time and resources to do it right now, the global problem of deploying behind a load balancer. If you deploy it locally, and you know that you will receive all the points, I think that should be possible.
I
Okay,
but
dysfunctionality
doesn't
currently
exist
in
any
in
any
place
right
just
in
order
to
convert
delta
to
cumulative
metrics
like
I
saw
in
like
one
of
the
comments
that
apparently
according
as
for
otlp
v
0.5,
it
is
included
over
here,
but
I
haven't
been
able
to
find
this.
Do
you
have
any
idea
about
that?.
I: So, that being said, is there any reason to pursue this proposal, then, for this metrics aggregation processor? In your opinion.
H
If,
as
I
said,
if
it's
necessary
for
you
and
it's
going
to
solve
a
real
problem
for
you,
I
think
we
can
add
this
into
the
country,
the
metrics
transformation
or
transform
the
processor,
okay
and
yeah.
So
I
think
you
you
may
want
to
think
to
start
a
small
pr
initially
to
mention
this
or
cece
james
james
from
google,
james
bendy.
I
Yeah,
I
think
it
does
like.
I
looked
at
the
metrics
transform
processors
like
the
original
issue
for
it
and
I
think
that's
functionality
that
they
wanted,
but
didn't
have
time
to
create
at
the
time.
So
sync
up
I'll,
follow
up
on
that
yeah.
H
Sync
up
with
james
as
the
owner
of
that
and
and
yeah
okay
yeah
awesome.
Thank.
K: Right, yeah, it's me, so maybe I can talk a little bit about this. This one is about converting resource attributes to labels. I have two things to discuss here: one is regarding the issue, and the other is the draft PR I sent today. In the issue, if you go here, you see some of our members sharing different views; Josh MacDonald said it might be a thing to discuss at the specification level or something, so I replied back and shared my views.
K
So
I
think
we
have
several
options
in
our
hand,
and
maybe
we
need
to
pick
one
to
final.
Like
finished
the
implementation,
I
guess
so
so
I'm.
H: I think Josh's problem is not with the place where this should live; it's more with the fact that, in the resource, we have attributes with different types, like ints and booleans and such. How do we convert them to strings? I think that's what he refers to. I need to carefully read the comment, but this is my understanding, and there are other comments about where to put this. I don't know what is unclear at this point.
K
Oh
so
the
first
thing
is
like
so
we
are
converting
all
the
resource
attributes
to
matrix
levels
by
default.
So
is
that
okay,
like
so
you
in
our
first
meeting,
I
remember
like
so.
Your
solution
was
something
like.
If
any
user
does
not
want
to
convert
all
the
resource
attributes
to
metric
levels
by
default,
they
can
exclude
them
using
the
resource
processor
and
by
default,
all
the
resource
attributes
will
be
converted
into
a
metric
level,
so
that.
K: I want to share a few things here. You suggested two different options to me: one is implementing a consumer, and the other a helper function. I did both here, but I was only able to test using the helper function with the logging exporter: in the logging exporter, before exporting the data, I just call this helper function that converts resource attributes to metric labels, which is kind of easier to test. But honestly, I'm kind of confused about the architecture for making it a common utility for all exporters.
K
Maybe
I
am
not
understanding
the
utility
or
exporter
helper
architecture
to
make
it
a
common
option
for
all
the
explorer.
So
I
was
expecting
like
kind
of
a
more
detailed
guideline
from
you
like,
so
we
have
both
of
the
options
implemented
here.
How
can
just
make,
if
you
add
some
comments
or
give
some
guidelines
I'll
review,
that
again.
K: Okay, yeah, thank you so much. That's all for this one; I'll follow up on GitHub. I have another proposal, for filtering metrics using resource attributes. I wrote the description, so maybe I can walk through it. I just want to see if it's the correct path; then maybe I can start the implementation. Before that, I want confirmation from you guys, maybe your opinions.
H: Delegation is key here, that's the only thing: I want you and Jake, who are more in touch with the filter processors and such, to comment on what exactly this should look like.
H
Thanks
anything
else
here,
okay,
so
I
think
we
we
are
waiting
for
these
guys
to
comment
rafael.
L
Hey
yup,
I'm
here
yeah
hey,
so
we
have
this
proposal
for
implementing
the
ecs
service
discovery
for
the
prometheus
receiver
and
the
way
that
we
propose
it
to
be
our
solution.
Our
proposed
solution
is
to
create
a
new
extension
that
performs
this
service
discovery
logic
for
the
prometheus
receiver
yeah.
So
basically,
it's
all
written
there.
L
Basically,
what
it
would
do
is
it
would
just
query
the
ecs
api
for
task
metadata
and
then
output
that
those
script
targets
into
a
certain
file
that
the
promoter's
receiver
can
can
then
scrape
from.
J
Yeah,
I
think
so
maybe
we
can
go
into
this
in
depth
offline,
but
I
think
you
can
replace
this
with
a
use,
a
observer.
J
So
with
these
kinds
of
concept
of
observers,
looking
like
monitor,
kubernetes,
cluster
or
host,
and
so
we
would
add
just
a
ecs
observer
and
then
from
those
observers
you
can
dynamically
start
other
receivers,
so
you
would
like
to
start
a
prometheus
receiver
based
off
on
it.
So
if
you
I'll
link
the
the
docs
to
you,
receiver
creator
take
a
look
at
that
and
the
observer
docs
and
then
and
then
we
can
yeah.
I
can
work
with
you
to
get
that
added
as
an
observer.
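As a rough sketch of the observer-plus-receiver-creator pattern described here: the `ecs_observer` extension below is hypothetical (it is the thing being proposed), and the field names are illustrative of how existing observers and the receiver creator are typically wired together.

```yaml
extensions:
  ecs_observer:                  # hypothetical: the observer proposed above
    refresh_interval: 60s

receivers:
  receiver_creator:
    watch_observers: [ecs_observer]
    receivers:
      prometheus_simple:
        # start one simple Prometheus receiver per discovered endpoint
        rule: type == "port"
        config:
          endpoint: '`endpoint`'

service:
  extensions: [ecs_observer]
  pipelines:
    metrics:
      receivers: [receiver_creator]
      exporters: [logging]
```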
L
It's
gonna
append
yeah,
so
it's
it's
not
gonna
create
a
new
receiver.
It's
gonna
extend
off
of
the
properties
receiver,
so
it's
gonna
basically
create
a
file
that
populates
your
script
targets
for
the
receiver
yep.
So.
H: Kevin, another option for us: we have this simple Prometheus receiver, which is able to scrape only one endpoint, and we have the ability to create dynamic receivers. So you may be able to use the receiver creator, the thing that Jay wrote, point it at things, and it will be able to start new instances of simple Prometheus that are each capable of scraping only one endpoint, and let it do that anyway.
J: The endpoint here will give you more routines and stuff, yeah.
L: Coming and going, so, right, okay, sounds good. I think Detriment is here as well; I'm not sure if he has any other thoughts to chime in here.
C
Yeah
I
I
heard
that
so
yeah.
We
can
evaluate
the
new
solution.
Yeah
okay
sounds
good
thanks.
Everyone.
H
Yeah,
oh
by
the
way
couple
of
updates
for
everyone.
Kevin
just
was
promoted
to
be
an
approver
for
the
collector
country
repo
this
morning
and
we
just
added
andrew
as
a
triager
for
both
rappers
to
help
us
with
the
triage
congrats
kevin,
and
thanks
for
all
that.