From YouTube: 2020-12-17 meeting
A: All right, yes, let me give just a summary of the latest triaging status for the collector issues. We've made some progress over the past week, up five percent from last week. Again, no P0s; we do have two P1s, and they have assignees. Yes, they have assignees, and the rest are P2s and P3s. In the contrib repo there are no P0s or P1s.
A: The triaging has come up with P2s and P3s, and we can time-box, as a standing item, collector issues and contrib issues. I saw some new ones that have been filed. We will start with the bugs, to begin.
A: "Custom logger to our application." Why would we need to look at this bug? So, they build custom collectors, they start the service, and they want to pass a logger, but they don't have an option to pass the logger. They only have an option to pass the logging options there, which I think does not include what he wants.
A: They want to customize the logger in some specific way, to change the name of a field, which I don't even know is possible to do, but mostly this is a feature request. In that case it's a feature request, yes, which may not even be worth it, because I don't know exactly what that is.
A: I don't know whether zap allows that, but we do not allow them to pass enough configuration, enough things, when we create this logger. Okay, yeah, I don't know why we would want to do that. Why we don't: because we have a limited set of options that we include. Okay, yeah. To me, this is something we probably shouldn't be doing; like, we shouldn't be doing everything, right?
A: Let's keep it if you want, but I think we should, yeah. Okay, I'm just saying you shouldn't close it for the wrong reasons. If the answer is correct, and you say, okay, we don't want to support this, for whatever reasons, sure. But closing it saying "this is not a question for us, it's a question for zap"... you're right, you're right, I understand that. Okay, let's make it a P3, and it's a feature request, not for GA, for after GA.
A: Appropriate spec! No, let's keep this one back.
A: Yeah, let's mark it as "we don't have exporters or receivers here", right.
A: Okay, where are we?
A: Yes, okay, all right, we got that. We still have two minutes, so I'm going to jump back to opentelemetry-collector and take a look at the issues that are not labeled as bugs there.
A: The other approach could maybe be that you ask the collector to do sort of a dry run and print the configuration without running, or output it to a file. There are maybe a few different options, a few different possibilities. Printing it to the logs, I mean, yeah, maybe, why not; after the startup, you can...
A: Yeah, yeah, anyway, I think this is useful, not critical. I guess P3.
A: Okay, all right. We've hit the time box for this agenda item, so we can carry on to the next one.
A: From the last meeting, I put in some suggestions of refactoring in order to build out labels and things, but I still seem to be restricted in my ability to click buttons and do stuff. You're able to do this for the specification repo, right? That's right! So what permissions do you have there? Okay, it's the write permission that you need to modify labels.
A: I can do, like, editing stuff on the specification labels, but not the collector labels. Yeah, usually so; normally it's approvers who get the write permission. I am not sure how we want to proceed on this. I mean, we made an exception for that repo as well, just for me, and yeah [crosstalk].
A: I can tell you when I'm done, so that way you can nix my extra permissions afterwards, you know. In the specification repo, did you use the same team as all of the other approvers, or were you added separately?
A: Okay, I added another item down here about whether we want to cancel meetings, yeah, at least the one on the 30th. That's the one where, you know, everyone's going to be partying like it's 2020, the end of 2020, right? Yeah. Are we still expecting people? I don't know if people intend to attend, at least people who are here in this call. Anybody think they will be attending?
A: I'm happy to keep sharing, but if someone wants me to relinquish the screen share, just let me know. Yeah, please go ahead. So, just for some background: I mentioned this about two or three meetings ago, I think, when I was just starting to work on this particular task. It's about aggregating trace data, span data specifically, and emitting metrics on that span data.
A: Emitting, like, RED metrics. So I've got a proof of concept, which is on this screenshot here, and I wanted to get some feedback on the proof of concept, just to make sure I'm going down the right track.
A: So the main concern for me is the usability of it, in terms of the configuration in particular. There are some constraints in the OpenTelemetry Collector where there's a need for at least one receiver in a pipeline. How I've designed it is that in the trace pipeline we'd have one processor which does the aggregation of metrics and does a no-op on the traces.
A: So it just passes traces through but aggregates metrics and then writes them to an exporter, because currently only exporters can receive data, I guess, in that sense. At the moment you can't really send metrics directly to a receiver. In order to discover the metrics exporter, I need to put it in a pipeline, and there's a constraint where, if I don't provide a receiver, then there'll be an error reported, so I just basically put a dummy receiver in there.
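For illustration, a minimal sketch of the kind of configuration being described here, assuming hypothetical names: the `spanmetrics` processor, its `metrics_exporter` setting, and all endpoints are illustrative, not confirmed details from the meeting.

```yaml
receivers:
  jaeger:
    protocols:
      thrift_http:
        endpoint: "0.0.0.0:14278"
  # Dummy receiver: never receives anything; it exists only to satisfy
  # the "every pipeline needs at least one receiver" validation.
  otlp/dummy:
    protocols:
      grpc:
        endpoint: "localhost:12345"

processors:
  # Hypothetical processor: aggregates RED metrics from spans, passes the
  # traces through unchanged (a no-op on traces), and looks up the named
  # exporter at runtime to hand it the aggregated metrics.
  spanmetrics:
    metrics_exporter: prometheus

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
  logging:

service:
  pipelines:
    traces:
      receivers: [jaeger]
      processors: [spanmetrics]
      exporters: [logging]
    # This metrics pipeline exists only to instantiate the prometheus
    # exporter; the dummy receiver feeds it nothing.
    metrics:
      receivers: [otlp/dummy]
      exporters: [prometheus]
```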
A: So you're discovering the exporter by the name of the exporter. Can you show the config? Sorry, did you scroll down? Again, so you have the name of the exporter to which to send, okay, I got it. And you're finding the exporter at runtime and sending the data there? Correct, exactly, yeah. That's one possibility. And for that you need to have a pipeline, but one in which you don't have a receiver; so that's the problem.
A: So instead, use the network, right? Use the regular network to route the data from the processor to the receiver, which is listening locally. All right, so basically you configure the receiver with a host and port, and then, right, right, it's a regular receiver. Let's say it's an OTLP receiver, and in your processor you just send OTLP data.
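A sketch of that loopback alternative, assuming the processor can emit OTLP over gRPC to a locally listening receiver; endpoints are illustrative.

```yaml
receivers:
  # A real (non-dummy) receiver listening on the loopback interface; the
  # trace-side processor would push its aggregated metrics here over the
  # regular network instead of discovering the exporter in-process.
  otlp:
    protocols:
      grpc:
        endpoint: "localhost:4317"

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```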
A: It just avoids that particular problem where there is a requirement to have a receiver. Okay, okay. So another option is to create a new component, a "receiver-exporter" or whatever you call it, which implements both factories and plays a role as both a receiver and an exporter, and then just put it in the pipelines as an exporter and as a receiver.
A: I will think about this, because you are not the first one who has asked me for a way to have a stop in a pipeline, and this is about chaining pipelines, right? And specifically, this is also different data types of pipelines, and again, this is something that, yeah, came up a few times. We don't have a proper generic mechanism for supporting this. Yeah, we don't. I don't think we can come up with something before the GA.
A: Okay, one option is to say, okay, you have to define some receiver there, even if you're just not using it for now. Yes. Actually, another question I had was what the motivation was for that validation to require a receiver, because what I tried was actually removing that validation, and then, if you scroll down further, I provide a config. Basically, the motivation is that in the vast majority of cases you do need a receiver.
A: Otherwise your pipeline is just not doing anything, right? That's the motivation, so we help the user to do the right thing. That's the only motivation, yeah. But Albert, how would you configure it if there was not that requirement? Can you show me? Yeah, it's going down a little bit further, sorry, after this comment, yep, this one here. So this is an example working configuration without the requirement for a receiver. So basically, what the spanmetrics processor does...
A: So that's the new processor that aggregates metrics; it discovers the Prometheus exporter and writes straight to it, and the Prometheus exporter then does its thing, so it exposes its HTTP port to serve those metrics. Yeah, it will work; it's just that it's a pipeline which is not really a pipeline. You're just using it to instantiate the exporter to which you can send the data. Correct, yeah.
A: So the downside of this approach is that, if you do need to do processing on the metrics, then you'll need to create another function like that. Yeah. But how do you get the data? So wait a second. Right here, if I'm looking correctly, you receive from Jaeger, you go into spanmetrics, correct? Yes. And then spanmetrics produces metrics, yes, and then how?
A: How is batch going to work there? Because, it just happens, because of the reflection, the interface, we implement both. No, no, no, the traces pipeline works as it works, right? It's a usual traces pipeline. The spanmetrics processor produces completely new metrics from the observed trace information, and then it directly feeds the data into the Prometheus exporter. It finds the exporter using the name of the exporter. We have an interface in the host which allows components to find exporters; remember, we implemented that for metadata forwarding, we needed that.
A: I see. So, just to understand one last piece: the spanmetrics processor right now passes to the next processor the data that it receives, and the metrics to the Prometheus exporter.
A: I think, to be honest, we need to come up with a better way to design these pipelines, and if you have any time and interest in this, I would be very happy to review a proposal of maybe refactoring our way to define pipelines. In the meantime, as Tigran pointed out, there are different hacks that we can allow you to do, or we can do, and yeah, the simplest workaround is just what you have, right? Just require that there is some dummy receiver defined.
A: It's okay, not very nice, but it will work, yeah. I can do that, yeah. And to answer your question, Bob, I'd love to spend time to refactor, but before refactoring I would like to see a proposal: how would the config look, what are the capabilities, and stuff. Once we have that, we can start working on it.
A: But I would like to see that proposal first, and to just understand what the requirements are that we want to achieve out of this refactoring. Is it to better satisfy chaining of pipelines? So, one requirement is these mixed-signal pipelines like this one. Another requirement that I heard would be to kind of be able to put in kind of a stop and then define... so, for example:
A: Right now we have only one point of fanning out and one point of fanning in, so our pipelines look like this: a receiver fans out to all the pipelines, and then all the pipelines fan in into the exporter. So, requirements that I heard were like: what about, we want to have a receiver, then a bunch of processors, and then fan out to different pipelines, and so on. So there are a bunch of things that we need. Essentially, we need to be able to build a processing graph.
A: Right now we have a very limited schema here. For the particular use case that you have, what would work is to allow the pipelines to have one input data type and another output, so basically the data type of the output is not the same as the input data type. Today it's a requirement that if you receive traces, you always process traces and you export traces. If we could allow the processors to transform the data type, to switch the data type in the middle of the pipeline, it would solve your problem.
A: You would need to have two trace pipelines here. One would be just the regular one, the one that does the batch processing, retries and all that stuff, and you would have another one which does the transformation: it would receive the exact same traces, a copy of the traces, and would produce metrics as a result of that, and the end of that pipeline would be the Prometheus exporter in that case. But that's another possibility; there are a number of options here, right.
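A purely hypothetical illustration of that two-pipeline idea; the collector does not support pipelines whose output data type differs from their input, so the second pipeline below is exactly the thing said here to be unsupported today.

```yaml
service:
  pipelines:
    # The regular pipeline: batching, retries, export, as today.
    traces:
      receivers: [jaeger]
      processors: [batch]
      exporters: [jaeger]
    # Hypothetical: receives a copy of the same traces, switches the data
    # type in the middle, and ends in a metrics exporter.
    traces/derive-metrics:
      receivers: [jaeger]
      processors: [spanmetrics]
      exporters: [prometheus]
```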
A: So what you have here right now is a forking in the middle of the processors, which is a different way of doing that, right? So it's not entirely clear what the best approach here is; that's why Bogdan is asking for a proposal, right? We'll need to think through all the details here. Okay, cool, sounds good. Thank you. Most importantly, it needs to be backwards compatible.
A: We shouldn't break stuff unless we find very, very good reasons, and that's why I want to see it sooner rather than later, because we can, not necessarily break stuff, but we can deprecate this way of defining pipelines, and we have another entry, "new pipeline" or whatever, and before GA we can remove that at one point.
A: Yep, that's the ticket that I submitted. This was a continuation from, I think, a sync we had about a month back regarding this ECS service discovery extension for Prometheus. I've been doing some discussions with Jay, ran some metrics as he suggested, and yeah, I just wanted to continue the discussion here.
A: Yeah, maybe, yeah, kind of; it's been a while, I could also use a refresher. Okay, sure, yep, yeah. So, to give context: we wanted to implement this extension that performs ECS service discovery for Prometheus. The proposal that we had was to create an extension that does that:
A: It queries this ECS API, figures out which services we want to scrape from, and outputs that list to some file that we define in the config, and once we have that config file, the Prometheus receiver would just read from that SD config file that we were writing to. That's how it's currently done in CloudWatch, and so that was our proposal. But from our last sync, the suggestion was that we would use the observer / receiver creator framework that you guys have.
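A sketch of the file-based handoff being proposed; the extension name and its settings are hypothetical, while `file_sd_configs` is standard Prometheus configuration.

```yaml
extensions:
  # Hypothetical extension: polls the ECS API, figures out which services
  # to scrape, and writes the target list to result_file in Prometheus
  # file_sd format.
  ecs_observer:
    refresh_interval: 30s
    result_file: /etc/ecs_sd_targets.yaml

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: ecs-tasks
          # Standard Prometheus file-based service discovery: the file is
          # re-read whenever the extension rewrites it.
          file_sd_configs:
            - files: [/etc/ecs_sd_targets.yaml]
```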
A: I was looking into it, and I wasn't too sure, with regards to performance, whether it's worse or better, so Jay recommended that I perform some tests to get the numbers out, to crunch those numbers, and from those tests it does look like the current receiver creator / observer framework with EKS did incur a much higher, like, it scaled worse as the number of scrape targets increased, versus the standard service discovery.
A: So the two experiments were one receiver for all scrape targets, and one receiver per scrape target, and the one with one receiver for all scrape targets performed better than the one with one receiver per target. And then, I think after I posted those results, Jay mentioned that he was working on this more native, simple Prometheus receiver that might actually make it better.
A: So I implemented a PoC of that receiver, and it still scaled a bit worse than the one receiver for all targets, even though it was a really lightweight native receiver. And so, from that, I think the direction that we wanted to take is to just go ahead with our initial proposal, with the one receiver for all scrape targets. And I think, from what I saw from this discussion, this doesn't seem like the direction that you guys are headed.
A: So we're thinking of just implementing this on our end. Can you clarify what it means for you that it performs better? Yeah, so if you scroll down here, I have a table; so the first, yeah, the first table, the one above that, yeah. So this was the first example, and in this experiment we compared the results of the current receiver creator with the simple Prometheus receiver. So setup one is, I believe...
A: Setup one was the one with the one receiver per scrape target. So this one, setup one, consists of the receiver creator and observer framework, right? Yeah, exactly, this thing. And then setup...
A: ...setup two is the standard Prometheus service discovery which is built in, in Kubernetes, and so that was setup two. And so, just looking at these results as the number of targets scaled, we see that setup one scales worse than setup two. And then, if you scroll down a bit further, I implemented this PoC of the native Prometheus receiver in OpenTelemetry and ran the same experiments on that, and you see here that CPU does scale a bit worse.
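Roughly what "setup two" amounts to: the Prometheus receiver using Prometheus's built-in Kubernetes service discovery, so one scrape loop discovers every pod instead of one receiver instance per target. A sketch; the job name is illustrative.

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: pod-metrics
          # Built-in Prometheus Kubernetes SD: discovers all pods with a
          # single configuration, rather than a receiver per scrape target.
          kubernetes_sd_configs:
            - role: pod
```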
A: Even though memory is better. I mean, I would expect it to be better, just because it's much more lightweight and doesn't perform that translation back and forth, but we still see that scaling in the CPU that's worse than setup two, which is why we decided to go ahead with our initial proposal.
A: I think, when there is one receiver, I guess it just makes a single request to get all the data, and the overhead per request, I guess, is spread out, right? Possibly, yeah, I'm not too sure exactly, yeah. Wait, wait! Do you scrape the same target? So, we have multiple pods that are running. So do you scrape every pod, or do you scrape the kubelet for every pod?
A: I'm not sure what, could you explain, like, what do you mean by scraping the kubelet for every pod? So, is this ECS? Sorry, I don't know how, sorry, no! This is EKS, EKS. Yes, so you do have the kubelet, the daemon process, on every machine, correct? Right. So, for getting the metrics that you want from these pods...
A: Do you scrape an endpoint directly on the pod, or do you scrape the kubelet for that endpoint? For which setup, you mean? All the setups. Where are the metrics? Where do the metrics live?
A: Where do the metrics live? You mean, like, where am I getting these metrics from? Yes. Okay, so we installed the CloudWatch agent alongside, in EKS, and then what it does is it collects the metrics from the cluster and sends them to CloudWatch, and from there I monitor the performance usage of the collector itself.
A: Are there 69 different endpoints that we... there are, yeah, so there are 69 different pods, but, like, with the same port, yeah. But no, I think for the pods you actually get the metrics from the kubelet, which is the same endpoint for all of them. Am I not asking the right question? David, you have a better... yeah, I do.
A: No, he's actually not trying to get CPU and memory utilization for containers; he's trying to get metrics from the applications themselves, so he is scraping each of the pods. Okay. So you literally scrape an endpoint exposed by every pod. That's right! Okay, and...
A: Okay, and this is... can you show us the code and the prototype? Because this is worrying me, that we should do better on these, yeah. So yeah, here it is. Most of the work here, I think, like, most of the work here I got from Jay's PoC; I just implemented some of the additional translations, but yeah, this is pretty much the code here. I think this is a task, and a scrape test. Yep, yeah, here's the conversion code.
A: And when you say setup two... okay, I understand setup one, I'll look at this. Okay, this is the pull request, 150. But for setup two, what code do you use? Setup two, I just use Prometheus with the standard discovery; like, if you go back to the issue, I gave you that config in there, you just scroll up a bit more, yeah. Where is it... here, yep. So this is my config for setup two.
A: Okay, I'll look into this, because this is very intriguing, very interesting, and I think we should not have a problem with this, but I may be wrong. It's unclear why there is such a big difference, right, if it's different polls. I think, in my experience, one of the biggest CPU usages, when it comes to one percent or two percent, are the timers, the fact that you have to configure an individual timer for every one of these.
A: So your CPU, and those are 0.1 or something like that, but times 69, that adds up to almost exactly the difference that you see there. So, the timers: because we don't share the timers between them, every one has its own timer.
A: So every one schedules the timer, the kernel wakes up a thread, schedules a thread, and so on, all the work that is needed there. It's just a bit each time, but it adds up when you have a lot of them. And yeah, just to add on: 69 just happened to be the maximum number of pods I could run with my current EKS setup, but for a customer it might scale up even more than that, right?
A: So I think that there could be work done here to actually go with this approach that you guys are proposing, to really fine-tune and figure out exactly where this overhead is being incurred and how to optimize it. But I think that for our use case, since we require this for our own services quite soon, actually... I think Min is in this call, he's the engineer that I'm working with on this. Maybe, Min...
A: ...if you want to jump in on anything, to provide more feedback. Yeah, I would like to hear that. What I'm trying to say is: if this is a short-term solution, go for it, right? Long-term, I would like to resolve this right, later, and you should be willing to change if we provide a better solution for you. That's what I'm trying to get from you: like, don't make a final decision. Is this a temporary, right-now solution? Sure. Is this the final solution?
A: I think Jay's proposal should be the final solution, and we should figure out why we do a bad job there, and my guess, again, is the timers, that we need to actually batch them together, and then we will get a win from that. I see. Also, like, it's also possible that they're all firing at, like, the same time, and then the load is spiky as well; like, do you want to spread the scrapes across some interval of time?
A: That's another option. Or move them together and execute them in one goroutine, execute them in one go. Min, you said something, sorry for the interruption. Yeah, yeah, no problem, yeah! My point is, so, right now, since we are trying to put in the features that the current CloudWatch agent already has there, for that one I'm thinking the temporary solution: either we can continue to investigate with Jay's solution, or can we contribute our current solution from...
A: ...you know, the CloudWatch agent, to upstream. Then we're going to bring a new extension, you know, a new plugin or extension. We can write the endpoints right into a static SD file, so we have one receiver that we can use to scrape all the endpoints. That's the thing.
A: You know, I was wondering, for the temporary solution, should we directly contribute our code into upstream for right now, and then later on, once we have a result with this investigation, we can keep working on this one, then we switch over. That wouldn't block our project, yeah, yeah. So, but I don't think for setup two you need any code. Am I wrong?
A: Yeah, sorry, sorry, okay, yeah! I was just going to say, for setup two we actually do need code, because this is just EKS. We currently don't have ECS service discovery, and Prometheus itself doesn't have built-in ECS service discovery, for some of the reasons that they had upstream. But for ECS, I think Prometheus defined a discovery interface, and with a bit of hacky code that lives in the ECS observer...
A: ...you can make that an ECS Prometheus discovery interface. Look into that; maybe that small hack can live inside the observer that you have right now: make it an observer for ECS, and make it also do the discoverability.
A: So, just to clarify what you're saying: are you suggesting to create a new observer that performs this discovery and then, for now, just dumps it to a file like we're saying? Or maybe I'm just not getting what you're trying to say here. Prometheus has this notion of discoverability, discoverable targets, or whatever it's called.
A: So, if you have the same code base that you have implementing both interfaces in your custom... because you will have your custom distribution, correct? And in your custom distribution you can point this, the same code, the same thing, to be a Prometheus discoverability thing. Let's see, yeah, I'm not too familiar with Prometheus itself to know, like, whether that's... yeah. Min, do you have any comments on this? Yeah, from what I learned...
A: ...I know the current Prometheus discovery doesn't support... that's the only thing, I think it doesn't support the ECS discovery at all, right. That's why we're saying we want to make some change somewhere in the collector upstream code, to contribute an extension that does that, you know, to support that ECS discoverability there. That's my point. Like, you already have one, if I'm not mistaken, yeah, in another project, yeah. Yes, we do have one!
A: I'm sorry, probably I didn't... I don't know, no. Currently I don't think we have that, I don't think so. I could be wrong, though, but I don't think we have one; I can just double-check.
A: I don't think there is any ECS one, yeah. I don't think we have one. We have the Kubernetes one, right.
A: Just that will allow you to use this as an observer for ECS, and when we are ready, you don't have to do anything; you will just use our mechanism. And then, in the meantime, if it implements the discovery things that Prometheus needs, you can hook that into the Prometheus receiver as one of the things that Prometheus uses to discover. Yeah, yeah, okay, makes sense, yeah. That makes sense. Okay.
A: I understand. Again, so you're saying to create this ECS observer that potentially implements both of the kinds of discovery techniques that we mentioned here, but the receiver creator direction is, like, you know, yet to be implemented, and then we can just go ahead with implementing the current, like, SD file technique, and in the future we have some configuration to toggle that on and off. Yeah, I don't know what you are doing on your side...
A: ...how the Prometheus side is. That's right! That's right! Okay. And so you will need that. What you're proposing means that this functionality will need to be there, right? That's how it works in setup two. Is that what you mean? I think that's not exactly what you want.
A: This is an extension. In setup two, the proof of concept, it's just an extension which queries the ECS API, as a result of querying figures out what the endpoints are that need to be scraped, writes the data to a file in the format that is understandable by the Prometheus receiver, and the Prometheus receiver sees this file. It knows, it's configured, the user says where the file is.
A: You actually specify the file name twice, right? Once in the extension, once in the receiver. So the extension and the Prometheus receiver communicate through the file, the same way as our existing observers communicate via the interface. But yes, well, in a way, right. But that's an internal, well-defined interface; in this case it's a custom thing that only works for this particular extension and for the Prometheus receiver.
A: It's not terrible, yes. But what I'm hearing here is: if they implement an observer for ECS, which we can use, and everyone can use, as an observer, then for the moment they can add a small hack on that observer to have it dump an extra config into a Prometheus-like file, given a file name, and that's the only thing that they need. It's a small hack on some functionality that we already need; we need to have an observer for ECS.
A: Okay, I mean, is that good for you? Yeah, yeah. So I guess my team, probably we're going to try to do that first step. The first step: we're going to implement that ECS metadata query API and generate the data and dump it to the file. Then, in step two, we're probably going to take a look at that API, to follow up on the design that we have, then we go to the next one. That's our, you know, step two thing, so we'll probably move this way.
A: Awesome, thanks everyone, and thanks... and I think that, from this point on, Min's probably going to pick up on this issue, because my internship ends next week. So yeah, thanks, I guess, everyone, and Jay, for helping me with this.
A: I can create a PR, a draft PR, on the contrib repo, because I think right now that PoC that I created was on, like, our own internal repo, which might or might not get deleted. So, just to be safe, I'll create a draft PR on contrib, then, for the PoC for the simple Prometheus receiver, if you guys want. Do we want that, or...
A: One last item, yeah. So that's mine. I just want to bring to your attention that the collector CI/CD PRs are all finished, and, you know, I would appreciate a review before things kind of wind down for the holidays and our internship ends.
A: In CircleCI you can specify, like, a resource class, and the resource class that you guys are using for the load test is, like, a medium class. But in GitHub Actions you have, like, a one-size-fits-all type of runner, and so the problem is it's hitting, it's exceeding, like, the max RAM usage for that. So I was wondering, would it be okay to also, like, kind of tweak those in the same PR, or... yeah? That's fine! That's acceptable, right!
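For reference, the CircleCI knob being contrasted with GitHub Actions' one-size-fits-all hosted runner; a sketch with illustrative job and image names.

```yaml
# .circleci/config.yml (sketch)
jobs:
  loadtest:
    docker:
      - image: cimg/go:1.15  # illustrative image
    # CircleCI lets a job pick its machine size; GitHub-hosted runners
    # have no equivalent per-job sizing knob.
    resource_class: medium
    steps:
      - checkout
      - run: make load-test  # illustrative target
```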
A: That's expected, in a way, right, because these are different machines and stuff, yeah. But I would think about whether you really want to move this or not. By the way, to give you a heads up, we didn't merge anything until today, and probably later today we'll start merging some of these PRs, because we had a release and we didn't want to screw up CI/CD before the release. Makes sense. So we had the release yesterday, and we said that. But should we move the load test yet, Tigran?
A: That's the long-term plan, right, running this on separate machines. But if we keep it on CircleCI, I'm not sure... the goal here is to move away from CircleCI; if we keep the single job there, no, we're not achieving the goal, right? Let's move them right now. But the problem, the question here, is how stable GitHub is, right? It's not a problem to bump the limits by 10; that's completely okay. The issue here is: could it have high variance?
A: In that case it would be a problem, right, which we don't know. It actually does; I noticed it jumps around, like, a lot. It would be very useful if you can re-run this a few times and then maybe post the results of, like, four or five different runs, and we see how much variance there is. It would help us make the decision.
A: Okay, still okay, but it affects our timelines on how quickly, on for how much longer we can live with this less stable approach, right? Makes sense. Oh, you can do that tomorrow, or maybe let's just make a table and link that in the PR, yeah. That would be great, and thank you for this work. This is very, very useful and much appreciated.
A: And yeah, I mean, I guess, like, I think it's passing CI, it's just failing, like, the contrib test, but I hear, like, that's not a blocker. Because... sorry, sorry again, oh sorry, I just want to say, like, I think the only check that's failing is contrib tests, and I think that's not a blocker, because I think on Gitter you said that's just, like, a signal to the maintainers that they've got to fix something in contrib. It depends on what fails, right?
A: It may fail if we break something in the internal APIs. In this case, it appears that it's likely an unstable test, I would say, probably, from what it appears to be, which means, from the perspective of what you're doing, again, you can ignore this, but you should know this generally, right? Okay, thank you. All right, thank you.
B: There was another Prometheus issue, but it doesn't look like Gina joined, so I guess we'll have to pick it up at the start of January. Yep, looks like it. Also, can everyone check that I...
C: Perfect, see you next year, happy new year, everyone, and yeah, hopefully 2021 is going to be better. Thanks. Thanks, everybody, happy new year, happy new year.