From YouTube: 2021-03-25 meeting
A
Folks, if you have any other items that you want to add, let's just add them and then get started.
A
Right, yeah, I think we're good to start. We have a whole set of things that we wanted to discuss and cover today. Jana, did you want to start off with the questions that we had?
C
I was wondering if David is here. So, one of the things...
C
To do, yeah, yeah! That would be great, thanks. Yeah, let's wait for David.
B
He's here. Sorry, at least he appeared the moment I said that.
C
Yeah, so one of the things that we wanted to do this week was to review this prototype and the design doc. While doing so, I had questions, and I have more questions now. So, David, I was wondering if you had any chance to take a look, and how we should proceed with the conversation, because it's such a complicated, big topic. I was wondering if maybe this meeting might not be the easiest place.
E
Separate decision on this one.
A
Yeah, because you want to focus in and dive deep into some of the alternatives, and then what are the pros and cons there? Because we are getting geared up to start development.
C
Yeah, one thing that we want to do... ideally we need to start doing the development, and we need to have some consensus, because we have this May 31st deadline for the collector to be stable. If anything needs to go into the collector, or anything needs to change significantly, maybe it's a good idea to also finalize what the operator will look like. I mean, we may not end up going with the operator, maybe. So I'm just laying out all these possible risks.
C
So if we can agree on a design, maybe that would be useful at this stage. But I feel like the current proposal is really big, it's complicated, and I don't have much experience with controllers either, so what's the best way to go?
C
It's exposing the targets through a file; maintaining and running this thing, especially if you want to do it on behalf of your users, will be very difficult. So I have some concerns like that. I'm not even super sure that this is the way we should go, the operator approach, so I was just looking for a way... I mean, if you have any of this type of high-level feedback.
A
I mean, again, I'd also like to request others to add your thoughts here, because I do think that, as we get ready to implement: are we all in agreement on this design? And really going through the details and fleshing that out.
C
Yeah, one of the simpler approaches I was thinking of was: if you are going to do this as a StatefulSet, the primary can do the discovery, it has the shards, and then the other replicas can come in and query. There would be an API that they would use. It was kind of a simpler mechanic.
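The simpler mechanism sketched above (one StatefulSet replica runs discovery and assigns targets; the others fetch their assignments over an API) can be illustrated generically. Everything below is a hypothetical sketch, not the actual prototype: the hash-mod assignment and all names are invented for illustration.

```python
import hashlib

def shard_for(target: str, num_replicas: int) -> int:
    """Assign a scrape target to a replica by hashing its address (hash-mod).

    Deterministic, so any replica can recompute the same assignment.
    """
    h = int(hashlib.sha256(target.encode()).hexdigest(), 16)
    return h % num_replicas

# The primary (e.g. replica 0 of the StatefulSet) would run service
# discovery, compute this assignment for every discovered target, and
# expose the mapping over an HTTP API; each replica then scrapes only
# the targets mapped to its own ordinal.
targets = ["10.0.0.1:9100", "10.0.0.2:9100", "10.0.0.3:9100"]
assignment = {t: shard_for(t, num_replicas=3) for t in targets}
```

The trade-off raised in the discussion applies directly: the assignment itself is simple, but the discovery/API component is one more moving part to operate.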
C
You
know
with
this
proposal.
There
are
a
lot
of
like
moving
parts,
so
I'm
not
sure
like
how
easy
it
will
be
to
operate
it.
C
Sorry, I lost my connection. Are you asking whether there are any predetermined performance goals?
C
Yeah, so that's a good question. It is not really well defined, but we were looking at this from the perspective of: hey, what happens if we give this to a customer who has a thousand nodes? I can spec that out.
C
We
we've
been
running
like
some
performances,
for
example
with
the
existing
receiver
and
like
some
like
custom
sharding,
you
know
methods
it's
more
like
you
know,
for
us
like
what
is
an
average
large
customer
would
be
like
you
know,
a
thousand
nodes
I
can
specify,
because
I
need
to
explain
what
is
the
spec
of
the
node
as
well
and
like
how
many
like
targets
we're
talking
about
and
so
on.
C
So,
but
I
think,
like
we
are
not
even
at
that
stage
to
be
able
to,
you
know,
have
have
a
conversation
to
define.
Like
you
know,
what
is
our
performance
goals?
It's
like
what
is
our
like
how
how
complicated
this
should
be.
Is
my
question
basically
right.
Like
this
approach,
I
mean
I
never
understand
operators
and
I'm
not
an
expert
in
communities.
So
that's
why
I'm
asking
for
guidance.
C
No, I did make some benchmarks. Actually, this doesn't work after you have like 10 nodes. Let me actually pull out my performance benchmark document.
G
I have a question. In order to scale better and simplify the solution, can we remove some of the requirements, like supporting the WAL?
C
The
bottleneck
like
well,
we
don't
have
wall
right
now
in
the
existing
exporter
and
I
can
tell
you
how
much
it's
scaled.
C
Yeah, yeah, we didn't go with the DaemonSet because of the WAL, and because it's not available on Fargate. Those are the main two things. If we still think that we don't care about those two use cases initially, maybe that's a way to go.
B
I think that's the thinking behind the operator: preserving that optionality.
B
Without forcing the user into a premature choice, which we would then have to change in a way that requires their intervention. Is that fair?
C
Well, it depends, right? You may need to have new configuration and so on. If you make everything managed on behalf of the user, maybe yes, it's transparent, and they don't have to do anything; you can upgrade for them. But it might not be easy: once we put this out, we may need to break people. So, well, in terms of performance criteria...
C
So
I
had
this
ten
large
notes
I'll
give
you
the
spec.
Also
like
m
five.
C
Let
me
put
it
in
the
chat,
a
cluster
with
like
10
of
these
notes,
I
start
to
see
like
the
I'm,
I'm
seeing
the
receiver
being
a
bottleneck
once
I
have
like
100
replicas
of
a
web
app
that
exposes
a
thousand
metrics
or
if
I
have
200
replicas
and
it's
exposing
400
mix.
There
are
a
couple
of
dimensions,
so
I
I
can
share
you
the
the
numbers,
maybe
in
the
chat.
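For a rough sense of scale, the benchmark shapes mentioned above imply the following ingest rates. This is only back-of-envelope arithmetic; the 15-second scrape interval is an assumed Prometheus default, not a number from the discussion.

```python
def samples_per_second(replicas: int, metrics_per_replica: int,
                       scrape_interval_s: float = 15) -> float:
    """Approximate steady-state ingest rate: total active series divided
    by the scrape interval (one sample per series per scrape)."""
    series = replicas * metrics_per_replica
    return series / scrape_interval_s

# 100 replicas x 1000 metrics = 100k series -> ~6,667 samples/s
rate_a = samples_per_second(100, 1000)
# 200 replicas x 400 metrics = 80k series -> ~5,333 samples/s
rate_b = samples_per_second(200, 400)
```

Both figures are two to three orders of magnitude below the "few million samples per second per machine" ceiling mentioned later in the call, which supports the suggestion that the observed bottleneck is in the receiver rather than a fundamental limit.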
C
Yeah, this is one collector. It can scale up to about this: 200...
C
...200 web app replicas exposing 400 metrics, or a hundred of them exposing a thousand metrics. Beyond that, you see the receiver scrape becoming a bottleneck. So there are a couple of options; you may ask them to scrape less often, with a longer interval.
C
That's useful. Maybe we're solving the wrong problem. Let's take a look at the receiver then; I'm not sure where the bottleneck is coming from, to be honest.
H
Yeah, something else I've noticed over the years looking at different database systems is that they all seem to top out around a few million samples per second ingested. Not just metric systems, but everything else. It seems like the fundamental limit of a machine, when you get into cache coherency and whatnot, is somewhere around a few million a second. Redis is pretty much there too, but that gives you an idea of what the upper bound is.
G
Yeah, my proposal here would be to better understand the problem that we're trying to solve before jumping to a solution. We just jumped to solving the shards, the StatefulSet and so on, but we don't understand it. As Brian pointed out, let's have some numbers for one instance, let's have a comparison with the current Prometheus, let's have some target numbers that we are aiming for, and then, based on that, find the solution.
C
Right. The reason that we don't have any concrete performance goals is that we never had the comparison with Prometheus. The other thing is, since I joined this project, this was always accepted as an existing problem: we were always thinking, hey, we need to find a way to be horizontally scalable in some way or the other, without actually...
C
...thinking too much about what it means concretely in terms of numbers. So sorry, that's my fault: I haven't shared the existing benchmarks that we did a couple of months ago, but I'll spend time this week trying to use the Prometheus server as a performance goal, because that should be our goal. If we want to have compatibility, or provide this as a drop-in replacement, I think that should be the aspiration.
B
Jana, I recall that maybe four or five months ago you informally collected some anecdotes from people on this question. Were there any concrete numbers in there?
C
On the same cluster. And then there are customers who just don't believe in that; they follow more of a cellular approach where they put things in multiple clusters. So what an average large customer is, is a very difficult question. There's no answer to that; it's just a very philosophical thing in Kubernetes at this point.
C
That's why it didn't go anywhere, and partially that's why we didn't have concrete performance goals. But I think our goal should be to be very close to the Prometheus server. Or maybe we can say something like the Grafana Agent should be our aspiration, because the Prometheus server does a lot of other things too. So, yeah. I mean, Brian...
A
Do you know if there are specific performance goals published for the Prometheus server, or for the Grafana Agent, which we could use?
H
Not particularly. I think people just hack it together. If you look at my own GitHub, I have a random metric generator, which is a pretty worst-case one. There are a few others out there, but in general CPU is free; RAM is generally the concern, because once they get to that size, the CPU is never the problem.
C
Yeah, if there are numbers, yes. We also have generators and everything, so yeah, we have...
C
...a way of reproducing an environment, exactly. There are a few dimensions that we have to care about, and it takes a long time to generate these numbers. I spent so much time last time. So, you know.
C
That's one of the other goals that we had, but we couldn't really come up with what small, medium, and large mean. I mean, large is the harder one; small and medium are easy.
C
We need to figure out this threshold, right? At what point can we no longer tell people to use one collector per cluster? If we understand what that threshold is, then we can say: maybe in the future we're going to have auto-sharding, but for now we're going to go with this one-collector approach, and it's your responsibility to figure out how to shard. And I'm not sure if that threshold will be enough for us.
E
Yeah, also, on a related note, I found out that the receiver is not actually publishing a good set of metrics yet. So far, things like the number of targets, and the scrape duration spent on each one of those.
A
So, Wish, can you add an issue on that?
C
If we have the sharding, it will have its own CRD, so eventually we will add a configuration and people can go and enable auto-sharding without going through too much trouble. Because if we change ourselves from giving people a deployment YAML to maybe...
C
Let's assume that we are not going to do auto-sharding for now; we're just going to improve the performance, but in the future we want to have the opportunity to provide, maybe, auto-sharding. What I'm suggesting is: let's give people the operator as the canonical starting point, so we can always upgrade them. Or do you think it just doesn't matter?
C
Yeah, so that's more my question: the collector is not going to be breaking, but the operator may. Are we at a stage where we can also stabilize the operator? Or, I mean, let's go with the collector; I think that's the easiest, given we're going to stabilize it.
C
We could ask Juraci, actually. Yeah, Juraci is engaging with all the other conversations.
C
Let's at least not try the auto-shard thing yet; let's try to improve the performance first. Otherwise we are just optimizing for something that is not very optimal, right? Let's try to get the collector performance to a level that more resembles what the Prometheus server does, and then, on top of that, if there are still cases that we want to capture, like very large clusters and so on, we can do auto-sharding.
J
So yeah, one thing I just want to make sure of: are we totally excluding it as a goal, or are we just not prioritizing it right now, the auto-sharding and auto-scaling?
C
I think if we don't understand what the actual bottleneck in the current collector is, that's an issue; we shouldn't be building the operator before we address those issues. We could still do it, though. I want to actually get your opinion on this: we can provide people a generic solution even if there's no auto-sharding, right?
A
So I think we'd have to do both in parallel: optimize the performance of the collector and, at the same time, keep working on the operator. Doesn't that make sense? Because you have some basic cases that are already covered.
C
Yeah, I mean, we can come up with a model. The question is: is it really important to have auto-sharding or not? Because what Brian's saying is, maybe at that level there are not going to be a lot of people who actually need auto-sharding, and they're already very advanced users.
C
We need to do more benchmarking and understand the existing issues before we move on. We can still work on the operator, or on how we're going to do auto-sharding or not; we can do it in parallel for sure, if we have head count. At the end of the day, it's a question of where we are going to use the people.
A
So
would
the
action
item
be
to
again
I
noted
that
we'd
chat
with
jurassic
we'd,
also
kind
of
identify,
clear
thresholds
for
performance.
C
I wonder if Anthony is in this call, because I was wondering about Anthony's opinion.
K
If we don't have a good understanding of the performance and scaling characteristics of the collector as it is, if we think that the numbers you've derived are based on something that has a bottleneck that might be easily solved, we should address that first, before we worry too much about whether we're going to need to shard. And in parallel, figure out at what point we are going to need that, and how many use cases we are going to cover by adding that capability.
K
So we can figure out if it's something we should be focusing on, and if it is, then we should be able to work those in parallel: one effort to improve the performance of the existing, single-threaded collector, and then another to ensure that if we hit that wall, we're able to scale horizontally by sharding out. But we should figure out first whether we're going to need that second step.
C
Yeah, I wonder if you would be interested in doing the benchmarking, focusing on that aspect. I'm saying that I'm going to work on these things, but I have billions of other things to worry about, so I'm just wondering if you would be interested in benchmarking. I can share what we have done previously.
A
I mean, I think we need to actually have a performance testing plan, so we'll have to identify these requirements anyway. Yeah.
A
Okay, sounds good, and again we'll share it with everyone. So, folks, again I'd like to get feedback from David and Punya, as well as from Wish and Brian. I mean, everybody's input matters here, because, Brian, we kind of want to use some of your guidelines here for some of the thresholds and...
B
...use cases. So, just as a follow-up, we need to schedule time when Juraci can attend, since he's not here right now, right?
A
Cool. I think we also got Richard, all right, so that's the next step; I can follow up. We can ping Juraci and figure out a time. Okay, so Jana, was that related also to your question on... yeah, I'm just skipping that.
A
Okay,
cool
good,
so
richard
is
here
right
on
time.
He
had
a
question
on
some
of
the
compliance
testing
requirements
for
open
metrics
and
richard
again
wanted
to
get
your
you
know,
guidance
on.
A
How
would
we
go
about
doing
this,
because
what
we'd
like
to
see
is
you
know
some
of
the
guidelines
that
you
would
recommend
both
those
requirements
to
build
tests
for
ensuring
that
we
are
compliant
with
open
metrics,
not
only
formats
but
also
performance
and
yeah.
What
was
the
other
areas
we
were
thinking
of.
C
If
there
was
a
case
that
you
know
I
can
put
the
open,
telemetry
collector
in
an
environment
where
there's
like
you
know,
metrics
prometheus
matrix,
and
then
you
know
I
kind
of
collect
them
and
produce
maybe
remote
right.
Is
there
a
way
for
me
to
be
able
to
test
this
entire?
Like
you
know,
whatever
I
exported
to
be
100
per,
I
mean
in
terms
of
data
points
in
terms
of
format
compatible
with
what
server
is
doing.
L
There are several answers. For the scrape part, for the OpenMetrics slash Prometheus exposition format part, OpenMetrics does have a test suite, and I'll link it in a second. For remote read/write, as most of you in this call probably saw, but I can also share the link for this again:
L
We are working on standardizing the remote read/write protocol, basically just documenting what's there. That has two intentions. One: it allows future improvements and changes, because we have a stable thing where we know this works and is reliable, similar to the intention behind OpenMetrics. The other is to be able to actually create a test suite based on this, where we can just test, or have anyone test, against Prometheus remote read/write. All that being said, similar to the PromQL testing...
L
We
are
not,
or
we
are
pretty
certain.
We
will
not
be
able
to
to
have
a
test
suite
which
gives
you
a
hundred
percent
correct
result
without
human
interpretation.
L
That being said, I think it makes sense to write down all those interpretation guidelines and such, so it's not like we're being the bottleneck because it's less work for us. Also, the wider intention is to then have, on prometheus.io, test results where everyone can just point directly: hey, I am compliant.
L
"I have 100 percent compatibility," and just point to that proving thing. And I would highly recommend, or I expect, OpenTelemetry to also do the same, where we just put this as part of the test results once it's working, or even before that: we have this and that number, and you don't have to have a discussion every single time. You can just say: okay, here is the number, here are the areas of work.
L
There we go. Should I also link the PromQL stuff? Do you want to do in-flight PromQL as well as part of this, or do you want to do it in a different language?
C
Yeah, I mean, these are next steps for us, like phase two. I was more interested in collector behavior compliance, because there isn't much there in terms of interpreting things. As you said, just looking at the output, and how many data samples we dropped, and stuff like that. So anyways, it seems...
L
There is one additional way, but this is not a real test suite. If you want to test OpenMetrics and remote read/write at the same time, or roughly the same time:
L
There is always Prometheus itself alongside the agent: basically, pretending that the agent is your collector and then just comparing what goes into both of them and what comes out of both of them is a good way to ensure that you have hit all the snags. That is obviously not a test suite which has detailed test reports and tells you that this and that specific thing is wrong; that's unfortunately not the case, but it gives you a reference of what is considered 100 percent compliant.
L
Of course, the code in Prometheus is the reference for both remote read/write and for the OpenMetrics and Prometheus exposition formats.
L
That's fair, yes. For emitting OpenMetrics, the reference is the Python client; for ingesting it, Prometheus.
L
Okay, yeah, yeah, I know what you mean. Sorry, I was answering this under the lens of also putting stuff into remote write, because that is where, if you basically built a second pipeline and just pointed it at the same place, then you have, not a test suite, but you know whether you hit it right or not.
H
Yeah, I agree. Jana's question was basically "has my data gone missing somewhere?", using Prometheus itself. Yeah, that's a good idea. I would suggest the same for the question of whether the collector is producing good OpenMetrics. You can check the syntax, but, for example, if the histogram buckets were changing from scrape to scrape, or labels were appearing and disappearing, that wouldn't show up; you'd have to catch that by eye, basically.
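The "catch it by eye" checks mentioned here (histogram buckets changing between scrapes, labels appearing and disappearing) could in principle be partially automated. A hedged sketch follows, with an invented data shape standing in for parsed scrape output; it is not part of any existing test suite.

```python
def find_drift(scrape_a: dict, scrape_b: dict) -> list:
    """Report metrics whose histogram bucket boundaries or label keys
    changed between two scrapes.

    Each scrape maps metric name -> {"buckets": [...], "label_keys": set}.
    This data shape is purely illustrative; real checks would parse the
    exposition format first.
    """
    problems = []
    for name in sorted(scrape_a.keys() & scrape_b.keys()):
        a, b = scrape_a[name], scrape_b[name]
        if a["buckets"] != b["buckets"]:
            problems.append((name, "bucket boundaries changed"))
        if a["label_keys"] != b["label_keys"]:
            problems.append((name, "label keys appeared/disappeared"))
    return problems

s1 = {"http_req": {"buckets": [0.1, 0.5, 1], "label_keys": {"code"}}}
s2 = {"http_req": {"buckets": [0.1, 0.5, 2], "label_keys": {"code"}}}
drift = find_drift(s1, s2)  # flags the changed bucket boundary
```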
L
I'll just drop in one or two more links, so everyone watching this recording, or here, can just go to the meeting notes and see everything else I talked about. But that can happen while we talk.
L
Okay, thank you. Also, FYI, I know I shared it with some of you, but I don't know if everyone has seen it: there is actually a pull request, still against the Grafana Agent, not a Prometheus agent, where Tom played with having PromQL in flight, which is where my initial question was coming from. Of course, that was something which Alolita mentioned as a requirement on the OpenTelemetry side, so we tried to see if it's hard to do within the agent slash Prometheus. Yeah, I'll also link to this one.
L
It is more or less a proof of concept. It's not even at an experimental stage; it was just a "can we even do this?", and the answer turned out to be yes.
L
I don't think Tom pursued it since then. I also don't think it gathered that much attention or interest, but if there is substantial interest...
C
Do you want to move on to the next question? Yeah. My question was: are we tracking spec changes on the collector? You know, the histogram-related thing; it's going to be a part of the stability in the end, right? So can we safely say that May 31st is the timeline for all the spec changes to be going into the collector?
C
I mean, I was wondering if anyone is filing issues and talking to the collector folks to take the next steps.
M
So we just merged and released version 0.8 of the proto yesterday; Bogdan did that. It has a number of deprecations and renamings, trying to get us to a smaller protocol, basically. It's also more compatible with OpenMetrics.
M
We expect one more change to be released before then and, as far as I know, Bogdan is going to be quickly updating the collector so that we aren't delayed. I think we should ask Bogdan more about the timeline.
J
Yeah, so maybe I can share my screen. This is kind of a follow-up from our last week's meeting. I was planning a very simple resource generation processor, with very simple rules, like getting percentages, or adding some things to produce new metrics. But then I got a suggestion, or a proposal kind of thing: can we explore the resource...
J
I mean, recording rules from Prometheus. The first thing is, I didn't get enough time to play with the code base, but from my study of the recording rules, looking at them from a high level, it seems like this is a really good, robust processor that would be nice to have, but very generic purpose; mostly, all the expressions are being executed as PromQL queries. But the thing is, from my understanding...
J
So
if
we
want
to
follow
this
path,
we
need
definitely
like
more
time
and
maybe
contributor,
because
the
thing
would
be
like
complex
in
my
opinion,
whereas
like
for
the
simple
to
support
my
case,
it
would
be
like
faster.
So
considering
like
the
project
timeline,
I
am
working
on,
I
am
afraid
like
if
I
can
follow
that
path.
M
As a first-class thing... I mean, to me there are three different types of recording rule out there, and I think you should look, on a case-by-case basis, at the sort of functionality behind those rules. I think what Reihan wants is a fairly simple transformation from OTLP message to OTLP message that's stateless, in the sense that there's no memory built up. That's one category; it's almost just a rewrite, like a label rewrite, not really a recording rule.
M
I asked Tom Wilkie in one of these meetings a month or two ago, and he suggested that maybe half of recording rules are just local re-aggregation, and those are things that we ought to be able to do with a collector pipeline. But the things that require global data are a stretch, because you could configure a tree of collectors so that all of your data passes through one node, in which case your recording rules would then work on global data.
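A minimal illustration of the stateless, local re-aggregation category described above: dropping one label and summing the affected series inside a single collector, as opposed to a global recording rule evaluated in the back end. The data shapes here are invented for illustration; a real processor would operate on OTLP data structures inside the collector pipeline.

```python
from collections import defaultdict

def reaggregate(samples, drop_label):
    """Sum samples after removing one label key.

    Stateless per batch: no memory is carried between calls, which is
    what distinguishes this category from rules needing global state.
    `samples` is a list of (labels, value) pairs, where labels is a
    tuple of (key, value) pairs.
    """
    out = defaultdict(float)
    for labels, value in samples:
        kept = tuple(sorted((k, v) for k, v in labels if k != drop_label))
        out[kept] += value
    return dict(out)

samples = [
    ((("job", "api"), ("instance", "a")), 3.0),
    ((("job", "api"), ("instance", "b")), 4.0),
]
# The "instance" label is dropped and the two series are summed to 7.0.
merged = reaggregate(samples, "instance")
```

This only works when all contributing series pass through the same collector, which is exactly the "tree of collectors" caveat raised above for anything global.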
M
But
for
the
most
part,
I
think
people
want
to
record
their
data
to
a
back
end
and
then
apply
those
global
global
processing.
So
I'm
not
sure
I'm
not
sure
about
those
global
re-aggregation
rules
or
global
recording
rules.
They
seem
pretty
hard
and
I
think
that
perhaps
we
should
just
let
the
vendors
decide
what
they
want
to
do
with
global
data.
J
Yeah, also, the purpose of this kind of very generic, robust rule was running the query language in the back end, whereas here we would be in our OTLP data model, so doing it maybe in a more descriptive way. For my case that seems more acceptable, I guess: okay, here are the two metrics, here is the operation or aggregation rule; I want to just follow simple rules.
M
I think that second, local re-aggregation case is one where we will find value, and people do want to do that with collectors. It's a case where I've talked about temporal alignment, and if we're going to be pushing data, it's a different problem than if we're pulling data. So I think this requires a little bit more development before we can talk about recording rules; I'd like to not talk about that very much right now.
J
Josh, one more suggestion I'm expecting from you, for the relabeling stuff. I was just asking about the way I am proposing here: can I just develop a simple POC and submit the PR that way? Or... I'm kind of confused about the relabeling option.
J
Are we still following the same way as recording rules, a similar pattern, or the way I am proposing in the issue proposal? That is: okay, I can just come up with a simple, descriptive-style POC and submit the PR in that new, very descriptive way: generate this metric, something like this, with this label.
M
I don't think I can answer that in the two minutes I have; I'll have to look at your work first.
A
Okay, cool, cool. I think we're at time. So again, thanks everyone, and we have some extensive action items. See you, bye. Thank you.
A
Hi Tigran, hi Bogdan, welcome. I think you dropped off, but we had some follow-up questions on relabeling and processors from the previous Prometheus meeting.
G
Yeah, we had to go to the TC meeting, which is every two weeks, and yeah, so sorry for that, but I haven't discovered it yet.
N
Can you guys post that discovery more widely? Because others may need that as well. Yes.
N
All right, hey, shall we start? Is David here? I think he has the first item.
N
No, he's not in the call. Okay, let's skip that one for now. The second one, John. Is John on the call? Which John is this? This is John. Hi John, go ahead.
P
Shall we begin? Yes, go ahead.
P
Okay, as you can see in this issue we created last month, AWS wants to migrate to using the OpenTelemetry Collector to collect the metrics, with the current solution that exists in OpenTelemetry. We checked other solutions in the community and we found there's no existing solution that meets our needs, because most of the metrics collected now use the embedded cAdvisor library, and for the ECS cluster...
N
I don't know much about the area, but I'm guessing, Jay, maybe you will have some thoughts on this? Or no, am I wrong?
G
Also, John, it would be super useful for people that are not familiar with EKS to put in a diagram: starting from Kubernetes or whatever, how you move through the different components, some diagram of the flow of things, and see where this receiver will sit and so on. Does it make sense?
P
Okay. And this container receiver will also work, I mean, the cAdvisor inside the receiver will also work for the ECS cluster. So there are a lot of AWS-specific things, yeah.
P
Yeah, this receiver will collect the infrastructure-level metrics, and it does not collect data from any API; it just uses the embedded cAdvisor library to collect the data.
P
It's
it's
not
still
rather
receiver,
we
embed
it
as
a
cd
visor
into
the
receiver.
I
mean
use
the
survivor
library,
as
you
can
see
from
this
issue,.
D
I actually considered asking the new cAdvisor maintainers to do something like this, because I think this would be kind of useful.
D
So basically, this would be a receiver similar to the host stats receiver, but one that ends up recursively watching the sysfs cgroup tree for container creations and then monitoring all of the cgroup files from those containers. That's what the receiver actually does, they're saying, and that's what the kubelet in Kubernetes does today. So this proposal is essentially: let's do exactly what the kubelet does, import cAdvisor as a library, and then use that as a way to generate metrics.
G
Is there already a library or code in the kubelet that we can share for this, maybe to not rewrite all of these things? Because I bet the whole watching and following the tree and so on and so forth is work that would be better fitted by a standalone library that we can share.
D
100 percent. So that standalone library is cAdvisor. The kubelet does almost nothing other than take data from cAdvisor and literally just expose the Prometheus endpoint that cAdvisor has. It does some minor relabeling that we can choose to copy or not copy, to add, for example, a pod name or something, and it also transforms cAdvisor data into the kubelet's summary API. So those are the two paths.
G
Okay, so, sorry for my lack of knowledge about what cAdvisor does. For me, cAdvisor, not the library... when I learned about cAdvisor, I learned more about the kernel capabilities for scheduling things. But if there is a library that does all the things that we need, called the cAdvisor library or whatever it's called, I think it's good. Then, is this very specific to AWS, or is it generic to any cAdvisor?
G
P
Important question here. Yeah, as you can see in the description, it's not just about using the cAdvisor library only. We also have some AWS-specific things, and we also need to, yeah.
P
Some of the metrics we generate from the cAdvisor library, and some other metrics we will get from AWS-specific functions. So I think it's not.
G
Can we split them into two receivers, one that is pure cAdvisor and one that is Amazon-specific? Because we have this capability of running two receivers and pointing them to the same pipeline. So essentially, from the pipeline's perspective and from your perspective, they are coming from the same source, even though there are two different sources.
G
C
G
Or wrapping cAdvisor in an Amazon cAdvisor, a more extended thing. If you want to really have only one receiver for your users, you can do the wrapping, but I think we can think of having this standalone cAdvisor-based receiver that just does exactly what David explained. Yeah.
A
Bogdan, I agree with that approach, because, and John, maybe we should chat about this, I definitely would like to see the generic cAdvisor support as a generic component in the collector, and then anything AWS-specific in contrib.
P
Sure, yeah. We did some research and it's kind of difficult to split these things into a generic cAdvisor receiver and another receiver, because there are some metrics we need to calculate based on, I mean, we need to generate some new metrics based on the raw cAdvisor metric values, and also, but.
G
As I said, you can do the trick that we did in the AWS Prometheus remote write, which is essentially using embedding or whatever to share the code that we write for the generic cAdvisor with the AWS-specific thing. So then we have a standalone cAdvisor receiver, but we also have this Amazon-specific cAdvisor-plus-extras receiver that uses the same code and the same translation of, for example, metrics from cAdvisor data to OTLP, and so on and so forth.
G
And then you can combine that with your own logic. I think it should be possible, maybe I'm wrong, but I would really encourage going down that path if possible.
A
J
So one more thing, so I just want to.
H
J
The thing was, the way we are planning to collect the general-purpose cAdvisor metrics: some of them, I guess most of them, are already exposed through the kubelet's cAdvisor endpoint in Prometheus format, and we have a Prometheus receiver, which gives us all the metrics. So it was also kind of a concern: we have a way to get the metrics from the /metrics endpoint in Prometheus format, and we already have a Prometheus receiver.
J
So do we really want to introduce a new general-purpose cAdvisor receiver here? I mean, it was kind of a concern in our case: wouldn't that generic one be covered by Prometheus too, then?
N
N
J
Yeah, it does kind of raise the question: are we doing duplicate work, or introducing two different things for similar purposes? That's especially what I was asking about and planning to discuss with David, in case he has more insights.
D
I mean, the cAdvisor that's embedded into the kubelet is pretty locked down, because it has specific APIs it's trying to serve, and it's using cAdvisor for those. cAdvisor does collect a lot more stuff than the kubelet exposes, so I could see use cases for that, you're right. It's because cAdvisor recursively watches the entire cgroup tree.
D
It's very expensive to run two of them. So that is a concern, a valid concern. But as far as I'm aware, there isn't any other way to get the metrics that the kubelet doesn't already expose.
G
And I mean, it's up to the user to enable this and run them. From our perspective, as I said, as long as we have some use cases for this, I'm kind of for having this cAdvisor receiver supported somewhere. And it's not only Kubernetes that's using cgroups, correct?
G
There are other things that are using cgroups, so the cAdvisor-based receiver will be able to scrape other environments, which are not necessarily only Kubernetes environments where you have the kubelet. So you may not have the kubelet and still use cgroups and.
P
Stuff. Okay, sure, we will put up another proposal in the GitHub repo and list the options for the design of the cAdvisor receiver, and hopefully we can get some suggestions.
J
Yeah, so one more thing I just want to talk about. We also plan to have two different receivers.
J
One is a general-purpose cAdvisor receiver, and another is to support the AWS-specific metrics. But we have some limitations: if they are separate, we will lose shared state and cannot calculate the new metrics, since they need to be derived from the same metrics or something like this. So I was just wondering, is it possible to make it configurable, like we did in the exporter helper: if this config option is enabled, then this will add the additional metrics alongside the existing shared metrics?
J
G
I think Jay will be able to help here, but we have a notion of what we call a scraper. A scraper is a subcomponent of a receiver, and then we have a scraper controller. So what you can do is, if you have two different scrapers that implement that interface, you can have a receiver that has both scrapers or only one scraper. If you have both, then both will be scraped at the same time and emitted in the same message; if you have one, just that one will be there. Does it make sense?
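A toy sketch of that idea, not the collector's actual scraperhelper API (all names here are made up): each scraper returns its own metrics, and the controller runs whichever scrapers are configured and emits them as one batch.

```python
class CAdvisorScraper:
    """Stand-in for a generic cAdvisor-based scraper."""
    def scrape(self):
        return [{"name": "container.cpu.usage", "value": 0.5}]

class AwsScraper:
    """Stand-in for an AWS-specific scraper."""
    def scrape(self):
        return [{"name": "aws.ecs.task.memory", "value": 128}]

class ScraperController:
    """Runs all configured scrapers and merges their output into a
    single batch, so downstream they look like one source."""
    def __init__(self, scrapers):
        self.scrapers = scrapers

    def collect(self):
        batch = []
        for scraper in self.scrapers:
            batch.extend(scraper.scrape())
        return batch
```

With both scrapers configured, the controller emits one combined batch; with only one, just that scraper's metrics appear, which is the configurability Jaana was asking about.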
J
G
Yeah, look at that. Jay promised me that he will help stabilize that API and improve that part. So I think that should be the solution: you put whatever scrapers you want in that one scraper controller, and then you will be able to have either one of those scrapers or both.
F
N
Okay, thank you. Next, the key-value store. Okay, is Dan in the call? Dan, are you here?
N
R
N
Can you maybe show the design you're going to discuss? Sure, let me.
R
R
Individual components may need this, and in fact the logs receivers would basically take advantage of this; this is coming out of the Stanza code base. Basically, the use case there is that when you're, for example, tailing files, you typically want to keep track of how much of the file you've consumed. If the process dies and is restarted, you want to pick up where you left off, and so you need something to persist beyond the process.
R
It's not a lot of data in that case, but just a little bit would be useful. In exploring what it would look like to implement that within a single component or a small set of components, it became very clear that there are just a lot of design decisions that go into this that would probably be better solved at the collector level. So this proposal tries to address the idea that perhaps the collector should provide, as a service to the components, a mechanism for persisting data to disk.
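To make the file-tailing use case concrete, here is a minimal sketch of persisting a read offset so a restarted process resumes where it left off. This is not the Stanza or collector implementation; the checkpoint file format and names are invented for illustration.

```python
import json
import os

class Checkpoint:
    """Persists a tiny piece of state (a byte offset) across restarts."""
    def __init__(self, path):
        self.path = path

    def load(self, default=0):
        # If the checkpoint is unavailable (e.g. no disk), fall back to
        # a default such as the start of the file instead of failing.
        if not os.path.exists(self.path):
            return default
        with open(self.path) as f:
            return json.load(f)["offset"]

    def save(self, offset):
        with open(self.path, "w") as f:
            json.dump({"offset": offset}, f)

def tail(logfile, checkpoint):
    """Read only the bytes appended since the last recorded offset."""
    offset = checkpoint.load()
    with open(logfile) as f:
        f.seek(offset)
        new_data = f.read()
        checkpoint.save(f.tell())
    return new_data
```

The amount of state is tiny, a few key-value pairs per tracked file, which is why providing it as a shared collector service is attractive.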
R
But what about when this is not available? Sure, so I say disk, but that's really just maybe the vanilla implementation. So there are two things. Number one, if the disk is not available, those same receivers or components should still operate fine. In the example of filelog, it would just not know exactly where it had left off, and so it could pick up from the end of the file or the beginning of the file.
R
However, that depends on how you've got it configured, and that's covered in here as well: what the components should expect if the storage isn't available.
R
C
R
G
R
R
Cool, yeah. I mean, I mostly bring this up for anyone who might be able to take advantage of this in the components they've worked on; I'd love to have your opinion on this. Even if it just comes down to what library makes sense for everyone to use, or what the interface should be, there are all sorts of details in here that really should be scrutinized by more people.
R
The other thing I think is worth mentioning is that, for an initial implementation, Tigran proposed that we do this as an extension initially, to really prove out the functionality. Then, if that looks good, we could potentially move it deeper into the collector code base, maybe even inject it into the factory, that kind of thing.
G
So what is your main concern about using extensions right now? That you don't know what you're looking for? Should there be a kind of type for extensions? I mean, extensions are just things that have a start and a stop and run somewhere, and we make them available for every component to even connect to.
G
So from the storage perspective, it's just an interface that is missing, so you can ask, hey, is this extension a storage, and can I use it? Or what is the missing part? Because even if we come up with a notion of a storage or something, it's still very similar to an extension: it's a thing in the system that the other pipeline components can connect to. So that's what I'm trying to understand.
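What Bogdan describes could be as small as a marker interface that components probe for among the running extensions. A hypothetical sketch, assuming none of these type names exist in the collector:

```python
class Extension:
    """Anything with a start and a stop that runs alongside the pipeline."""
    def start(self):
        pass

    def shutdown(self):
        pass

class Storage(Extension):
    """An extension that additionally offers persistence."""
    def get(self, key):
        raise NotImplementedError

    def set(self, key, value):
        raise NotImplementedError

class InMemoryStorage(Storage):
    """Trivial backing store; a real one might use disk or S3."""
    def __init__(self):
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value

def find_storage(extensions):
    """A component asks: is any running extension a storage I can use?"""
    for ext in extensions:
        if isinstance(ext, Storage):
            return ext
    return None
```

This is the "is this extension a storage?" check in miniature; the open questions in the doc (singletons, defaults, multiple named instances) sit on top of an interface like this.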
R
G
Not necessarily. What would a storage be, from your perspective, if we were willing to go full power on this and say, okay, we add a new notion of a storage? What would that be, compared with the extension?
R
Yeah, so I mean, I think it's pretty close. There are maybe a couple of nuances that ideally we'd want to solve. One is that I think the storage would more naturally be sort of a singleton, so it wouldn't make sense to configure this twice. As an extension, there's nothing in the user-facing configuration that would necessarily prevent that. You can sort of make it fail, but.
G
That's fine, it just has two names. I mean, it is a singleton, but it can have two names. For example, think about the storage being S3: one instance is configured to use bucket foo, the other one is configured to use bucket bar. So it's still not necessarily a single config if you think of it that way, because, for example, for a file system, one instance may be connected to one partition and the other one to a different partition, and so on.
R
The other thing I was going to mention, which maybe is a better point, is that you might want this to be done by default in some cases. Like, should the user have to add this extension in order to save these file checkpoints?
G
We can have a default extension called file system or whatever, if available, for example. So we can do this as well. Anyway, what I'm trying to say is, I'm not trying to settle all these things right now, but please document, and again, I did not read the document yet, but please document: what would you envision if we were not to use extensions, and how would that help?
R
F
It's actually the case where you may have multiple extensions, where it's ambiguous which one a receiver should connect to or use, or where you have a single extension and you just want that to be the default. That's a situation we've had with some other receivers, where sometimes you want the default when you have one, but if you have two, the receiver has to say which extension it wants to use.
F
So it's kind of generic; that's maybe a problem we should solve generally, because it's not specific to this thing. It's kind of a default-versus-non-default extension question, I would say, if that makes any sense.
R
B
R
I should give more thought to that. I'll review the doc again myself and think on that question; I think that's a good point.
N
Okay, I think what we have with the extension is good enough as a starting point; whether we want to move it to the core, we'll see. There is no rush with that, I believe, because it's fairly trivial to configure it as an extension, and we also want to support the situation when it's not available either way. So that's fine, we can go with the current design, but please do review it if you are interested, and Bogdan, you specifically as a maintainer, and then we can look forward to this. Sure.
G
G
You are talking about saving things, but is that file receiver going to have the capability to read from local disk and also from S3, maybe from Google Cloud Storage or anything like that, or is that very dependent on the OS? Is there a notion of a file inside that file receiver? Because somebody asked to have a receiver that is capable of reading from S3, and I'm like?
R
I think it will depend a little bit on the file system, but generally speaking, it's pretty specifically written for files on a local file system. I can't say for sure that it wouldn't work in some other cases, but I wouldn't expect it to without at least validating.
N
Are S3 buckets seekable? Can you seek in them, or do you have to read the entirety of them sequentially?
G
What I'm trying to say is, we're trying to understand the operations that need to be implemented for something to be considered a file by a receiver. And maybe that's also something to think about: having an implementation of the same interface backed by S3 or GCS or whatever. Anyway, too many things. But I saw that issue; it's in contrib, filed by Jaana, I recall.
G
So you may want to look into that and put your thoughts there on whether it would be a possibility, because the whole logic of continuing to read from the previous point and so on is the same; it's just a matter of whatever operations you need the storage to support.
F
Maybe also take a look at what Terraform does for their state plugins, since they have to store their state to, you know, file system, S3, they support various backends, so you could maybe use them for inspiration as well.
G
That's for state, not for reading; those are two different problems, correct? Yeah, for state. For the state part that you are interested in in the document, it would be good if we define an interface or whatever that all the extensions which are state extensions, or whatever we call them, have to implement, maybe similar to what Jay said about Terraform, so we can reuse some of these things.
N
N
F
So in Terraform, all the resources you create, and the metadata, get saved to this state file. It may be on the local file system, or maybe S3, but it's kind of a JSON blob, so it's a similar class of data, right? It's not a huge amount of data, just kind of key-value pairs. And it's Go-based, so there may just be some inspiration there too.
N
N
D
Yeah, thanks. So someone on the GKE team opened this up. Basically, Stackdriver, or sorry, Google Cloud Monitoring, does limiting by the number of data points rather than the number of metrics. Before I try to propose some solutions, I was hoping to understand a little bit more of the history of why we did limiting based on the number of metrics.
D
Just to give my first impression when I saw the problem, which, for background, is that we currently allow limiting by number of metrics rather than number of data points: limiting by number of data points seems like it would make more sense, if only because we're basically trying to limit the size of a payload.
G
G
We also started with counting the number of metrics initially in our internal metrics, and then we moved to data points, because that was more interesting for users.
G
So I see no problem, probably because of backwards compatibility and such, with supporting both. You are talking about the batch processor, correct? To be able to split things by number of data points.
G
N
N
O
G
Right, then we can start thinking about changing it to data points.
G
G
G
People may configure things along the pipeline that generate new data points, and then you'll get screwed. So one option is what we did in the exporters, because that's where you have the final data. If we make this split logic into mini libraries that we can share with other parts, you can even enforce this at your exporter level.
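The splitting logic being discussed amounts to cutting a batch so that no chunk exceeds a data-point budget rather than a metric-count budget. A hedged sketch with a toy metric representation (a name plus its list of data points), not the batch processor's actual code:

```python
def split_by_datapoints(metrics, max_points):
    """Split a list of metrics into chunks whose total data-point count
    does not exceed max_points. A single metric with more points than
    the budget still becomes its own chunk here; a real implementation
    might also split inside the metric."""
    chunks, current, count = [], [], 0
    for metric in metrics:
        points = len(metric["points"])
        # Flush the current chunk when adding this metric would
        # overflow the data-point budget.
        if current and count + points > max_points:
            chunks.append(current)
            current, count = [], 0
        current.append(metric)
        count += points
    if current:
        chunks.append(current)
    return chunks
```

Packaged as a small shared library, the same routine could be reused by the batch processor and enforced again in an exporter helper for destinations, like Google Cloud Monitoring, that limit by data points.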
C
N
N
Stackdriver, which may be different from some other destination, right? In theory, if you're sending to two destinations, you want different settings for this.
G
So yeah, let's solve it in the batch processor first, and then next, David, we can think about making the mini libraries for splitting, maybe just internal libraries initially in the core, so we can offer this as a capability for the exporter helper as well, for users who want to guarantee it at that point. Yeah.
N
G
A
Yeah, Bogdan, again, maybe from next time onwards we should also review the backlog, because we have been picking up some of the issues and starting to work on them. Maybe we can just give a quick update at the collector meeting, or we can do a separate update on the issues themselves, just to make sure that we have momentum and progress.
G
There are a bunch of issues not assigned. Okay, because I thought it was funny, I think this one is done anyway. I will look into this. This was.
K
Yeah, that one got wrapped up yesterday. I might have forgotten to mark the second PR as closing it, sorry. It's okay.
A
G
Someone has done the work or is working on this, the pdata one we have assigned. Improve component tests: we don't have anyone to do this. The same thing as Anthony did for that package, we need to do for this package.
A
So, Bogdan, I'll look through the backlog again and get back to you, because my assumption was that at least all the phase one items were accounted for.
G
G
G
Let's make sure we have people assigned and such. And also, anyone in the community who wants to do this, please let me know.
A
Yeah, Punya, again, if you can help, or you know, your team.
K
Thank you. And for reviewing in these meetings: Tyler in the Go SIG has a flow that he puts in the docs that shows deltas for each of the states in the kanban. That could be useful, so that we could say, you know, we've now got five issues in To Do on this one and we've burned down three of them, just to keep track of how we're making progress.
A
No, I know. Bogdan, you can more easily reach out to Tyler now.
A
K
A
K
You mean that the people who work on the Go SIG haven't been working on the collector, and the people who work on the collector haven't been working in the Go SIG? Yeah.