From YouTube: 2021-04-21 meeting
A
Good, good. So we have a pretty good walkthrough today, and maybe we can get started with Josh's items first and then dive in.
E
Sure, yeah. Hi everybody, I put my name in here. I put up two issues to discuss. I didn't intend to have a long discussion, but I wanted to get these issues in front of the group right away. If everybody's ready, I may as well just share my screen so we can see the same thing. I wrote these pretty quickly, so they're not super detailed; I was just trying to put a placeholder down and see if people had things to say.
E
The first thing on my mind is actually a response to something that Brian said in the last meeting, but I was on the phone and couldn't really talk. So this is about the external labels for Prometheus.
E
These are configured in the Prometheus YAML config, and then they're added to all the outgoing series points. I have for a long time been thinking of them as metadata about the collection strategy, like the replica name; they're used for high availability in Cortex, where you erase one of those labels, and that's how you get your high availability. Then Brian said something last week that I didn't expect, which was that we also use these for properties of the processes or the targets that are being monitored. To me, those are pretty fundamentally different kinds of information, and I believe there's a desire in OTel to begin to distinguish, or at least understand, which kind of information we're looking at, to become better at automatically processing the data. So I want to see if people have thoughts about this in Prometheus.
F
Fundamentally, the external labels identify the Prometheus, but that often overlaps with target labels. You can imagine that if you had a Prometheus that was for a production environment, rather than having env=prod on every single target, you might factor it out to the external labels. In that way it's exactly the same as a target label. Now, you do also have the replica case, which is a bit weirder in terms of how you handle it, but if you're not planning on having replicas you can hand-wave that.
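To make the overlap concrete, here is a minimal Go sketch of how external labels get folded into an outgoing series, with labels already present on the series taking precedence; the types and names are simplified stand-ins, not the actual Prometheus code:

```go
package main

import (
	"fmt"
	"sort"
)

// Labels is a simplified label set (name -> value), standing in for
// Prometheus' labels.Labels type.
type Labels map[string]string

// applyExternalLabels merges the external labels configured on a Prometheus
// server into a series' label set before it is sent on (e.g. via remote
// write). A label already present on the series wins, so a factored-out
// label like env="prod" ends up looking exactly like a target label.
func applyExternalLabels(series, external Labels) Labels {
	out := Labels{}
	for k, v := range series {
		out[k] = v
	}
	for k, v := range external {
		if _, ok := out[k]; !ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	series := Labels{"__name__": "http_requests_total", "instance": "app-1:8080", "job": "app"}
	external := Labels{"env": "prod", "prometheus_replica": "prom-a"}
	merged := applyExternalLabels(series, external)

	keys := make([]string, 0, len(merged))
	for k := range merged {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%s=%q\n", k, merged[k])
	}
}
```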
E
Cool, that's about what I expected, and I guess that's fine. It was good to know that that was an explicit intention and not just a kind of accidental behavior. I think there's a will in OpenTelemetry to begin to convey metadata about our labels, to say whether they are about the collection infrastructure or about the process, and there are a few different nuances that we can pull out of that. I'm not actually making a proposal right here and now.
E
The next one is issue 46, which is separated by a large number of compliance issues that Yana created, but was created only two days apart. Sorry, this was an assignment I took on yesterday.
E
It was a very small amount of work to get this issue up in front of the group again for today, but my task over the next week is to write a comment in the OTLP proto definition for metrics saying exactly when we should or should not expect to see a start time equal to zero. I was asked to come to this group and put the question in front of you as well, because OpenMetrics has a concept of created time. I copied some text out of the OpenMetrics spec here, saying roughly what we want to say in OpenTelemetry, which is that this is really useful information to help us detect resets, but we can't always count on it being present.
E
The question is really what advice OpenMetrics has for this situation. The reason I ask is that we are faced with a very real problem today, both dealing with legacy Prometheus clients that don't have a start time, and dealing with Prometheus data in the form of its WAL for the sidecar that Lightstep has been working on.
E
In both cases we're basically asked to take some data that's roughly Prometheus-like and turn it into OTLP, and at that point we have to answer exactly the same question. So I wanted to get any information from the Prometheus developers about plans to use that created timestamp in Prometheus, because it might impact what we're thinking right now.
F
Just vague things in the future that might affect rate(), but in relation to the issue here: if it's not there, you can't guess, because to some extent you just don't know. Maybe the binary is resetting it every minute. Please don't do that, but that could theoretically be happening.
F
E
So let me ask about two things there. There's something about resetting and unknown resets, but then there's also something about a created time: is that a metric that has the same job and instance labels?
E
F
E
Yes, I think I agree with everything. The one thing that we're going to add, or at least try to specify on top, is the kind of valid or standard translation into OpenTelemetry OTLP when you don't have that information. And there's something that we think we can do to improve the quality of the data, and I'm going to say it now, which is to say something like this.
E
We believe that if you are a stateful observer of a stream of metrics, you then have an opportunity to correct the record on the data that's passing through you, and that this is a useful thing to do. Well, it's a necessary thing to do, because OpenTelemetry has this idea of a start time so that it doesn't need staleness markers.
E
So the use of start time is to indicate when the series is valid and known, as opposed to unknown or missing. And so, when you're passing through a series of observations that don't have a start time, what you can do is remember when you first saw the series, and remember the value that you've seen; and as the value rises, eventually it's going to reset.
E
You can then detect the reset, and you know more than a downstream observer is going to know, because you saw those stale values and you saw the missing points. So you have a better opportunity to fill in the start time than the downstream observer does, because you kept some state. What I'm proposing is that we really want to avoid backends having to interpret data that doesn't have a start time, as much as possible, and one thing we can do is fill in some start time.
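A minimal sketch of the stateful start-time filling being described, with the Prometheus-style reset heuristic applied per series; the names are hypothetical and not taken from any existing codebase:

```go
package main

import (
	"fmt"
	"time"
)

// seriesState is what a stateful observer (e.g. something sitting between a
// Prometheus scrape and an OTLP backend) would keep per series.
type seriesState struct {
	startTime time.Time // filled-in OTLP start time
	lastValue float64
}

// observer fills in OTLP start times for cumulative points that arrive
// without one, and advances the start time when it detects a reset (the
// value going down). Slow-moving counters can still hide resets; this only
// recovers what the heuristic can see.
type observer struct {
	series map[string]*seriesState // keyed by the series' labels
}

func newObserver() *observer { return &observer{series: map[string]*seriesState{}} }

// observe returns the start time to attach to this point.
func (o *observer) observe(key string, ts time.Time, value float64) time.Time {
	st, ok := o.series[key]
	if !ok {
		// First point we have ever seen for this series: the best we can do
		// is use its own timestamp as the start time.
		st = &seriesState{startTime: ts, lastValue: value}
		o.series[key] = st
		return st.startTime
	}
	if value < st.lastValue {
		// Prometheus-style reset heuristic: the counter went down, so a new
		// stream begins at (or just before) this point.
		st.startTime = ts
	}
	st.lastValue = value
	return st.startTime
}

func main() {
	o := newObserver()
	t0 := time.Now()
	for i, v := range []float64{5, 9, 12, 2, 4} { // the drop 12 -> 2 is a reset
		ts := t0.Add(time.Duration(i) * time.Minute)
		start := o.observe(`http_requests_total{job="app"}`, ts, v)
		fmt.Printf("value=%v start=%s\n", v, start.Format(time.Kitchen))
	}
}
```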
D
F
E
You're right, yes, this is a limitation of the data, and I absolutely agree with everything you've said. Given all those limitations, we still think that we want to make a recommendation to do your best to fill in start time, while acknowledging that we can still miss resets.
F
E
So then I think what we're talking about is falling back on what I'm calling the Prometheus heuristic, which is: if the value descends, then it must have reset, and the only information that we can inject into that is some knowledge about when it may have reset.
E
A reset, or the process disappeared from existence. Okay, I do understand that staleness is orthogonal, and I'm not trying to conflate the two issues. I'm really talking about the case where there is a true reset, and there still is ambiguity.
E
With a true reset and a slow-moving counter, you might actually still not notice it. But if there is a reset that you do notice, what we're trying to say is that the valid expression of that in OTLP is to use time ranges that put a gap in the series, rather than to put a stale value in the series.
F
E
I understand. This is more complicated than I'm prepared to discuss in real time in front of this group. You've helped me a lot, Brian. I think I need to regroup and continue this conversation in the issue and/or next week. I've learned a lot and I don't think we can go any further constructively right now. You are right, Brian; I will work on this.
F
E
Okay, yeah, I guess we have to talk about that as well. All right, thank you. I will synthesize a response, or an idea about this, and come back to this group.
A
Okay, cool. So we have the next item, which is actually pretty interesting, and we would really love to get everyone's feedback. Our interns have been working with Anthony on extending the OpenTelemetry operator to add stateful set support, and again, this is the doc. Can you share that? Then we'd really like to get your feedback as you read through it; we'll step through the design that we have right now.
A
The doc is also in the meeting notes; I can share it here as well.
A
B
Yeah, I might have an issue with permissions, because I've never shared on Zoom before.
D
B
Yeah, so I can get started. The document is in the Zoom chat, as well as the agenda, for anyone who wants to look at it. Just to begin:
B
My name is Hui Vo, and I worked with another intern, Iris Song, on this project, in which we are enhancing the OpenTelemetry operator to be able to support and manage stateful set resources. This is a document explaining the requirements as well as the design that we've laid out for this project. So, to get into the requirements a little bit:
B
If we were able to deploy the OpenTelemetry collector as a stateful set using the operator, then it would allow us to manage collectors and have them keep persistent state per collector instance. For example, if the OpenTelemetry collector was able to have persistent state, then we would be able to maintain historical metrics and survive pod restarts, which would be very helpful in the future when we're dealing with logs.
B
So we can always get back the persistent volume that is attached to that pod, maintain those metrics, and survive any type of pod restart that may happen in the future. Stateful sets would be very useful if we could run the OpenTelemetry collectors as them, as listed here. Next are the functional requirements, which go into how we will solve the issue that is linked here.
B
We have an issue in the Prometheus workgroup repo for enabling the operator to support stateful set resources, and as part of our functional requirements we want to scope our goal to enabling the user to deploy an OpenTelemetry collector as a stateful set resource, first and foremost. Then, once it is running as a stateful set resource:
B
The operator should be able to send the configuration for the stateful set to the Kubernetes API for it to be provisioned as desired. And if the resource already exists and the user decides to change the custom resource somehow, for example they want to change from five replicas to three or vice versa, then the operator will observe this change and update the stateful set accordingly using the Kubernetes API. So that is it for the functional requirements.
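Roughly, that create-or-update path could look like the following controller-runtime sketch; it is illustrative only, assuming a hypothetical helper name rather than the operator's actual reconcile code:

```go
package reconcile

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ensureStatefulSet creates the desired StatefulSet if it does not exist yet,
// and updates it in place (e.g. replicas changed from 5 to 3 in the custom
// resource) if it does.
func ensureStatefulSet(ctx context.Context, c client.Client, desired *appsv1.StatefulSet) error {
	existing := &appsv1.StatefulSet{}
	key := types.NamespacedName{Namespace: desired.Namespace, Name: desired.Name}

	err := c.Get(ctx, key, existing)
	if apierrors.IsNotFound(err) {
		return c.Create(ctx, desired) // first reconciliation: provision it
	}
	if err != nil {
		return err
	}

	// The resource already exists: copy the desired spec over and push the update.
	updated := existing.DeepCopy()
	updated.Spec.Replicas = desired.Spec.Replicas
	updated.Spec.Template = desired.Spec.Template
	return c.Update(ctx, updated)
}
```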
B
In terms of quality attribute requirements, the system requirements are basically the same as for the current OpenTelemetry operator, which includes having cert-manager installed as well as the ability to use the kubectl tool. As for the security requirements, we will make sure the operator handles any errors or bad inputs that come in while reconciling or provisioning the stateful set resource. And in terms of performance:
B
If the OpenTelemetry operator is able to support stateful set resources, then that will actually enable the ability, in the future, to split up work between replicas, which will ensure that resource usage is more or less consistent across the replicas. That would be very good performance-wise, because then we can autoscale it using the horizontal pod autoscaler. As for scalability, once we support stateful set collectors we want to keep the rule of thumb of O(number of pods) watches for this operator, and the way we can do that is by keeping the number of replicas in the stateful set to a low, constant number n.
B
So scalability, in that case, would be fine, as long as we keep the number of replicas to a small, constant number. And to go through the data flow of what the OpenTelemetry operator with stateful sets would look like: here we have a user, and they can configure,
B
however they like, a custom resource for the OpenTelemetry collector. Depending on how they configure this custom resource, the operator will observe these changes and apply them, and if there is some change or some update that needs to be made, then it will adjust the state inside the stateful set that holds the OpenTelemetry collector. All of this happens within the Kubernetes cluster.
B
And lastly, the last part of the requirements is testing. For testing this operator, we plan to use the kuttl tool, which is a very good tool for end-to-end tests, as well as other unit tests, for testing our stateful set implementation to make sure that it works the way we expect, and other tasks in that realm. Before I move on to the design, does anyone have any outstanding questions?
A
David, are there any assumptions that you see here which are missing? And again, this is a request to everyone.
A
For this specific task? Yes.
I
This is one problem within the larger effort that I've outlined for this group before, around enabling sharding of Prometheus scraper sets and distribution of scrape targets to replicas in a sharded manner.
H
Okay, so without sharding, if we have multiple replicas in the stateful set, it's going to cause redundant data, isn't it? So this plus sharding will make this complete, won't it?
I
Correct, yeah. Without the separation of scrape target discovery and allocation, a stateful set is probably little different from operating as a deployment, with the addition that it's got persistent volumes that could be used if we add the ability to export from a Prometheus WAL, or if we add the ability for OTLP WALs to be created, as has been discussed in the collector working group.
B
Okay, so if there are no more questions about the requirements, we can go into the design. This document talks more about how we plan to actually implement the design for enhancing the OpenTelemetry operator, and describes some of the changes that we plan to make within the code base and things like that. So first I want to talk about the flowchart for the implementation. As you can see here, it's kind of similar, but it dives a little deeper into how the code will flow.
B
You see a custom resource goes and sends parameters to the OpenTelemetry operator. Then there's a reconciliation for the stateful set within the operator that basically runs tasks to observe the stateful set resources.
B
Then, if there's a change, it sends those parameters back to the stateful set to either create it, update it, or delete it, depending on what the change was, and it does all of this by communicating with the Kubernetes client. After all these interactions, it should be able to manage the stateful set resource. Now, to talk more about the specific changes that we'll make to the custom resource for the OpenTelemetry collector, I'll hand it off to Iris.
G
Okay, so we're going to make several changes to the custom resource for the operator to support the stateful set, and here's a template for the custom resource that I used to create an OpenTelemetry collector with the operator. One thing we need to add is a statefulset value in the mode field, as well as in the API's constants list. In terms of the pod management policy, in our use cases we don't really require the pods to be created or deleted in any particular order.
G
Another very important feature of the stateful set is that it supports persistent volumes, which means the user can request a persistent volume. In order to do that, we will add a volumeClaimTemplates field to the OpenTelemetry collector spec, along with volume mounts, so after specifying the volumes and volume mounts, the user can also specify a volume to be persistent.
G
And in order to translate the volume claim templates specified by the custom resource into the Kubernetes API, we will have this volume claim templates function. Basically, what it does is copy all of the volume claim template specifications from the custom resource over to the Kubernetes API. Besides that, we're going to add a claim for the config map volume, which is persistent. Theoretically, every claim in the volume claim templates must have at least one matching volume mount.
G
We could, by default, have the volume claim templates match every entry in the volume mounts, but we chose not to do so, because not all of the collectors we're running in the stateful set might require persistent storage. So we leave the decision to the user, for flexibility.
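As a sketch of the kind of API change being described, assuming illustrative field and constant names rather than the operator's actual ones:

```go
package v1alpha1

import corev1 "k8s.io/api/core/v1"

// Mode is the deployment mode of the collector (the names here are
// illustrative; the real operator API may differ).
type Mode string

const (
	ModeDeployment  Mode = "deployment"
	ModeDaemonSet   Mode = "daemonset"
	ModeSidecar     Mode = "sidecar"
	ModeStatefulSet Mode = "statefulset" // the new mode being proposed
)

// OpenTelemetryCollectorSpec shows only the fields relevant to this change.
type OpenTelemetryCollectorSpec struct {
	Mode     Mode   `json:"mode,omitempty"`
	Replicas *int32 `json:"replicas,omitempty"`

	// VolumeClaimTemplates is copied into the StatefulSet so that pods get
	// persistent volumes; only volumes the user actually mounts are claimed.
	VolumeClaimTemplates []corev1.PersistentVolumeClaim `json:"volumeClaimTemplates,omitempty"`
	VolumeMounts         []corev1.VolumeMount           `json:"volumeMounts,omitempty"`
}
```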
I
Can I back up just one second, to the pod management policy? I think that's one area where it would be good if anyone in this group has input to consider. We thought through use cases and didn't see any requirement for bringing up collector pods in an ordered manner, or bringing them down in an ordered manner. That's why we made this decision, but if anyone has any information that would lead us to reconsider it, it would be good to know, so that we could either expose that knob or alter the default.
D
I
The default is OrderedReady; we would start by defaulting it to Parallel and not exposing the option to choose to the user.
A
D
No, I think if that's what we think we need, then we should stick with that, and we can add it in the future if we want people to be able to specify it, and still make it default to whatever we think is best for the collector.
I
C
G
Okay, so I'll keep moving. So, for the detailed stateful set implementation:
G
Basically, we will have a StatefulSet function in the collector package, and we need to define the object meta and the stateful set spec. For the meta object we will use the existing metadata configuration used by the other deployments, and for the stateful set spec we will have three special fields added here.
G
One is replicas, which can be specified by the custom resource; the pod management policy, set by default to Parallel; and the volume claim templates, calling the volume claim templates translation function we just mentioned. Also, in the reconcile package we'll have a StatefulSets function for us to process the stateful set configuration from the custom resource in our current context; basically, in this function it can create, update, or delete the current stateful sets in accordance with the given instance.
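A sketch of what that builder could look like against the Kubernetes apps/v1 API; the parameters stand in for the operator's existing deployment helpers and the translation function mentioned above, so their names are assumptions:

```go
package collector

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// StatefulSet builds the desired StatefulSet for a collector instance.
func StatefulSet(name, namespace string, replicas *int32,
	podSpec corev1.PodSpec, claims []corev1.PersistentVolumeClaim) appsv1.StatefulSet {

	labels := map[string]string{"app.kubernetes.io/name": name}

	return appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{ // same metadata conventions as the deployment mode
			Name:      name,
			Namespace: namespace,
			Labels:    labels,
		},
		Spec: appsv1.StatefulSetSpec{
			Replicas: replicas, // taken from the custom resource
			// No ordering requirement for collector pods, so default to Parallel.
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			ServiceName:         name,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       podSpec,
			},
			// Copied from the custom resource so users can opt volumes
			// into persistent storage.
			VolumeClaimTemplates: claims,
		},
	}
}
```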
G
So another StatefulSets entry will be added to the controller's new reconciler tasks, as well as to SetupWithManager.
G
For the testing strategy, we're going to adopt unit tests to test whether the functions we mentioned before are working properly, and we will also utilize the kuttl tests to verify the functionality of the creation of our stateful set collectors. What we are going to do is create a test collector instance, and after the operator starts the creation of the stateful set collector,
G
we are going to compare the running Kubernetes object and see whether it matches our implementation. Yeah, so that's the overall introduction to our design doc.
I
So it runs kind to create a cluster for each of the test sets, or it can be pointed at a running cluster. Okay.
D
I
D
Would you also mind allowing comment access to the document, just so that if I have any other comments I can drop them in?
A
I
If you hit Share in the upper corner, just above there, then there should be a dropdown for the "Get link". Yeah, so anyone with this link can view; down at the bottom you hit Change there.
A
C
I have a question about the ordered versus parallel. What does it mean for the naming of the pods? Normally, you know, with a stateful set, when you create them in order, it creates collector-0, 1, 2, a consistent sequence number. Does it look more like a replica set when you create them in parallel, where you get a random hash?
C
A
It's a good question, Jana. I think we need to just...
A
I
A
Okay, I think let's move on to the next topic. Thanks, Sarah, thanks. Yeah, I think... thank you.
C
I'm not sure everybody has seen it, but there's a compliance test suite under the Prometheus project right now, and it also supports OpenTelemetry collectors, so you can run the OpenTelemetry collector against the tests and see what's failing. I'm not sure if Tom or anyone working on the compliance tests is here, and I'm not sure how comprehensive it is at this point, but at least we have a bunch of things that are failing, and we tried to convert the issues that we were tracking in the working group into compliance test results, because we thought it's easier to triage that way. But it turns out there are a lot of common reasons why some of the tests are failing.
C
B
C
C
A
C
That's great, that's great, yeah. I was talking about, you know, we were working on the compliance tests. One of the failures is this: instance and job labels are missing, because we were dropping them as not-useful labels.
C
So it was a minor change in the code, but it required a lot of test changes, and David gave a comment here that the instance, at least, should be a resource attribute rather than being propagated as a label everywhere. So I'll make a change about that, and then we can come back to this, but this needs to be merged. There's another change from Emmanuel, which is for the up metric, and it's been waiting for a long time. I'm not sure who's reviewing, who could be,
C
you know, in charge of reviewing these things, but if there's any approver here... We will also go to the collector SIG after this, I guess, to kind of, you know...
A
But I think it would be good if folks like Josh and others, if you guys can just provide a review.
C
Yeah, if you would like to review these things, this is the best time. We have another one, which is related to the __name__ labels, and it's the reason why a couple of compliance tests are failing. I'll send a change for that after this is merged; I just didn't want to work on those things in parallel, because there are a lot of dependencies between the changes.
C
So that's what's been going on with the compliance tests. I think we will only have staleness as an open issue once these three things are fixed; those are the major things. And I think Emmanuel has been in many... you're in here, yeah.
C
I was wondering if you've been thinking about staleness. We just had a brief conversation about it, so I'm not tracking what's going on. Did you start to take it on?
C
J
C
Okay, yeah. There is kind of... we will have to track, you know, what is seen and what is not seen. So I have a... you know, I mean, I'll let you know; we can probably collaborate on that. Sounds good. There was this other issue, which was, you know, people have been complaining about out-of-order samples and duplicates. It was related to the queueing mechanism in the collector. The queue has a number of consumers; it's on by default in every exporter.
C
Right now, it takes incoming metric data and consumes it concurrently, in, like, 10 consumers, or whatever number you configure the queue with. So this was...
C
This was fundamentally incompatible with Prometheus, because Prometheus expects you to send samples in the correct chronological order, so we disabled it; for now we just removed it. And if anybody was using this as an enabled thing with an actual configuration, they were not using the Prometheus remote write exporter correctly anyway, so we just removed the queue for now. So the question is, we want to, of course, have some queueing features for people who want to fine-tune, you know, so we can,
C
you know, either provide the same configuration, or we can go and enhance the collector so the queue supports a custom function and you can shard things by your own custom logic. So there's an option for that; I'll take this to the collector SIG and have a conversation about it. We still need to do some work, because right now we shard by five.
C
We shard by time series, and then there are five concurrent outgoing requests, which is not configurable by the user. It's not really, you know, super inclusive of all the cases, so I want to improve this before we stabilize the collector.
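A minimal sketch of the shard-by-series idea: hash the series identity to pick a worker, so each series always flows through the same outgoing request and stays in chronological order. This is illustrative only, not the collector's actual queue code, and the fixed worker count is the part described above as needing to become configurable:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardKey picks a worker for a sample based on its series identity, so all
// samples for one series go through the same worker even though the workers
// send concurrently.
func shardKey(seriesLabels string, workers int) int {
	h := fnv.New32a()
	h.Write([]byte(seriesLabels))
	return int(h.Sum32()) % workers
}

func main() {
	const workers = 5 // the current hard-coded number of concurrent outgoing requests
	for _, s := range []string{
		`http_requests_total{job="app",instance="a:8080"}`,
		`http_requests_total{job="app",instance="b:8080"}`,
		`process_cpu_seconds_total{job="app",instance="a:8080"}`,
	} {
		fmt.Printf("%-55s -> worker %d\n", s, shardKey(s, workers))
	}
}
```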
C
It has to be together. Okay, yeah.
C
What is the timeline for it?
F
C
F
C
If there's an issue, Brian, I want to subscribe.
F
But in general, if you're using the Prometheus scrape code, it's just the appender; where it commits, that's the scrape, because it's all using the storage interfaces.
C
A
A
C
Yeah, I also gave them some feedback on the compliance test itself. Right now you have to modify another test in order to run it against a development binary, and so on, so we can also contribute some, I think, of those minor things to the compliance test. I'll spend some time giving some cosmetic feedback. We will also, you know, make this a part of the CI/CD at some point. Yep.
A
Definitely, thank you. Good, thank you, Janice. Great, any other questions? We still have about 10 minutes or so.
A
Coffee before the collector SIG. Any other questions, folks? Going once, going twice... all right, cool. I'll give you back 15 minutes, 14 minutes!