From YouTube: 2021-06-23 meeting
A: Hey folks, I think we have enough folks, let's get started. I don't see Anthony yet, but we have a couple of interesting design discussions today, so let's get started on this. So, Alex and Rahul, did you want to start first and share your talk?
E: I'm sorry about that. Is everybody able to see the screen?
D: Sure, so I'll get started. This is the load balancer design document, and Rahul and I are going to present it. The related PR issue is linked as issue 33 in the second line, and for more context we also linked the first, second, and third designs by engineers Hui and Iris that are related to this design, as well as the operator-managed load balancer deployment doc, which is also linked in this document, since we are going to be following the operator-managed load balancer deployment model.
D: So this introduces the need for a load balancer, so that everything can work at scale and be properly managed. This load balancer will use an HTTP server to expose the targets using endpoint URLs, which will then be used by the Prometheus receiver to get information from multiple instrumented systems. We'll be using HTTP-based service discovery, which was added recently to Prometheus in release 2.28.
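For reference, the payload Prometheus's HTTP-based service discovery fetches from such an endpoint URL is a JSON list of target groups; the addresses and labels below are made-up examples, not values from the design doc:

```json
[
  {
    "targets": ["10.0.0.1:9100", "10.0.0.2:9100"],
    "labels": {"job": "node-exporter"}
  }
]
```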
D: Additionally, the load balancer would use that discovery information to delegate scraping jobs to the collector instances inside of a stateful set replica, based on the replica value. Sorry, and we are going to be using a least-connection algorithm implementation; there's a definition at the bottom for what that is, but essentially it's going to delegate jobs to the collector instances that have the least amount of work at the moment. Originally we were going with a round-robin fashion for this implementation, but then we ran into some use cases where we figured least connection would be better suited. Below is a diagram showing the interaction of our load balancer with the different parts, such as discovering nodes, delegating the scrape targets to the collector instances, and reconciliation.
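A minimal Go sketch of that rule, assuming targets are counted per collector (all type and field names here are illustrative, not from the design doc):

```go
package main

import "fmt"

// collector tracks how many scrape targets are currently assigned to one
// collector pod.
type collector struct {
	name       string
	numTargets int
}

// leastLoaded returns the collector holding the fewest targets, which is the
// "least connection" rule described above.
func leastLoaded(collectors []*collector) *collector {
	least := collectors[0]
	for _, c := range collectors[1:] {
		if c.numTargets < least.numTargets {
			least = c
		}
	}
	return least
}

func main() {
	pods := []*collector{{"collector-0", 3}, {"collector-1", 1}, {"collector-2", 2}}
	pick := leastLoaded(pods)
	pick.numTargets++ // the new job goes to the least-loaded instance
	fmt.Println("assigned to", pick.name)
}
```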
D: In terms of goals, we want to be able to handle one or more replicas of a collector in a stateful set, so the load balancer will be implemented in order for this to be handled, and we want the workload to be distributed evenly, or as close to evenly as possible, among the collector instances. Since this is an operator-managed load balancer, every collector that is managed by the operator will have its own load balancer deployment and cluster resources allocated. The OpenTelemetry collector will utilize HTTP service discovery, so the operator will be able to configure the OpenTelemetry collector with a Prometheus receiver using http_sd_config, and, as I mentioned earlier, this is now possible with the new Prometheus release. So now Rahul will get into the design details.
E: Thank you, Alex. Going into the design details: the first thing is that the OpenTelemetry collector has been updated with the latest Prometheus version, and we also tested it yesterday, so it works with the http_sd_config now. We ran some basic tests just to see if it's able to scrape from a certain HTTP endpoint, and it's working as expected.
E: So with this, we can proceed with the plan of using the http_sd_config in the OpenTelemetry collector for dynamic scrape target discovery. We would also only be using it in the case where the OpenTelemetry collector is running in stateful set mode. This is because there are a lot of instances where a pod may die and be replaced by a new pod.
E: The stateful set is the only mode that ensures the identity of the pod is maintained, so that when we distribute the targets using the load balancer, those are not affected, and we can ensure that it will continue scraping even if the pod dies and comes back up. Going into the design details in more depth: initially, we would have a load balancer configuration option in the config file. This would allow the user to determine whether the load balancer needs to be used or not.
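As a sketch, that option might surface on the custom resource roughly as below; the field name is an assumption, since the transcript only says the option exists and defaults to false:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: my-collector
spec:
  mode: statefulset
  # Hypothetical flag: per the design, it defaults to false, and the load
  # balancer is only deployed when it is true and the mode is statefulset.
  loadBalancer: true
```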
E: The default would be false, and if that option is set to true, the load balancer would be configured for use. The load balancer would be a separate deployment, and the load balancer mechanism would consist of a ConfigMap as well as a deployment. This ConfigMap would be used to send the details required by the pod for using the stateful sets to distribute among the various collector instances. There's an example of the ConfigMap we'll be sending: it has the load balancer value as well as the label selector. This label selector is what would actually be used to select the different pods to which the scrape targets would be distributed, and we would also have the config, which is extracted from the receiver's scrape configs.
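The ConfigMap example itself isn't reproduced in the transcript; based on the description (a label selector for the collector pods plus the config extracted from the receiver's scrape configs), it might look roughly like this, with all names and keys assumed:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-collector-loadbalancer   # hypothetical name
data:
  loadbalancer.yaml: |
    label_selector:                 # selects the collector pods to distribute to
      app.kubernetes.io/instance: my-collector
      app.kubernetes.io/managed-by: opentelemetry-operator
    config:                         # extracted from the receiver's scrape_configs
      scrape_configs:
        - job_name: node-exporter
          static_configs:
            - targets: ["node-1:9100", "node-2:9100"]
```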
E: We would also have the reconcile function to look for any changes in the ConfigMaps; that would be against the original config which we would be sending to the OpenTelemetry operator. If there's any change in the custom resource, we would call the reconcile function to match the desired state with the existing state.
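A minimal sketch of such a reconcile hook, in the shape of a controller-runtime reconciler; the reconciler type and the ConfigMap it watches are assumptions, and only the overall flow follows the description above:

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// LoadBalancerReconciler is hypothetical; the real design may hang this logic
// off the operator's existing OpenTelemetryCollector controller instead.
type LoadBalancerReconciler struct {
	client.Client
}

func (r *LoadBalancerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the ConfigMap carrying the scrape configs and label selector.
	var cm corev1.ConfigMap
	if err := r.Get(ctx, req.NamespacedName, &cm); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// Compare the desired state derived from the custom resource with the
	// deployed ConfigMap/Deployment and update them if they differ.
	// (Diff-and-update logic elided.)
	return ctrl.Result{}, nil
}
```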
E: Presently, we are going with the option of having a separate package in the OpenTelemetry collector repo, which would be built into a container image and sent to the pod. The basic logic would be, as Alex mentioned: we would be finding the scrape targets from the static config, and we will be distributing them among the various collector instances.
E: This would be done using the least-connection algorithm, as Alex described earlier, and this would be exposed at a certain HTTP endpoint of this form. Once this is exposed, it would be easy for the Prometheus receiver to just scrape the endpoint using the http_sd_config; that would retrieve the JSON listing of the different scrape targets, and the receiver would use that for dynamic scrape target discovery.
E: So this would be running a server which would intercept the different HTTP requests sent from the Prometheus receiver, and presently it would only handle GET requests.
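A minimal sketch of such a GET-only server in Go; the URL shape and the lookupTargets stub are assumptions:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// lookupTargets is a stub; a real implementation would read the allocator
// state for the collector and job named in the request.
func lookupTargets(r *http.Request) []string {
	return []string{"node-1:9100", "node-2:9100"}
}

// targetsHandler rejects everything but GET and returns the allocated
// targets in the Prometheus HTTP SD JSON format shown earlier.
func targetsHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	resp := []map[string]interface{}{
		{"targets": lookupTargets(r), "labels": map[string]string{"job": "node-exporter"}},
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/jobs/node-exporter/targets", targetsHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```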
E: At Prometheus receiver initialization, we would change all the scrape target configs, in the config which is sent from the OpenTelemetry operator to the collector directly, from static_configs to http_sd_configs. So the static config would be replaced with the URL where the jobs are available for the collector, and we can also specify a refresh interval. This would be changed in the OpenTelemetry collector repo, and once this is defined (since we verified it's working already), it would retrieve the JSON for the different scrape targets and perform the scrape target discovery. Now Alex will say a little about the testing.
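A sketch of that rewrite: `url` and `refresh_interval` are real `http_sd_configs` fields in Prometheus, while the load balancer's URL shape is an assumption:

```yaml
# Before: the user-supplied scrape config in the Prometheus receiver.
scrape_configs:
  - job_name: node-exporter
    static_configs:
      - targets: ["node-1:9100", "node-2:9100"]

# After: the operator swaps the SD section for an http_sd_configs entry
# pointing at the load balancer.
scrape_configs:
  - job_name: node-exporter
    http_sd_configs:
      - url: http://my-collector-loadbalancer:8080/jobs/node-exporter/targets
        refresh_interval: 60s
```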
D: So we're going to essentially create mock data that will serve as instrumented systems, or nodes, and collector instances, and we will test for correctness in evenly distributing/exposing targets. This means that we're essentially just testing the least-connection implementation and making sure that the load balancer is delegating correctly, and we will accomplish this by creating a set of exposed targets per collector instance that would represent an expected distribution.
D: This would then be compared to the actual distribution that the load balancer would generate. So, for example, if there are four collector instances and 2,000 nodes, there would be 500 on each; we're going to be testing for even distributions, or as close to even as possible. We will determine success by asserting that, when the number of jobs gets updated or changed, the redistribution is still aligned with that. In terms of integration testing, we are going to be using a mock HTTP server with http_sd_config to test that the load balancer is correctly exposing jobs, so the server will generate targets which the collector instances would be exposed to. As of now, this can already be tested without the load balancer, and we confirmed this yesterday with the new Prometheus release.
D: So we will use the kuttl tool to have different scenarios involving different combinations of modes and load balancer deployments, which will let us verify that a load balancer deployment is only used under the stateful set mode. We did have a question, but Rahul addressed the approach we were going to go with; however, input on that would also be greatly appreciated.
A: All right, Rahul, Alex, thanks folks. Do you have any comments, or do you see anything which doesn't look right? You know, feel free to ask questions.
G: Overall, I really like the direction. I have a couple of small comments, but I'll probably leave them in the doc, if that's all right.
C: Hey, I have one question: how do we make sure that the jobs don't jump around collectors?
C: So we want least disruption, right? So, for example, let's say I have four jobs, and I keep adding, modifying, or deleting jobs, like every five minutes, right?
C: How do we make sure that there is least disruption between those collector instances when the config keeps constantly changing?
E
So,
actually,
you
would
actually
be
doing
an
initial
check
to
if
the
if
a
job
is
scheduled
at
a
certain
on
a
certain
collector.
So
we
would
not
be
moving
that
only
the
new
ones
would
be
assigned
to
the
different
collectors
based
on
the
lease
connection.
The
original
ones
would
stay
as
it
is
so
that
we
maintain
the
least
disruption,
as
you
mentioned,.
C: Okay, and the least connection is based on the number of targets?
E: Yes.
K: Yes. I don't think that this doc goes quite into that level of detail about the load balancing algorithm, but my expectation would be that when a new set of targets is received by the discovery manager, the load balancer will go through the existing allocations for all of the collector pods and remove any targets that are no longer there, and then we'll make a second pass, adding targets that were newly present in the target set and are not currently scheduled to some collector pod; it will add those to the pods that have the least targets, after having removed the old ones. So any targets that hadn't changed between the prior allocation and the current allocation would stay where they are. It would only be targets that are removed or newly added that would get either removed from a collector or added to a collector.
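A compact Go sketch of that two-pass reallocation, under the same illustrative assumptions as the earlier sketch (targets keyed by address, load counted per collector):

```go
package main

import "fmt"

// allocator holds the current target-to-collector assignment.
type allocator struct {
	assigned map[string]string // target -> collector name
	load     map[string]int    // collector name -> number of targets
}

// reallocate applies the two passes described above: drop targets that
// disappeared from the discovered set, then place only the new targets on
// the least-loaded collectors. Unchanged targets never move.
func (a *allocator) reallocate(discovered []string) {
	seen := make(map[string]bool, len(discovered))
	for _, t := range discovered {
		seen[t] = true
	}
	// Pass 1: remove allocations for targets that are no longer present.
	for t, c := range a.assigned {
		if !seen[t] {
			delete(a.assigned, t)
			a.load[c]--
		}
	}
	// Pass 2: assign newly discovered targets to the least-loaded collector.
	for _, t := range discovered {
		if _, ok := a.assigned[t]; ok {
			continue // unchanged target stays where it is
		}
		least := ""
		for c, n := range a.load {
			if least == "" || n < a.load[least] {
				least = c
			}
		}
		a.assigned[t] = least
		a.load[least]++
	}
}

func main() {
	a := &allocator{
		assigned: map[string]string{"node-1:9100": "collector-0"},
		load:     map[string]int{"collector-0": 1, "collector-1": 0},
	}
	a.reallocate([]string{"node-1:9100", "node-2:9100"}) // node-2 is new
	fmt.Println(a.assigned)
}
```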
K: I think long term, potentially the ideal way to allocate targets would be on a per-metric basis: to know the number of metrics that each collector is scraping and allocate new targets to the collector with the least metrics. But we don't currently have that information, and so the number of targets is, I think, the best allocation strategy we can take for right now.
K: Well, one additional thing that's nice, I think, about this model of having the load balancer separate from the operator component is that an end user can replace this load balancer implementation with their own if they want a different balancing algorithm. The interface is basically: here's the config, here's a label selector to find the pods.
L: And is the load balancer only taking into consideration the current job, or is it taking into consideration the targets of all the jobs it is serving? Because I've seen in the URL that you also pass the job name, so you don't get all the targets in one SD response. Is there a specific reason why you don't query all the jobs in one HTTP query?
K: So I think the thinking there is that the user will provide, in the OpenTelemetryCollector resource, a Prometheus configuration with potentially multiple jobs, and potentially job configurations that are not service discovery related. So we don't want to collapse all of those jobs that they may have presented into a single job.
K: If there are non-service-discovery-related configurations happening there, we simply want to replace the service discovery configuration in each job with: here's where you can go get the current allocation for this collector. The load balancer will be aware of all of the jobs that it's configured with, and it will be able to do that load balancing with awareness of all of the targets across all of the jobs to all of the pods.
K: So one collector may get no targets for a single job, because it's got a whole bunch of targets for a different job. But I think that requesting the targets from the collector itself should happen on a per-job basis, because we want one http_sd_config per job, to make the most minimal disruption possible to the user-provided Prometheus config.
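Concretely, a config with two jobs would then carry one `http_sd_configs` entry per job, each fetching only that job's allocation (URL shape assumed, as before):

```yaml
scrape_configs:
  - job_name: app-metrics
    http_sd_configs:
      - url: http://my-collector-loadbalancer:8080/jobs/app-metrics/targets
  - job_name: node-exporter
    http_sd_configs:
      - url: http://my-collector-loadbalancer:8080/jobs/node-exporter/targets
```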
G
Just
to
be
clear,
all
job
or
all
service
discovery
types,
including
like
a
static
config,
will
still
be
both
balanced
right.
Correct,
yes,
okay
thanks
a
future
enhancement
might
be
to
try
and
have
a
single
collector,
be
responsible
for
as
few
jobs
as
possible,
but
that
would
just
be
a
new
and
low
balancing
algorithm.
A: Yeah, exactly. I mean, I think that, as Anthony was saying, we should be building in the flexibility to be able to add a more sophisticated algorithm later, but at least design for it for now.
K: Yeah, and that doesn't necessarily have to happen by replacing the entire load balancer image, either. It could be that we build several algorithms into a core image and expose that configuration through the OpenTelemetryCollector custom resource, but I think that's a bridge that we can cross when we've got multiple implementations that we think we need.
A: Yeah, exactly. But David, you should definitely add that point, because I think that would be a good future enhancement.
C
So
one
more
question
on
the
initialization
change
part.
So
now
all
collector
would
request.
C
A: If folks are done, then we'd like to move on to the next topic we had, which is: we're building a Helm chart for being able to deploy the operator. We wanted to add this to the otel operator repo. Shebo, do you want to share your screen and walk through it? Yes.
A
And
folks,
which
talk,
can
you
not
make
comments
on
the
first
one
or
both
just
the
first
one
for
you,
okay?
Okay,
sorry.
B: So currently we can install the OpenTelemetry operator into our Kubernetes clusters using the kubectl apply commands, but if we choose to use Helm, we can utilize more, you know, useful functions, like upgrade, rollback, and customization, and we also have more flexibility as users to change the values which will be passed through to the operator.
B: We can change these configurations through the values.yaml file, and we also get better scalability. Well, you know, at this time the operator just manages the OpenTelemetry collector, but in the future maybe more CRDs will be added, so, with Helm...
B: So let's take a look at the workflow diagram; this one is pretty straightforward. As a user, we can first install the otel operator Helm chart, and this will create a namespace, and the operator will be deployed as a deployment resource, so its name would be operator-controller-manager. Also, two services will be created: one is the controller-manager metrics service, and the other one would be the webhook service.
B
So
after
this,
the
users
could
install
whatever
the
hotel
collectors
modes
they
want,
which
will
be
managed
by
the
operator.
So
so,
at
this
time
the
factor
could
be
installed
as
other
the
deployments.
The
therefore
sets
the
side
car
or
the
demon
set,
so
whichever
the
the
users
want
to
deploy
it
and
through
the
characters
they
can
begin
to
monitor
or
always
scripts
the
matrix
traces
logs
and
exposure.
B: So basically, the Chart.yaml stores all the basic information about our operator Helm chart, and values.yaml stores the default values which will be passed into the chart; users may override the values in these files through the command line or by directly changing the file. The crds folder stores all the CRDs we will need to use when the templates are rendering, and at this time the only CRD would be the collector one. What we want to talk about mostly is the templates folder.
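A values.yaml for such a chart might look roughly like this; the keys are illustrative, not the chart's actual schema:

```yaml
nameOverride: ""
replicaCount: 1
image:
  repository: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator
  tag: ""          # assumed to default to the chart's appVersion
resources:
  limits:
    cpu: 100m
    memory: 128Mi
```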
B: Well, all the templates the Helm chart will evaluate when installing the chart are in this folder, so the templates will eventually be sent to the Kubernetes clusters. First of all, let's take a look at the admission webhooks folder.
B
Well,
it
stores
the
configurations
of
two
types
of
the
admission
webhooks.
First
one
is
mutating
the
mission
report
and
the
second
one
is
the
validating
admission
web
and
they
will
work
together
to
make
sure
that
all
the
requests,
all
the
requests
with
the
correctly
formatted
rules
can
get
into
our
operator.
B
They
are
pretty
working
as
the
same
and
and
including
the
cluster
row
yemo
and
cluster
row
bearing
demo,
so
they
basically
configures
what
type
of
actions
will
be
permitted
based
on
the
draw
and
the
the
binding
emo
would
just
you
know,
grant
the
permissions
defined
in
this
row
and
find
it
to
a
user
or
a
group
of
users.
B
Well,
the
development
tmo
is
as
a
manifest
of
configurations
create
the
operator
as
a
deployment
resource,
as
I
have
mentioned
before,
and
the
service
demo
will
is
just
a
manifesto
configurations
to
create
the
two
services
and
the
test.
Folder
will
install
all
the
hem
child
test
configurations,
and
I
will
talk
about
this
later.
Okay.
So
let's
take
a
look
at
the
third
manager
issue.
So
currently
the
open
geometry
operator
depends
on
the
native
quantities
as
a
certain
manager,
which
means
the
users
must
install
it
first
to
install
the
operator.
B
So
when
I
was
trying
to
design
the
operator
hem
charts,
I
was
thinking
that
is
their
way
to
integrate.
You
know
the
search
manager
into
our
charts,
so
users
don't
need
to
be
bothered
to
install
the
search
manager
first.
Well,
so
the
ins,
so
the
initial
solution
comes,
to
my
mind,
is
to
utilize.
You
know
the
the
sub
charts
mechanisms,
hand
overs.
B
Well,
you
know
we
can
deploy
the
third
manager
as
a
subchart
of
the
operator
chart,
but
I
tried
it
and
tried
the
demo
and
this
solution
won't
work,
because
in
inham,
when
we
install
charts
all
the
ports
like
deployments
or
services
of
the
software,
will
take
the
parent's
child's
name.
For
example,
if
we
like
install
our
operator
and
give
it
the
release,
a
name
like
my
operator
and
all
the
names
of
the
subcharts,
like
port
services,
will
provide
the
name
like
my
operator
search
manager,
webhook,
and
this
will
cause
an
error.
B
So
I
did
some
research
in
the
certain
energy
rifle
and
that's
the
issue
which
turns
out
that
the
self-manager
team
doesn't
support
or
recommends
the
chart
developers
like
us
to
deploy
stuff
manager
as
a
subchart.
B
So
therefore,
the
current
solution
about
the
you
know
the
certificate
manager
problem
will
remain
the
same
as
before.
Until
we
find
a
better
approach
about
this,
so
here
I
give
an
example
of
the
value
cmo
file.
It
includes
all
the
configurations
you,
the
user
could
override
to
customize
their
own
operator,
hem,
charts,
okay.
So,
finally,
the
testing
strategy
well,
as
mentioned
above
all
the
test
files-
will
be
under
the
oh,
so
there's
misty,
it
will
be
a
test
yeah.
So
all
the
tiles
files
will
be
under
the
test.
B
Folder
and
a
a
test
in
the
hand.
Trap
is
basically
a
job
definition
that
specifies
a
container
with
a
command
run.
So
if,
if
we
want
to
spice
your
test,
we
we
we
just
use
the
annotation,
like
ham,
dot,
sh
hook
test
to
identify
a
job,
and
we
can
use
this
to
to
test
our
hem
charts.
So
basically
we
have
one.
We
will
have
like
two
tests.
It's
about
to
test
the
connections
of
the
services
the
operator
offers.
B
So
so
here
is
our
example
test
connection
file
and
in
the
future
we
also
plan
to
add
small
tasks
like
to
test
if
the
server
manager
is
to
like
test
the
connection
of
the
search
manager
and
see
if
it
already
exists,
so
that
so
then
we
can
our
operator-
and
here
are
some
related
links
and
docs.
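The test-connection file isn't reproduced in the transcript; the conventional shape (what `helm create` generates, with the target service name assumed here) is a pod annotated with `helm.sh/hook: test`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-test-connection"
  annotations:
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: wget
      image: busybox
      command: ["wget"]
      # Assumed service name and port for the operator's metrics service.
      args: ["{{ .Release.Name }}-controller-manager-metrics-service:8443"]
```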
A: Okay, so did you have any questions to ask? Or we can open it up for questions otherwise.
C: [inaudible]
A: Yeah, that's a good point. Shebo, have you thought about that?
B
I
don't
know
the
crd
support
means,
because
you
know
we
can
just
pre
pre-install
the
character
crd
in
the
crds
folder
and
the
hem
chart
will
in
will
install
the
crd
into
our
kubernetes
clusters
first
before
install
the
hem
chart.
So
I'm
not
I'm
not
I'm
not
sure
what
you
mean
by
this.
B: Yeah, I have mentioned it in the README, and, you know, currently the Helm chart doesn't support upgrading the CRD, and we can only do it manually. So if we want to upgrade the CRD, you must, you know, manually update it.
A
Yeah,
I
think
you
should
call
that
out
clearly
in
the
dark.
A
Design,
I
mean,
that's,
that's
an
that's
kind
of
an
future
enhancement.
Once
it's
available.
J
Or
is
there,
is
there
plans
to
do
that,
like
the
service
monitor
crd's
importance.
K: We took some initial steps towards that by exposing, in the Prometheus project, the config generation routines that are used to generate Prometheus config from service monitors and pod monitors.
K
So
at
a
later
date
we
have
the
ability
to
integrate
into
the
operator
watching
for
service
monitors
and
pod
monitors
and
updating
a
managed
script
config
based
on
those,
but
we
don't
currently
have
plans
to
to
do
that.
Integration
got
it.
I
just
wanted
to
confirm.
Thank
you.
A: And Vishwa, you're welcome to add.
K: Exactly, yeah. If you want to work on that, I would be happy to work with you to talk about the steps that I've taken, you know, moving towards that direction, and we can figure out a way to integrate it with the work that's currently ongoing.
A: Any other questions, folks? David, anything jarring?
A: Hopefully, other than using it... we are hoping that we can present this once Juraci's back; I think he's on PTO this week, but next week we'll walk through it with him and do a more detailed review. Sounds good?
A: All right, cool. Shebo, thanks; I hope you noted some of the questions, good questions. Let's move on. So I think we had a couple of other quick updates. Again, I think, Anthony, you ran the tests, and I just took the test results and listed them here. As you can see, the up metric PR that was open for a while got merged, and there were also two tests that were failing because that metric PR was, you know, in progress; so the invalid test and the up metric test are now passing. Staleness is still open; that PR is still open.
K: And one of the two PRs needed to address staleness has already been merged. The first one creates the store for metrics that we've seen recently and the ability to detect when one goes away, and the second half of that, I think, is working through some test updates because of recent changes in the receiver, with the other PRs that landed. So hopefully we'll be able to get that landed shortly, and that will address the last outstanding test.
H: Okay, because I was just running it in parallel, and I see five failures on 0.29; but if head has them fixed, that's great.
A
Richard
are
you
seeing
a
different
result?
You
said
five
tests
are.
H: I just ran it on the side, so I might be off, but this is what I saw just now, where we have...
H: But again, I might be wrong, and also, if head is more current than 0.29, then maybe this is already fixed in the latest.
K
It
looks
like
okay,
29
included
the
metric
fix.
I
think
it
was
just
a
day
or
two
ago
that
it
was
released,
so
there
shouldn't
be
much
distinction,
but
yeah
yeah,
so
I
just
ran
it
against
a
build
from
oh
29,
and
it
is
passing
all
of
the
same
tests.
This
single
stay
on
this
failure.
A: Done. Josh, we have two PRs there; Emmanuel has been working on it. Anthony, did you have more?
K: That's where we're handling both the up and staleness metrics, and so when it scrapes the target and doesn't see a metric in the next scrape, it includes a stale marker for that metric. But that will only happen for metrics that flow from the Prometheus receiver to the Prometheus remote write exporter.
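For reference, the stale marker is a specific NaN bit pattern; Prometheus's value package (pkg/value at the time, later model/value) can create and detect it:

```go
package main

import (
	"fmt"
	"math"

	"github.com/prometheus/prometheus/pkg/value"
)

func main() {
	// StaleNaN is the bit pattern of the special NaN Prometheus emits as a
	// stale marker; IsStaleNaN detects it on the consuming side.
	stale := math.Float64frombits(value.StaleNaN)
	fmt.Println(value.IsStaleNaN(stale)) // true: the series is treated as gone
	fmt.Println(value.IsStaleNaN(1.0))   // false: an ordinary sample
}
```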
F: I'll go find that and link it for you. That's okay, I just... I mean, the problem was always that there was no good way to do that with a histogram in OTLP, because in the Prometheus representation you can just put a NaN value into every one of the series, but in OTLP we were questioning whether to put it in a sum, or... this is problematic.
A: Josh, do you have a link by any chance?
F: It's in the notes, okay, down the page a little: representing staleness in OTLP, proto PR 316.
A
Because
we
had
several
discussions,
brian
actually
provided
good
feedback,
so
you
know
on
how
to
handle
nance
and
where,
when
that
would
be
handled
so
there.
F
Was
a
parallel
discussion
in
the
data
model
group
and
we
we
talked
about
this
a
lot
and-
and
I
had
proposed
as
a
strong
man,
let's
just
use
nand
values,
there's
not
much
wrong
with
it.
In
my
opinion,
except
that
you
come
to
this
histogram
point,
you're
like
where
am
I
going
to
put
an
n
value,
it's
going
to
go
in
the
sum,
but
there's
this
parallel
discussion
about
when
some
is
not
meaningful
for
histograms
and
so
on.
F
So
anyway,
proposal
is
a
single
bit
and
it's
open.
A
Great,
I
think
there
were
two
other
updates
that
I
had
at
least
just
wanted
to
ask
everybody
if
there
are
any
prs
that
are
outstanding,
grace
thanks
again
for
calling
that
out
I'll
sync
up
with
tigran
again
and
bogdan
in
order
to
get
that
merged
and
and
then
hopefully,
that'll
get
done,
and
then
jim
josh.
Sorry,
we
have
your
otlp
request
pr.
So
we'll
take
a
look
at
that
and
comments.
A
Please
feel
free
to
everyone
to
comment.
Take
a
look.
Comments
are
good
on
the
pr
the
other.
You
know
aspect
that
I
just
wanted
to
call
out
for
just
your
knowledge.
You
know,
since
we
are
all
working
on
prometheus
testing
or
you
know
ensuring
that
the
pipeline
is
stable
and
fully
passes
the
compliance
tests.
You
know
that
the
prometheus
community
has
defined.
A: This is right after that, but I just wanted to call your attention to the fact that there are a couple of strategies that we are taking here. One: we're actually pulling out any components, whether those are exporters, processors, or, you know, any receivers not related to trace stability, into contrib right now, and of course our discussion has been to keep the Prometheus components in core, given that, you know, this is super important and something that we are working on actively right now, so as not to disrupt, you know, the builds and the release targets, if you will, but also, otherwise, to actually have a repo for all the components so that we can maintain velocity. So that was just something I wanted to call out, so that folks are aware that that's a discussion that's ongoing.

A: Another discussion that's ongoing, and we'll discuss more in the collector SIG, is of course the semantic conventions that are being discussed for the collector, and we would love to have... you know, Anthony has made a proposal for an initial doc. Would love to have your comments there also, so please take a look, for those of you who are interested in semantic conventions. And that said, I think those were the only two parts; we're almost close to, you know, completing phase one. Once the compliance tests pass and the Prometheus phase one changes are at least done, then we'll continue on to phase two and continue work there.
A: I think it's this one. I think it's this.
K: Yeah, so 3476 is... and that's based on what we did in the Go client library. There's been some good feedback already on this, so I'm taking a second pass at it to address the feedback. We'll continue to discuss that in the next hour, and I hope this afternoon to have an update to it addressing the feedback.
A
Yeah
and
and
again
just
to
call
out
you
know,
there
are
two
issues
that
we
have
agreed
upon
with
bogdan
in
terms
of
what
need
to
be
completed.
Obviously
the
semantic
conventions
definition
and
the
refactoring
of
collector
core.
In
order
to
make
it
you
know
again
leaner
and
stable
and
then
we'll
continue
to
add,
you
know,
pull
back
contrib
components
back
into
core
as
needed
as
we
stabilize
them
for
metrics.
A: So that's just the update, and I just wanted to make sure everybody on this group is aware of that. And again, if any of you are interested in the semantic conventions, please help, because Anthony has made the initial proposal based on the work that he did with the Go library, and the idea was to reuse some of that. But again, the collector is a different animal, so we'd like to kind of figure out how to not only reuse but also adapt any of the other conventions for the collector, including versioning.