From YouTube: 2021-07-14 meeting
B
Hello, hopefully you can hear me now. Alolita had an unavoidable conflict this morning, so she won't be around. And I appear to have locked myself out of my work computer, so I'm joining from my personal computer today and may not have access to everything, like this first agenda item. It says we were passing, but now we've got a test that's failing again, so we'll have to look into that.
B
Okay, I think I see the issue with the compliance tests. We have set up a nightly action to run the compliance tests against the OTel collector.
B
Okay, beyond that, are there any outstanding issues or PRs that anyone is working on, or is aware of, that should be discussed here?
D
Hey Anthony. Yes, howdy. So I do have a PR up: it's 3597. Months ago I sent up a fully fledged PR to add the write-ahead log to the Prometheus remote write exporter, but Bogdan and team said it was too complex for them to review, so they asked me to send pieces of it. The prior PR got approved already; it just came with the request, "Hey, could you please send this in smaller pieces?" So I've split up the code, and this first PR just adds a basic WAL that will then be wired up later on. So please help me and take a look. That's it: 3597.
E
Yes, I don't think anyone wants to see it go any other way. The proposals from OpenHistogram and HDR histogram are not suitable for Prometheus, and I don't particularly care to move in any other direction. I hope that's good enough.
E
I like Björn's proposal. I'm trying to get an understanding of how we generate that from some of the other vendors who have proposed to use that protocol. We saw a demonstration of DynaHist by Dynatrace yesterday and are still answering some questions there, but I would say there's definitely not a disagreement between OpenTelemetry and Prometheus there.
B
Well, I think that's great news for this group, at least to the extent that OpenTelemetry and Prometheus are in alignment. We're all very happy here.
E
And I need to make coffee, so I'll do that right now.
F
Yeah, so I was talking to David Ashpole offline, and I just saw his comments. Since we have time, I just want to take this opportunity to learn a little bit more. My issue is that I am using the Prometheus receiver. We discussed this earlier, but I am kind of stuck, so I was just looking for some workaround, some other way, with the Prometheus receiver.
F
I am getting the metrics from cAdvisor, the kubelet's cAdvisor endpoint, and from this path we actually don't have any practical way of getting the memory request metric, which we need for calculating our memory reserved metric. So this is kind of a blocker for me, and I was just wondering what a workaround could be using OpenTelemetry components. Do you have any kind of component which I can use with my Prometheus receiver?
F
Maybe any kind of processor or anything which can give me this metric? And another thing, the important thing I guess, is that I'm not deploying the collector as a daemonset. So technically I am running one collector, which is collecting all the metrics from all the namespaces and from all the pods. We will have multiple instances or multiple nodes, but the collector will run only on one node, maybe for now, and it will collect all the metrics.
F
So
following
these
approach,
is
there
any
feasible
way
to
get
this
matrix?
And
I
david
replied
to
my
questions,
but
maybe
I
can
share
that
also
like,
but
from
my
understanding
is
like
davis
has.
One
is
like
use
cube
in
the
new
kubernetes
version.
We
have
something
in
the
scheduler
which
gives
us
this
matrix,
but
I
am
not
super
sure
if
any
david
or
you
can
add
something
more
to
help
me
understand
better.
That
would
be
good
and
also
others
are
also
welcome
to
help
me.
C
The main way that people in the Prometheus community get this metric is by running kube-state-metrics, which has Prometheus metrics. So if you're running one collector per cluster, I think you could feasibly run kube-state-metrics as well, enable only the pod state metrics, and get state metrics for pods.
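[Editor's note: a sketch of C's suggestion, scraping kube-state-metrics and keeping only the pod resource-request series. `kube_pod_container_resource_requests` is the real kube-state-metrics family name, but the endpoint URL in the comment and the helper function are illustrative assumptions.]

```go
// Hypothetical sketch: filter a kube-state-metrics scrape down to the
// pod resource-request series needed for the reserved-memory calculation.
package main

import (
	"fmt"
	"strings"
)

// filterSeries keeps Prometheus exposition-format lines whose metric name
// starts with the given prefix, dropping HELP/TYPE comments and other
// metric families.
func filterSeries(exposition, prefix string) []string {
	var out []string
	for _, line := range strings.Split(exposition, "\n") {
		if strings.HasPrefix(line, prefix) {
			out = append(out, line)
		}
	}
	return out
}

func main() {
	// In a live cluster this body would come from an HTTP GET of
	// something like http://kube-state-metrics:8080/metrics (assumed URL).
	body := `# HELP kube_pod_container_resource_requests Requested resources.
kube_pod_container_resource_requests{namespace="default",pod="web-1",resource="memory"} 5.36870912e+08
kube_node_info{node="n1"} 1`
	for _, s := range filterSeries(body, "kube_pod_container_resource_requests{") {
		fmt.Println(s) // prints the single memory-request series
	}
}
```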
C
Beyond that, in newer clusters there is something that Clayton has been working on to add, because this is a problem for others as well: there are some fairly basic, almost required metrics that you need in order to make decisions about capacity planning or for other reasons, and they aren't part of core Kubernetes.
C
So there's a proposal that's underway. I think the metrics endpoint exists as of 1.21, if I remember correctly, but I may be misremembering, where the scheduler exposes a metrics endpoint that you can use to get container requests and limits, which is maybe more specifically geared towards this. But of course that only works where you can actually reach the scheduler. In GKE, for example, I know that won't work, because you can't collect metrics from the scheduler in GKE; it's running somewhere else. So that may or may not work. I don't know how Amazon has their stuff set up.
F
So
those
two
summaries
like
exposure
has
a
different
endpoint
where
it's
limiting
its
matrix.
So
we
need
to
write
up
something
which
can
get
this
matrix
from
that
scheduler
scan
point
and
also
it
only
works
for
gke.
But
for
kubernetes
we
need
to
look
fast
like
whether
it
it's
accessible
in
the
case
or
not.
C
No, so I mean, in theory you could create a kube-state receiver or something that watches objects and emits metrics about them, but you might as well just run the actual kube-state-metrics and scrape the Prometheus endpoint, since I assume you need this relatively quickly and don't want to re-implement everything; that's hard.
F
No, and also another thing; here is a good question then. I am receiving other metrics using the Prometheus receiver, and if I run this as another kind of sidecar, or another application in my cluster, it will give me a different set of metrics, and then there is the matter of combining these metrics. How will this work? Do I need to write some kind of custom processor to tell which metric is coming from which pod?
C
I mean, for kube-state-metrics, it attaches the pod name and the namespace name.
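[Editor's note: a minimal sketch of the join being discussed, keyed on the shared (namespace, pod) labels C mentions: per-pod memory usage from the cAdvisor scrape combined with memory requests from kube-state-metrics to derive a reserved ratio. All names and numbers here are illustrative.]

```go
// Hypothetical sketch: join two metric sets on (namespace, pod).
package main

import "fmt"

// podKey is the join key both sources share.
type podKey struct{ Namespace, Pod string }

// reservedRatio divides usage by request for every pod present in both
// maps; pods missing from either side (for example, arriving in a later
// batch) are simply skipped until both values are available.
func reservedRatio(usage, request map[podKey]float64) map[podKey]float64 {
	out := map[podKey]float64{}
	for k, u := range usage {
		if r, ok := request[k]; ok && r > 0 {
			out[k] = u / r
		}
	}
	return out
}

func main() {
	usage := map[podKey]float64{{"default", "web-1"}: 256e6}   // e.g. from cAdvisor
	request := map[podKey]float64{{"default", "web-1"}: 512e6} // e.g. from kube-state-metrics
	fmt.Println(reservedRatio(usage, request)[podKey{"default", "web-1"}]) // prints 0.5
}
```

Skipping pods until both sides are present is one way to tolerate the different-batch problem raised next: the join is re-attempted on each flush rather than assuming a single payload.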
F
Yeah, then the real problem is that it is not guaranteed that I will receive all the metrics in the same payload, or the same batch. I need the metrics in the same batch, right? Otherwise I cannot process them: if I use kube-state-metrics and the Prometheus receiver in parallel, and the metrics come in different payloads or different batches, then I cannot process them. I need to combine them together, right?
F
Payload, yes. So I need to calculate the memory reserved metric, which depends on this memory request metric coming from kube-state-metrics, and which also uses the memory usage metric, or the node limit maybe, which is coming from the Prometheus receiver, I believe. So if they don't come in the same payload or the same batch, I cannot calculate the new metric, right?
D
Hey, hi. So many years ago Etsy, who used PHP mostly, needed this; this was when we were working on OpenCensus.
D
They needed to be able to collect metrics from PHP processes, but the problem is these metrics, and also traces, could not be aggregated into one; they'd be emitted in separate threads. So to solve this kind of problem, what we did was build a daemon in Go that acts as an intermediary, and it receives from every single one of these sources.
F
Yeah, so are you saying maybe we can write a custom daemon for doing this, or send them to the backend and let the backend handle this? Yeah, that's right. For our case, I understand the idea, but our backend expects these metrics, otherwise our dashboard doesn't get populated; that's one of the blockers. And for the daemonset concept, I don't know: in the Fargate case, the concept of a daemonset actually doesn't exist.
D
I mean, what I'm saying here is: write a single daemon that all the sources can forward metrics to. After it gets them, periodically, maybe every one minute or so, after it has combined all of them, then it forwards them to whatever your backend is, or to the collector or something.
B
You and Manuel could discuss this concept a bit further, and when you've got a design doc, we can review it either here or in the collector SIG, whichever it is.
F
Okay, yeah. I think I got some basic idea. Thanks for your time. I will talk offline with David and also Anthony, and maybe I will also contact you.
B
Okay, I think that covers the extent of what we've got on our agenda. I'll make one last call for any topics to discuss; otherwise I'll give everyone some time back this morning.
B
Nope? Okay, I'm gonna go get some coffee. Do what you wish with this 40 minutes, enjoy, and I'll see you guys later.