From YouTube: 2021-10-20 meeting
C
Yep, please sign in on the agenda if you haven't already, and the first item is Dmitri's.
D
Hi everyone. Let me share my screen; I have a small doc to show you.
D
Can you see my screen? Yes? So this document proposes a solution to the following problem: we have scrapers, which collect some specified set of metrics.
D
But if we want to avoid scraping some metrics, or manipulate them in any way, such as aggregating based on some attributes, we have to set up additional processors, and that requires redundant data to be collected and passed into the pipeline. So we want a way to skip some metrics right at the source, and also to do other manipulations later on, if needed. To do that, we considered several options from the configuration-interface perspective, and we decided we should go with this one.
D
It's going to be a unified way to filter and manipulate metrics in the scrapers, and the interface will look like this: a scraper gets an additional field called metrics, whose keys are metric names. The first option we're starting with is enabled: true/false, which is supposed to override whatever is set in the metadata definition for that metric, which means this enabled field will go into the metadata as well.
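A rough sketch of the proposed configuration interface as described here; the receiver and metric names are illustrative, not part of the proposal itself:

```yaml
receivers:
  kubeletstats:
    collection_interval: 30s
    metrics:                    # proposed new per-scraper field
      k8s.pod.network.io:       # keys are metric names
        enabled: false          # overrides the default from the metadata file
```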
D
This also gives us the opportunity to introduce optional metrics: for example, metrics that are disabled by default, with enabled: false in the metadata. There are no such metrics at the moment, but later, in case we want to add them, a user could turn them on as an option. So here's an example.
D
Let's consider the kubeletstats receiver, which has a metadata file; one of the sections of that file defines a particular metric, network IO.
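A minimal sketch of what such a metadata section might look like, assuming the mdatagen-style metadata.yaml format; the field names and wording are approximate, not quoted from the actual file:

```yaml
metrics:
  k8s.pod.network.io:
    description: Network IO of the pod.   # approximate description
    unit: By
    sum:
      value_type: int
      aggregation: cumulative
      monotonic: true
```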
D
Okay, so if we agree on this approach, we will add this additional field, enabled: true, to the metadata for each metric, and in the configuration interface it will be possible to disable that particular metric with this kind of interface. We considered some other options from a configuration standpoint; this one was actually suggested by Bogdan, and we agreed that it's probably the best approach.
D
If we agree on this approach, we can then add other manipulation options under this configuration section, such as changing temporality, or applying aggregation by dropping some attributes, and so on.
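A hypothetical sketch of how further options could hang off the same metrics section; none of these keys were settled in the meeting:

```yaml
receivers:
  kubeletstats:
    metrics:
      k8s.pod.network.io:
        enabled: true
        temporality: delta     # hypothetical: change temporality at the source
        drop_attributes:       # hypothetical: aggregate by dropping a dimension
          - interface
```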
D
That's it; that's the suggested approach for this problem. Please let me know what you think and share your thoughts.
E
Thank you, Dmitri. And I'll grant permissions on the doc, otherwise you will get... yeah, sure.
C
Good question. From the scraper authors' perspective, would they be required, or just expected, to respect this configuration? The reason I ask is that sometimes it's very difficult to do that at a very granular level. The scraper itself may just make one request that essentially returns all the metrics right away, but it may then choose not to build them into the results.

You could really have a spectrum here, from "I didn't even request the data", to "I requested the data but didn't build it into the results", to "I built everything into the results and just applied a filter". And if you go too far toward one end of that spectrum, then we're basically just back to having a filtering processor, right? So what's the expectation there?
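For contrast, the processor-based approach C mentions would sit downstream in the pipeline; a minimal sketch, assuming the contrib filter processor's exclude-by-name configuration:

```yaml
processors:
  filter:
    metrics:
      exclude:
        match_type: strict
        metric_names:
          - k8s.pod.network.io
```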
E
We're going to expose an easy enabled interface, and if you want to optimize for that in the scraper, you can, but the disabled metrics will definitely not be pushed to the pipeline, because the generated code will take care of that.

C
Got it. Okay.
A
All right, exciting. I'm just thinking about how this works in SQL, where people express an intention and then there's a query planner that does things, right? So a different way to think about this is: I have an intention of getting certain metrics or certain aggregates; some of those are implemented through post-processing, and some of those can be pushed down to receivers. In the current design, we are saying that the end user has to explicitly configure what happens at the receiver level and what happens at the processor level. Would it make sense to have a different design where, instead of configuring the processor, we're configuring the output, and that gets implemented through processing or filtering at the receiver level, as appropriate?
E
I'm not sure I followed, but I think we are indeed designing the result that we expect: the result is to have that metric in the pipeline or not, to have that metric with this temporality or another temporality, or to have that label associated with the metric or not. That's what I envisioned there. As you saw, right now we have the metric name and enable or disable.
E
That's why it's a bit different from any other processor. As I said, first, we know the full definition of the metric from the YAML file, and second, we are the source. What does that mean? It means we are not receiving this data from somebody else, so we know the structure is not going to change without modifying the code.
E
We know that all the points, and all the time series or whatever they're called, are coming through the same mechanism, through the same scraper. So a bunch of things that we cannot do in processors, because of load balancing or other mechanisms, we can do here, because we are the source.
E
Yes, but again, that's only where it's possible. For example, in the case of Redis, you capture all the metrics in one call, so you'll get all of them anyway. We just won't create a metric and then filter it out; we'll simply create only the ones that are needed.
E
For me, the biggest motivation is the fact that this is the only point where you are guaranteed to see all the points for a metric, so you can do transformations there that you cannot do later. For example, take the processor we have right now: the cumulative-to-delta processor.
E
It requires that all the points are in the same batch of data, and so on; there are lots of limitations that a processor like that has. The difference here is that you are close to the source, so, as you pointed out, we can do much better performance-wise: we have a limited number of metrics, and we have very limited state that we need to keep, and so on and so forth. So I think, yeah.
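For context, a minimal sketch of the cumulative-to-delta processor configuration being referenced; the metric name is illustrative and the schema may differ across versions:

```yaml
processors:
  cumulativetodelta:
    metrics:                 # only the listed metrics are converted to delta
      - system.network.io
```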
E
There are multiple reasons. Again, I'm not saying I'm right; I can take feedback on all of this, and you should be writing comments against this idea and proposing alternatives if there are any. I'm just explaining why we believe this is okay.
G
Off the top of my head, it sounds like we need a linter for collector config that would prevent certain processors from being put after batch processors, or something like that. But...
C
I'm not aware whether it is or not, but I'm quite sure the intention is that it should be, so I'll double-check on that and make sure that it is.
E
They're going to get uploaded to YouTube with a delay of one or two days; I don't know the exact delay.
I
Yeah, so we have a recent PR that's ready to merge; it's been reviewed by several members and needs a final approval to get merged, and yeah, it just addresses the basics here.
I
Oh, oh yeah, I see, a new comment showed up. I haven't revisited it for several days, so yeah, I will merge it once the review is complete.
I
Makes sense, and yeah, hopefully I can get it into an upcoming version afterwards, so that we can observe this.
F
Yeah, so our team had submitted a pull request, and I know there was some back and forth in the comments. This is the pull request for adding the user-agent header, so you can tell that the data was sent from a collector. It looked like there were some different ideas about where some of this should live, and it doesn't seem like it was really finalized whether it's good as-is or whether there's something we want to change there.
F
Do you mean... so, theoretically, if I have a collector, I can add something there that will then be sent along with my data, saying I'm using collector version v0.36? The main motivation, what first made us look into doing this, was that different people were having issues, and it was hard to determine whether their data was going through a collector, because the user agent for the collector might just be grpc-go plus a Go version or something else.
F
So this would tell us that the data was coming from a collector specifically, and which version of the collector, because there had also been times when people were unclear about which version of the collector they were using. Ideally that doesn't really happen, but one client, for example, had "collector latest" as opposed to collector version 27, and they didn't realize it; this would have helped troubleshoot some issues they were having faster.
F
This would always be sent. Similarly to how an SDK might report telemetry.sdk.language as java and a version like 1.2.3, this would say it's the collector, and that it's collector version 0.36 or 0.37, so it's tracked in a similar way to how telemetry SDK versions might be tracked.
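A sketch of where this would surface; the endpoint is a placeholder, and the exact User-Agent string format was still under discussion in the PR:

```yaml
exporters:
  otlp:
    endpoint: backend.example.com:4317   # placeholder endpoint
    # Proposal: requests would carry a User-Agent built from the collector's
    # build info, e.g. (format not final):
    #   User-Agent: opentelemetry-collector/0.36.0
    # rather than only the transport default, e.g. grpc-go/<version>.
```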
F
I think it's not in both gRPC and HTTP; if I understand correctly, we were able to add it to gRPC, but we didn't have access to update HTTP, and so one of the last comments on there was: how do we get access to be able to add that?
E
Okay, so let me understand the goal before jumping into review. The goal is: on the OTLP protocols, which are HTTP and gRPC, we want to have a user-agent header, constructed from the build info, that we send together with the data, for the backend to consume. Quick question: what happens if you have a deployment where you have an agent, and then you have a set of collectors that the data is passing through?
E
No, no, you have a chain of them, so you have one...

F
Oh.
B
I'd note that the user agent is only relevant between one specific client-server interaction, and if one collector thinks it's relevant for the whole chain of communication, then it should add it to another field, maybe the resource attributes on the data point.
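A sketch of B's alternative for chain-wide visibility, stamping the data itself via the contrib resource processor; the attribute key here is hypothetical, not an established semantic convention:

```yaml
processors:
  resource:
    attributes:
      - key: collector.version   # hypothetical attribute key
        value: "0.36.0"
        action: insert           # add only if not already present
```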
E
Josh, we also have something called distributed tracing for when you have multiple hops, which we could use to see how a request goes, just saying. So anyway, I think the solution will then be limited to one client and one server; we're not going to do anything if we are in a chain, correct? Let's clarify that and set up the requirement that we're not going to care about chains.
E
Yes, always the last one: we send our own information as the user agent. Okay, I'm fine with that. If that's the requirement, can we have a summary, say 10 lines, of the problems we are and aren't trying to solve with this? Somewhat similar to what Dmitri did, but not as long, because we understand this one much better; at least a summary of what problems we solve versus don't solve with this solution, in the PR.
E
I think it's important for this to be in an .md file, because people will start asking what to do with it, so it should explain: hey, the OTLP exporter sends a user agent for this purpose; these are the goals and these are the non-goals of this thing. And maybe consider, I don't know, whether we want an option to disable this user-agent header in the exporter or not.
E
Am I wrong? I think that was the conclusion, because we wanted to not use the default components, which are in a test package, to build our command, since we don't want test helpers linked into the final main binary. So we said let's wait until the command is removed, and there is an ongoing PR for that.
E
I think it's pretty close, but the person did not resolve all of Juraci's comments there.
E
If people are still around, I would like you to think about how we can get a couple more approvers on contrib. Whoever is willing to do that, PM me and tell me, maybe with a summary of your contributions to the collector, so we can check against the requirements, because I think we are shorthanded on contrib: that repo grew too much to have only five or six active approvers.
E
...as well, but my biggest pain right now is contrib, so I'm asking for that, because it solves a problem for me.