From YouTube: 2021-04-07 meeting
A
Are you in the call?

B
Hey, hey, good morning, good afternoon. Yeah. So, I have a quick question. We've been discussing this item in the past, and I'm wondering if you know whether someone is working on it. I recall that David mentioned some implementation being used internally. But I'm wondering if anyone has anything on that. I know that Ben had this idea of reusing the Stanza code for buffering, and I'm asking because this is becoming a more and more urgent issue for my team, so maybe someone from our end could help if nobody is working on it yet.
C
No, there's been some discussion of using a WAL in the Prometheus remote write exporter to reuse some of that code, but I assume that would be specific to that exporter and not more generally applicable, like what it sounds like you're asking about here.
A
So I guess no one is actively working on this, right, so you're free to grab it and work on it. Dan was adding the generic notion of storage to the collector, and I think it can also be used for this buffering, but it's not done yet. I don't know where we are; I think we merged one PR. Dan, can you maybe tell us?
D
Yeah, sure. It's pretty far along. I still need to write tests, and I basically just need to do validation at this point, but the framework is there. I've got the first implementation of it mostly ready, and then I've got the changes to the log collection library mostly ready, which would be unrelated to this buffering suggestion. But that's kind of where I'm at overall.
A
Okay,
so
it's
jamaica,
if
you
have
not
seen
it,
the
proposal
is
to
provide
very
simple
interface
for
storage.
It's
like
key
value
storage.
Nothing
else!
I
think
that's
still
good
enough
for
for
this
buffer
as
well,
but
maybe
you
want
to
validate
that
right
independently
to
see
how
exactly
you
would
want
the
implementation
to
work
and
whether
it's
sufficient
or
no,
I
I'm
guessing.
It-
should
go
somewhere
into
the
cube
root
right
implementation
as
something
that
is
maybe
optional,
maybe
configurable
or
something
like
that.
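The "very simple interface for storage, just key-value, nothing else" described above can be sketched roughly as follows. This is a conceptual sketch in Python rather than the collector's actual Go API; the names (`Client`, `get`/`set`/`delete`, the in-memory implementation) are illustrative assumptions, not the real proposal's signatures.

```python
from abc import ABC, abstractmethod
from typing import Optional

class Client(ABC):
    """Hypothetical key-value storage client, modeled on the proposal
    discussed in the meeting: plain get/set/delete, nothing else."""

    @abstractmethod
    def get(self, key: str) -> Optional[bytes]: ...

    @abstractmethod
    def set(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def delete(self, key: str) -> None: ...

class InMemoryClient(Client):
    """Trivial in-memory backend; a file- or DB-backed one would
    implement the same three methods."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

    def delete(self, key):
        self._data.pop(key, None)

# A buffer could persist pending batches under sequential keys:
store = InMemoryClient()
store.set("buffer/0", b"batch-payload")
print(store.get("buffer/0"))  # b'batch-payload'
store.delete("buffer/0")
print(store.get("buffer/0"))  # None
```

The point of keeping the interface this small is that any backend (in-memory, on-disk, embedded DB) can satisfy it, which is why the speakers consider it sufficient for the buffering use case as well.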
A
Great. Maybe you want to post some thoughts about how you want to implement that before you go ahead and start coding at full speed, so that we agree first that this is going to support, I'm guessing, all the data types, right, not just logs, for example. Yeah, precisely. Anyway, post your thoughts on the issue, maybe some very brief design, and it's great if you can work on this.
E
Yeah, sorry for jumping in. I saw the agenda was empty, so I just figured I could throw out random questions I have been curious about for a long time. One of the things is that I'm playing around with a small project to decorate metrics, and one idea was to just decorate the main metrics from, like, OTLP via a processor.
E
But
I
was
wondering
if
there
is
any
guideline
when
it
comes
to
actually
having
different
metrics
having
the
merged.
You
know
like
different
receivers
and
you
just
merge
them.
I
saw
that
there
is
a
reference
to
that
to
the
next
receiver,
for
example
in
the
otlp
receiver,
and
I
was
wondering
if
there
is.
E
Like,
for
example,
let's
say
that
you
are
pushing
otlp
metrics
to
the
collector
and
at
the
same
time
you
have
an
additional
component
like
another
receiver,
that
is
scraping
service
discovery
for
prometheus,
so
you're
having
like
additional
labels.
Yes,
and
then
you
want
the
primary
metrics
to
merge
with
the
prometheus
ones,
like
only
the
service
discovery
ones
which
are
like
not
main
metrics
for,
but
you
know
like
host
metrics.
E
They could actually be transformed, yes; sorry for not explaining that. So basically, for example, let's say that from Prometheus you have the up metric, right, and then basically you are attaching that as a label to the existing metrics, so you're actually mutating the main metrics.
A
Okay,
so
technically
I
guess
that
would
be
possible
if
you
attach
both
receivers
to
one
pipeline
and
then
have
a
specialized
processor
which
looks
for
those
metrics
and
does
that
that
particular
transformation
that
we
described
right.
It
needs
to
look
look
at
the
flow
of
the
of
the
metrics
right
and
then
pick
the
ones
that
it
is
interested
in
and
do
this
I
don't
know
how
what
exactly
it's
doing,
but
that
specific
merging
that
we're
referring
to.
I
don't
think
we
have
any
generic
notion
for
doing
that.
E
Yeah, totally. I was curious because I have been playing with this and I was wondering whether there was prior art, but if not, I will keep on playing and I will report back. All right, thank you.
F
I think the closest to that could be the group-by-attributes processor, so you can group those signals by some common characteristic. But that assumes you know that you have a common tag, or, in the case of traces, something like the trace ID, which would be common for everything. Other than that, I don't see how you can do that with the current code.
E
Right,
yeah,
okay,
thanks
so
much
yeah
we'll
be
taking
a
look
at
that
processor.
I
I
didn't
know
about
it,
so
that
may
be.
F
Yeah
I
mean,
if
you
know
a
way
to
group
them
it
should
be.
I
think
trivial
is
not
the
right
word,
but
has
something
close
to
that
to
add
metrics
to
that
processor.
So
there
is
a
recent
addition
to
vlogs,
so
you
can
use
that.
As
you
know,
as
inspiration,
you
shouldn't
be
that
much
work
provided
that
you
know
how
to
group
those
metrics
right.
So
I
think
that's
the
biggest
the
biggest
question
here.
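The grouping-then-merging idea discussed above can be modeled in a few lines. This is a simplified conceptual sketch in Python, not the actual processor's implementation: the data-point shape (`attrs`/`value` dicts) and the function name are assumptions made for illustration, and the "common tag" here plays the role the speakers assign to a shared label or trace ID.

```python
from collections import defaultdict

def merge_by_attribute(points, key):
    """Group metric data points by the value of a shared attribute and
    merge their attribute sets; points lacking the attribute pass
    through unchanged."""
    groups = defaultdict(list)
    for p in points:
        groups[p["attrs"].get(key)].append(p)

    merged = []
    for group_key, members in groups.items():
        if group_key is None:
            # No common tag: nothing to merge on, pass through as-is.
            merged.extend(members)
            continue
        attrs = {}
        for m in members:
            attrs.update(m["attrs"])  # later members win on conflicts
        merged.append({
            "attrs": attrs,
            "values": [m["value"] for m in members],
        })
    return merged

# An OTLP point and a Prometheus service-discovery point for one host:
otlp = {"attrs": {"host": "a", "source": "otlp"}, "value": 1.0}
prom = {"attrs": {"host": "a", "region": "eu"}, "value": 1.0}
print(merge_by_attribute([otlp, prom], "host"))
```

As noted in the discussion, the hard part is not the merge itself but knowing which attribute is reliably common to both streams.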
G
Does anybody have anything to ask about the previous topic? Sorry, I joined a little late. About the buffering, would this...?
A
I
I
would
expect
yes,
you
have
some
certain
memory
buffer
if
you
exceed
that
or
not.
Even
if
you
accept
that,
maybe
always
you
also
kind
of
spill
out
to
to
the
disc
right
and
but
then
you
also
consume
from
the
disc
in
your
memory
of
shrinks,
so
that
you
have
enough
for
the
patch
to
be
sent
and
also
for
for
the
persistent
purposes.
A
So
you
both
limit
the
size
of
the
memory
you
consume.
You
also
make
it
more
reliable
for
the
crashes
we
start
or
whatever,
provided
that
the
persistent
data
is
available
when
you
start,
but
but
that's
that's
for
that
reason
that
I'm
suggesting
that
it
should
be
in
the
retry
component,
which
today
is
the
in-memory
buffer
right.
B
But
if
this
is
let's
say
500,
then
it
needs
to
be
buffered
and
later
and
doing
this
in
other
place
would
require
like
having
some
channel
to
communicate
this
outcome
of
the
of
the
request
to
the
endpoint
in
exporter,
and
this
would
make
things
more
complex.
Of
course
it's
still
doable,
but
simple,
probably
is
better,
at
least
for
the
first
iteration.
So
I'm
going
to
look
into
that
and
propose
some
design
and
continue
the
discussion
there.
Yeah.
A
Likely
semantically,
it
should
be
something
like
a
queue
with
three
pointers:
the
front
where
you
write
the
new
data
to
the
middle
from
which
you
pick
data
to
send
to
and
the
tale
of
it,
which
is
when
you
get
an
acknowledgement
of
recipient,
delete
the
data
right.
Something
like
that,
then
we
can
wait.
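The three-pointer queue described above can be sketched as follows. This is an illustrative Python model under the stated semantics (write at the front, pick from the middle, delete at the tail only after acknowledgement); the class and method names are hypothetical, not from any collector code.

```python
class ThreePointerQueue:
    """Queue with three pointers: front (next write position), middle
    (next item to send), tail (oldest unacknowledged item). Items
    survive between tail and middle so they can be re-sent or replayed
    after a crash."""

    def __init__(self):
        self._items = {}   # index -> payload, kept until acknowledged
        self.front = 0     # where new data is written
        self.middle = 0    # next item to pick for sending
        self.tail = 0      # oldest item awaiting acknowledgement

    def write(self, payload):
        self._items[self.front] = payload
        self.front += 1

    def pick(self):
        """Pick the next item to send; it stays stored until acked."""
        if self.middle >= self.front:
            return None
        item = self._items[self.middle]
        self.middle += 1
        return item

    def ack(self):
        """The recipient acknowledged the oldest in-flight item:
        advance the tail and delete the data."""
        if self.tail < self.middle:
            del self._items[self.tail]
            self.tail += 1

q = ThreePointerQueue()
q.write("batch-1")
q.write("batch-2")
print(q.pick())  # batch-1  (sent, but still persisted)
q.ack()          # receipt confirmed: batch-1 is now deleted
```

The gap between the middle and tail pointers is what makes this more reliable than a plain FIFO: data already sent but not yet acknowledged is still on hand if it needs to be re-sent after a restart.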
F
I
don't
no
I'm
joking
yeah,
so
I
just
came
back
yesterday
from
from
you
know
pto,
so
I
don't
have
much
reports
since
our
last
meeting
just
that
you
know.
Yesterday
morning
I
tried
to
do
what
we
talked
last
week
and
I
and
I
got
blocked
by
by
the
comment
that
I
made
there.
You
mentioned
that
we
can
then
move
the
two
server
options
to
to
the
start
phase,
and
I
think
I
mean
looking
at
the
code
it.
F
It
will
make
things
a
little
bit,
perhaps
confusing,
because
some
of
the
configuration
options
are
being
validated
and
set
during
the
creation
of
the
receiver
and
and
the
two
server
options
will
then
be
handled
on
the
start
phase.
A
It's not like we do everything we possibly need to do during the configuration-loading phase; there are still a lot of things that you can only do when you actually begin execution, right, when you start the component. So, yeah, I don't know. Maybe you have to look into that to see whether what I'm suggesting actually makes sense or whether it's not desirable.
F
Yeah,
the
only
thing
that
I'm
not
really
happy
with
is
the
fact
that
this
change
is
gonna
break
every
hotel.
Oh
sorry,
every
jrpc
receiver
that
we
have
right
so
because
we
are
changing
the
signature
of
the
two
server
options.
It
means
that
whoever
is
consuming
the
configure
pc
is
going
to
need
a
change
yeah
on
a
core.
It's
easy!
It's
only
three
receivers
I
can.
I
can
handle
that
for
for
the
contrib.
F
I mean, that's a good point, and I think it is something that we will eventually have to discuss, perhaps not right now: how to make those breaking changes more manageable for other folks. Right now we are not taking much care, but what we can probably do, and start doing from now on, or after GA, is make changes in two steps: deprecating first, and then breaking the thing.
A
Yeah
after
dj
the
ga
absolutely
we
can
no
longer
do
this
right.
The
right
way
would
be
to
introduce
a
new
interface
for
components,
so
newer
components
can
implement
that
particular
one,
and
then
you
probe,
which
interface
the
component
is
implementing
you.
Can
you
still
support
the
old
one
yeah,
the
classic
way
of
thinking
these
things
right?
We
can't
do
that
for
now.
I
think
we
shouldn't
bother
like
it's
it's
going
to
delay
us
unnecessarily,
because
we
already
made
a
lot
of
breaking
changes
in
the
last
few
weeks.
A
That
was
the
plan
for
the
ga.
This
was
kind
of
where
beta
we're
doing
the
last
huge
batch
of
breaking
changes
at
once,
so
that
we
annoy
people
only
once
whoever
is
going
to
be
annoyed,
hopefully
not
many
people,
because
we're
handling
the
core
and
the
country
by
ourselves,
but
going
forward
after
the
ga,
yes,
absolutely
you're
right.
We
should
do
that
in
a
completely
backwards,
compatible
manual.
A
That's still probably an open option; I would not completely rule it out if you see that what we were suggesting with this ToServer options change does not make much sense, and maybe we do the reflection, right. It's not like I'm strongly opposed to that; it's just that, I don't know, for some reason it seems more fragile. But if the alternative is worse, then we should pick the least worst one, right. Yeah.
F
Possibly, yeah. Another thing is that it kind of opens the door for similar features that are not related to auth, right. So if someone needs a similar feature in the future, then this one here can be used as a precedent, and people can say, well...
F
It
is
acceptable
to
do
things
like
that,
and
you
know
we
are
talking
now
that
it
is
an
exception
because
we
think
authentication
is
something
you
know
is
special
yeah
right,
so
yeah,
but
I
I
work
on
that
this
week,
so
I
have
a
few
other
things
that
are
also
so
there
are
two
new
versions
of
the
collector
that
I
have
to
release
with
the
operator,
so
that
takes
precedence.
I
think,
and
then
I'll
resume
working
on
the
authentication.
F
I don't remember right now, but for the last one, or, you know, for the client side, there was a comment from Granville from F5, basically saying that they also have some requirements, and it is actually in between what we have right now and what we've seen from Pavan.
F
I
think
that
supervan
had
this
feature
where
it
is
a
full
oauth,
2,
client,
meaning
it
has
a
client
id
and
secret,
and
it
just
goes
and
retrieves
the
tokens
from
the
authentication
server
whenever
it
needs,
and
what
we
have
right
now
is
a
static
token
that
people
can
use
and
what
they
need.
A
five
needs,
or
you
know
more
people
also
need,
is
something
in
between.
Like
you
have
a
refresh
token,
then
you
use
that
refresh
token
to
get
an
access
open.
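The in-between option described above, holding only a refresh token and exchanging it for short-lived access tokens on demand, can be sketched like this. This is a conceptual Python sketch: the class name and the injected `exchange` callable stand in for the real HTTP request to the authorization server's token endpoint and are assumptions, not the collector's configauth API.

```python
import time

class RefreshTokenSource:
    """Caches an access token obtained via a refresh token and renews
    it lazily, shortly before it expires."""

    def __init__(self, refresh_token, exchange):
        self._refresh_token = refresh_token
        # exchange(refresh_token) -> (access_token, expires_in_seconds);
        # a stand-in for the token-endpoint request.
        self._exchange = exchange
        self._access_token = None
        self._expires_at = 0.0

    def token(self):
        # Refresh when there is no cached token or it is within 30s
        # of expiring, so requests never carry a stale token.
        if self._access_token is None or time.time() >= self._expires_at - 30:
            self._access_token, expires_in = self._exchange(self._refresh_token)
            self._expires_at = time.time() + expires_in
        return self._access_token

# Fake exchange for illustration; a real one would POST to the
# authorization server with grant_type=refresh_token.
def fake_exchange(refresh_token):
    return ("access-for-" + refresh_token, 3600)

src = RefreshTokenSource("rt-123", fake_exchange)
print(src.token())  # cached and reused until close to expiry
```

This sits between the two extremes mentioned in the meeting: unlike a static token it self-renews, but unlike a full OAuth 2 client it never holds a client ID and secret.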
F
So,
on
the
client
side
we
might
need
some
discussions
as
well
on
a
pass-through.
I
think
it
is
quite
clear
what
we
have
to
do,
but,
depending
on
on
what
we
learn
on
the
server
side
or
the
server
settings,
we
might
apply
the
knowledge
to
to
those
parts
as
well.
So
I
would
do
them
in
parallel.
I
would
you
know,
do
one
and
then
apply
what
we
learned.
F
It
is
going
to
be
the
same
code,
so
it
is
in
the
config
jrpc
for
instance.
That
then
calls
config
auth.
So
I
see
imagine
that
config
is
going
to
provide
a
a
facility
to
to
add
that
metadata
to
to
the
client
components.
F
I don't see a technical problem in having this be part of the same package right now.
A
It
will
be
very
useful
to
also
see
the
drafts
of
this
of
these
portions
as
well,
because
the
the
reflection
not
reflection,
but
I
think
it's,
I
think
it's
more
or
less
settled
the
design
is
more
or
less
set
of
the
actual
implementation
details.
We
may
change
for
the
for
the
server
portion
for
the
receivers,
but
the
rest
is
more
unclear
to
me.
I
I
think
it
would
be
useful
to
see
that
early
to
to
try
to
debate
early
as
well
to
see
what
we
want.
F
Yeah, I think it is. I think the most complicated discussion is the one we had already, so I think the difficult part is behind us, because the receiver side would just add things to the context, and the same context can be used on the same component, but then as an interceptor on the client side.
F
So
it
would
just
retrieve
data
from
the
context,
and
I
think
you
know-
and
it
should
not
be
too
problematic
there.
F
Yeah, right, exactly. But that's, you know, the most invasive change that we have, right. Everything else should be quite easy to do.