From YouTube: 2021-07-07 meeting
B
Just staring at me, like, all right, my bad, yeah. I just had a quick question; this probably belongs in Slack. Do we collect some of these? I noticed there was a report around goals for observability for the collector, with some metrics. Is there a path right now to collect some of these, specifically things like incoming connections per receiver, or is that more of a roadmap document?
B
Yes, no worries, cool. I've seen some; I'll probably file some issues soon. I've seen some stuff that is maybe getting attributed to an exporter when it might be related to resource starvation by a receiver. But anyway. Okay, that's helpful, though; I was just double checking that I wasn't missing anything. Thank you.
B
I know the health check endpoint doesn't emit any metrics like that. Are there any current reporting tools that cover any of this? Not necessarily the per-connection stats, but anything.
A
We do report some metrics. The components do report metrics like the number of items they receive or drop, so some of the things that are on this list do exist today, but many others don't. Some of the things that we say we want to be observed are currently exposed as the collector's own metrics; the components expose them.
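For reference, the collector's own metrics that A describes are served from a Prometheus endpoint. A minimal sketch of enabling them follows; the config keys and metric names come from the collector documentation, not from this meeting, and older builds used the `--metrics-addr`/`--metrics-level` flags instead, so verify against your collector version:

```yaml
# Sketch: expose the collector's self-observability metrics.
service:
  telemetry:
    metrics:
      level: detailed   # emit per-component metrics, not just basic ones
      address: ":8888"  # Prometheus scrape endpoint serving metrics such as
                        # otelcol_receiver_accepted_spans,
                        # otelcol_receiver_refused_spans,
                        # otelcol_exporter_sent_spans,
                        # otelcol_processor_dropped_spans
```

Pointing a Prometheus scrape job at port 8888 is then enough to collect the per-component accepted/refused/dropped counters mentioned above.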
B
Okay, cool, that makes sense. I'll use those metrics and see what I can reproduce from your document. All right, thank you.
A
Okay, next item. Tianduo, are you on the call?
C
Yeah, I have a question about how to use a package in the internal folder. We want to add the [inaudible] extension to the contrib repo, but we need to use one of the classes located in the obsreport config, which is in the internal folder of the OpenTelemetry Collector, so we cannot really use it.

So I just want to know if I can ask to move it outside of the internal folder, or if I need to rewrite a lot of the same code as the obsreport config.
A
Yeah, we moved it to internal intentionally, so that we minimize the APIs that we expose, the public APIs. If you need something from there, then maybe open an issue which says what exactly you need. We probably won't be able to just do a blanket exception and move everything there to be publicly exported, but if you need something specific from there, maybe we can do just that. So it actually depends on what exactly you need.

Have a look at the specific request: if there is an API that is currently hidden in internal and you want it to be open, then please file a specific request just for that. We're not going to be able to move the entirety of the obsreport package out of internal and make all of it public, because we spent a lot of effort to actually clean up the public API and remove the stuff that we don't want to expose.
C
Okay, got it, yeah. We just need to use one of the classes inside the obsreport config, so sure, I will create another request for this to get it resolved.
D
Sorry, I got stuck finding my mute button, apologies. So, super exciting: we have a bunch of discussions left over from earlier, and I just wanted to, actually, is someone sharing? I can share my screen if you want to quickly glance through them. Is that okay?
D
And this is probably going to become a recurring feature of these, you know, trying to bring back the spirit of Andrew from wherever outside OpenTelemetry he may be.
D
So, you know, awesome. So there's this question on, wow.

Better? Okay, great. I'm actually going to do something slightly different, see if I can share my window instead of my tab, because I'm going to keep switching tabs.
D
So it looks like the point where this discussion got stuck two days ago, it looks like it's moved forward since the time we compiled this list, so I'm not gonna spend too much time on it.
E
Yeah, it did, yeah.
E
Yeah, we discussed this last week and Juraci has been going through that.
A
I think I would still prefer, I am not totally sure that there will not be changes necessary to the public APIs as a result of this, so I would still prefer for Juraci to go through the proposal that I made, which is a very rough draft, and give his opinion on it as well.
A
It does require changes to the API, but they are additive, so they are backwards compatible. In case what I'm suggesting is not a good solution, though, we may need to do something differently, which then may be a breaking change. I would like that confirmation; I would like at least one more opinion that this looks good as a proposal, and in that case we will be good, because it's an additive, non-breaking change.
D
So, just to confirm: getting that sign-off is a GA blocker, right?
A
I think yes. We should at least confirm that this proposal, for the pass-through of the authentication, is good enough, and in that case we're good, because it's a non-breaking change and we can do it after the GA. I want to make sure we're clear here. Otherwise, if it's not what we want to do and we want to do something else, then that something else is unclear; in that case I don't know what exactly it is, and that may require some changes there.
A
Yeah, because he's the most familiar with the topic, I would like his opinion on this as well. Okay.
E
Let's add it to the bug, because, you know, it did get moved to post-GA. Okay.
D
Okay, hopefully that captured what was going on. Now, then, there's also this issue of issues or PRs that are repeatedly marked stale and brought back like zombies. So let's just see if we can find new owners for some of them. So this one, it's closed as inactive, but...
D
Okay, then, I will.
E
I think, Punya, it can always be reopened if there's an interest, right, if it isn't breaking; I mean, if there's a change needed.
D
Yeah, totally agree. So, this one's still open. It's been reopened many times; I think it's been reopened seven or eight times. I'm just trying to understand: can we let it die, or does someone actually want to make it happen? I think the original contributor is no longer around, so keeping on reopening it will not help.
D
I'm addressing Bogdan because he's the one who has been bringing it back.
E
I have a question here. Often, you know, new contributors to the project come and look for issues to work on, and if we could actually instead tag it, you know, and say that's...
E
Tigran, what do you think? Because, you know, often, and I know we haven't done as good a job as we could on tagging good first issues, but hopefully we'll kind of pick that up and be able to attack that.
D
Yeah, so this fixes 2548, and it's currently open, so I can tag this as a good first issue.
D
Perfect, thanks, Anna. And then I will allow it, if Bogdan has some reason to keep it; I'll come back and close it myself if Bogdan doesn't reply in a short while. Last of these: rename action in processorhelper.
E
I think John worked on a different PR and then this got closed.
D
Awesome, yeah, thank you. Back to you.
F
Yeah, I see you just put a comment there, so let me just give some background here. For this one, today we can set up the ballast memory for the collector on the command line. I think the motivation is that we don't want users to set this kind of configuration on the command line; we want to move it into the YAML configuration file. That's why, at the beginning, we created a new extension called the ballast extension.
F
So users can either set an absolute value for the ballast memory, or they can use a percentage, as in what percentage of the total memory of the environment should be used for the ballast memory. Also, this ballast extension and the memory limiter are among the core components that we want to get done before the GA, so I'm trying to pick up this ballast extension work.
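The two ways of sizing the ballast that F describes look roughly like this; key names follow the memory_ballast extension as published in the collector docs and are an assumption about what was on screen, not a quote from the meeting:

```yaml
# Sketch: the ballast extension, sized either absolutely or as a
# percentage of the host's total memory (set one of the two, not both).
extensions:
  memory_ballast:
    size_mib: 683            # absolute ballast size in MiB
    # size_in_percentage: 40 # alternative: percent of total memory

service:
  extensions: [memory_ballast]
```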
F
Then I found out that there's one thing here: if users enable the ballast memory in the ballast extension, they also need to set the same value in the memory limiter. That's because when the memory limiter calculates the memory the collector is limited to, it needs to account for this ballast memory in order to decide if we need to drop data. So, basically, that applies whenever the ballast value is enabled.
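Concretely, the duplication being discussed is that the same number must appear in two places or the limiter miscalculates. A sketch, with field names assumed from the collector configuration of that era (`ballast_size_mib` on the memory_limiter processor):

```yaml
# Sketch: the same ballast size stated twice, which is the footgun
# under discussion; a mismatch makes the limiter's math wrong.
extensions:
  memory_ballast:
    size_mib: 683

processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 1500
    ballast_size_mib: 683  # must equal the extension's size_mib
```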
F
So, I don't know who's sharing; can I share my screen? I want to show the latest comment there. I think, yeah.
F
Yep, so here's what I'm trying to do. I still want to keep these two components, the memory limiter and the ballast extension, but I added validation in this PR saying that if the ballast extension has a ballast size set, the memory limiter should have the same size; otherwise the collector will fail at startup. And I think Anira yesterday put a comment which I think is pretty good; he's saying:
F
If we're going to need this validation anyway, why do we want to keep the ballast size in the memory limiter at all? Instead, we could just have one place to set this ballast size; then everyone who wants to use that value just gets it from one place. I'm good with that; I think it's a good idea, and I want to make that change. But then in the morning, I think, you had another comment here saying: why can't we just combine these two components into one, right?
F
Let's
just
move
the
ballast
side,
ballast
memory
eliminate
you
know,
setting
into
the
memory
limiter.
I
think
that's
probably
a
lot
of
feasible
because
the
memory
limiter
is
a
processor,
so
it
can
be
applied
to
different
pipelines,
for
example,
trace
of
metrics.
So
if
we
put
this
memory,
ballast
memory
allocation
thing
to
the
processor,
it
will
duplicate
the
efforts
right.
It
will
make
the
memory
allocation
to
multiple
times
if
they
have
multiple
pipelines.
So
I
I
probably
the
the
first
thing
I
want
to
discuss
with
you.
Do
you
agree?
F
We
cannot
move
this
functionality
into
the
memory
limited
processor
and
the
second
question
is,
I
think,
the
one
that
I
want
to
so
today
in
the
memory
limiter,
we
already
have
these
follow
sites.
Can
we
remove
this
configuration
and
only
use
only
use
the
bottle
size
configuration
for
the
ballast
ballast
extension
and
the
memory
limiter
gonna
get
the
value
from
balance
retention
for
for
doing
the
logic
in
the
process
in
the
memory
limiter
processor.
That
probably
two
questions
I
want
to
discuss
here,
yeah,
so.
A
I
I
only
spend
a
couple
of
minutes
thinking
about
this,
so
I
may
be
wrong,
but
here's
what
I'm
thinking
the
memory
limiter
is
in
practice
a
required
processor
if
you're
running
other
anything
other
than
a
toy
instance
of
a
collector.
You
have
to
use
it,
so
it
it
is
present
virtually
always
and
logically,
it
is
coupled
to
the
ballast
value.
The
size
of
the
ballast
affects
the
operation
of
the
memory
limiter.
It
cannot
work
correctly
unless
it
knows
exactly
what
the
ballast
size
is.
A
We
are,
we
have
separate
components
here,
which
is
normally
the
way
that
we
do
things
about
right.
If
we
have
separate
concerns,
you
make
them
separate
components,
but
here
we're
then
coupling
these
two
concerns
coupling
the
components
because
they
need
to
know
about
each
other
about
the
configuration
of
each
other.
So
I
think
it
does
make
sense
to
in
a
way
simplify
this
right.
A
Get
rid
of
the
notion
of
having
two
separate
configurable
components
then
ensure
that
they
are
configured
correctly
by
specifying
the
same
value
for
the
ballast
in
both
places,
and
if
it's
incorrect,
then
that's
a
configuration
error.
Why?
We
even
need
to
do
that
right.
Let's
put
it
in
one
place,
let's
say
that
you
know
what
ballast
is
actually
inseparable.
A
Part
of
the
memory
limiters
functionality
right,
it's
it
in
in
a
sense,
it
is
the
memory
manager
of
the
collector,
it
limits
the
memory
and
it
creates
a
ballast
which
ensures
that
there
is
less
garbage
collections
happening,
which
is
the
purpose
of
the
ballast
that
we
use,
and
then
I
think
it
does
make
sense
right.
It
makes
the
configuration
simpler.
A
As
for
how,
because
there
may
be
multiple
processors,
I
mean
we
can
easily
have
a
checks
that
verify
that
either
all
the
processors
use
the
same
value
for
the
ballast
right.
If
you,
if
you
have
multiple
processors
and
they
are
configured
differently,
then
that
is
a
configuration
error
or
or
you
use
one
of
the
values.
Maybe
and
obviously
you
will
have
to
create
a
single
ballast,
but
that's
easily
doable.
That's
not
a
problem
right,
you,
you
or
whoever
initializes
the
first
creates
the
ballast.
The
rest
just
don't
do
that
anymore.
A
That's
just
very
quick
from
the
top
of
my
head.
Maybe
maybe
there's
something
I'm
missing
here.
F
Yeah, I agree. I think we both agree that we should only maintain the ballast configuration in one place. But right now the debatable part of the argument is: should we put it into the memory limiter, or should we keep the ballast extension and have the ballast extension be the only place to set up that value? Like you propose, and I don't know if I got it right, you're saying we should maintain it in the memory limiter, and if the customer has multiple processors we'd need some kind of validation logic: either pick up only one value, or do the validation to detect when users have multiple processor configs with different values. I'm feeling this will make things pretty complicated.
A
I'm approaching this more from the user experience perspective. Let's set aside the implementation details for a moment: what's easiest for the user? The user would want it to be in one place, right? Even if we set the value of the ballast in one place and the other component uses that value, we still have to configure two components instead of one. That is one additional step for the user.
A
And the default is zero anyway, right? If you do use a ballast, then not using a memory limiter is kind of out of the question: why is it that you care about performance, but you don't care that your process is going to crash with out-of-memory because you don't limit the size of the memory? To me it seems like two sides of the same coin, from the user's perspective.
D
So I'm not sure if I agree with that; I think there are two different things here. To me, again, the useful perspective I can contribute is that I have been with OpenTelemetry much less than most of the group here, so I have relatively fresh eyes, and to me a memory limit seems like a characteristic of a pipeline, not a stage in a pipeline.
A
It's a possibility. There is one interesting use case where you actually have multiple pipelines but do not set the memory limiter on all of them. Let's say I have a very important pipeline with a very low volume of events flowing, and I have a chatty pipeline which can blow up my memory. I can set the memory limiter only on that chatty pipeline, which means that when the memory approaches the limit, it will start dropping from that particular pipeline, but it will never drop from my important pipeline, right?
F
Yeah, I agree. The thing is, the memory limiter applies to different use cases, right? For example, say I have two pipelines, metrics and traces. The memory limiter drops the data if it reaches some kind of memory limit, right? So let's say I only want to drop the data for my metrics pipeline; I don't want to drop it for my traces. Then the processor would be the right fit for that use case.
F
You
know
I
only
want
to
have
it
happen
in
one
pipeline,
but
the
ballast
is
a
global
thing
is
the
other
use?
The
ballast
is
regarded.
It's
regardless
it
doesn't.
Care
of
the
pipeline
is,
is
about
the
collector
thing,
so
I'm
kind
of
still
struggling.
I
want
to
have
these
two
components
because
they're
for
different
use
cases.
So
if
we
are
going
to
only
have
one
component
but
this
to
to
work
for
these
two
test,
you
know
use
cases
but
they're
really.
You
know
in
some
error
their
in
conviction
of
the
you
know.
F
Their
functionality
is
kind
of
conflicted.
I
cannot
you
know
so
I
I
I
I
still
worry
about.
If
we
merge
the
ballast
into
memory,
limiter
they're,
going
to
make
the
things
pretty
complicated,
like
we're,
trying
to
mix
two
different
use
cases
or
scenario
into
one
component.
A
Yeah, so anyway, I don't insist that what I'm suggesting is right; I didn't even have enough time to think it through. But that's one possibility; maybe let's think it through. Maybe we keep them separate, I don't know. But if we keep them separate, I think what you're suggesting does make a lot of sense, because it's a bad experience today that you have to specify the same value in two places. So, yeah.
F
Yeah, cool. So how can we move forward on this? Because I know we need to make a quick decision on these components; this is a blocker, right, given the ballast is a core component?