From YouTube: 2022-02-23 meeting
B
Yeah, but we cannot access private fields of the exposed resource structure, right? So...
D
That's a separate thing. So first of all, I think his proposal resolves something big for us, which is that we can keep these things in internal and people don't ever see them. Okay, so that solves a huge thing for us: we keep this purely private, internal. Even though they are public for our own modules, they are private externally. So it's a huge win. That's what he proposed. Now, in terms of the solution, how do we use this...
F
So I think, yeah, but we're maybe still not clear on what should be the part that lives in an internal package and what should live externally. I think, Bogdan, you're proposing that things like common parts of pdata, like Resource, would live in an internal package, whereas I think the OTLP modules should live in an internal package, and things like the common pdata Resource should be externally exposed, with a public API to access their internals that we tightly control. We only expose through that public API what we want.
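The pattern under discussion can be sketched in plain Go. This is a hypothetical, stdlib-only stand-in, not the actual collector code: the generated OTLP struct stays hidden behind an unexported field, and only a tightly controlled accessor is public, so external users can read data but cannot call the raw constructors or touch the generated fields.

```go
package main

import "fmt"

// otlpResource stands in for the generated OTLP struct that would live
// under an internal/ package (hypothetical shape, not the real one).
type otlpResource struct {
	Attributes map[string]string
}

// Resource is the public pdata-style wrapper. Its only field is
// unexported, so third-party modules cannot reach the generated struct.
type Resource struct {
	orig *otlpResource
}

// newResource is the controlled constructor. In the real layout it would
// be reachable from the collector's own modules but not from outside.
func newResource(orig *otlpResource) Resource {
	return Resource{orig: orig}
}

// Attribute is part of the controlled public API: read-only access.
func (r Resource) Attribute(key string) (string, bool) {
	v, ok := r.orig.Attributes[key]
	return v, ok
}

func main() {
	// Inside the collector: wrap the deserialized struct.
	r := newResource(&otlpResource{Attributes: map[string]string{"service.name": "demo"}})
	v, ok := r.Attribute("service.name")
	fmt.Println(v, ok) // demo true
}
```

The point of the sketch is that `orig` never escapes: whatever the internal generated code looks like, callers only ever see the wrapper's methods.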
D
Oh sure, the only problem is I don't really want people to use this. So what I'm trying to say here is: I don't want people to be able to call those constructors, because we don't guarantee anything on this internal thing, the internal OTLP generated code or whatever it is, and they won't be able to.
D
Sure, it's a valid option. But by the way, stepping back to what you said: I said something yesterday, and I did not tell you that this is right or wrong. I just told you that this is how we did things, and that we should continue to do it this way unless we file a separate issue to change it, which I'm fine with discussing, like the interface model. So I'm not sure I said anything there about how we should do it; I said this is consistent with what we have right now.
F
Sure, okay. In this case we would then still be consistent with what we're doing. My concern was that it felt weird; I'm not sure it's wrong, but it seemed weird to be exposing a type that can't be created in a public method. But if we think that's an okay thing to do, I think that gives us a better system.
G
I've got to say, just in terms of trying to use the metrics API and things like that, with fields that you can't construct: it makes it very hard to build systems that build those things and help you do the abstraction. When you're given a limited chunk of data and you can't manipulate private members through public APIs, it's really hard to cope with as a client.
D
So what we are discussing here, Ken, is that there is going to be a public API where you can do everything you want, but we need some hooks for where we already have some objects; for example, after deserialization we have some objects and we want to construct from there. So it's more or less an internal API that we are discussing here. I don't think it will affect users.
F
This is very specific to how we deal with having deserialized OTLP off the wire into the generated structs that we use, because we use a different set of generated OTLP data structures in the collector than everybody else. We don't use the OTLP proto or the OTel proto-go structures.
F
If we did use OTel proto-go, or if we had publicly exported our OTLP structures, all of that concern would go away, but I think there are probably good reasons for not doing so. There's a significant performance degradation we would take if we went to OTel proto-go, and exposing what we're currently using for others to use would, I think, limit our flexibility in terms of what we decide to do to maintain the performance advantage that we have.
B
I think, essentially, we are just discussing the options that were provided, whether two or three; option three being aliases for the internal resources, where the resource structures are defined in the internal package. And Anthony, as I understood, you suggested going with option two, where we have all three exposed and we have additional exposed methods, like a new-resource-from-proto, where we provide the internal objects, yeah.
F
It comes down to duplication for me. I think I would like to see what the end result is going to look like with separate modules: kind of a mock-up of where the module boundaries are, where the packages are, and what would need to be exposed of the internal bits, and what wouldn't, for each of those options.
D
There is already a PR for that, correct? Dmitrii's PR, indeed, is not splitting into separate packages, but it's already moving everything to internal and it's using aliases, and you can determine how much duplicate code exists or doesn't exist. Splitting into separate packages is just moving those aliases into a separate package, at this point.
B
And those are kind of independent of the approach we take: the end result, the structure of the modules, can be the same for both approaches. I already have, as a next step, splitting them along the generated code. I can submit another PR to show how it looks and you can take a look. Also...
B
I can submit a draft PR for splitting and adding those additional create-from-proto functions, but it's just a draft and not ready to be merged. If you want to take a look, that's okay, but it's essentially just splitting the code and adding those additional exposed methods in each of the modules.
A
I think it would be very useful to see the draft with the modules, actually with the four modules that we discussed.
B
Okay, I'll submit a couple more pull requests: yeah, the next step for the third approach, and the splitting for the first.
B
Also, we have this milestone for the module split; if anyone can help, I could get more eyes on it.
B
Yeah, just more feedback on the milestone, on the issues in the milestone, and on another pull request. Let me post it in the agenda as well.
B
Now I believe it contains the right issues. I'll probably submit one more, but I just want approvers to take a look at some of them and provide some feedback on whether we want to do some of the suggestions, or whether the proposals are not reasonable.
H
This is the only practical way to have the back pressure, and maybe it's not a bad one, because if the exporter has trouble pushing all that data, then eventually the memory consumption will grow and the memory limiter will drop some data. I think this is fair, especially when we have a bunch of asynchronous processors, like batch processing or group-by-trace or anything else.
H
Maybe this is the way it should operate, but I wanted to confirm whether this is the current idea, or whether there are some other, smarter ideas floating around; I haven't seen any issues on that, though. The related question is whether we want to extend the memory limiter a little bit, because currently it just returns some kind of error, and then whatever writes to the memory limiter is not even able to tell whether this is a permanent error or a temporary error, and maybe it should behave accordingly.
D
And this would be a good step, again, assuming this design is the right design and we are going forward with it. As was asked, maybe we do something else, but this is an option.
D
Similar with memory: I mean, we'll ask the user to tell us how much they want to use, maximum two cores or whatever number they tell us, and then we measure, let's say, an average over the last 10 seconds or something like that, and if the average is greater than the number the user gave us, we start rejecting requests.
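The check described here can be sketched as a toy limiter (a stand-in for the idea, not the actual memorylimiter processor): keep a sliding window of recent utilization samples, and start rejecting once their average exceeds the user-configured limit.

```go
package main

import "fmt"

// limiter rejects new work when the average of the last `window`
// samples exceeds the user-configured limit.
type limiter struct {
	limit   float64   // user-configured maximum (e.g. CPU cores)
	samples []float64 // sliding window of recent measurements
	window  int
}

// record adds a new measurement, dropping the oldest if the window is full.
func (l *limiter) record(sample float64) {
	l.samples = append(l.samples, sample)
	if len(l.samples) > l.window {
		l.samples = l.samples[1:]
	}
}

// allow reports whether new requests should still be accepted.
func (l *limiter) allow() bool {
	if len(l.samples) == 0 {
		return true
	}
	var sum float64
	for _, s := range l.samples {
		sum += s
	}
	return sum/float64(len(l.samples)) <= l.limit
}

func main() {
	l := &limiter{limit: 2.0, window: 10} // "maximum two cores"
	for _, s := range []float64{1.5, 1.8, 1.9} {
		l.record(s)
	}
	fmt.Println(l.allow()) // true: average is below the limit
	for _, s := range []float64{3.5, 4.0, 4.2} {
		l.record(s)
	}
	fmt.Println(l.allow()) // false: average now exceeds the limit
}
```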
A
We could. I think the point is that you can signal to the sources that you're overloaded and they should stop, whereas when you're actually limited by the operating system, you probably won't be able to do that, right? You're going to start timing out the incoming requests, essentially failing them, which is different from replying gracefully and telling the sources to quiet down: please wait a bit, right?
A
To answer the other question, about whether it should return a permanent error: I don't think it should, right? Because it's a transitory situation: you hit the memory limit, but hopefully there is a chance that you will recover, so you don't want to permanently drop that particular piece of data.
A
The net result is going to be that they hammer you again and again with the same thing, and then it depends. I think maybe this needs to be a user configuration, some other sort of setting, or maybe we can be smart about it and try to understand whether this is actually a short transient spike or something sustained that we have no hope of recovering from, and in that case we actually start dropping.
A
That's what I was referring to, right? Can you figure out whether it's temporary, or how temporary it is? Is it short enough that you can maybe keep up for some period of time, and then tell the sources to just slow down a bit, and maybe you recover from that? Or, if you see that it goes on and on and on, I don't know. I think you're right: do you have enough context to understand whether it's really temporary or permanent? Hard to tell.
H
Yeah, so I think the most important thing here is that when something, for example a receiver, is calling the consume function, it has no way to tell why consume failed: whether, let's say, the error was due to back pressure, or maybe something else happened and something was wrong with the data. So if we had an API that tells the receiver what the cause of the error was, then at least the receiver could return the correct status code.
A
Something wrong with the data should be a permanent error; that should be reported as permanent. That's the expectation. In other cases, like back pressure, it is expected to be returned as non-permanent, exactly.
A
If you read the comments for the receiver interface, it says that in those cases the receiver should respond appropriately to its sources; in the case of OTLP we should be responding accordingly, which we don't do. That's actually a bug in the implementation. But are you looking for more information there, like permanent versus non-permanent? Is that what you want to have?
H
Yeah, exactly that: to be able to tell whether this is something wrong with the data, and that is why consume failed, or something wrong with, let's say, the capabilities of the server, in which case the receiver can return non-permanent errors, some 500-sort-of error, for example.
D
A question, by the way, related to this: there is a PR which was dropped; Josh Suereth was pushing on it, about using gRPC codes more to signal the error type: essentially having a code that signals different error types, one being memory pressure, the others being whatever other errors we have.
D
I would say: for metrics, building our own metrics on that; for errors, logging; and the other thing is that in the receiver, based on the different error type, you can do different things, even.
A
There's actually one thing that we could do differently with the memory limiter: signal the actual overload to the receivers, so that they enter a different operating mode and do not deserialize the request at all, right? If you are memory-limited, there's no point in deserializing; you can actually reject the request immediately, right?
A
That would help: if you're already in an overload situation and the receiver receives a request and then tries to unmarshal the protobuf, which is going to be dropped immediately by the memory limiter, that's pointless, right? That's unnecessary work, which makes things worse. So if the receiver could immediately reject the incoming request, that would help the situation.
A
The problem is you don't know when to switch from that limited mode back into the regular operating mode: when is the memory limiter back to proper mode? We don't have that communication channel. Some sort of signaling is maybe necessary, to indicate for how long, or to actively inform the receivers that we're now back to proper mode.
C
It may actually be helpful to block instead of dropping, because then you slow down the Prometheus scrape intervals by just sitting and not returning until memory goes back down. The nice thing about that could be adding in, not a real notion of fairness, but maybe a little bit less unfairness than we have, because I think the Prometheus receiver sort of cycles through targets, so it would slow them all down instead of potentially dropping metrics just from one client. But something to consider, potentially.
A
So instead of returning an error from the memory limiter, you're saying it's better to block. But the receiver could block itself, right? The receiver could understand that something is wrong with the pipeline and could start throttling or something like that, right? It could pace itself.
C
Yeah, we've seen a lot of cases where someone will try to scrape so many metrics that just the scraping itself, and the parsing from text into Prometheus format and such, take up all the memory before it even hits the memory limiter. So we're kind of just in a bad spot then, and can't do anything.
D
I think we can extract a library for that, have a receiver helper or something that receivers can embed, or even make it an extension if we want. But I think the fact that, for receivers that don't do anything special, this works out of the box is beautiful, yeah.
D
I don't know if we should. That's viable only for pull-based; for push-based it's not, so I don't think so.
A
Minor, depending on how... if you're being overloaded by your sources, it may not be minor; it may actually be very significant. If you're already in a bad situation, already at your CPU or memory limit, it may be very helpful to stop unmarshaling the protobuf requests, right? Yes.
D
Yeah, but none of the frameworks allow you to do that: I mean, HTTP allows you, but gRPC does not, and HTTP...
D
Also, you can do this as any external source would; you can apply some logic. You are effectively an external source there, for example in the receiver, and you can start saying: okay, if I receive "resource exhausted" or whatever the error code is, then for the next 30 seconds I stop receiving; and if after 30 seconds the next request fails again, I back off again.
D
Correct, and I think that doesn't require any signaling of memory and stuff, so I would most likely do something like that. It's also possible to do this from external sources: you should do something similar from your client, and so on. Because, I mean, unmarshaling is a problem, but receiving a lot of this data on the socket is another problem, correct? You still consume memory, in the memory-limited case, when you have to read from the socket.
A
Supposedly that's smaller than actually unmarshaling with a lot of allocations; it's comparatively less work, relatively less, but yeah, I agree with you. This is very hypothetical: you actually need to implement it and test it in specific situations to see how much it actually helps, or whether it helps at all.
D
Should we take an action item to investigate whether we want to extend the memory limiter to more resources? First, I think that was one of the ideas we discussed, and we should at least consider looking into whether we ever need to limit on things other than memory. And maybe, if the answer is yes, it's a good opportunity to think about whether we want separate processors or only one processor. I would prefer only one, configuring multiple limits.
D
The other thing I was mentioning here was whether we should start using the gRPC codes in our pipeline. Right now, as errors, we have only the permanent error, and I think we have a retriable error, if I remember; anyway, we have a couple of types of errors. One option for us would be to start using the gRPC code errors, which would give us enough flexibility to signal all the types of errors that can possibly happen.
D
I mean, the set of errors designed by gRPC, the google.rpc Status, is actually very generic: it's not only for HTTP connections; it's used for the HTTP protocol and for the gRPC protocols, and it's very generic in the way it can describe multiple types of problems. Maybe that's another reasonable thing to do.