From YouTube: 2021-06-09 meeting
Description
No description was provided for this meeting.
If this is YOUR meeting, an easy way to fix this is to add a description to your video, wherever mtngs.io found it (probably YouTube).
C: Thank you for the comments, yeah, I saw your comments there. So the thing is, this flag is used to control, say, how often cAdvisor is collecting the container data, and the current... I mean.
C: The current situation is that cAdvisor doesn't provide any API for us to customize these settings, and we saw the Kubernetes code is also doing the same thing. I have contacted the cAdvisor maintainer, I think, David Ashpole, and sought his opinions, and he also agrees: yes, we shouldn't do that. But the thing is, wait.
C: Currently we have no way to configure these settings other than overwriting the flags, and I think David also created an issue in the cAdvisor repo. I already sent a PR and tried to address this issue, but only for my case, say, for the one flag that I was overwriting.
C: I'm sure there are a lot of other flags scattered around in the cAdvisor libraries, so this issue should be, say, addressed in the future. But based on what I learned now, I think this is just a temporary workaround to resolve these issues, since we cannot wait for cAdvisor to release a new version.
C: ...including all the fixes. So I didn't see any way to, say, fix this issue in the proper way in time. So I was just wondering if we can use this one as a temporary solution, and then once cAdvisor fixes that issue, we can use the new version and fix it in our code as well.
E: Before jumping into using a flag, is it possible to actually make it a config in our YAML file and just pass it on as a flag? So, instead of exposing it to the user in the collector as a flag, you expose it as a property in the YAML file, and you pass it as a flag to the library, to cAdvisor. So, essentially, you don't expose that flag to the user, but you expose only a YAML config for that property.
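What that suggestion might look like, as a hypothetical config fragment (the receiver name and property name here are illustrative assumptions, not the actual collector schema):

```yaml
receivers:
  awscontainerinsightreceiver:
    # Hypothetical property: the collector would translate this value into
    # cAdvisor's --housekeeping_interval flag internally, so the raw flag
    # itself is never exposed to the end user.
    container_housekeeping_interval: 10s
```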
E: So you expose a property in the YAML file, and when you start, you set the flag from that property. You set the...
E: My suggestion is: you should at least look into whether this is possible, because that would be consistent with our configuration story, which is via the YAML file, and you just do a small hack, right?
C: Okay, yeah, I saw David is also here. David, could you comment on this approach, say, setting the flag in the YAML file, and then cAdvisor takes this, the environment variable in the YAML file, in time, so that its config will be changed?
C: cAdvisor... yes, no, no, no! It's, say, just setting this environment variable, like changing the flag value of the housekeeping interval, and then, when the collector is running, since cAdvisor is embedded as a library, cAdvisor gets these flag values and then does the proper initialization.
C: Okay, okay, I guess that makes sense, yeah. So I'll introduce, yeah, a new option, and then take that to overwrite the flag values. Okay.
G: So I seem to have some issues with my connection, but I'll try and say what I'm trying to say. We should be able to make the change quickly, but cAdvisor doesn't release very frequently; it's now down to about three releases a year. So if we're okay using an unreleased version, then this should be something that we can accomplish fairly quickly. But if we want to wait for a pinned or a tagged version of cAdvisor, then you might be waiting a while.
C: Yeah, I think, I mean, from our side, we want to use the official release of cAdvisor, yeah, for stability reasons.
E: Okay, Ping, let's try the approach that we discussed, and if that doesn't work, we'll revisit and discuss what the other options are. But I think...
J: Yeah, so I thought maybe Bogdan and Juraci, or somebody, was already looking into this, but I was wondering if we can basically remove the groups from the code owners in contrib.
J: So today, for each component, there are the vendor approvers and then there's also the team. Is there basically any reason that we need to have the global team...
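For context, a contrib CODEOWNERS entry of the kind being discussed typically lists both the component's own approvers and the global team; the paths and handles below are made up for illustration:

```
# Hypothetical CODEOWNERS entries: each component lists its own owners
# plus the global approvers team, which is what triggers team-wide
# review requests on every PR touching the component.
receiver/examplereceiver/  @open-telemetry/collector-contrib-approvers @alice
exporter/exampleexporter/  @open-telemetry/collector-contrib-approvers @bob
```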
K: ...requested on reviews, yeah. Yeah, so the problem is: if you remove that, then...
J: Okay, that makes sense. So for the ones where we can remove them, I mean, for the ones where somebody has approval rights, we could remove it, I guess, yeah.
E: Yeah, the thing is, there is only a small problem, but that's something that we can resolve by adding just the maintainers to everything. The problem I wanna mention is: when we do an upgrade of the collector core, we need to upgrade all the components, and otherwise that will drag on forever.
J: Okay, cool. And then, somewhat related, I thought somebody posted about this already, but there's a way to do the round-robin and load-balance the review assignments. I guess this would be more like core; it would maybe be useful in core.
E: So I already went to the teams and selected the properties there, so that for code reviews it selects only one member from the team. And I don't know if you saw, but recently GitHub is adding the whole group, but it's actually selecting one person from that group, and I think it does not notify the entire group. I may be wrong, but those are the properties that I selected there.
J: Did you do that in core and contrib? Yes? Yeah, it doesn't seem to be working.
A: ...is wrong, because it seems to be adding the approvers to every PR again, yeah. But I do see the setting, Bogdan; I see that it is there, so I don't know what's going on there.
E: Yeah, so probably I need to be with someone on the call. Maybe, Jay, we can play with it, the two of us together, because I need to test this, and I need to see when you receive an email, when you don't receive an email, and so on. So we can have a follow-up and play with this. Okay. Yeah, sounds good, I'm happy to.
J: Yeah, that was the second item as well. The... oh, second item, yeah, sorry. Did we land anywhere on this, on moving, like, Prometheus, Kafka, etc. from core?
A: I think Jay's question is slightly different. He's asking about some of the core components, right?
M: Yeah, I think what I was proposing was also addressing that. So, Tigran, I think that's why I responded.
A: Okay, okay, cool. So I guess, anyway, Jay, we did move some of them; previously there were processors that we moved to contrib. I guess there was an open question, so the answer...
A: Yeah, probably, right, you probably do want to do that, because there is a dependency on what he's talking about. So let's wait a bit on that, and we will talk about that then. Okay, all right, cool, thanks. That's it.
A: Okay, next: health checks. Wesley, Kendo, are you guys...?
C: Yeah, yeah, sure, thanks. Hi everyone, I'm Kendo from the AWS ECS team. Currently we are building a new health check system for our service, and we have a design write-up for this and need the community to take a look and approve it, because it needs to be added to contrib.
C: And besides that, we have a few questions about our design and hope to get some answers from experts in the community. So the first question is: last time we asked that, if we want to use something like obsreport, we need to make the components enable the exporter helper or receiver helper in order to actually get the metrics from the component, right? And so, yeah.
C: And so, just last time we asked, the exporters already have the exporter helper enabled. But what about the current receivers and the processors, do they also have it? Because I asked one of my team members, who wrote a receiver, and he didn't remember this part. So I just want to ask whether the current receivers and processors could use this kind of feature.
E: ...percent of the processors that I know of are enabling that feature. In terms of the receivers, it was hard for me to come up with a helper that wraps the receiver, because our receivers are mostly based on...
E: So the answer is: for the receivers, right now they have to manually call that. But in the last two weeks or something like that, one of the interns...
C: And yeah, so currently, for our design, if you can take a look: currently, based on obsreport, we could only export metrics like the number of metrics that failed to send to the destination. But what we need is the number of times the exporter could not send to the destination. So we want to build our own exporter and export the metrics by ourselves, and have it enabled in our own health check extension.
C: So, just to confirm: if the current receiver or processor enables the helper, then, if we use our own exporter in the health check extension, it could export those kinds of metrics we want from the components, right?
C: Well, actually, we don't use obsreport; we just use this kind of mechanism to export our own metrics.
E: Okay, I don't know if you use obsreport or not, but I think that the measurements should be recorded by obsreport, and you just want to have your own exporter that constructs them differently than the Prometheus exporter, which just exposes them as metrics. I can definitely take a look after I come back home, look at the design, and maybe comment there and suggest some of the things.
C: Okay, okay, thanks, and yeah, sure. Because another question is about the difference between the processor, the receiver, and the exporter. For the exporter, we took a look at this part of the code, and for the exporter we have something like "failed to send spans". But for the receiver and the processor we don't have that; we have some similar things, like "receiver refused spans", or "processor refused spans" and "processor dropped spans".
C: So are those the kinds of metrics we want? Is it what we want, like, are these marked as a processor or receiver functionality failure?
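An illustrative sketch of the kind of health rule such an extension could derive from those counters. The field names echo the obsreport-style metrics mentioned above (the collector's actual metric names and the 10% threshold here are assumptions for illustration):

```go
package main

import "fmt"

// componentStats is a hypothetical snapshot of per-component counters,
// named after the obsreport-style metrics discussed above.
type componentStats struct {
	SentSpans       int64 // exporter: spans successfully sent
	SendFailedSpans int64 // exporter: spans that failed to send
	RefusedSpans    int64 // receiver/processor: spans refused
	DroppedSpans    int64 // processor: spans dropped
}

// unhealthy is an example health rule: flag the component when more than
// 10% of observed spans failed, were refused, or were dropped.
func unhealthy(s componentStats) bool {
	total := s.SentSpans + s.SendFailedSpans + s.RefusedSpans + s.DroppedSpans
	if total == 0 {
		return false // no traffic yet: treat as healthy
	}
	bad := s.SendFailedSpans + s.RefusedSpans + s.DroppedSpans
	return float64(bad)/float64(total) > 0.10
}

func main() {
	// 20 failures out of 110 observed spans (~18%) trips the rule.
	fmt.Println(unhealthy(componentStats{SentSpans: 90, SendFailedSpans: 20}))
}
```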
E: Okay, so if you have a design doc, and if you have open questions there, I can provide comments and help there.
C: Okay, yeah, sure, yeah. We have posted our design doc here, if you can take a look.
M: Okay, Tiando, thank you. I think we have the next topic. Min, did you want to kind of walk through this? Or, Wesley, did you have other questions? Sorry.
M: Are you done? Okay, yeah, just checking. Okay, so, you know, again, Bogdan, this is in response to the discussions that we've been having on issue 3185, which is a tracker issue for, you know, thinking about processors being consolidated in terms of handling the different signals: metrics, traces, and logs. And we have submitted a design proposal here, which is attached, that we'd like to kind of walk through.
E: Should we reserve, kind of, 20 minutes, maybe starting next week, or even this week, the last part of the meeting, to do design reviews with you?
M: Yeah, that's a great idea, Bogdan, because I think that, you know, for these proposals it would be great to actually have some time to review and have feedback discussions.
E: Yeah, so let's... we can start this week, but probably people are not prepared, so let's schedule it for next week, the last 20 minutes. So the normal meeting finishes at :40, and then in the last 20 minutes we do one design review, and we start with this one. Sounds good? Sounds good.
M: Okay, sounds good, yeah. We can certainly do that, because I'd like to, you know, get feedback on an initial design that we have proposed, and really request, you know, the Google team, as well as others who are working on this, to actually comment, and, of course, Bogdan.
M: Okay, Punya, did you want to go into your question? Then we can just, you know, kind of take the feedback review offline and reserve time to walk through this next week.
L: Sounds good. Mine might be quick. So this is the PR where we wanted to pile on just one more small change to the metrics transform processor, and we signed up to do the refactoring work afterwards. Obviously that's very related to the thing that Alolita just presented, but I'm wondering, based on the agreement we came to, the handshake agreement: can we merge the change?
A: Oh, there are already no objections, so I'm going to merge it. Okay.
E: Please, just merge it, please, and create or assign the issue we've already created to Punya, because he committed to it. Absolutely.
O: Yeah, and this is really just a quick question on what the timeline is for updating OTLP to v0.8, or even v0.9, in the collector. I've seen discussion on Slack, but I haven't seen follow-up, so...
E: ...file an issue, and then split the work, and find people to help.
E: Happy to, yeah, I'm happy to help reviewing the plan and then reviewing the PRs, but somebody has to champion and lead this effort, and that's not going to be me for the moment.
P: Yeah, hey, what's up. Yeah, I just wanted to stop by here. In the company we were considering using the collector, and one of the engineers, I think Alex is on the call, was mentioning that, at first glance, there are a few processors doing very similar things. And I did remember that there is a ticket regarding which processors would be offered. So I'm wondering whether there's some effort planned for that.
P: Yeah, sure, sure, yeah. But I guess my question is, in that case, whether there's a timeline, or whether you guys need help, because in that case I think we could probably help put some cycles toward the next step.
M: Yeah, I think, Bogdan... just repeating what Bogdan, you know, is reiterating here. The discussion, Carlos, was that, you know, the 3185 issue is a tracker for the design proposals, because we do want to consolidate, you know, the different processors that are being implemented custom, on demand, for different, you know, contrib components. So the idea is to, you know, consolidate that toward the metrics processor, trace processor, and logs processor, right? And that's the design proposal that we want to review. We have done an initial draft, but, you know...
M: I mean, basically, there are a whole bunch of... I think, just from a backlog standpoint, there were phase-one and phase-two backlogs for the RC release, and, you know, almost all of them have PRs associated with them, which have been submitted, and Walden and, you know, Tigran have been reviewing. But again, Carlos, if you have some bandwidth, please also review, and other approvers too.
P: Yeah, but, by the way, I am not an actual approver, but I would probably apply for that. I have been reviewing PRs, at least trying to verify correctness in the code. Maybe I can talk offline with Tigran and Bogdan about what the further requirements are for me becoming an approver, at least when it comes to...
E: Yeah, yeah, definitely, we can discuss that, and you should know the requirements. I think they are there, but we should just check the requirements from the...
Q: This is Min again, yeah. I think I just added one more topic there. We have an issue in our, like, GA backlog about enhancing the memory limiter, or reducing the confusion around the memory limiter. I think, Bogdan, you already have, you know, a drafted version of a memory limiter extension in the core repo right now. Based on, you know, our discussion in the issue, I sent a PR to enhance it, following, I think, our conversation there.
E: Yeah, it would help a lot if you can post something on Slack and remind everyone that, hey, next week we'll do this in the last 20 minutes of the meeting, or whenever the meeting ends, if it ends sooner, and here is the link that you need to read before that.
E: ...those proposals, but I don't know if we'll have time to do more than one or two.
O: Hey, by the way, we have a new member today, Andrei, who's working here with me. Andrei would like to present the benchmarks he was working on for the OpenTelemetry Collector Helm chart.
A: Welcome, Andrei. Let's get to it. So, I guess, before we do that, one very small, quick thing: the logs protocol is now declared beta. It's a good milestone to hit, and I guess it shows the intent and the confidence that we believe logs are now closer to being useful for production.
D: Yeah, yeah, I can present. Hello everyone, let me share my screen. Okay, so this is... okay, so these are the meeting notes, and we have this link to this document that I created, with help from Dominik from Sumo as well. And here it is; let's start at the beginning. So the task was, actually, I worked on introducing...
D: This is the chart, and this is what we wanted to test for performance: how many log entries it can read from the files that the logs go into, in Docker, in Kubernetes. And yeah, that's it. So Dominik from Sumo Logic helped me set up this test procedure, and I basically performed it after he prepared it, and this is what we did. We started with creating a single-node EKS cluster, version 1.19.
D: This is basically the command to do it, the definition of it, and the instance type. We would change the instance type, but after some changes we settled on the 8xlarge, which is big enough to not cause any performance issues, I guess. And we also tested it on the xlarge, which has, like, four vCPUs.
D: We installed metrics-server to see the load on the node. We installed receiver-mock, which is a Sumo tool to receive logs from the OpenTelemetry Collector or any other source, and it displays some summaries of how many logs or metrics it ingested. So this is the receiver part. And then we would install the Helm chart with the configuration stated here. So the pipeline, basically, is: the receiver is the filelog receiver, which in the Helm chart is configured to track the files that the logs go into in Kubernetes, and then we would just export it with the Sumo Logic exporter, which is a Sumo-Logic-specific thing; this was, I guess, the easiest for us to start with. And this is what we did: we turned off metrics and traces collection and just focused on logs.
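The pipeline described above, reduced to a minimal sketch (the include path and endpoint are placeholders, not the chart's actual values):

```yaml
receivers:
  filelog:
    include:
      # where container logs typically land on a Kubernetes node
      - /var/log/pods/*/*/*.log
exporters:
  sumologic:
    endpoint: https://collectors.example.com/receiver   # placeholder URL
service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [sumologic]
```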
D: I guess that's it. We also bumped the memory limits, because the more logs you ingest, the more memory... the default memory is, I think, 512MiB for the Helm chart, so we bumped it up so that it's not a problem. We also removed the CPU limit of, like, one fourth of a CPU, which is the default as well. And then we would install a logs generator, basically, that generates logs in some specific format; this is a link to it. What it does, basically, is: for a specific duration in seconds, it outputs a certain number of log entries per second, and you can configure the log entries in certain ways.
D: It just runs to completion for those five minutes or ten minutes, and it completes. This helped us count the logs, because we noticed that we were actually losing some logs, so we wanted to make sure to count the logs produced and the logs ingested. Okay, and here we have the procedure of how we did it, and the results.
D: The results are summed up in the summary table: the first three rows are for the xlarge instance type, and the other ones are for the 8xlarge.
D: The first thing that we noticed was that we are losing logs, and thank you, Dan, for commenting; this probably explains the root cause of it. It's very likely that this is the known issue with tracking files in Kubernetes, which are tracked via symlinks. I didn't really go into the details of that issue, to be honest, but anyway.
D: Hopefully, when that is fixed, we could get the losses down to zero, and still the throughput is quite decent, at least from my perspective. That's the quick summary, or not-so-quick. Any questions, or what else should I say?
D: Yeah, this is the equation that gives us the total number of logs, because I had both of the settings set to true, to actually be able to verify that the logs generator generated as many logs as it was supposed to, right? I didn't want to jump to the conclusion that we're losing logs when the logs generator might have been the issue. But apparently it wasn't: it was outputting the exact number of logs it was supposed to. It was a problem of ingestion.
S: ...seconds, you've got 31,000. So how many records per second, if we divide, say, by the above?
D: Sorry, yeah, this might not be very clear. So this first line is one pod and 50 logs per second from that pod, 50 logs per second, right? If you have more pods, then this column is logs per second from one pod, so you need to multiply these two, and you have the number of logs per second being ingested into the collector.
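The arithmetic being described, written out (the numbers below are example values, not the actual rows of the benchmark table):

```go
package main

import "fmt"

// totalRate computes the total ingest rate into the collector: the table
// reports two knobs, the number of pods and the log lines per second
// emitted by each pod, and the total is simply their product.
func totalRate(pods, logsPerSecPerPod int) int {
	return pods * logsPerSecPerPod
}

func main() {
	// e.g. 10 pods each emitting 1000 logs/s -> 10000 logs/s total
	fmt.Println(totalRate(10, 1000))
}
```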
A: This looks like, I would say, a somewhat conservative, lowish, I guess, number of logs per second. It would be interesting to see what happens with higher numbers. You see that it started dropping the logs.
D: That's why we stopped, I guess. I assumed it's pointless to go higher if we are losing logs anyway; it's probably not a good test.
O: Yeah, and also, one more thing: we've been using the Sumo Logic exporter, which is, kind of, let's say, legacy stuff. It converts the OTLP data to JSON, so this also takes some resources, and in the next iteration I would like us to use OTLP natively.
T: You could also send it to... but we don't have anything else, do we? You could also send it to, like, a /dev/null file handle on the same machine, and then you're not actually testing the exporter throughput per se, if you know what I mean, but that also might introduce some other constraint. So, yeah, it's another option if you just want to null out the data.
A: Yeah, that's good. Actually, the reason I suggested using the collector receiver is that you can use OTLP, which supposedly is more efficient than whatever protocol you're using right now, which, you said, uses JSON as a format. It should be using more resources just for that phase, whereas, I guess, we're more interested in learning specifically about the filelog performance here. So, yeah, anyway.
L: Hi guys, this is from Splunk. A couple of questions, right, just curious to know. So one is: during this performance-testing round, did we figure out any bottlenecks, or is it just sanity testing, and these are the numbers we got for now? And second would be: are we also monitoring resources like CPU and memory, and keeping track of when it's spiking versus when it's not, and in what kinds of scenarios?
T: It's an analog issue, not a digital issue.
D: So, to sum up, if I'm... We want to do it when the bug about log rotation losing logs is fixed, and we want to monitor the resources, right, just to be able to tell how many resources were used at that throughput, right.
F: Yeah, and after my PR with the data-loss fix is merged, I'm planning to do a performance test with Splunk as a backend as well, so once I have that, I will also share it here, share it with the team.
F: Oh, just one thing to confirm: I have done a performance test and shared it with you guys here in the past, right?
O: To be honest, I think we did, in March, actually. Oh yeah, we did, yeah; it's also in the meeting notes from March, for the first... okay.
H: Yes, this has been a tricky one. Every time we have a design that fixes this issue, we see some other flaky tests or something, so we've gone back to the drawing board a couple of times. I think Rockford has been the consistent effort on this, because I do think we're close at this point, but we still have one or two issues where we need to confirm whether or not we're regressing.
H: You know, typically we want to see very robust unit testing, in the sense that we want to see deterministic testing, right? But here, just by the nature of what this operator is doing, you know, reading from files, we have to have sort of independent writing of files and reading of files; timing is an issue.
H: Reproducibility is an issue, and so we tend to see tests that fail sometimes, right; it's more of a probabilistic test, and I'm sure everyone here has encountered this kind of thing before. So we're not perfect; certainly our set of unit tests is not perfect, and I'm sure that the people in this room have experience that would be helpful in making them better. Or maybe you do have a perfect solution; that would be amazing. Anyway, that's kind of what we're struggling with.
R: I know it sucks, but I'm kind of happy that, like, we at least have some tests; that makes me think the software is actually real. It would be too easy otherwise. But it looks like it's reproducible, right? Because, you know, they were running into it in a proof test as well. So, yeah, I don't know, just...
A: Crossing my fingers here; this is the one. I've been watching your discussion, and I see that the comments are flying this way or the other way, so, yeah, I know that it is in progress. Just let me know when you want me to weigh in and work on it as well.
O: Yeah, so we're having a discussion; actually, this goes back to a little bit earlier. We had several people asking: okay, since the body in the log data model is of "any" type, it means that it can be a string, it can be a byte array, but it can also be a map. And if it can be a map, then what do you put into body, and what do you put into attributes, which is a map as well?
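As an illustration of the two shapes under discussion (field names follow the log data model; the values are made up), the same event could carry a string body plus attributes, or a structured map body:

```json
{
  "stringBodyForm": {
    "body": "login failed for user jdoe",
    "attributes": { "enduser.id": "jdoe", "event.outcome": "failure" }
  },
  "mapBodyForm": {
    "body": { "message": "login failed", "user": "jdoe" },
    "attributes": {}
  }
}
```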
O: So we've been discussing this under the issue, and we came up with a nice, very reasonable set of recommendations for what should be put into attributes and what should be put into body, essentially, under what circumstances. And I opened an issue, actually a PR, against the specification, providing those guidelines, and we've been having a discussion between me, Tigran, and Yuri about how to approach that, and I think that we didn't actually reach a conclusion.
O: Maybe... like, "any" is very nice, because it's so generic; you can put anything there, structured, you know, one way or another. But then, if this brings confusion, maybe we should take a step back, essentially, and let it be used only for string or bytes, maybe, but I'm not sure myself. So that's one of the key questions here.
A: Possibly. I guess Yuri's comments are also about whether the spec is the right place for that sort of, kind of, softer recommendations, because they are not...
A: They are somewhat imprecise, in the sense, right, that they are actually recommendations and guidelines, rather than something that you can specify to, maybe, the required degree of precision that is expected from something you call a specification.
A: Possibly. That's what I kind of am leaning toward: to have some sort of other place, which is about, kind of, yeah, how you interpret whatever the spec is saying, and have some sort of softer guidelines around how you use this thing. Okay, this is the spec, but it is silent on certain matters about what the field can or cannot contain, or should, or is recommended to contain.
A: Maybe, but the problem is, we don't have that place today, right? There is no other... maybe some sort of documentation, or a manual, or anything like that, which does not exist today, you know. So we would need to come up with something new here, or maybe we just convince Yuri that, you know what, actually the spec is a good place to have this thing, because it's the right place. I don't know; you guys... so I think it would be useful...
A: What I was saying is: it would be useful, if you guys have an opinion on this topic, to just comment, right, to say that you think it is the right place, let's put it in the spec, and if the wording is wrong, then let's come up with the right wording for it.
U: Maybe we should take a step back and start with just a document that we can all agree on, and then, once we've got kind of an idea of where we want to go with it, we figure out whether it fits in the spec or something else. I guess I'm just trying to say: maybe we make that two discussions, where we put it and what we do.
A: Did you have a chance to read the proposal? It's kind of like five specific rules which, yes, by necessity... the language is not really precise, but I wonder if we can be more precise than that. It's probably difficult; I don't know, that's the best I could come up with. So if somebody is able to clean up that language and come up with something that sounds more specific, that has more precision to it, then that may remove the objection, right? Because then it may look like a better fit for the specification.
O: Yeah, and the specification also includes examples, which, I think, fall into something somewhat similar.
R: So, when I was sort of throwing out this CliffsNotes business, maybe it would be useful for this in particular, because, you know, logging is so open-ended, right, you know, versus some of the other stuff. Maybe some sort of CliffsNotes can turn into, like, a practical guide to logging with OTel, or something, you know. Even if it's just a couple of paragraphs at the beginning that kind of talk about it; maybe we can extend it, I don't know. If it's just a couple of paragraphs, I really don't think necessarily...
R: ...we need to kind of bend over backwards to put it in a different document, rather than an appendix of the spec, but I'm just throwing out ideas here. So let's probably all take another look at this thing, and, yeah, I'm not 100% up to speed myself, so I'm gonna shut up now, yeah.
O: Yeah, but then, going back to the discussion under this ticket, in a way, to me it's like: okay, maybe we don't actually need the body to be of "any" type, and yeah. So I think that was my original point: there are, like, two items discussed there. One is the guidelines, and whether this is the right place for the guidance or not.
A: Maybe we should even open a separate issue for that, because that is going to be a different discussion, right? Then, yeah, this PR is not... at least it's not trying to do that, yeah. It assumes the data model as it is; we're just making clarifications. What you are proposing is actually, you know, that the model is not right, and that's why we are not able to properly explain what to put there. Well...
U: I think, also going back to "a log is always text": it doesn't feel quite right to me either, because, you know, if we're looking at logging from inside of a logging library, ideally we can go straight to OTLP and it never actually becomes serialized. So it might not be JSON; it could actually be structured data.
A: That's my position as well, but if there is a desire to have this discussion, maybe let's do that, right? Let's file an issue, let's have a discussion on that, and maybe after we conclude on that, we decide one way or another. Maybe then we can continue talking about what exactly goes into the body versus attributes, because...
A: Okay, are we good on this? Anything else on this topic?
T: Yep, just wanted to share quick news, one minute. We've been doing some work on the Java logging; we're putting together a couple of PRs that are just about ready to go, and we're going to continue working through the different steps that are outlined in the issue that's on the doc. And just an update that things are moving slower than anticipated, but it's kind of a side project, so I can't put it as a high priority for us, but you should see some more action going against that issue.
T: ...that was opened three weeks ago. So, hi.
V: Hey, this is Will Sargent. Well, I posted on the PR that I am interested in contributing to that effort. I work for eero, which is a subsidiary of Amazon, so I reached out to Amazon Legal, and Alolita Sharma, who manages the AWS observability team, to make sure that I was legally in the clear doing this as a personal project, rather than as part of the AWS observability team, because I'm not part of their team.
V: But anyway, I have a background in logging and I know Logback very well, so if there's anything I can do to help...
T: Yeah, I mean, I'm not sure what the first PR will look like, but it should be coming next week. But yeah, if you do want to take part in that work, then by all means, as soon as you get approval to do that work. And I work with Alolita as well, so I think she should be pretty encouraging.
T: Sounds good. And you are on the CNCF Slack, I assume?
R: Do you mean Josh, Josh Suereth? No, I mean you, actually. Oh, sorry, I mean you and another person, because, you know, like, representing Sumo Logic here: you know, our stuff is all in Scala, so we'll be... okay. We would certainly, at some point, want to go and have something, you know, figured out sort of at the Scala level, right?
T: That's correct; it should be against this issue, so hopefully all the PRs will tie back to the issue that I just linked to.