From YouTube: 2021-07-21 meeting
C
I missed the agent collector call today, and yeah, I'm not seeing any items on the agenda either, but let's give it maybe one or two minutes.
D
Sounds like Tigran... I heard Tigran is out, but I also see his name on the attendees list. I'm sure that's an artifact.
D
All right, well, I think we can get started. Does anyone...
F
Yeah, sorry, I'm new here, so let me just introduce myself: I'm Will Sargent, I work at eero. I'm interested in the logging SIG and the proof of concept, and I'm putting together... there is an issue, 3055. Let me put that in the chat.
F
I'm putting together something in opentelemetry-java to add a logging package for the API and SDK, so that I can fill out Logback and Log4j appenders on the Java side. It looks like the primary means the data model works through is sending logs via OTLP, which requires using the protobuf specifications that are already in opentelemetry-java. So what I'm doing right now is creating a draft pull request which just has the structure of it in the package.
F
I haven't actually gone into any implementation as of yet, but if that works out, then I'm going to start fleshing it out with the log emitter, and then build Logback and Log4j implementations using the LogEmitter that's already in opentelemetry-java, and work through that.
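The plan F describes, a framework appender that hands each log event to a log emitter, which in turn maps it onto the log data model, can be sketched roughly as follows. This is an illustration in Python rather than Java, and the `LogEmitter` and `OtelAppender` names and shapes here are assumptions for the sketch, not the actual opentelemetry-java API:

```python
import logging

class LogEmitter:
    """Stand-in for an OpenTelemetry-style log emitter (illustrative only)."""
    def __init__(self):
        self.records = []

    def emit(self, body, severity, attributes):
        # A real emitter would translate this into the OTLP log data model
        # and hand it to an exporter; here we just collect the records.
        self.records.append(
            {"body": body, "severity": severity, "attributes": attributes})

class OtelAppender(logging.Handler):
    """Bridges a logging framework's events to the emitter, analogous to
    the Logback/Log4j appenders discussed above."""
    def __init__(self, emitter):
        super().__init__()
        self.emitter = emitter

    def emit(self, record):
        # Forward the framework's record into the emitter's shape.
        self.emitter.emit(
            body=record.getMessage(),
            severity=record.levelname,
            attributes={"logger.name": record.name},
        )

emitter = LogEmitter()
logger = logging.getLogger("demo")
logger.addHandler(OtelAppender(emitter))
logger.setLevel(logging.INFO)
logger.info("user logged in")
print(emitter.records[0]["severity"])  # INFO
```

A real Logback or Log4j appender would do the same forwarding from its `append(event)` method, with the emitter translating each event into the OTLP protobuf types.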
F
So you can add your own stuff if you need to send requests and so forth. I think that's the right approach, but I want to get feedback from everybody here to make sure that I'm not replicating work that other people are already doing, or heading in a completely wrong direction. Okay, does that sound good?
D
Yeah, welcome aboard. I would actually put this to everyone else: is anyone more familiar with the instrumentation libraries and the workflow for developing them? I haven't developed any, and I'm not very familiar with the process for it.
F
I mean, I know I should speak to John Watson, because he seems to be the primary maintainer of that library. But from your perspective: there are the data model markdown documents, there are the logs, and then there's the OTLP specification, and there is already a prototype implemented in Python. I just want to know if there's anything else I should be looking at, or other people I should contact.
D
I think you're looking at all the right things. I think Tigran will be the one who can answer this more definitively, and unfortunately he's out this week. The process you're taking sounds correct to me, but I haven't gone through it myself, so there may be some other things that I'm not thinking of. Okay, I think you could reach out to Tigran, and he'll get back to you as soon as he's around.
B
Okay, so this is SDK. I was having a really hard time figuring it out: we don't have an API for it on the API side; we've only got the SDK side. There is some stuff in the contrib module that was working towards this but kind of stalled out. I don't know if that's the same approach, whether it needs to be in SDK contrib before it can actually be in the SDK itself, but if you feel like it, we can sync offline, and I can tell you kind of where I stopped and I...
F
Lost? Okay, that's a different repository, right?
B
Well, so I actually started it as a pull request against the SDK proper and was asked to move that into contrib, which at the time was in the same repo but in a different sub-project.
D
All right, it looks like there's one item: performance review after fixing file tracking for rotation. Is that yours?
E
Hey, can you see my screen? Is it too small? Yeah? Okay. So, before the fix, when I ran the test it had some data loss at whatever EPS of throughput it was: when a file was rotated out of the include pattern, we lost some of the data. Then, after the fix, I did more test runs. Okay...
E
How do I freeze this? Yeah, okay. So you can start looking at the befores and the afters. With the single-file maximum throughput testing, starting with 27k, the data was getting through 100%, using 1.6 CPU out of the 3 given and just 500 megabytes.
E
Then I increased the message sizes. It was all okay until I hit... I used the 1k-byte message size, and then I saw some data loss, so it wasn't able to keep up with handling these long messages at this rate. So I did run one more time, and I increased the EPS, and this rate times the generated... so this is the effective...
E
...throughput of the agent, the agent EPS. It kind of hit a ceiling at 25k EPS with the larger log size, and then, as I go and lower the log size, the ceiling, the maximum EPS for a single file, kind of increases up to 30k. So yeah, that was one takeaway from it: single-file max EPS was around 30k.
E
So it can support a single container logging that much, which is pretty high. Then I tested scaling up: similar total generation, but coming from multiple containers, because it should be able to scale up linearly. And yeah, it does scale up well, ingesting like 40k, and then, out of five log generators, you can ingest like 54k, 52k. So here, as you can see, I have increased from five to seven to ten.
E
It should scale up well, and there is CPU and memory available for the agent to use, but CPU usage is not going up, and the maximum agent EPS is not going up either; it kind of hit the ceiling here again. So I'm thinking the potential bottleneck is... maybe I need a stronger node CPU.
E
The total CPU on this node is eight cores, so maybe with everything running, the other daemons, the log gen, and the agent collector, even though I assign four to it, it cannot utilize all four, because it's shared with the other services and applications. So yeah, that's one thing, and also file I/O.
E
There could be something there. So that's another takeaway: I kind of see a ceiling with scaling up at around 53k. And then I also tested auto-detecting the container runtime configuration, where you look at the log...
E
I think a lot of us are familiar with it: you look at the log format, and then whatever it matches, you send it to a different parser (Docker, CRI-O, containerd). My intuition was: you are running this regex match on every log event, right? So I was like, okay, this is going to have a significant performance impact, and I'm not going to have it on by default. But then, when I tested it, it didn't have much performance impact.
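The detection E describes, matching each log line against per-runtime patterns to pick a parser, can be sketched like this. The specific patterns below are assumptions for illustration, not the collector's actual regexes:

```python
import json
import re

# CRI-style lines (CRI-O, containerd) look roughly like:
#   "<RFC3339 timestamp> <stream> <tag> <message>"
CRI_RE = re.compile(
    r'^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\S*\s+(stdout|stderr)\s+[FP]\s')

def detect_runtime(line):
    """Guess which container runtime wrote this log line."""
    if CRI_RE.match(line):
        return "cri"
    try:
        record = json.loads(line)
        # Docker's json-file driver writes {"log": ..., "stream": ..., "time": ...}
        if isinstance(record, dict) and "log" in record and "time" in record:
            return "docker"
    except ValueError:
        pass
    return "unknown"

print(detect_runtime('{"log":"hi\\n","stream":"stdout","time":"2021-07-21T10:00:00Z"}'))  # docker
print(detect_runtime('2021-07-21T10:00:00.000Z stdout F hi'))  # cri
```

The dispatch then routes each matched line to the corresponding parser; the point of E's measurement is that one anchored regex check per line turns out to be cheap.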
E
The numbers are really close. There is a slight degradation, but it's very similar to the run without auto-detection. So I'm going to add this auto-detection by default into my Helm chart.
E
So yeah, I just wanted to share those numbers. Any questions, or any comments for improvement?
D
Yeah, thanks, Rock, for sharing this. As far as the data loss goes here, it sounds like it's possible that we're just hitting resource limitations, but do you have any indication that it's on the software side of things? This is just an open question.
E
So, you know, as the file gets rotated, you keep falling behind, delaying and delaying, and then eventually the file gets rotated two times, and then we lose track of it, right? So if it's a burst of like 50k and then kind of a slow period afterwards, it will catch up. But if it's constantly 50k, then eventually it will fall off. Yeah, okay.
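The failure mode E describes, a reader that falls behind a constant-rate writer until rotation discards unread data, can be illustrated with a toy model. All the numbers and the fixed "unread buffer" abstraction here are invented for the sketch:

```python
def simulate(produce_eps, consume_eps, buffer_lines, seconds):
    """Toy model: a file holds at most `buffer_lines` unread lines before
    rotation discards them. Returns total lines lost."""
    backlog = 0
    lost = 0
    for _ in range(seconds):
        backlog += produce_eps - consume_eps
        if backlog < 0:
            backlog = 0  # the reader caught up
        if backlog > buffer_lines:
            lost += backlog - buffer_lines  # rotated away before being read
            backlog = buffer_lines
    return lost

# A short burst the reader can absorb: the backlog never overflows.
print(simulate(50_000, 30_000, 100_000, 3))   # 0
# A sustained overload: the backlog overflows the rotation window and data is lost.
print(simulate(50_000, 30_000, 100_000, 10))  # 100000
```

This matches the behavior described: a burst followed by a quiet period recovers, while a sustained rate above the reader's ceiling eventually drops data.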
G
So, Rock, two questions; this is Shubham. One is: you mentioned the potential bottleneck, but we don't know anything for sure, right? What's the bottleneck right now? We're still in investigation mode, trying to figure out where the actual bottleneck is.
E
Something in the environment, you know. It should scale up linearly, and it should utilize all the CPUs. If I were using four out of four and also seeing data loss of like eighty percent, that would make sense, but yeah...
G
It's not choking on resources; that's what we can say from the table. Okay. And the second thing is: you mentioned that you're planning to add the auto-detecting container runtime by default, right? But from the results, it looks like it does have around a 10-ish percentage of data loss, right? No?
E
Divide it by this: yeah, it's like 0.6 percent. You should compare this one with this one here, because this is with everything else at the same throughput.
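The number E computes on the fly is just lost events divided by generated events. The rates below are illustrative figures, not the actual benchmark values:

```python
def data_loss_pct(generated_eps, ingested_eps):
    """Fraction of generated events the agent failed to ingest, as a percent."""
    return 100 * (generated_eps - ingested_eps) / generated_eps

# A small gap between generated and ingested rates yields well under one
# percent loss, in line with the ~0.6% figure mentioned.
print(round(data_loss_pct(50_000, 49_700), 2))  # 0.6
```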
D
So, I'm curious, back to the CPU question. I mean, clearly the CPU is not the bottleneck, but it sounded like you suspected file I/O as being the bottleneck; that might...

E
No, no.
D
Okay. I think what I would want to get out of this, aside from the fact that this is a nice baseline for where we're currently at on your current setup: I'm very curious whether it's possible at all to identify if there is or is not an issue with the logging libraries. If there is, of course we want to address it, but it does sound like right now there's some ambiguity about it. So I'm not seeing anything actionable right now, but if you identify something that we can fix, or...
E
Yeah, yeah, okay. I haven't seen anything that involves the logging library.
D
Yeah, okay, awesome. Not that I'm asking you to rerun your benchmarks, do as you will, but it would certainly be interesting to see where things land when not constrained by hardware.

E
Yeah, yeah. Definitely.
D
Thanks. Any other items from anyone today?