From YouTube: 2020-07-15 meeting
A: The OpenTelemetry protocol is now official, though obviously still in the alpha stage, but that's good, so right now I'm adding support for it to the collector. On the collector side we already had the draft version of the protocol implemented; I'm getting rid of that and adding the official version, which is slightly different as well. So I'll do that, and I think that's it from me. I know Ben has some updates and is going to show something. If others have updates, maybe they can go first and then you can.
G: Sharing my screen. So basically I've been working the last couple of weeks on the Fluent Bit integration with the collector: basically trying to get Fluent Bit's Forward protocol output, writing a receiver to take that in, convert it to the OpenTelemetry logs format, and then stick it on the pipeline, and then also a way to manage a running Fluent Bit in a real deployment.
G: So I've got an extension written that basically just runs Fluent Bit as a subprocess and does a couple of minor things to integrate it better into the collector. So yeah.
What I've got here (I've actually got my laptop running) is this little utility I found called flog, that basically will just create a bunch of random logs, these particular ones here in the Apache common log format. So a pretty common web-server-type log, and they've got quite a few record fields.
G: I've got, I think, six key-value pairs associated with each log. And so then I've got my extension running; I'll show you the config I'm using right here. This is the collector config, this is what it looks like right now, just my initial rough draft.
So this extension here, it's a fluentbit extension, and basically you provide your own little config for it in the Fluent Bit config format, and then you provide the path to the Fluent Bit binary you're using, and ideally at some point...
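For readers without the screen share, here is a minimal sketch of what such a collector config might look like. This is an assumption, not read from the demo: the field names (`executable_path`, `config`, `endpoint`), the binary path, and the log path are all illustrative.

```yaml
# Hypothetical sketch: a fluentbit extension that is given the path to
# the Fluent Bit binary plus its own Fluent Bit-format config, and a
# Forward-protocol receiver that takes in what Fluent Bit sends.
extensions:
  fluentbit:
    executable_path: /usr/local/bin/fluent-bit   # assumed field name
    config: |
      [INPUT]
          Name   tail
          Path   /var/log/access.log
receivers:
  fluentforward:
    endpoint: 0.0.0.0:8006   # assumed default Forward-protocol port
```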
G: Basically, that's what this is. This dashboard is showing the current status of this run. It's tailing one log file right now, and it's sending roughly in batches of around 10,000 records per event. So basically in Fluent Bit, at least for the Forward protocol, there's a notion of an event, which is basically one MessagePack array structure; there are different types of events, but within one event there can be multiple records.
G: We're getting, as you see here, about 10,000 records per second. I don't know what kind of scale a heavy production server is going to have, so I need to see if I can scale up to that kind of level, but it seems like a lot to me. And in terms of CPU utilization, it depends on the batch size.
G: The batch size is kind of low right now in what I was running, but if it goes up a little bit, I'll get up to like 80 or 90 thousand records per second per core. So it should scale pretty well; it seems to scale fairly linearly. I haven't pushed it much higher than this yet, but...
G: Everything seems fine so far. Heap usage is also very low; granted, it's not actually doing anything in the pipeline other than just sending it through the logging exporter. When I initially did this, heap usage was much higher, and I did some optimizations to get that down to basically nothing. So I think the biggest limitation here is going to be CPU, but it seems pretty good for an initial implementation anyway.
G: So basically, one of the considerations with this was: if we use Fluent Bit, how much of an overhead is it versus the collector actually doing the work itself? You can see here the CPU utilization is roughly the same between the collector and Fluent Bit. This over here (I guess you can't tell, since my cursor isn't visible for you) is Fluent Bit's CPU utilization.
G: So Fluent Bit's is a little higher, because it's actually doing the work of tailing, and the collector's is a little bit lower. I wouldn't say it's twice; it may be closer to like a hundred percent overhead. But with that we get all the functionality of Fluent Bit without having to implement any of it. Okay, so that's kind of the quick run-through.
A: Did you try a smaller batch, maybe a thousand? Because I'm guessing that 10,000 is a large batch size; what is it, like one kilobyte per record? That's 10 megabytes already, just when receiving the batch, and probably more in memory, I'm guessing. I wonder if that would reduce the memory usage even more.
A: So this is not going to add much. Even if you send it somewhere through OTLP, it's not going to add a lot of CPU on top of what you already have here, because most of the work is done on the receiving side, on the allocations, on the garbage collection. You just need to serialize all of that, and that's already there. It's not adding much overhead. Obviously we will need to measure, but...
A: It will be whatever people implement, plus whatever we as maintainers add; everyone will add a few, I guess. But it's typically very easy to implement an additional exporter. If you look in the contrib repository, there are dozens of exporters to various vendor formats, and I expect log vendors will do the same once it's available. For now it's not available yet: because this is experimental, we don't make these interfaces publicly available, so it's impossible to implement logging components yet, but we'll open that up.
F: On the collector side, if we use Fluent Bit as an extension, we don't have to reinvent the wheel or deal with the lots of features Fluent Bit has already created. But what kind of additional value does the collector itself put on the stream coming from Fluent Bit? Because it looks like Fluent Bit is tailing the file and producing those log events, and we just chain those two things together. What additional feature do we want from the collector side, right?
A: Good question. So we intend to have processors that can process log data in a similar manner to what they do today for traces and metrics, and to do that uniformly, and that's very valuable. It allows you to specify, let's say, the enrichment of your telemetry, and have all three signals of telemetry be enriched in the same way, consistently, configured in one place in the collector.
A: For example, we have the Kubernetes processor in the collector today, which automatically adds Kubernetes-related attributes to all traces and metrics that pass through the collector. We can do exactly the same thing for logs, and that will guarantee that when the logs, traces, and metrics end up in your back-end, the names of the attributes and the values are precisely the same, so you can easily do the correlation between them. Okay.
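A sketch of the uniform-enrichment idea as collector config. The component names here are assumptions for illustration (the Kubernetes-attributes processor has gone by names like `k8s_tagger`/`k8sattributes` in contrib; they are not taken from the meeting): one processor configured once and shared by all three signal pipelines.

```yaml
# Hypothetical sketch: the same Kubernetes enrichment applied to
# traces, metrics, and logs, so attribute names match in the back-end.
processors:
  k8sattributes:        # assumed name; "k8s_tagger" in older releases
    extract:
      metadata:
        - k8s.pod.name
        - k8s.namespace.name
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlp]
    logs:
      receivers: [fluentforward]
      processors: [k8sattributes]
      exporters: [otlp]
```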
E: Yeah, so I think this is actually super interesting, and we have something running now, thanks to your efforts, Ben, that basically shows sort of the whole thing soup to nuts, right? And that's pretty sexy, I have to say, considering how many times we've dialed into this. So it's quite good; I'm actually quite happy to see that.
E: So from the Sumo Logic perspective, the path for us (I think that's probably not a big secret) is to really double down on the OpenTelemetry collector, and we know that the tracing piece is coming, very close to GA.
E: Metrics is kind of a little bit behind, but for the logs piece we don't really quite have a working solution yet. I'm not necessarily going to go down the route of calling this a GA solution (no offense, I hope), but it's certainly sort of half there. So...
E: So the other thing that's interesting about this is the way that you implement it: by basically spinning up the Fluent Bit process from within the collector, right? In the extension, exactly. And so I think there have been discussions, months ago, about sort of an uber-collector framework, right, that can kind of...
E: ...basically pull in other existing collectors. For example, somebody might want to collect using Telegraf, because they just have a working solution for that, but they may want to ship it up through the OpenTelemetry collector infrastructure, and enrich it and all of this stuff, too.
E: ...and ship it out on the other end, right? I think the Google folks have, to some degree, signaled interest in that, and that's something that at some point we have also discussed internally at our place. So I think this is a nice POC for basically laying out how that could work. You're using Fluent Bit in this particular case, but I think there's additional interesting value there.
E: It could become a pattern, right, where we get used to the fact that it is actually okay to spin up a process and wire up the output, even over localhost, and it just sort of works, and seems like it's performing well too. So that's the other thing I really like. Yeah.
B: As you mentioned Google: there are two scenarios where we're doing this. The first is, we have some interns who are adding various integrations for prepackaged applications, like metrics integrations for MySQL, things like that. For each of those they're looking at the existing SignalFx integrations for the collector, the ones they had in the SignalFx agent, as well as ones for Prometheus and Telegraf, and then on a case-by-case basis they're porting some of them over, but those are encapsulated.
A: We can support other protocols, and that makes the collector more valuable and makes it easier to use together with other collectors. I think this is a good approach. Generally, it's unlikely that we will be able to implement everything that is already implemented in some other collector, and I don't think we have the time and resources to do that, nor that we should. This is a good approach, right, to support.
C: Okay, great. So what I'm trying to do is support a use case that wants to do structured logging, and it's going to be beacons, like tracking beacons, on the server side, for Node and Java. What we want to do is use an OpenTelemetry-compatible logging approach, and, from the discussion, you want to use the native loggers like log4j.
C: What I thought we could do is... so David had done some prototype code a couple of months ago. That was a kind of Java prototype SDK that had the logging model; it had the Forward serialization on the Java side, and then that could just be sent to Fluent Bit or even Fluentd, and from there I could get it out anywhere. So in looking at that, I was like, wow, this is really cool.
C: Why isn't everybody talking about David's stuff? So I kind of wanted to ask about that directly and see what options we have, because, for people that want to do structured logging straight away, they will want that whole data path, where there's code support in, say, Java and Node. I don't think I'm the only one, so can you guys comment on that?
D: So I think there are a lot of paths. One of the big powers of the Fluent Bit side is that we can actually tail from files, and there's a lot of complexity there that we kind of don't want to pull in, and wouldn't be able to pull in very well. So that's one side that I think is really cool with the Fluent Bit stuff. The stuff I'm working on is a little bit like what you're talking about, where you've got...
D: ...a standalone program where you write to your logging library one way or another, and it kind of follows the same thing as with our other cases: we'll aggregate it, we'll put it together, and then we'll have an exporter. The exporter can be OTLP, and that goes to the collector, and the collector can go off somewhere else; or it could be an exporter that goes to Fluent, and then that goes through whatever your pathway is. There's...
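The two paths described here can be sketched as one collector logs pipeline that accepts either source. This is an illustrative config, not from the meeting; endpoints and component names are assumptions.

```yaml
# Hypothetical sketch: the app exports logs over OTLP directly, or
# Fluent Bit forwards them, and a single logs pipeline carries both.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
  fluentforward:
    endpoint: 0.0.0.0:8006   # assumed Forward-protocol listen address
exporters:
  logging: {}                # stand-in for wherever logs go next
service:
  pipelines:
    logs:
      receivers: [otlp, fluentforward]
      exporters: [logging]
```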
D: I think probably for any scenario you'll be able to come up with three or four or five valid solutions that will work for you; it's just a question of which one actually works the best for you and is going to be more maintainable in your situation. If we do only the Fluent Bit option, that's going to work great for some people, and some people are going to really have a hard time working with that; and the same holds if we do only the in-process collection and then sending it off to something else.
A: We want to make sure all those scenarios are nicely supported, to a certain extent, to the extent possible, right? We don't want to just mandate and require a single way of doing things. We want to have a nice way of doing things regardless of what your preferences are.

C: Totally, that's exactly what David's decision tree is showing, the one that I saw.