From YouTube: 2020-11-25 meeting
D
Okay, I don't see any other topics in the document and I don't have anything from my side either. So, George, I think we can start right away.
E
All right, yeah. So I didn't know what to title the topic, but this is going to be a hello from folks at Google who are looking to use the OpenTelemetry logs protocol, and I will let them kind of introduce it, if that's cool. So yeah, who wants to start? By the way, for the folks from Google: did you open the Log SIG meeting notes and put your name in the attendee list?
E
Okay, we should, we should do that, but we can do that later. So, Kenny, do you want to say hello?
B
So the reason, as Josh mentioned, that we joined this meeting is to present what we are looking to develop as an integration, or, well, perhaps not an integration, but a use of OpenTelemetry logging for a specific network observability purpose, and we'll probably do some intros. So do you want me to dive in? What's the right way to use the time?
E
I think we should say hello, who we are, which is what we did, and then just a little bit about what we're planning to do, and then kind of have a discussion about, you know, what everyone thinks, kind of thing. Sound good?
B
Cool. So we're looking into enhancing network observability, primarily focusing on the Envoy proxy as the network element to plug into. It is used in multiple deployment types by a lot of providers; I'm sure it doesn't need any introduction in this forum.
B
Recently we have explored the options for transferring network samples, service-level samples, into an observability engine, and obviously OpenTelemetry comes to mind as a commonly used protocol as of today, and definitely going forward. We have compared tracing and logging as the transport for sending that kind of information.
B
The bottom line, as far as we understood, is that logging is a more suitable framework for our purposes, for multiple reasons. One is that we do not expect to require a huge amount of bandwidth, so we do not see logging throughput as a bottleneck for whatever that solution is going to become. And one of the considerations had to do with client or SDK availability relevant to the Envoy proxy.
B
So an advantage of using logging is that we do not have to use the SDK for it; instead we can send RPCs to the collector or whatever pipeline comes next. And that can also fit telcos' implementations of the Envoy proxy without interfering with multiple modules that might have performance issues, or at least concerns about that. If that makes sense to you.
E
Yeah, yeah. So if I can recap, the high level is: the Envoy proxy would be instrumented to send out OpenTelemetry as a protocol, but not necessarily using the SDK, instead using kind of built-in mechanisms in Envoy to fire out RPCs. Right? Yup, okay.
B
Yeah. So think of it on a high level: suppose the Envoy proxy, whether it is a sidecar or a middle proxy, handles a request-response pair, and we are talking about sending samples of metadata for such a request-response. So think of it as: okay, where did I send the traffic? What was the destination of the stream? What was the response code? How much time did it take? Stuff like that.
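As a rough illustration of what this could look like, here is a minimal Go sketch that builds one such access-log-style record and sends it straight to a collector over the OTLP logs gRPC service, with no SDK in between. It is a sketch only: it assumes the go.opentelemetry.io/proto/otlp generated packages, uses the current ScopeLogs schema (the proto has evolved since this meeting), and the attribute keys are made up for illustration.

```go
package main

import (
	"context"
	"log"
	"time"

	collogspb "go.opentelemetry.io/proto/otlp/collector/logs/v1"
	commonpb "go.opentelemetry.io/proto/otlp/common/v1"
	logspb "go.opentelemetry.io/proto/otlp/logs/v1"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// strAttr builds an OTLP string attribute.
func strAttr(k, v string) *commonpb.KeyValue {
	return &commonpb.KeyValue{
		Key:   k,
		Value: &commonpb.AnyValue{Value: &commonpb.AnyValue_StringValue{StringValue: v}},
	}
}

func main() {
	// Dial the collector's OTLP/gRPC endpoint directly; no SDK involved.
	conn, err := grpc.Dial("localhost:4317", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := collogspb.NewLogsServiceClient(conn)

	// One request/response sample: destination, status code, latency.
	rec := &logspb.LogRecord{
		TimeUnixNano: uint64(time.Now().UnixNano()),
		SeverityText: "INFO",
		Body:         &commonpb.AnyValue{Value: &commonpb.AnyValue_StringValue{StringValue: "access sample"}},
		Attributes: []*commonpb.KeyValue{
			strAttr("destination", "10.0.0.12:8080"), // hypothetical keys
			strAttr("response_code", "200"),
			strAttr("duration_ms", "12"),
		},
	}

	req := &collogspb.ExportLogsServiceRequest{
		ResourceLogs: []*logspb.ResourceLogs{{
			ScopeLogs: []*logspb.ScopeLogs{{LogRecords: []*logspb.LogRecord{rec}}},
		}},
	}
	if _, err := client.Export(context.Background(), req); err != nil {
		log.Fatal(err)
	}
}
```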
F
Latency, headers, payload size, a lot of things, for every access. They have their own gRPC logging implementation; I think it's called ALS, which I think is not widely used in the community, perhaps more in enterprises.
F
So it's a very good match for us, and we are probably going to do some sampling as well. So we're going to add it to our Envoy implementation, because we don't necessarily want every access in Envoy, and perhaps others will also find it useful.
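On the sampling point, here is a hedged Go sketch of the kind of head-sampling decision an access logger might apply before a record ever reaches the export path; the keep-one-in-N policy is just an example, not what the speakers settled on.

```go
package sampling

import "sync/atomic"

// Sampler keeps one access-log record out of every n; the rest are
// dropped before they reach the export path. Purely illustrative.
type Sampler struct {
	n     uint64
	count uint64
}

func New(n uint64) *Sampler { return &Sampler{n: n} }

// Sample reports whether this access should be logged.
func (s *Sampler) Sample() bool {
	return atomic.AddUint64(&s.count, 1)%s.n == 0
}
```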
D
Yeah, it makes a lot of sense. It's one of the intended use cases of OTLP, so that we can directly send the logs instead of writing them into a file first, for example.
D
Right, so you are going to be implementing a way to send logs. You don't need to implement a logging library, but a lot of the elements of what you're doing are what you would find in an OpenTelemetry-compliant logging library, which uses the OTLP format for exporting the data. The C++ SIG at OpenTelemetry is right now looking into implementing a logging library, and I advised them to implement it in a way that decouples the API from the implementation.
D
So if you take only the implementation, it gives you the necessary pieces that you can reuse for what you need to do on the Envoy side. So throw away the API, which is OpenTelemetry's invented API, and what is left is basically what you need, right? There is going to be a log record concept; there is going to be something called a Logger, to which you give the log record; and then there is the batching, the retrying, the implementation of the OTLP exporting protocol, all of that you'll get for free.
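The decoupling being described might look roughly like this; a Go sketch of the shape, not the actual C++ design, which was still being worked out at the time. The names (LogRecord, Logger, Exporter) follow the concepts mentioned above; everything else is assumed for illustration.

```go
package logsapi

import "time"

// LogRecord is the API-side concept: what instrumented code hands to a Logger.
type LogRecord struct {
	Timestamp  time.Time
	Severity   string
	Body       string
	Attributes map[string]string
}

// Logger is the only surface instrumented code needs to see.
type Logger interface {
	Emit(rec LogRecord)
}

// Exporter is the implementation side; an SDK would provide an OTLP
// exporter, with batching and retry, behind this seam.
type Exporter interface {
	Export(batch []LogRecord) error
}

// batchingLogger is a toy implementation: it buffers records and flushes
// to the exporter when the batch fills. A real SDK adds flush timers,
// bounded queues, retry on failure, and thread safety (this sketch has none).
type batchingLogger struct {
	buf      []LogRecord
	max      int
	exporter Exporter
}

func NewBatchingLogger(max int, e Exporter) Logger {
	return &batchingLogger{max: max, exporter: e}
}

func (l *batchingLogger) Emit(rec LogRecord) {
	l.buf = append(l.buf, rec)
	if len(l.buf) >= l.max {
		_ = l.exporter.Export(l.buf) // retry/error handling elided
		l.buf = nil
	}
}
```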
F
So we are looking at the SDK, the C++ implementation of OpenTelemetry. Envoy has a very strict structure in the way they want things done.
F
So we will need to see how everything fits together. Right now there is a consensus with the Envoy people that we will use the protocol as is; that's a given. Regarding the implementation itself, we are still checking both ways: both using the C++ libraries from OpenTelemetry, and using the existing gRPC implementation in Envoy for their own way of logging, because they did a lot of optimizations there to behave well inside the Envoy binary.
F
So we'll see how everything fits together. Even if we won't use the C++ libraries at the beginning, I guess eventually they will go in there, even if not in the initial pull request.
D
Okay,
okay,
that
makes
sense,
yeah.
Sorry,
one
more
thing,
and
if
you
decide
to
go
with
the
with
the
envoy's
implementation,
it
would
be
very
useful
for
us
to
know
why,
and
if
there
is
something
that
let's
say
they
do,
I
don't
know
rate
limiting
they
do
they
have
pound
queues
not
to
exceed
limit
like
they
limit
the
memory
size
stuff
like
that
right,
it
would
be
very
useful
for
robert
telemetry
to
to
learn
from
what
what
you
learned
right.
So
what
what
is
it
that
shows?
D
What
is
the
reason
you
chose
the
envoys
implementation
over?
What
open
television
does
today?
So
please
do
give
us
some
feedback
on
that.
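For reference, the bounded-queue pattern alluded to here, dropping new records rather than letting memory grow without bound, is easy to state; a Go sketch of the idea, not Envoy's actual code:

```go
package queue

import "sync/atomic"

// BoundedQueue drops records when full instead of letting memory grow
// without bound, the failure mode described above. Illustrative only.
type BoundedQueue struct {
	ch      chan []byte
	dropped uint64
}

func New(capacity int) *BoundedQueue {
	return &BoundedQueue{ch: make(chan []byte, capacity)}
}

// Offer enqueues rec, or drops it (counting the drop) if the queue is full.
func (q *BoundedQueue) Offer(rec []byte) bool {
	select {
	case q.ch <- rec:
		return true
	default:
		atomic.AddUint64(&q.dropped, 1)
		return false
	}
}

// Dropped reports how many records were discarded.
func (q *BoundedQueue) Dropped() uint64 { return atomic.LoadUint64(&q.dropped) }
```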
C
Can I say something? We have a very similar use case to what you're trying to do. Like, we have some application proxies, load balancers, that are trying to export some, you know, telemetry data, and teams have been looking at proprietary things. I mean, I'm coming from AWS, so we're looking at our proprietary stuff versus OpenTelemetry. Everybody likes the idea of, you know, directly exporting to OpenTelemetry, because you know we want to make the collector available pretty much everywhere. So from the environment...
C
You will be able to basically export to the collector, and that's sort of what they are trying to evaluate. But we are in the same boat as you are: we either can, you know, take the C++ libraries and contribute to them, take them to a decent level, or the instrumentation, the logging piece, will be done completely internally, you know, but we will be speaking the protocol.
C
You know, I'm telling this as an FYI, because you know we may need to fund the C++ APIs to be able to, you know, unblock us, if we decide to go with the C++ OpenTelemetry libraries. So yeah.
E
Yeah, so I've been on the C++ SIG and reviewing the log CLs as they come in, and I do think that, well, it depends on what timeline we're talking about, whether or not we should rely on the C++ SDK. But I know, like, Riley doesn't plan to have that even be GA until June, far after OpenTelemetry's GA. And even when it is GA, I expect some of the things, like, specifically for a project like Envoy that is super high performance...
E
You know, you're counting every single, like, reference collection that you have, or allocation, deallocation, and you have to be very, very careful with memory management. I expect that kind of sophistication not to have rippled through the library yet; like, it's literally just kind of getting invented now. So I think it's going to take some time until it's up to this use case. That's just my read on the library right now, just as an FYI. Yeah. So yes, I agree, we have to invest in it.
D
Yeah, yeah, and you may, I guess, do your own implementation, and then we can use that as learning to do the OpenTelemetry implementation, right?
F
Yeah, that's nice. We are currently relying a lot on the ALS, the Envoy logging RPC mechanism. There is some knowledge about it; they did a lot of optimizations there on how to handle reconnects and buffering and things like that, so we're trying to leverage that to get, like, a solid implementation for the client side.
F
But I hope we can actually export that into the OpenTelemetry C++, so maybe it will be useful. Envoy's is, like, just extremely optimized for their use case, so it could be useful.
D
Sounds good. I'm sure the C++ SIG folks will be very open to your suggestions.
E
Well, also, I don't know if Anna got to say hi, but so, if Kenny, Anna and Omri, if you see their names, you know that's who we are, that's what we're driving. So hello, hi.
D
Cool. Josh, you initially were intending to show something as well. Do you want to postpone that? Is that... no? Yeah, we'll postpone that. Yeah, okay, okay. Anything else that we need to discuss on this topic, or are we good?
D
Okay. Also, on the OTLP logging definitions, like the actual format, if you have any feedback on that as well, I would also like to learn about that, because I wrote most of that, so keen to hear. Okay.
B
Yeah, so around that we may have some feedback or thoughts regarding optimizations related to using that structure. I mean, there is obviously some trade-off between using it with a high sampling rate or high performance requirements versus a very low one, and we may be looking into specific suggestions regarding optimizations for these structures and definitions.
D
Okay, good. Next, you have an item here; do you want to talk about that?
A
Yeah, it's just a note that I have prepared this group-by-attributes processor PR, and this is one of the items to unblock Fluent Bit Kubernetes metadata tagging, which I think is one of the major workflows right now for logs, so I think it will be helpful. So if anyone wants to look and see if this PR makes sense, that would be great. There will be one more PR coming in a couple of days, with pod UID support for the Kubernetes processor in opentelemetry-collector-contrib.
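For readers unfamiliar with that processor, the core idea, sketched loosely in Go below, is to regroup incoming records by the values of a chosen set of attributes (for example pod-identifying ones) so that each group can then be treated as one resource by downstream processors such as the Kubernetes processor. This is the gist only, not the contrib implementation; Record and its fields are stand-ins.

```go
package groupby

// Record is a stand-in for a log record with flat string attributes.
type Record struct {
	Attributes map[string]string
	Body       string
}

// GroupByAttributes partitions records by the values of the given keys,
// mirroring the regrouping such a processor performs so that each group
// can be handled as one resource (e.g. one pod) downstream.
func GroupByAttributes(records []Record, keys []string) map[string][]Record {
	groups := make(map[string][]Record)
	for _, r := range records {
		id := ""
		for _, k := range keys {
			id += k + "=" + r.Attributes[k] + ";" // naive composite key
		}
		groups[id] = append(groups[id], r)
	}
	return groups
}
```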
C
So next is me, I think. So I filed an issue about, you know, the log records: they have a name and a body, but there is no recommended size limit. There's actually, like, a recommendation for the name, but it's not definitive, and for the body there are basically no size limitations. And there are some vendors out there that actually, you know, set a limit, or, you know, if people are writing to, like, Elastic or to their own...
C
You know, storage, they will probably, you know, have some limitations based on their solution. They may or may not, but they may. So I was wondering: is the spec a good place to set this limitation, or should we recommend a behavior? Because vendors are behaving differently: like, some of them are truncating the body if it's, you know, out of their limit; some of them are, like, completely rejecting the request.
D
There is a very similar thing for the traces right now in OpenTelemetry, and I don't remember the exact details, but there was a PR, I remember there was one, that introduced the notion of limits for spans.
D
I think it was more on the number of the attributes; I don't remember if there was any limitation on the size of the attributes or not. But what I want to say is that it would be very useful to approach this uniformly: see whatever we have already specified for traces, whether it can be applied to logs as well, and, you know, why, and whether something needs to be changed in the choices. I think what we have in the specification is likely very rudimentary yet, right?
D
So it would be great if you could have a look at what exists there for the traces, yeah, and then make a proposal, right? I think it's reasonable; it's hard to tell, but possibly it needs to be configurable, right, so that the maximums are possible to choose for individual cases, right?
F
So, regarding traces: I looked into the spec, and the main limitations are the number of attributes and the number of events, and there's a field to indicate the number of dropped events and dropped attributes. Now, the implementation details differ, because each vendor has their own limitations. I think I listed it somewhere in the doc.
F
But as far as I remember, not many... I mean, the spec doesn't handle byte size limitations; it only handles count limitations. I'm not sure how it will work with the body attribute, because kind of the fun thing about the body attribute is that, I mean, when we originally looked at it, we actually thought about using attributes, but what we feared is that we would lose some attributes due to some propagation.
F
So we went with the body, because we wanted not to leave truncation to the implementation side. So, I think, as long as it's, like, string-based or something like that, truncating makes sense. But if it's binary in any way, truncating it could just corrupt the data. So there's...
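The distinction drawn here, that strings can be cut at a limit while arbitrary binary cannot safely be, could be expressed like this; a Go sketch of one possible policy, not a spec proposal:

```go
package limits

// TruncateBody enforces a byte limit on a log body. String bodies are
// truncated at the limit (naively; a fuller version would respect UTF-8
// rune boundaries), while binary bodies are rejected outright, since
// truncating them would silently corrupt the payload.
func TruncateBody(body interface{}, limit int) (interface{}, bool) {
	switch b := body.(type) {
	case string:
		if len(b) <= limit {
			return b, true
		}
		return b[:limit], true
	case []byte:
		if len(b) <= limit {
			return b, true
		}
		return nil, false // reject rather than corrupt
	default:
		return body, true // other types left to a fuller policy
	}
}
```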
C
Yeah, exactly. That's why, like, JSON as well. Like, you know, I think New Relic is truncating the body, and imagine you're just, like, writing tons of JSON, and it's just, you know, truncated and completely corrupt at the end of the day. So yeah, anyways, I just felt that, you know, let's not forget about this.
E
We can define a new truncation spec for how to truncate nested structures; I think that's obviously the right thing. I think teasing out the actual fundamental, like, questions of what OpenTelemetry believes in is important here. So, like, when it comes to limitations on sizes...
E
I'd love it if those were always driven by optimizing the collector, and then secondarily by vendors, if possible. Right, because I do think...
E
If you can get the data to the vendor and then let the vendor decide what they want to do, based on their limitations, that might be better than limiting it right at the API level. That said, that means your errors are exactly where you don't want them: as far away from the thing that caused the problem as possible. So anyway...
E
I look forward to your proposal, but I just wanted to throw out that I'm curious: I'm still learning the, like, zen of OpenTelemetry and what the actual principles are that guide the design so far.
D
Actually, the principle behind introducing these limitations in OpenTelemetry, in the traces portion, was to try to prevent runaway processes: if you have a bug somewhere, right, which starts generating thousands and thousands of attributes.
D
For whatever reason. This was the aim, right: to try to prevent that from happening, rather than to try to come up with reasonable limits over which, if you go, you start seeing performance issues. Which is valid, but it was not the reason why this was introduced initially in the specification. I think performance obviously is also very important, so that needs to be considered as well, but the original intent there was to prevent some implementation bugs.
C
Sorry, I have one more, yeah. So I have a question; I'm just ramping up on this project and I'm not super, you know, familiar with all the bits. So one thing I wonder, you know: the OTLP protocol has the logging right in it, the proto.
C
But we don't have... you know, if you write an exporter that writes the logs, we will be able to export them, but we still have to, you know, write the vendor-specific exporter, or we'll use the logging exporter or whatever. Okay, that's great.
C
So one thing that I was seeing is, for Elastic: some of the people at AWS have been experimentally trying to build an exporter that directly exports to Elastic from C++, an exporter written in C++, you know. One of the other options was, like: can we go through the collector route? And it was not super clear to us; we should probably, you know, build our exporter...
C
You know, I mean, we should export to the collector and then write our Elastic exporter to export logs, and consider that as an option, right?
D
Logs likely can be added there. I know there is a Splunk exporter for logs; there are probably more. But definitely, yes, that's something that we would definitely want to do with logs; that's what we're doing, collecting, right? We have a bunch of traces and metrics exporters.
C
Yeah, I just wanted to make sure that, like, you know, it's at this stage that I can start, you know, writing the exporter. So, you know, sometimes things are... I mean, these are all moving pieces, right? I just wanted to know that there's a happy path, and even though things may change, you know, at least the current happy path is working, yeah.
D
Okay, it is. I think, yes, you're right that things may change. We did not declare the logs proto stable, so it may change over time, and that will affect the implementations in the collector, but I don't anticipate any major changes there. And whatever we have on the collector side is very uniform with traces and metrics, the interfaces, and I think it's reasonably safe to at least begin experimenting with logs.
F
I can tell you, I actually wrote the log exporter, like a very basic one. I mean, writing it in Go for the collector is fun, except you have to go through all the nested key-values in the AnyValue struct; that's kind of the next thing, it goes a bit deep.
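That nested walk looks something like the Go sketch below, written against the OTLP proto types (assuming the go.opentelemetry.io/proto/otlp/common/v1 package): the body of a log record is an AnyValue, which can recursively contain key-value lists and arrays, so a flat-field backend has to flatten it.

```go
package flatten

import (
	"fmt"

	commonpb "go.opentelemetry.io/proto/otlp/common/v1"
)

// Flatten walks an OTLP AnyValue (the log body type), recursing through
// kvlist and array values and emitting dotted keys; this is the traversal
// an exporter has to do when the backend wants flat fields.
func Flatten(prefix string, v *commonpb.AnyValue, out map[string]string) {
	if v == nil || v.Value == nil {
		return
	}
	switch val := v.Value.(type) {
	case *commonpb.AnyValue_StringValue:
		out[prefix] = val.StringValue
	case *commonpb.AnyValue_BoolValue:
		out[prefix] = fmt.Sprintf("%t", val.BoolValue)
	case *commonpb.AnyValue_IntValue:
		out[prefix] = fmt.Sprintf("%d", val.IntValue)
	case *commonpb.AnyValue_DoubleValue:
		out[prefix] = fmt.Sprintf("%g", val.DoubleValue)
	case *commonpb.AnyValue_KvlistValue:
		for _, kv := range val.KvlistValue.Values {
			Flatten(prefix+"."+kv.Key, kv.Value, out)
		}
	case *commonpb.AnyValue_ArrayValue:
		for i, elem := range val.ArrayValue.Values {
			Flatten(fmt.Sprintf("%s.%d", prefix, i), elem, out)
		}
	}
}
```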
E
The SDK too? Not that I write Go yet, but I'm just curious, because I want that in C++ even, you know.
D
In the collector as well. I think there is both, or maybe only the receiver is implemented, I don't quite remember, but yes, the intent at least is to support OTLP. The OTLP HTTP transport supports both protobufs.