From YouTube: 2021-01-06 meeting
E: Let me just throw out a quick FYI: I updated the GitHub labels per our discussion in a previous meeting.
E: No action item for anybody here, just letting people know I deleted some redundant labels in the collector repository and in the collector contrib repository.
E: As for the status of triaging issues, we are 38% complete across all opened issues. At this point we've actually improved since the last meeting, I think because some issues were closed, some older issues that had not been triaged got closed.
E: We did not improve from last time on the collector issues, but for collector contrib there's more that needs to be triaged because more issues got opened. Still, we have zero P0s, which is good. We still have two P1s in the collector and none in contrib.
E: So we can do a time box of 10 minutes for triaging the collector issues that have come in, and as usual I will start with... I think this is the stuff from here to here. This came in over the holidays; I'll start with the bugs.
D: Yeah, this needs to be fixed. If it actually does that, then can we make it a P1?
E: Okay, is there an appropriate assignee? Can you assign it?
E: Is this what you're looking for?
E: Okay, so those are the bugs for the collector. We'll take a look at the bugs for, oops, contrib, before we switch back to the collector non-bugs before the end. We just have four.
A: No, I don't think so. None of these... yeah, it should be exporter.
E: Fine, so labeled for GA or after GA.
E: Okay, they have to comment on it before I can assign mx-psi there.
A: ...in part because it's a failure in a test somewhere. Yeah, if you open the link to the failed run, there's a link. Yes.
D: Okay, let's make...
E: There we go. Okay, all right, I'm happy to keep sharing if you like, or you can take control if you want to.
I: You can keep sharing. So there are a couple of folks at Google on the call today who are interested in adding buffering to the collector.
D: Sounds good. So yeah, as I already commented there, this may or may not be a good idea to do before GA, depending on how exactly you approach it, how much change you're going to make to the existing code, or whether it's going to be an independent component. For now I suggest we label it as after-GA, but we can change that if there is a design that is not risky for us to do before the GA; we can do that. Okay.
D: So let's mark that as after-GA, I would say P2, and again it's open for adjustment if we think and see that it's more important to do. I'll be happy to comment on the design whenever it exists. I don't have time to work on it myself, but I will review it once you have it.
D: Well, it depends, right, and again it depends on how exactly it's going to be implemented. Let's leave it out for now, sure.
C: So yeah, our use case... Hi everyone, you may not have seen me before; this is the first time I've shown up in this meeting. My name is Yu, I'm from Google, and Jay and I are actually in Waterloo, Ontario, Canada. Our use case: I'm on the GKE team, and basically we want to run GKE clusters outside of Google in some other data center, so this offline buffering is very important to our use case.
C: Because even though it's a data center, the connection may not be as good as an intra-data-center connection. We already implemented the same feature, offline buffering, on the logging side for Fluent Bit. But we are newbies in the OpenTelemetry community, so in today's meeting what I would like to know more about is, one: do...
C: ...do others have the same requirements? I really want to understand the requirements. For us, honestly, the TL;DR of our requirement is that we want to buffer the offline metrics or traces up to X hours or up to a certain size. That is the requirement we have in mind when we want to start a real design. The second thing I want some opinion or guidance on, because I'm so new: I know the problem in general.
C: However, I'm new to the OpenTelemetry community, so I would like to get some guidance on where we should put this offline buffering function. Should it be a separate processor? Should it be in the queued retry? Should it be global, or should it be per exporter, those kinds of things? I want to get some guidance on that.
D: Yeah, hi and welcome. So I think it's fairly common functionality.
D: I had it in the past in a logging agent as well, so I do believe it's a useful thing to have, and there are use cases which require it. As for how exactly to implement it and what features it needs to have, you mentioned a couple, right. Definitely there needs to be some sort of maximum size limitation; you don't want it to grow in an unbounded way. As for the placement, whether it needs to be a processor or it needs to be in the exporter...
D: It depends on how you want it to behave. If you put it in the exporter, it's going to be separate queues per exporter, which means that either you have to somehow solve the data-sharing problem so that you don't store the data multiple times on disk, or you accept that that's okay and you just do that.
D: If you do it as a processor, it simplifies the data side: there is one processor instance, you store the data once. It may be that everything else is simpler if you go with the exporter route, so if the duplication of the storage is not a concern, that may be the preferable approach. But anyway, I would try to do a design, maybe try a couple of alternate designs, to see what the pros and cons are before you make a decision on this.
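To make the placement trade-off discussed here concrete, the following is a minimal, hypothetical Go sketch; all names and types are invented for illustration and are not the Collector's actual API. It contrasts per-exporter persistent queues (independent retries, but the same batch may be stored once per exporter) with a single shared queue in a processor (stored once, fanned out downstream).

```go
// Hypothetical sketch, not Collector code: a minimal persistent-queue
// interface and the two placements discussed above.
package main

import "fmt"

// PersistentQueue buffers serialized telemetry batches until delivery is acked.
type PersistentQueue interface {
	Enqueue(batch []byte) error
	Dequeue() (batch []byte, ok bool)
	Ack(batch []byte)
}

// memQueue is an in-memory stand-in so the example runs; a real implementation
// would write to disk.
type memQueue struct{ items [][]byte }

func (q *memQueue) Enqueue(b []byte) error { q.items = append(q.items, b); return nil }
func (q *memQueue) Dequeue() ([]byte, bool) {
	if len(q.items) == 0 {
		return nil, false
	}
	return q.items[0], true
}
func (q *memQueue) Ack(b []byte) { q.items = q.items[1:] }

// Placement A: one queue per exporter. Each destination retries independently,
// but the same batch may be stored once per exporter (duplication on disk).
type exporterWithQueue struct {
	name  string
	queue PersistentQueue
}

// Placement B: one queue in a processor shared by all exporters. The batch is
// stored once, but a single slow destination can back-pressure the others.
type bufferingProcessor struct {
	queue     PersistentQueue
	exporters []string
}

func main() {
	perExporter := []exporterWithQueue{
		{name: "backend-a", queue: &memQueue{}},
		{name: "backend-b", queue: &memQueue{}},
	}
	batch := []byte("serialized spans")
	for _, e := range perExporter {
		_ = e.queue.Enqueue(batch) // stored twice, once per exporter
	}

	shared := bufferingProcessor{queue: &memQueue{}, exporters: []string{"backend-a", "backend-b"}}
	_ = shared.queue.Enqueue(batch) // stored once, fanned out downstream

	fmt.Println("per-exporter queues:", len(perExporter), "shared-queue exporters:", len(shared.exporters))
}
```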
J: Is this for networking issues, or is it also about persistence? I mean, let's say things crash, like the collector crashes, and you still want to be able to retry the things that were dropped from memory or something. Do you also need something like a write-ahead-log type of thing, or is it mainly about networking issues?
D: Yeah, supposedly anything unsent, anything that is not yet delivered, should be in this persistent queue, and if the collector is restarted for whatever reason, whether it's a crash or a regular restart, it should pick up from where it was: everything that is not yet sent should be the starting point of the new operation for the exporter. That's how I saw it implemented previously in other agents. I think that's where it can be useful.
D: In essence, it guarantees delivery in the situation when your destination is unavailable or slow, or when the collector is crashed or, let's say, restarted, yeah.
B: We have the same use case, and I would like to add two notes on this design strategy, because I think this is a good discussion. Doing this as a processor would probably require providing some sort of back-pressure capability to the queued retry helper, which might not be that trivial to implement.
B: The other thing is that if there are several exporters, maybe only some of them have issues with exporting the data, because maybe there is a problem with some other service that receives the data from the exporter, and then not all of them need to queue this data. So yeah, there are other considerations, but those two come to my mind.
K: Queuing it up, but you've got to be sure to be able to turn it off, because in a lot of cases, if you can't send it on, you want to just start dropping stuff, because there's just too much data. Now, if you've got a remote thing that's not generating much data, then it makes perfect sense to queue all this stuff up on disk, but...
C: That is not the case, because if you just think of the collector as one process, it's harder to crash. However, if you think of running the collector as a workload on top of Kubernetes, then it's possible that, because of some scheduling issue, the pod gets restarted. So I think handling back pressure and having this offline buffering as a backup for the in-memory queue would be really good. I can provide one use case here.
C: Actually, it's from the Fluent Bit implementation. What the Fluent Bit community does: initially, for everything before we send out, we can say it's only in memory or we can say it's on disk. So basically we already have the capability to persist the whole queue on disk.
C: Before sending out; that's step one of the implementation. Step two is handling the retry, handling other limitations like the size limit or time expiration, and how we can drop the chunks that fall outside of those limitations. Also, Fluent Bit actually implements this as a processor instead of in the exporter, so basically for each chunk of data there is a bitmap that says, okay, this goes to destinations A, B, C.
C: This bit says the chunk needs to go to destinations A, B, C, and if all of them were successful, I'm going to change the bit to zero; then this means all the data in this chunk can be dropped. So basically that solves the data duplication problem, because we only store the data once, and we just say: okay, these are all the destinations we need to send to. So this is just a use case to share.
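For illustration, here is a small, hypothetical Go sketch of the chunk-plus-destination-bitmap idea described above; the names are invented, and this is not Fluent Bit or Collector code. Each chunk is stored once, a bitmask tracks which destinations still need it, and the chunk can be dropped once the mask reaches zero.

```go
// Hypothetical sketch of "one stored chunk, one pending-destination bitmask".
package main

import "fmt"

type chunk struct {
	data    []byte
	pending uint64 // bit i set => destination i has not acknowledged yet
}

func newChunk(data []byte, numDestinations int) *chunk {
	return &chunk{data: data, pending: (1 << numDestinations) - 1}
}

// ackDestination clears the bit for one destination and reports whether the
// chunk can now be dropped from storage.
func (c *chunk) ackDestination(i int) (droppable bool) {
	c.pending &^= 1 << i
	return c.pending == 0
}

func main() {
	c := newChunk([]byte("one batch of metrics"), 3) // destinations 0, 1, 2
	fmt.Println(c.ackDestination(0))                 // false: 1 and 2 still pending
	fmt.Println(c.ackDestination(2))                 // false: 1 still pending
	fmt.Println(c.ackDestination(1))                 // true: safe to drop the chunk
}
```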
M: Hey Yu, this is Joe Lynch from Google, happy 2021 everyone; haven't seen you in a while, haven't been here. FYI, we did implement this in our Stackdriver importer for GKE On-Prem. It actually has this buffering for metrics; logs was already there, but we built a buffering mechanism for metrics. I don't think it'll be useful in terms of leveraging it directly, but in case it's useful for ideas or whatever, something that worked, it's there.
N: I'll pile on here and just say that we've also implemented something similar in Stanza recently, which is in Go, so there may be some useful implementation details that could be borrowed from there. I'd be happy to collaborate on this as well.
N: Our primary design considerations were to ensure that it was up to each individual component to decide whether persistence would be necessary or not, and then to provide a persistence mechanism separate from the retry mechanism, so the two can be managed separately as well. But yeah, we ended up doing this mostly in the exporters, for the reason mentioned, that failures could be individual, but of course that suffers from the data duplication problem.
O: Hey, out of curiosity, for those that have implemented this for metrics thus far, did you implement this for Prometheus? Is that...
O: Prometheus, the timing problem with labels?
M: Yes. I don't know, I wasn't that close to the details, but I do know the people that were. I can put my name in the doc if you want to trade emails or something; I could put you in touch with the people that actually built it.
J: The next topic is about the Prometheus receiver. Last year, around November, we were evaluating the receiver and there were a couple of problems; it was not actively maintained. So I got together with a couple of folks from OpenTelemetry to brainstorm, basically, what the problems are and what we want to fix there, and so on.
J: So there is this doc that I've written, which was being presented. Maybe I can give people some time to add themselves as collaborators. It kind of became a summary of a couple of things that we want to do, but since November there have been so many other things going on. So maybe I can first explain some other conversations that have been going on. Since November we talked to the Prometheus team, some of the contributors.
J: They also have... I don't actually know how to drive this conversation, because there's been so much going on. Is there anyone here who is...
J: Maybe... I mean, last November we were thinking, hey, we're going to either rewrite the receiver or we're going to completely remove it and replace it with something, and that's how this design doc came around, and since then we haven't met. But we've been in different conversations with other groups and so on.
J: So not everybody actually knows what's going on in the Prometheus landscape. One of the reasons the receiver is a complicated space for us to take any action on right now is scraping.
J: The scraping libraries from Prometheus are not at a state where... they're not really rich in terms of... they're just hard to use. And because of the protocol differences between Prometheus and OpenTelemetry, we're doing so much in the collector. So there's all this big complexity, and the type of features that we want to provide is just harder to achieve.
J: Given the complexity, we've been looking at things from a higher-level perspective and talking to people about whether there is a way to make the protocols look slightly more like each other, or whether we can have better scraping libraries in Prometheus, and so on.
J: So there have been some discussions going on, but I think nothing came up as a conclusion, so we're at the same stage we left off at in November. In November's conversation about the receiver: currently there are a couple of problems with the receiver, and I can explain what they are. It's designed to run as a DaemonSet, but there are a couple of places where you don't want to run it as a DaemonSet, or a DaemonSet is not available.
J: All the discovery tooling just doesn't give us an easy way to implement sharding and such. If you're relying on Prometheus's service discovery, it just goes and finds all the targets, for example on Kubernetes; it finds all the targets, and at the collector you have to do the sharding, but there's no way to do that; we basically need to build manual things to shard. And the other thing was...
J: Existing Prometheus users are already coming up with their own very custom ways of sharding things, because it's a very difficult problem. They either run one Prometheus per cluster and it works, or they use hashmod or whatever, and they shard in their own ways. So our promise here, of a Prometheus receiver that works for everything and scales...
J: Well, it's just kind of a very difficult topic. So the conversation in November was: maybe we shouldn't have the Prometheus receiver, because it's very difficult to implement something that works in the collector model and that scales with the collector. So we decided to take a look at the alternatives. One of the alternatives was completely splitting the scraping into a different process, so you would just have a drop-in replacement for the Prometheus server.
J: It goes in, discovers and scrapes, and then writes to the collector as a sink. As part of that discussion there's already a solution, maybe in the doc; we can skip to the solutions. I don't know how we should drive this conversation, to be honest. I'm not sure if everybody's interested in these details, and I'm not sure if the people in this meeting will be the people who will make a decision about it.
J: I can talk a bit about the challenges and figure out who's actually interested, and we can have a separate call to discuss. But I feel like it's just kind of out of place here; this is a really big topic, and I'm having difficulty, you know... I...
P: I think, to add to what you outlined...
P: When we discussed this in November, and I think I remember Tigran, Bogdan, and others being on the calls, we figured out that we actually needed to spin off a specific group to focus on the known issues that we have uncovered and also some of the solutions, as you've outlined. And I think the decision, at least at that point in time, was that we would have a focused work group to work on that.
P: So, Andrew, is there a process that we need to follow to set this up, or what's the next step here? Because I do think there is a deep amount of work that needs to be done here, and obviously we've been looking at the problems all...
E: ...an issue asking for the subgroup, because we've got precedents for that in the past, like in the metrics SIG.
E: ...SIG, whatnot. So it could be presented there, and then all the administrative details like meeting times, Zoom links, Google Doc, and coordination.
P: Yeah, yeah, because I mean we had informally already kind of decided to do the work group, but I just want to make sure that's formalized and that we have this discussion, because again, even with some of the Googlers we worked with, it's definitely a high priority and it is a core requirement for GA. So it's something we need to address.
D: Yeah, I think having a separate working group is the right thing here. Do make sure that Bogdan is looped into it. I am not of much help here, this is not my area, but Bogdan is interested in it. So, yes.
D: And I don't know, Jay, if you are interested; maybe you'd want to be there as well. Yeah, any...
D: Well, otherwise you don't need to wait for any formalization of the work group, guys; you can self-organize here. And as for adding it to the community repo, that's just for discoverability, for the...
J: Does that make sense, to just do some prototyping before even presenting? That's what Bogdan actually suggested, and we were able to find some time to evaluate some of the things, but I'll just catch up with them.
J: Yeah, sorry for that, I just started to give too many details, and I struggled because this is probably not... it's a really large group.
N: Sorry about that, having some audio issues. Can you hear me?
N: Yeah, I just wanted to call attention to this more than anything. I don't think a lot of people have had a chance to look at this; Tigran has, briefly. But this has come up a few times over the last couple of months, where someone is working with one signal type, wanting to generate signals of a different type, and basically asking how we do this. So this is just a proposal to start from for this idea.
N: Basically, I think there are really two use cases. There's the first use case of translating from one signal type to another, but then there's also perhaps a use case around wanting to process the same signal in two different ways: perhaps you want to export the same signal to two different backends, and with one you need to add some extra labels. So the idea is basically to create a formal mechanism for linking pipelines together.
N: I was really looking for a way to do this with minimal changes to the existing pipeline structure and the existing config files and so on. There's a lot of detail here, and hopefully we articulated the rationale behind it decently, but I'm definitely interested in any feedback on this.
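As a rough illustration of the linked-pipelines idea above, here is a minimal, hypothetical Go sketch; the types and names are invented and do not reflect the Collector's real pipeline API. Pipeline A's sink both exports and forwards the data into pipeline B, which adds an extra label before exporting to a second backend.

```go
// Hypothetical sketch of "pipeline A feeds pipeline B as if it were a receiver".
package main

import "fmt"

// stage transforms a batch of items (stand-ins for spans, metrics, or logs).
type stage func(items []string) []string

// pipeline is an ordered list of stages ending in a sink (the "exporter").
type pipeline struct {
	stages []stage
	sink   func(items []string)
}

func (p *pipeline) consume(items []string) {
	for _, s := range p.stages {
		items = s(items)
	}
	p.sink(items)
}

func main() {
	// Pipeline B: adds an extra label, then exports to backend-b.
	b := &pipeline{
		stages: []stage{func(in []string) []string {
			out := make([]string, 0, len(in))
			for _, item := range in {
				out = append(out, item+" env=prod")
			}
			return out
		}},
		sink: func(items []string) { fmt.Println("backend-b:", items) },
	}

	// Pipeline A: exports to backend-a and also feeds pipeline B.
	a := &pipeline{
		sink: func(items []string) {
			fmt.Println("backend-a:", items)
			b.consume(items) // the "link": A's output becomes B's input
		},
	}

	a.consume([]string{"cpu.usage", "mem.usage"})
}
```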
D: Thanks for posting this. I think the primary concern I have is that this will likely require us to make maybe significant changes to how we create the pipelines, because the way it is working now, it's probably not going to work easily, at least, which is probably okay. But if you haven't seen how it's done, maybe it's worth looking into the code base to see how the building is done right now and see.
N: Sure, I'll look into that some more. I did see your initial note on that, and it seemed there would be a way to do that just by creating a topological ordering among the pipelines, but yeah, I'm sure there are some considerations there I need to look at a little more. So I'll take a closer look and try to add some detail around that.
D: Yeah, so I guess we just need to remind them to have a look at that, since he did the review and requested changes.
E: Yeah, so if anybody else is also interested in understanding our triaging process, it's now a good time to observe or participate, but if not, then feel free to drop off and don't feel bad about it. Let me see, where did we leave off? This was opentelemetry-collector, and let's bring this back here. We...
D: Yeah, so we did not want to do this for some time. The reason was that if you're filtering individual spans, you are breaking the trace. I am not sure what the intent is here; it says trace, and we have sampling processors which do the filtering, but I'm guessing, because this is a separate request, it's about spans. I don't know if we do want to do that.
D: I mean, yeah, if people need it, maybe this can be a component, but I don't think I will implement it myself. Let's make it P3, it's about traces, and then help wanted, if anybody wants to tackle it.
O: I'm trying to think of what the use case is here, because typically, I know that we filter these types of traces on the front side, where they originate.
D: Yeah, okay, yeah. I don't think we should reject this. If people have the use case, they can implement the component; the idea is fine. Okay, anyway, let's make it after-GA.
O: I do have one question, because I haven't actually experienced the triage process before. When we mark something as after-GA, is that just assuming that they aren't the ones creating the pull request? And if they are the ones creating the pull request, would it still just wait to be merged until after GA?
E: There's also a stability component to the change for the PR that goes in, right. It's not just willy-nilly, like, oh yeah, any PR that takes care of this can definitely go in. There's a certain level of stability that the collector has achieved which, I'd also like to point out, is desirable to maintain, yes, since we're closer to the GA point.
D: Yeah, we only have a couple of P1s. Technically, we could be ready for GA in a couple of weeks. I don't think we should be GA before the SDKs are GA, so yeah, we would probably wait until the languages are ready, or a couple of languages at least, and then maybe even coordinate and do the GA together.
E: ...the status of items required for the GA release is in the spec SIG and also the maintainers SIG. Those are probably good places to get the overall status of how things are heading for OpenTelemetry. But...
E: ...in the background, but, oh, there we go. So yeah, there's still more detail that needs to be sorted out on how much implementation is left that's needed in the corresponding language libraries.
J: This is really an enhancement. I filed it because people were trying to... Logging, for example, is an experimental thing, so in the instrumentation libraries and so on there's no way to generate logs, but it's possible to use the protocol, and for that type of case it's good to document this. The other thing is people will maybe write custom instrumentation libraries, so they want to understand what the protocol is like. I think, generally, documenting the protocol.
J: Yeah, I think the proto repo is a better place, but I think the collector should give links, because for external people it's just really hard to follow the organization structure. They don't necessarily think that proto is the place, so we can just link to the proto from the collector. I mean, the problem right now is that there is not an easy reference point.
J: If I want to use OTLP, how do I generate the clients and so on? The protos, by the way, are not generated right now, right; I had to generate my own protos. Yes.
D: Yeah, yeah, miscellaneous, and I don't think we have documentation as a separate label, right; we don't have that, so...
P: Yeah, and Tigran, again, it's just super helpful for users, new users and customers, to come in and be able to have more detailed documentation, especially for scenarios of how the collector and the agents work, and we've had several requests for that. So again, I would strongly support being able to have a documentation sub-label.
J: Josh from Google also mentioned this: when you're going to a customer, in order to explain the value proposition, explaining some of these internal components is important. The documentation currently doesn't capture all of that stuff, which is normal, probably because documentation is always done after things are more settled, so I think this is somewhat related to that.
J: Well, it's just one issue, right; we can do a better job in terms of documenting the internals and some of the flexibility that OpenTelemetry gives people. Sometimes there's a disconnect from the perspective of customers because they don't know the...
D: ...internals. Okay, what is this one? The time is Go's time?
D: A 64-bit integer? I don't think so.
J: It's two 64-bit ones, but you know, I couldn't really read the blog, actually.
E: So that catches us up on everything since the last meeting. We still have more to go further on, but I think we've...
D: So the first one is about the logger name. There is a pull request and there is a discussion around that, but it seems like not everybody agrees that this is a necessary thing. So, people who have not already reviewed or approved the PR, please have a look; I would like to see a bit more thought on this, maybe for or against it. I don't know, since I created the PR...
D: I think it's the right thing to have, but maybe I'm wrong. I'd like to continue the discussion on this thing. I don't know if anybody has any thoughts, but we can talk about it now.
Q: I approved it already; it seems intuitive to me. I'm not sure whether I'm an actual approver here, but I said approve, and I think Yuri also approved it. So I'm...
D: ...before. Okay, cool, so the next one is the new thing that came up recently. There is a discussion about what we consider... This started as we were discussing the stability guarantees for OpenTelemetry, and particularly whether we actually guarantee the shape of the data that the instrumentation emits, whether it's traces, metrics, or logs; does that apply to logs as well, and is there any sort of guarantee?
D: People can probably assume that the structure, the shape, the composition of the telemetry that we emit is part of the stability guarantees, and so I proposed here... my proposal basically was that it's actually not.
D: We should not be making this an unchangeable thing where, once the instrumentation starts emitting some data, it's locked in and you can never change it because it's part of the guarantees. Instead, what I proposed, and it's connected to what we discussed in the past, is that there is actually a schema for telemetry data, the instrumentation emits telemetry that conforms to that schema, and that schema is also communicated together with the telemetry.
D: The consumer of the telemetry can then use the information about what schema was used to produce the telemetry, and also the schema that it is expecting to see, to either do an automatic translation between these schemas or do whatever it wants to do, like maybe reject the data, or mark it as invalid, or not try to interpret it, whatever.
D: The idea is that there is a possibility to describe the telemetry, the shape of the telemetry, something that we call a schema, and then use this in a few different ways, like convert, validate, whatever. So this is a call for comments, basically: what do people think about it? I'd like to understand whether the idea is a good one. It's an open discussion, if you want.
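As a rough illustration of the proposal being described, here is a minimal, hypothetical Go sketch of telemetry that carries a schema identifier and of a consumer-side translation step; the field names and URLs are invented, and this is not the actual OpenTelemetry schema mechanism.

```go
// Hypothetical sketch: the producer stamps each batch with the schema version
// it conforms to; a consumer expecting a different version can translate.
package main

import "fmt"

type logBatch struct {
	schemaURL string              // schema the data claims to conform to
	records   []map[string]string // simplified stand-in for log records
}

// translator renames attributes to upgrade a batch from one schema version to
// the version the consumer expects.
type translator struct {
	from, to string
	renames  map[string]string // old attribute name -> new attribute name
}

func (t translator) apply(b logBatch) logBatch {
	if b.schemaURL != t.from {
		return b // unknown schema: a real consumer might reject or flag it
	}
	for _, rec := range b.records {
		for oldName, newName := range t.renames {
			if v, ok := rec[oldName]; ok {
				delete(rec, oldName)
				rec[newName] = v
			}
		}
	}
	b.schemaURL = t.to
	return b
}

func main() {
	batch := logBatch{
		schemaURL: "https://example.com/schemas/1.1.0", // hypothetical schema URL
		records:   []map[string]string{{"http.status": "500"}},
	}
	t := translator{
		from:    "https://example.com/schemas/1.1.0",
		to:      "https://example.com/schemas/1.2.0",
		renames: map[string]string{"http.status": "http.status_code"},
	}
	fmt.Printf("%+v\n", t.apply(batch))
}
```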
Q: Yeah, so changes... I mean, there has to be a way to evolve, right, otherwise it's just not gonna fly. So what you're proposing here makes sense. That's at first blush, but this does make sense: basically, figure out a way to version it, figure out a way to have a machine-readable specification. That's all intuitive to me, for what it's worth. And so I think the one potentially sticky part is the log body, right.
Q: I think from the logs SIG perspective we have kind of said that you can put whatever you want in there, right. And I think on the metrics side and on the tracing side, folks have much stronger opinions on not having it open-ended; there's a little bit of a guess here, because I'm not that closely involved there, but just sensing a little bit of what's going on here.
Q: So for the sort of people that have really strong notions, strong opinions on schemas, that'll be a sticking point, or it will be that they will probably forever be a little bit freaked out by that. But I still don't think that we can change that.
D: I'm trying to also apply this to the other signals, not just the logs, because I think the problems are very similar.
D: But yes, you're right, we did discuss that, and the solution is very similar to what we were discussing: have some sort of schema, then point to it in the data. I don't know how achievable it is; this needs a lot more detail before we are certain we want to do that. This is just an idea at this point.
R: Is there an existing expectation of stability for logs, like in Fluentd? If I'm running Fluentd on, say, Postgres or something, and I'm using the logs that I ingest from there, is there an expectation that when I upgrade Postgres itself the log statements don't change and my logging doesn't break? In the community, I just feel like logs are different, possibly. If you think about the three types of telemetry, metrics has the most rigid set of requirements.
D: Yeah, I don't have a clear answer to that. If you have a logging product which collects, let's say, Apache logs, for which there is a very well-defined format of what it should look like, although it's customizable, and then you have some sort of dashboards which know how to deal with these particular logs in your logging product, then I would say yes, there is an expectation that it has to conform to whatever people call Apache logs.
D: So I guess the answer can be yes and no, depending on how exactly people interpret received logs. But I don't see why we cannot apply the same logic we use for metrics and for traces to the logs here, if there is a way to achieve that, and if people do actually build things like dashboards which depend on the shape of the data, on what attributes we have, and on what values the attributes carry.
R: I guess what I'm proposing, or what I'm saying, is that I think there's a limited amount of the data that OpenTelemetry can actually control. So if OpenTelemetry is generating the log statements, then it can keep them stable, say from auto-instrumentation. If it is generating attributes and labels, it can keep those stable. But for the things that it is not generating itself, I don't think you can guarantee stability.
D: Absolutely right, and in that case the responsibility for the stability and for providing the schema would be on whoever is generating the logs themselves, whether it's some sort of application or, I don't know, maybe an instrumentation library. But yes, you're right, in the case of OpenTelemetry we are not emitting the logs ourselves.
D: So yes, we are not going to provide the schemas ourselves, unlike metrics, for example, you're completely right, where OpenTelemetry does provide a very clear set of system metrics and tells you what the names of the metrics and labels should be for system metrics. Yes, that's an important distinction, I would say. But if we're going to the trouble of doing this for metrics and traces, I think it's still at least worth trying to define this for the logs in a uniform way.
S: Basically, anything that comes off of AWS is highly structured, and so if we have a way of being able to say, hey, this is a log event, but it's actually highly structured and here's what the structure is...
Q: Yeah, so this is the discussion that I was obliquely referring to at the beginning, from about six months ago with Tigran. For those of you who were not following along with that, we were discussing essentially a way to put, in the defined part of the logging schema, a way to optionally express what you can expect to find in the undefined part, which is the body. So the envelope basically contains the information.
Q: That seems like a fairly standard approach to doing stuff. It's optional; I think that needs to be optional. There are many examples: for example, you could say, well, it's ECS, and then you can go and look up what ECS is, or you could even put a schema link for that there, or other things, right: ArcSight CEF, Splunk, you know, SIEM, etc., etc.
Q: There are various sorts of protocols which, for some folks, will make sense, and AWS is a good example, with them having everything in JSON now, and I think they also have pointers to the specs for that JSON or other schemas. That was the basic idea. I still continue thinking it's actually a good one.
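As an illustration of the optional envelope hint being discussed, here is a small, hypothetical Go sketch; the field names are invented and this is not the OpenTelemetry log data model. The hint is informational only: a consumer that recognizes it can parse the structured body, while one that does not, or that finds the body non-conforming, can still treat the body as an opaque blob.

```go
// Hypothetical sketch: a log record envelope with an optional body-schema hint.
package main

import (
	"encoding/json"
	"fmt"
)

type logRecord struct {
	Timestamp  int64             `json:"timestamp"`
	Attributes map[string]string `json:"attributes"`
	// BodySchema optionally names the body format, e.g. "ecs" or a schema link.
	BodySchema string          `json:"body_schema,omitempty"`
	Body       json.RawMessage `json:"body"`
}

func main() {
	rec := logRecord{
		Timestamp:  1609964700,
		Attributes: map[string]string{"source": "aws"},
		BodySchema: "https://example.com/schemas/cloudtrail-event", // hypothetical link
		Body:       json.RawMessage(`{"eventName":"PutObject","awsRegion":"us-east-1"}`),
	}

	if rec.BodySchema != "" {
		var structured map[string]interface{}
		if err := json.Unmarshal(rec.Body, &structured); err != nil {
			fmt.Println("body does not conform; treating it as opaque")
			return
		}
		fmt.Println("structured body:", structured)
	}
}
```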
T: So it's optional, but also informational, in the sense that even if a schema is specified, there is no guarantee that the unstructured part conforms, right. So tooling that receives a log record with a schema annotation may still have to deal with the fact that it doesn't match up. Is that how to think about it?
D: Correct. I think we would not want to enforce this, neither at the emitting site nor at the receiving site, although I mentioned here in the proposal that we could provide some tooling that validates and sees whether the emitted data conforms to the schema. But I don't think that... and it may also be an unnecessary performance impact if we even try to enforce that, depending on how complicated the schema description may be. But as a debugging or diagnostic troubleshooting tool, we may have this thing.
D: Okay, anyway, I put it out there. I can probably do a more detailed write-up.
D: For now, what OpenTelemetry needs is a decision on whether the shape of the data is part of the guarantees, and I think we should make the decision that it's not, for now, and then work on this carefully, and then it becomes part of a 1.x release sometime in the future, whenever it's ready. Yeah, if anybody has any comments, any thoughts, additional thoughts afterwards, please go and comment here on this issue.
D: Feel free to comment. I don't see any harm in this.
D: Probably I could split this out as a separate issue, not to derail from the primary topic of what is in the guarantees and what is not, and then in that separate issue maybe we can have a conversation and discussion that details the schemas and all that stuff. That probably would be best.