From YouTube: 2022-03-09 meeting
A: Okay, I think we can start. The declaration of log data model stability did not see any objections, which is, I think, great, so I converted the PR from a draft to a regular one. I think now we can just go ahead and review it, there's nothing much to review, and approve it if we agree with that. Once we have the necessary number of approvals, we can get it merged and we'll be done with that one, which is great. Very exciting, can't wait for it.
A: But I think, yeah, it's going to be an additive change, right? If we decide that we want the name field, it's going to be a new optional field, so that's fine. We can always decide that we need something new in the data model, and that's okay as long as it's not required and you're not removing or changing something that is already there.
A: Okay, so approvers, please go and approve it if you're okay with it. Let's move to the next one, which is about the Elastic Common Schema. Go ahead.
A: We want to make sure that a lot of people are aware of this initiative, a lot of people who have a say in or influence on how the specification is shaping up outside the logging SIG, because this is going to have an impact at least on traces, I believe, or also on metrics, not just traces.
A: To raise the awareness, increase it during the specification SIG meetings, let people know, post in Slack, and contact anybody you know who you think may be interested in this. The expectation is always that the author of the OTEP needs to drive it, and that applies in this particular case. The awareness, I think, is the most important aspect to make sure we can make progress on this.
A: I think it works either way, but yeah, that's probably a valid suggestion, because that's the way that OTEPs eventually end up becoming approved OTEPs anyway. Even if you start with a Google document, you have to convert it to a PR against the OTEP repository anyway.
B: So can you share with us, whenever you decide to, you know, go to this spec SIG meeting, which one it will be? Okay, I don't usually participate in those, but if I have time I might want to listen in and sort of absorb some of that feedback live as well.
A: I'm out starting from tomorrow, so I won't be there, but you guys can move forward with it, because I already posted my comments on the document. I'm good.
C: Initially I started with a tighter integration with Elasticsearch, but it's a pluggable interface that works with everybody. I think it's a new kind of use case for OpenTelemetry, where we use it as a data transfer protocol and API in some ways, more than just a log shipping API, and we will allow Jenkins administrators to completely offload the storage of their logs to observability back ends, with the benefit of getting full observability with all the signals correlated together.
C: You already have traces; now you will have logs as well, and your metrics. Another problem, which is Jenkins-specific, is the scalability of Jenkins, which has problems due to the volume of logs you store within Jenkins, so we will bring a solution to this problem as well. It may be interesting for the community to be aware of. Yeah.
C: Let me open the image in a new tab. So Jenkins has already been capable, using the OpenTelemetry SDK, of publishing metrics and traces to an OpenTelemetry collector, and then we test with Jaeger; we test, of course, with Elastic, but Jaeger is almost our primary validation. And now we have added the capability to ship the logs of your pipeline builds. So, for the people who are familiar with Jenkins, your pipeline logs, they look like... sorry.
C: I guess almost everybody has used Jenkins once in their life; this is the screen you will be familiar with. And now what we are able to do is use an extension API in Jenkins to say: all the logs of your pipeline executions, I will send them through the OpenTelemetry protocol. I can route them to an OpenTelemetry collector, so that I have all the benefits of the collector, and then you push them to an observability backend which is capable of receiving OpenTelemetry logs.
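[As a rough illustration of the shipping path just described: a minimal Java sketch, assuming a local collector on the default OTLP gRPC port and an illustrative instrumentation scope name, of emitting one pipeline log line through the OpenTelemetry Logs SDK. The actual Jenkins plugin wires this up through its own extension points rather than inline like this.]

// Minimal sketch: ship a pipeline console line as an OpenTelemetry log record over OTLP.
// The endpoint, scope name, and attribute values are illustrative assumptions.
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.logs.Logger;
import io.opentelemetry.api.logs.Severity;
import io.opentelemetry.exporter.otlp.logs.OtlpGrpcLogRecordExporter;
import io.opentelemetry.sdk.logs.SdkLoggerProvider;
import io.opentelemetry.sdk.logs.export.BatchLogRecordProcessor;

class PipelineLogShipper {
    public static void main(String[] args) {
        // Export log records over OTLP/gRPC to an OpenTelemetry collector.
        SdkLoggerProvider loggerProvider = SdkLoggerProvider.builder()
                .addLogRecordProcessor(BatchLogRecordProcessor.builder(
                        OtlpGrpcLogRecordExporter.builder()
                                .setEndpoint("http://localhost:4317")
                                .build())
                        .build())
                .build();

        Logger logger = loggerProvider.get("io.jenkins.plugins.opentelemetry"); // illustrative scope name

        // One line of a pipeline build's console output, annotated so the backend
        // can correlate it with the build's trace and other signals.
        logger.logRecordBuilder()
                .setSeverity(Severity.INFO)
                .setBody("[Pipeline] sh + mvn verify")
                .setAttribute(AttributeKey.stringKey("ci.pipeline.id"), "my-pipeline")
                .setAttribute(AttributeKey.longKey("ci.pipeline.run.number"), 42L)
                .emit();

        loggerProvider.close(); // flush before the JVM exits
    }
}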
C: What we do is we have a pluggable API in Jenkins to say: either the backend will provide just a hyperlink, to tell people, if you want to visualize the logs of your Jenkins pipeline, you click on this link, and it goes wherever; maybe it will be Grafana Loki, or maybe Splunk.
C: If you want to contribute integrations, with the hyperlink you redirect the practitioner to your product. And we have also done a more feature-complete integration, where you continue to have your logs in your observability backend, but you also provide continuity in the user experience of the Jenkins practitioner, who can continue to visualize the logs through the Jenkins screens. Here you have to use the proprietary read APIs to read the logs back from your backend, whether that is maybe Splunk Observability Suite or Loki.
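[A hypothetical sketch of what such a pluggable integration point could look like in Java; the interface and method names below are illustrative, not the Jenkins OpenTelemetry plugin's actual API. A backend either contributes only a hyperlink into its own UI, or additionally implements a read path so the Jenkins screens can render logs that are stored remotely.]

// Hypothetical sketch of the two integration levels described above; names are illustrative.
import java.util.List;

interface LogsBackend {
    /** Minimal integration: a deep link into the backend UI, filtered by trace id. */
    String getLogsVisualizationUrl(String traceId);

    /** Optional richer integration: fetch a fragment of the build's logs back from the backend. */
    default List<String> readLogFragment(String traceId, long startOffset, int maxLines) {
        throw new UnsupportedOperationException("This backend only provides a hyperlink");
    }
}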
A: This is cool, so you had to implement the read path as well. I guess you didn't want to store them locally on Jenkins, as they are normally stored, because of the limitations you mentioned earlier, or was it a problem?
C
Do
not
have
the
choices
the
jenkins
api
tell
you
either
you
store
them
remotely.
C: Yes, I can show you; we have a test instance. Sorry, I should have prepared more. So, all my logs: I have a trace ID, and I will just go with the trace ID. I can also go with a Jenkins build identifier, but just that identifier is enough.
C: That was the first thing we did, because it was very valuable for Jenkins administrators to troubleshoot the provisioning problems of Jenkins agents and the regressions of Jenkins builds. I should have...
C: This is an integration of both your Jenkins build steps and the Maven steps as well, and we have done all this with traces; we have context propagation. We added logs just recently, two weeks ago.
C: It's great. And something interesting is that I started by instrumenting Jenkins pipelines, because for administrators I felt the pipeline was the most important thing. But then, when I wanted to implement the logs visualization, it was so hard for me to understand the Jenkins APIs and all the Ajax calls that what I did was also instrument the Jenkins HTTP calls, instrumenting the Jenkins HTTP calls this time to reverse engineer the behavior of the APIs.
C: I have something here, a call called progressiveHtml on the logs, which is the Ajax call to get more fragments of logs; here the parameter is the beginning of the fragment of logs. To be able to reverse engineer Jenkins, I had to instrument the HTTP calls as well. And something else that I did: initially I instrumented Jenkins with log messages, which everybody has done in their life with the logging APIs.
C: In your logging framework, sometimes you say: I will compose a message only if logging is enabled. I do the same; I switch between the no-op tracer and the real tracer to add granularity to my log messages. So I have my context, which is the iterator to retrieve my logs progressively from my backend, and here you can see an example where I will take the real tracer if the level FINE is enabled: I will add a lot of intermediate spans to collect tons of attributes, and in production I don't create these intermediate spans, I just have a very limited number of spans in each transaction. That was very, very interesting, and I think it's a way to replace logging.
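[A minimal Java sketch of the tracer-switching pattern just described, under assumed names: pick the real tracer when FINE-level logging is enabled, otherwise the no-op tracer, so the detailed intermediate spans only exist when you ask for them.]

// Sketch: span granularity gated on the logging level, analogous to logger.isLoggable(Level.FINE).
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.api.trace.TracerProvider;
import java.util.logging.Level;
import java.util.logging.Logger;

class GranularTracing {
    private static final Logger LOGGER = Logger.getLogger(GranularTracing.class.getName());

    // Hypothetical helper: the verbose tracer is real only when FINE logging is enabled.
    static Tracer verboseTracer() {
        return LOGGER.isLoggable(Level.FINE)
                ? GlobalOpenTelemetry.getTracer("jenkins-log-storage") // real tracer
                : TracerProvider.noop().get("jenkins-log-storage");    // no-op tracer
    }

    void retrieveLogFragment(long startOffset) {
        // In production this span is a no-op; with FINE logging enabled it becomes a real
        // intermediate span carrying debug attributes.
        Span span = verboseTracer().spanBuilder("retrieveLogFragment")
                .setAttribute("log.fragment.start", startOffset)
                .startSpan();
        try {
            // ... call the backend's read API for the next fragment of logs ...
        } finally {
            span.end();
        }
    }
}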
A: That's the value of distributed tracing, right. Yeah, the value of logs is that they already exist; you don't need to do this instrumentation to have the logs, typically they already exist. So that's the value, I guess the largest value, but if you're starting from scratch, you can probably get more value from tracing, yeah.
C: I can select any single span name and search broadly to understand what all the HTTP calls that invoke it are. For me this was a question of these Ajax calls, the progressive rendering of logs, with many different HTTP endpoints that were invoking the log storage APIs, and I could only reverse engineer this by leveraging this capability to explore distributed traces: choosing a span name and finding all the traces, whatever they are, HTTP or whatever the entry point is that allows you to invoke this call, this API, in your code. So I felt it was very interesting. It was not the primary goal of observability, but I feel it's a very interesting capability.
C: And something else here that is very interesting: I was on PTO last week, and I got a message from our internal Elastic CI team saying we have a risk of a leak of credentials in our CI jobs, because somebody was using environment variables to hold credentials and somebody else put set -x in the shell script, different people. And here I think we can tell CI people: as the observability back ends receive all these traces, we will be able to integrate a credentials leak detector in our ingestion pipelines. For this reason I'm looking at you, Christian, for Sumo, and I guess the folks with Splunk, and anybody else on this call (I would have to look at the people who are on the call), but I think it will resonate with many people who build observability back ends to integrate credentials scanners in their pipelines.
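[A hypothetical Java sketch of the credentials scanning idea floated above: an ingestion pipeline could run a scrubber like this over incoming log bodies before storing them. The patterns and class name are illustrative only, not part of any particular backend.]

// Sketch: mask suspected credentials in a log body before it reaches storage.
import java.util.List;
import java.util.regex.Pattern;

final class CredentialsScanner {
    // Very rough patterns for things that often leak via `set -x` or environment dumps.
    private static final List<Pattern> SUSPICIOUS = List.of(
            Pattern.compile("(?i)(password|passwd|secret|token|api[_-]?key)\\s*[=:]\\s*\\S+"),
            Pattern.compile("AKIA[0-9A-Z]{16}"),                        // AWS access key id shape
            Pattern.compile("-----BEGIN [A-Z ]*PRIVATE KEY-----"));

    /** Returns the body with suspected credentials masked. */
    static String scrub(String logBody) {
        String result = logBody;
        for (Pattern p : SUSPICIOUS) {
            result = p.matcher(result).replaceAll("[REDACTED]");
        }
        return result;
    }
}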