From YouTube: 2022-03-31 meeting
B
The first probably eight days we were there were very humid and hot, and then there was kind of a big storm that came through and everything cooled down and was breezy and a lot cooler the last few days, so that was nice. It still feels cold here, though.
A
Yeah, I've been loosely following this, but do you want to give us your pitch for it?
D
We have submitted, with Alolita Sharma of AWS, with Sumo Logic, and with Logz.io, an OTEP on the idea of adding support for the Elastic Common Schema in OpenTelemetry, with a focus on logs. For people who are not familiar with the Elastic Common Schema: it's very similar in idea to the OpenTelemetry semantic conventions, standardizing correlation attributes and contextual fields to enable correlations between signals and between systems. The Elastic Common Schema is quite a bit older than the OpenTelemetry semantic conventions, so it covers a much broader scope of use cases.
D
A lot of network devices support it, and it is broadly used in security use cases, for example. Something else the Elastic Common Schema has done is to standardize the attributes used to collect logs from components provided by different vendors, like Nginx, Apache, and HAProxy access logs.
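To make the kind of standardization being described concrete, here is a rough sketch, in Python, of mapping a combined-format access log line onto common-schema field names. The field names (`source.ip`, `http.request.method`, `http.response.status_code`, `url.original`) follow ECS-style naming, but the parser itself is a simplified illustration, not any vendor's actual implementation:

```python
import re

# Sketch: map one Nginx/Apache "combined" access-log line onto
# ECS-style field names. Simplified illustration only.
COMBINED = re.compile(
    r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def to_ecs(line: str) -> dict:
    m = COMBINED.match(line)
    if not m:
        raise ValueError("not a combined-format access log line")
    return {
        "source.ip": m["ip"],
        "user.name": None if m["user"] == "-" else m["user"],
        "http.request.method": m["method"],
        "url.original": m["path"],
        "http.response.status_code": int(m["status"]),
    }

record = to_ecs('203.0.113.7 - alice [31/Mar/2022:10:00:00 +0000] "GET /login HTTP/1.1" 200 512')
```

The point is that any component emitting access logs under such a convention produces the same field names, so back ends can query them uniformly.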
D
They are called "fields" in Elastic, but it's the same thing inside the OpenTelemetry semantic conventions, so with this we could broadly expand the OpenTelemetry semantic conventions.
D
Some use cases came to our minds that could be pretty popular. One is, as we said, parsing HTTP access logs. Another, which I have worked on very recently in the Jenkins integration with OTel, is the idea, when you have, let's say, a Java application, of capturing the authentication events of your application, and capturing them with the right, standardized structure for your authentication events.
D
And
so
we
we
feel
it's
it's
a
great
extension
of
the
instrumentation
use
cases
of
provided
by
open
symmetry
to
also
connect
to
a
security
system,
and
so
people
who
would
do
open
temperature
instrumentation
of
their
application
immediately.
It
would
benefit
observability
operations,
use
case
and
also
security
force
could
consume
all
the
authentication
events,
for
example,
and
so
I
think,
journey
work
for
splunk,
which
has
a
security
offering
typically
trust.
I
guess
that
at
microsoft,
in
azure
monitors
there
are
also
systems
to
monitor
security.
D
So this was an introduction. We have submitted an OTEP and we are inviting people to review it. We tried our best to compare the two worlds, the Elastic Common Schema and the OpenTelemetry semantic conventions, to help people understand how they would relate.
D
The
vision
that
elastic
about
the
elastic
command
schema
is
that,
once
the
donation
is
done,
then
elastic
common
schema
should
in
some
way
disappear
in
favor
of
elastic,
embracing,
open,
telemetry
semantic
convention,
augmented
by
this
donation,
to
ensure
there
is
no
fork,
as
we
have
seen
forks
in
the
past,
related
to
open
diametery.
So
here
we
have
really
the
desire
to
to
be
sure
that,
at
the
end
of
this
process,
there
would
just
be
one
schema
for
everybody,
which
would
be
the
open,
dimitri
semantic
convention.
A
So is it targeting only logs?
D
However, in addition to this OTEP, which works at the level of principles, we have started to evaluate doing a proof of concept of this contribution with the authentication events, as we said. And while doing this, we see that it's often log messages.
D
For
example,
at
the
moment
we
look
at
octa,
it
seems
some
waste
vlogs,
but
it's
more
even
uneven
plug
for
octa
authentication
event,
but
for
I
I
truly
believe
when
I
did
this
exercise
for
jenkins
authentication,
that
there
is
also
a
huge
opportunity
to
map
a
primary
security
events
in
custom
applications,
so
it
would
not
be
only
logs
but
also
instrumentation
of
let's
say,
java
code
net
code
or
whatever
code.
D
I
would
say
what
we
came
to
was
to
say:
apm
agents
are
capable
of
instrumenting
any
framework,
so
here
we
would
not.
The
best
thing
is
to
not
instrument
the
lock4j
logback
in
java,
but
directly
to
or
instrument
your
authentication
framework,
because
in
spring
security
you
have
this
event:
api
for
authentication
events
in
jenkins.
You
also
have
one
authentication
api
so
that
you
clearly
can
identify
what
is
a
logging
that
was
used.
What
is
the
source
ip
address?
D
Yeah, and also for our security teams: if we think about the Log4Shell problem that we had last year in the Java ecosystem, typically capturing all the details of the libraries that are loaded by the runtime, which is a JVM in our case, could be very helpful. And if we standardize it, then many organizations will be able to consume these details for security purposes.
A
Are there fields in there already for emitting dependencies, like jars that are found on the class path, that kind of thing?
A
Cool, yeah, that topic definitely came up in the post-Log4j period.
D
A
Do
you
have
thoughts
on
so
one
thing
that
we're
struggling
a
little
bit
on
the
open,
telemetry
logs?
Is
that
they're
both
logs
and
events,
they're,
structured
events
and
so
sort
of
knowing
differentiating?
Well,
I
guess
we
don't
really
have
any
cement.
We
hardly
have
any
semantic
conventions
for
logs
so
far
but
like
if
you
say,
if
you
stamp
it
with
one
of
these
security
fields,
then
on
the
back
end,
I
guess
that's
how
you
know
that
it's
not
like
a
user
log
versus.
D
We
raise
this
point
with
a
lolita
on
the
question
of
the
fact
that
events
produced
by
a
hotel
on
typically
log
messages
they
don't
have
a
content
type
on
between
the
spam
event.
On
the
log
message,
it
will
be
hard
to
understand
what
type
is
the
thing
until
to
process
it
properly
and
to
visualize
it
properly
to
display
it
in
context,
and
we
raise
this
point
that
it
could
be
useful
on
what
I
I
saw
also
doing.
D
The
first
exercise
of
of
mapping
is
when
you
work
on
security
events,
you
need
to
provide
the
content
type
in
elastic
common
schema.
It's
called
the
category
and
action,
so
the
category
could
be
authentication,
the
action
could
be
user
logging,
but
I
think
we
could
contribute
to
this
thought
process
here,
because
we
are
in
some
way
I
think,
filling
this
gap
that
is
required
for
a
security
event,
use
cases
analyze.
These
use
cases.
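A minimal sketch of what such a structured authentication event could look like, using the category/action content type just described; the helper function and the exact field names are illustrative assumptions, not finalized ECS or OpenTelemetry definitions:

```python
# Illustrative sketch: build a structured authentication event with an
# ECS-style category/action content type. Field names and the helper
# function are assumptions for illustration, not a finalized convention.
def authentication_event(user: str, source_ip: str, success: bool) -> dict:
    return {
        "event.category": "authentication",  # the content type: kind of event
        "event.action": "user_login",        # the specific action in the category
        "event.outcome": "success" if success else "failure",
        "user.name": user,
        "source.ip": source_ip,
    }

evt = authentication_event("alice", "203.0.113.7", success=True)
```

A back end receiving such records can tell an authentication event apart from an ordinary application log without guessing from the message text.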
C
Cool. In the conversations in the Log SIG that type of thing has come up. Basically, because the log data model is doubling for kind of application logs and also events, it's serving both use cases, and you often need a way to differentiate between different event types and different scopes of events. Like you said, there's going to be a type, like a user login, and then a classification, which is like an authentication type of event. And so the discussion so far has been:
C
If
you
wanna
represent
that
type
of
thing
to
do
so
via
attributes
and
have
those
attributes
be
defined
by
the
semantic
inventions,
you
could
have
an
attribute
that
was
called
something
like
authentication
event
type,
and
then
the
value
would
be
user,
authentication
or
user
login,
or
something
like
that
and
back
ends
could
key
off
of
that
to
do
processing
based
on
that.
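A sketch of that idea: a back end keying off a semantic-convention attribute to decide how to process a record. The attribute name and pipeline names here are hypothetical placeholders, since such conventions were still under discussion at the time:

```python
# Hypothetical sketch: route a log record based on a classification
# attribute, the way a back end could key off semantic-convention
# attributes. Attribute and pipeline names are made up for illustration.
def route(record: dict) -> str:
    category = record.get("attributes", {}).get("event.category")
    if category == "authentication":
        return "security-pipeline"   # hand off to security processing
    return "default-log-pipeline"    # ordinary application log handling

dest = route({"attributes": {"event.category": "authentication"}})
```

The same record shape serves both use cases; only the attribute value decides which processing applies.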
C
Yeah, and so there have been different discussions on whether there should be additional optional top-level fields indicating kind of the event category and then the event type. But so far I think there hasn't been a strong enough reason to do that versus just using attributes to define that type of thing.
C
It
could
go
the
other
way
there.
There
could
be
additional
fields,
be
added
at
the
top
level
of
the
log
data
model,
but.
D
So we have the idea here to come with use cases, as we said, and it could help the community to validate these models.
D
Yeah, I've not talked to Morgan for a while, so yeah, okay, I will reach out to him. Yeah.
A
And Cyrille, what do you see as sort of the path forward for this? If this OTEP is approved and the process goes forward, would it be to bring things in piece by piece?
D
So
we
we
proposed
the
process
on,
as
we've
said
this
donation,
this
yeah
internally
at
elastic.
We
call
it
a
donation,
but
here
it's
a
more
the
introduction
of
more
attributes.
It's
it
will
be
a
substantial
effort,
so
we
didn't
do
much.
Our
first
effort
internally
at
elastic,
was
to
verify,
with
top
leadership,
that
we
had
the
acknowledgement
to
go
zero
on
for
top
leadership,
to
understand
what
it
meant
really
to
donate
so
to
merge
onto
at
the
end.
D
Our
new
internal
model
would
be
opentv
semantic
conventions
v2,
and
so
we
would
have
to
merge
because
there
would
be
some
quad
compatibility
challenges,
probably
more
for
us
than
for
hotel,
because
hotel
has
already
defined
some
stuff.
So
we
have
validated
this
internally
and
we
we
came
to
the
community
with
a
first
of
all
type
that
is
more
to
validate
the
principle
to
see.
If
the
committee
is
interested
on
then
to
validate,
we
thought
about
the
second
step,
which
would
be
to
validate
the
methodology.
D
So
I
share
the
link
where
we
clarify
this.
We
are
also,
we
think
also.
It
will
be
useful
to
understand
how
the
dom
stream
schemas
will
operate,
because
we
believe
in
the
open,
elastic
command
schema.
For
example,
you
when
you
look
at
the
data
types,
it's
not
only
about
the
data
types
in
transit,
but
it's
also
about
the
data
types
at
rest
to
optimize
queries
after
this
on
this
data
like,
if
you
have
a
string,
you
can
you,
you
may
want
to
optimize
the
storage,
how
to
index
it
for
queries.
Is
it?
D
Is
it
a
fixed
string
like
a
keyword
or
is
it
more
prepared
for
wildcard
and
so
phone?
D
So
we
believe
that
they
will
be
downstream
schemas,
at
least
for
vendor,
to
optimize
their
storage
of
the
data
and
maybe
also
because
they
will
need
some
vendor
specific
attributes,
and
so
we
had
the
idea,
this
second
phase,
to
also
validate
this
model
of
downstream
schemas,
to
ensure
that
we,
we
would
not
end
up
with
forks
on
elastic.
Also,
we
want
to
validate
the
governance
of
this
schema
because
at
elastic
we
have
some
a
team
that
is
dedicated
to
evolve.
D
The
elastic
common
schema
to
meet
our
goals
because
it's
critical
for
our
offers,
primarily
our
security
offer
so
to
validate
with
the
elastic
with
the
hotel
community,
how
our
how
elastic
could
continue,
at
least
at
the
beginning,
have
some
kind
of
warranties
to
be
able
to
contribute
to
the
maintenance
to
be
maintainers
of
them
this
model,
and
then
we
we
understand
that
it's
a
meritocracy
and
if
you
contribute
you
are
in,
if
you
don't
contribute
and
you
go
out,
this
was
the
second
level
we
we
identified
and
we
saw
that
on
the
methodology.
D
It
would
be
more
like
an
iterative
process
because
there
are
so
many
fields
that
it
could
make
sense
to
to
start
with
with
subset
on.
Probably
what
we
are
doing
at
the
moment
is
to
take
some
popular
use
cases,
as
I
said,
authentication,
maybe
also
http
access
logs.
D
Some
things
like
this
to
propose
an
evolution
of
hotel
semantic
conventions
to
integrate
this
to
support
these
use
cases
on
once.
We
have
understood
what
it
would
look
like
with
these
use
cases
and
we
would
propose
the
entire
namespaces,
because
we
have
broken
down
our
schema
in
namespaces.
I
want
to
say
we
have
illustrated
this
namespace
of
user
definition
with
the
authentication
event
use
case.
D
Do you think... does this approach resonate with you? Does it sound too complex, too slow, too fast?
C
Yeah, I don't have a lot of feedback myself. I've been caught up with metrics and heads down on that, so I haven't had time to review this in depth, and there's obviously a lot to unpack here. Hopefully in the next couple of weeks I can take a really close look at this OTEP and try to offer you something more concrete. I'd say at a high level, I'm aligned with the idea of consolidating conventions and having less duplication.
D
It's primarily the Elasticsearch community, and a bit the Logstash community also, but Logstash is not as standardized as OTel. And I think it's a great strength, something that the OTel Collector did right from the beginning, to say the signals will be standardized on the convention at ingest time, on the edge, rather than at rest in the storage, whereas the Elasticsearch community had more the philosophy of standardizing and enriching at rest. So yeah.
A
Cool. Hey, since you're here, is there anything you wanted... did you want to chat briefly about the Maven extension stability?
D
Yeah, so I didn't have time to put hours on it. I've spent a crazy amount of time getting to the point of being able to send Jenkins job logs through OTLP. It took me a crazy amount of time, but I think it's a very compelling story, you know, and it's reliable. That's why I had to neglect Maven in the meantime, no worries. Yeah, so if you use Jenkins, please look at what we have done: we store the pipeline logs through OTLP now. It's amazing.
D
So, for those who are familiar with Jenkins: Jenkins usually stores its build logs on the file system of the Jenkins controller, and that's a scalability issue and a reliability issue. So this diagram has been truncated; it's not great!
D
It's
a
challenge
when
your
build
logs
are
on
the
file
system,
because
it
creates
stability
issue
on
the
scalability
issue,
on
your
jenkins
controller
on
also,
you
don't
have
the
audit
that
you
want.
If
you
think
about
the
solar
wind
attack
two
years
ago,
you
want
to
keep
your
logs
of
your
release,
build
at
least
in
a
very
safe
place
to
for
audit
on
a
if
it's
under
jenkins
controller,
it's
it's
not
great
on
here.
D
As
I
work
for
elastic,
we,
we
first
validated
the
storage
in
elastic,
but
what
we
did
the
minimum
use
case
is
in
jenkins.
We
replace
the
rendering
of
the
build
logs
by
just
an
hyperlink
to
your
visualization
back-end.
So
it's
very
simple,
then
what
we
did
on,
I
hope
more
vendors
will
do.
D
It
is
to
say,
in
addition
to
be
able
to
visualize
your
pipeline
logs
in
your
observability
back
end
jenkins
also
has
apis
to
retrieve
and
demand
the
pipeline
logs
that
are
stored
about
try
to
retrieve
them
to
provide
them
in
the
jenkins
graphical
user
interface.
D
So
you
don't
change
the
user
experience
for
your
developers
who
use
jenkins
on
until
we
we
retrieve
them
on
demand,
and
so
you
have
no
change
in
the
user
experience
of
your
developers
who
use
jenkins,
but
behind
the
scene,
all
your
logs
have
been
stored
in
an
observability
system,
so
you
make
the
life
of
the
jenkins
admin
easier
because
less
storage
on
disk
on
you
have
one
single
place
where
you
have
all
your
logs
on
traces
on
metrics
of
jenkins,
to
enable
this
rich,
unified,
troubleshooting
on
hopefully
later
down
the
road.
D
We
will
be
able
to
enable
some
security
use
cases.
One
of
them
that
we
know
on
github
is
to
detect
credentials
leak
because
credentials.
Leaks
on
the
ci
platform
is
something
that
makes
everybody
scared
on
here.
We
imagine
that
observability
vendors,
they
could
implement
credentials
leak
detector
in
their
ingest
pipeline.
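As a sketch of what such an ingest-time detector could look like (the two patterns below, an AWS-style access key id and an inline password assignment, are illustrative examples only, not a complete or vendor-specific rule set):

```python
import re

# Sketch of a credentials-leak detector that an observability vendor's
# ingest pipeline could run over CI log lines. Patterns are illustrative
# examples, not an exhaustive rule set.
LEAK_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),      # AWS-style access key id
    re.compile(r"(?i)\bpassword\s*=\s*\S+"),  # inline password assignment
]

def find_leaks(log_line: str) -> bool:
    """Return True if the line appears to contain a leaked credential."""
    return any(p.search(log_line) for p in LEAK_PATTERNS)
```

Running this at ingest means the check happens once, centrally, instead of in every Jenkins job.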
A
Cool, yeah, I should make sure that I update all of our... I haven't thought about the registry in a while, but it would be nice to document these things; I was thinking to have a link to external ones, like on OpenTelemetry.
A
I
guess
this
is
not
specifically
java,
although
I
think
of
it.
It's
very
popular
in
the
java
community.
D
And we had to do it, because... to display that distributed trace.
D
Initially we did it for pipeline execution, but when we were working on displaying the logs of a build, you have some Ajax calls made to Jenkins, because it's a progressive rendering of logs, and we did not understand anything of the behavior of the Jenkins HTTP APIs.
D
We
had
to
instrument
the
http
endpoints
of
jenkins
to
be
able
to
understand
how
the
logs
api
we're
working.
D
Until
now
yeah,
this
is
typically
a
distributed
tray,
a
trace,
http
trace
with
displayed
in
jager
of
how
jenkins
is
collecting
logs
when
it
renders
this
webpage
showing
the
junkie
clocks.
A
Yeah, I like the use case of using tracing tools as debuggers.
D
As reverse engineering, yeah, I would say, when you're reverse engineering something. What we discovered here is that we had some methods and we didn't understand what was calling them, and if you just write a log message, it's extremely difficult to understand from a log message what the invoker was. Whereas if we take a distributed trace visualization, typically like Jaeger, each span name is an entry point for your search, and so you can say, for this span name, which is what you want to reverse engineer...
D
What we also discovered is: if your logs are not very well integrated with your spans... What you typically want is to have spans and logs visualized together when you debug, when you want to understand the behavior. And if your tool, if your back end, is not able to give a linear rendering of all the spans, with their attributes, and the log messages in one flow, then something will end up duplicated, both as spans and also as log messages, because you lose context between the two views, which is very sad.
D
To solve this problem of "is it logs or is it spans?", we have to get to the point of saying it doesn't matter much; but often span attributes will be very efficient for structuring your search.
D
You could log it, yeah. But if I log it, then in a tool like Jaeger, and hopefully soon Elastic, I cannot search for a certain invocation, a certain entry point, and ask what all the execution paths are that reach this point of code.
A
Not if the logs each have a trace id and span id attached.
D
Again, if I emit it as logs... yeah, but I still need my traces here. If I emit it as logs, I would have to triangulate: tell me all the trace ids that invoke this Jenkins log-rendering URL, and...
D
I would have to search all the distinct trace names over this, whereas in Jaeger I immediately find all the distinct HTTP URLs. With logs, I would have to search for a line of log and then find all the matching traces.
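The "triangulate from logs" path being described can be sketched as follows; the record shape (a `trace.id` plus a `message`) and the URLs are illustrative, not a specific back end's data model:

```python
# Sketch: given log records that carry trace correlation (trace.id),
# find the distinct traces whose log message matches a search string.
# This is the log-first lookup described above, as opposed to Jaeger's
# span-name-first view. Record shapes are illustrative.
def traces_matching(logs: list[dict], needle: str) -> set[str]:
    return {r["trace.id"] for r in logs if needle in r["message"]}

logs = [
    {"trace.id": "t1", "message": "GET /job/42/console"},
    {"trace.id": "t2", "message": "GET /login"},
    {"trace.id": "t3", "message": "GET /job/42/console"},
]
```

The extra hop (log search first, then trace lookup) is exactly the friction the speaker contrasts with browsing span names directly.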
A
And I don't think we had any other topics today. Anybody have anything?
A
Yeah, we'll chat this evening about the SDK stable release. That's a big deal, coming potentially next week. Yikes!