From YouTube: 2022-01-04 CNCF TAG Observability Meeting
B
Sorry — yes, Evan, I'm happy to. I had one topic that I wanted to add to our agenda, and Matt has already added it, so that's cool. Feel free to add yourselves to the attendees list for this new year's meeting, and then we can get started.
B
Sounds good. What I wanted to do first is give a quick update on what's happening on the OpenTelemetry project. As many of you already know, it's a very large project, and there are a lot of different components that are in development or moving towards stability. One of the goals the project has had is to take each telemetry data signal — traces first, then metrics, and then logs — and stabilize the collection implementation through the collector as well as the 11 language SDKs, and to build out auto-instrumentation for each language. That makes it easier for any users to take the collection agents or the SDKs, auto-instrument their configurations, and easily collect from a diversity of sources.

Let me dive a little bit into traces first. Traces stabilized last year in September, so the functionality supporting the OpenTelemetry protocol — which is interoperable with the standard protocols in tracing, OpenTracing specifically, and others — has been built out and is fully usable in production. Tracing was actually the first signal, given its maturity coming even from the predecessor projects that merged into OpenTelemetry.

The next part is metrics, which is where the bulk of the work for the last few months has been ongoing. OpenTelemetry has targeted an iterative process for achieving stability: the collector agent being stable for metrics — specifically OTLP and full OpenMetrics/Prometheus interoperability — are the two guarantees the project provides for all metrics data. The second part is the SDKs, with at least three of the major libraries offering stability for end-to-end collection, processing, and exporting.

Those are the core metric stability guarantees that the project is aiming to complete and deliver by the first quarter — hopefully by end of January. There's a big milestone for Java to be available as stable; the data model and the protocol are already stable. The collector lands at the same time, with OTLP being stable first and Prometheus interoperability completed and fully compliant as well, with the results of regular compliance tests being published and passing. Then the work goes into providing end-to-end metrics data pipelines for the other SDKs. It's been an iterative approach: JavaScript, Go, Python, and .NET are close follow-ups to Java, so those are the first five SDKs, and then the others — like C++, Rust, and the other implementations — will follow.
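As a rough illustration of those collector guarantees, a minimal metrics pipeline that receives OTLP and exposes data for Prometheus to scrape might look like the following sketch — the endpoints here are illustrative defaults, not prescribed by the project:

```yaml
# Minimal OpenTelemetry Collector config: OTLP in, Prometheus out.
receivers:
  otlp:
    protocols:
      grpc:                      # accept OTLP over gRPC (default port 4317)

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"     # expose a /metrics endpoint for Prometheus to scrape

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```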
B
So that's where we stand with metrics, and through most of Q1 the objective is to stabilize the major SDKs as well as the collector, with Java landing first at the end of January. That said, we then move on to logs, which is actually a very important audience and constituency in the OpenTelemetry project. One of the areas we have been working towards is making sure the logs data model is, first of all, interoperable with major common formats, and that the formats defined in the data model are fully supported by the project. To that end, we've been working closely as an industry group to identify common schemas — such as the Elastic Common Schema — and to make sure those schemas are fully interoperable and compatible with the logging implementations and the data protocol. That's an area that is underway right now.

It's in its alpha stage right now, if you will, but there's a lot of exciting work coming up in the next few months, where an initial implementation of logging will be available through the end of Q1, going into Q2. Thereafter we'll pick up the more complex logging use cases and support them with sophisticated formats such as the Elastic format. So that's where we are right now — picking up implementation in the core languages first, and then going after the rest of the SDKs. There's a lot of work happening just building out the functionality for each of the data signals to be fully supported.

Alongside that, there are some fundamental areas also being worked on. One is expanding the types of metrics that are collected, eBPF included. There is a component that was donated as a receiver in the OpenTelemetry collector from Splunk — Flowmill — which has primarily served as the basis for integrating and supporting eBPF data collection, and that will be used as a basis for expanding support for more types of metrics in the eBPF world. The other part that is super interesting is taking database metrics and other data types with SQLCommenter, which was donated as a component from Google, and taking that as the baseline to be integrated into the collector agent as well. That will then serve as another smart component in the pipeline for processing different metrics from databases, related trace correlation, and other things that can be done there.

Related to that, we are also looking at building out more processor-based analysis in the collector, as well as other areas such as sampling for tracing — some more sophisticated sampling — and correlation across the different data signals. That is some of the discussion and design work that's ongoing on the project, and if you're interested in leading or participating in any of these discussions, please ping me — I'm happy to connect you with the maintainers and contributors there.

Last but not least, there's a fair bit of work happening in the instrumentation area. We have an instrumentation SIG, where different large customers, end users, and vendors are bringing their expertise towards defining semantic conventions, building out instrumentation conventions, and then building the functionality out. That's an area with a fair bit of work also happening: we're adding tests, we're defining what the instrumentation conventions are for different types of data, and building that out. So that's the summary, at a very high level, of all the different moving parts of what's ongoing on the project today.

But it has been a very open project, and different teams or engineers who have been interested in leading a specific effort have just come and participated.

We had a tremendous amount of interoperability work that happened last year, and we continue that this year with the OpenMetrics and Prometheus communities. Of course, we also work very closely with the Kubernetes project and other related projects such as kube-prometheus, and projects that support Helm charts for deployments and other integrations, to continue to build that functionality out. That's all I had — any questions?
A
This correlation-of-signals topic — where is this currently being discussed, in which SIG?
B
Daniel, this is in the sampling SIG. There is some amount of discussion that is ongoing. That's at 8 a.m., I think, on Thursdays.
B
That's when the SIG is held, and there's also some discussion in the metrics SIG at the moment. I'm hoping that will continue on in the logs SIG as well.
B
Yeah, I mean, that's something which is very useful to end users, and I think it's something that can easily be done at the processor level — to a larger extent, pre-processing or aggregating some of the data, which can then be sent downstream on the pipeline.
D
Makes sense. I've been testing the OpenTelemetry collector operator recently, which, to be honest, has simplified the work so much — I was so excited when I saw it. I'm also utilizing the Fluent Bit operator that was released, I mean, one or two months ago — I don't remember exactly when. When you were referring to logs, I was wondering if there was any overlap, or anything that could be shared, between what the Fluent Bit operator is doing and what the OpenTelemetry collector will probably do soon on logs. Maybe that would simplify a lot of things.
B
I mean, I think that's a very good call-out, Henrik, because the objective is definitely to be able to provide that ease of use through the operators and the Helm charts — and the operator specifically. So there are some interesting discussions ongoing.

I think we have just not reached the point in the project itself where we have really good integration in the operator, as well as Helm charts available, for pulling in Fluent Bit data or supporting that full integration for Fluent Bit through the collector specifically. That said, there are Helm charts that we have built downstream — I mean, as AWS, we have a downstream distribution where we have built Helm charts to make available both Fluent Bit-based logs as well as metrics, for example — which we will actually contribute back to the project.

Our objective has been that even if something is built downstream or on a different project, if we can integrate it, or make it available through documentation on OpenTelemetry, that's ease of use for users. So that's definitely something — if you have a need, please file an issue, ask for it, tag me. I'd love to see that integration actually coming into place.
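For context on what such an integration can look like: the collector's contrib distribution has a receiver that speaks the Fluent Forward protocol, so a Fluent Bit instance can forward logs into a collector pipeline. A minimal sketch — the endpoint is illustrative, and the Fluent Bit side would point its `forward` output plugin at it:

```yaml
# Collector config accepting logs forwarded from Fluent Bit.
receivers:
  fluentforward:
    endpoint: 0.0.0.0:8006     # listen for the Fluent Forward protocol

exporters:
  logging:                     # print received logs, for demonstration only
    loglevel: debug

service:
  pipelines:
    logs:
      receivers: [fluentforward]
      exporters: [logging]
```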
D
Sure. And also, just by playing around with the operator and looking at what the Fluent Bit operator is able to do with new CRDs, I was thinking that maybe, instead of having this one CRD for the collector where you have the entire pipeline described, you could split it — create OpenTelemetry receivers, OpenTelemetry exporters, and so on as separate objects. Then you can easily update the pipeline, which makes things easier for projects at the end of the day.
B
And to that point, Henrik, one of the areas that we've been working towards: there is a Go-based builder tool that we have built on the project, which we are continuing to refine and enhance, where you can actually build more streamlined pipelines. That is, collectors with a specific receiver, a specific processor, and a specific exporter — completely streamlined with only those components, and not everything in it. So you can pick and choose, and this builder over time will enable you to configure your own set of components that you can then just build and use as a snapshot, as an image.
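The builder works from a small manifest that lists only the components you want compiled into the custom binary. A rough sketch of such a manifest — the module paths and versions here are purely illustrative, not authoritative:

```yaml
# Manifest for the OpenTelemetry collector builder: produces a custom
# collector binary containing only the listed components.
dist:
  name: otelcol-custom
  description: Streamlined collector with one receiver and one exporter
  output_path: ./dist

receivers:
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.41.0

exporters:
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter v0.41.0
```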
B
That's something which is quite popular even in the Go packaging space — that's a model that we've been using and building out gradually. I can ping you the link if you like.
B
That's something that we'd love to get more input on. Think of it as a package manager — as in previous times on other large projects, such as when RPM came about, or some of the other Linux package managers which have developed over time.

So that optimizes what the operator can then define a simple CRD for, and be able to use right as part of the collector.

So again, we've built out some of the Fluent Bit integration downstream already, so we'd be very happy to add that into the Helm charts and into the operator on the project. Our baseline has been that everything we build, or see use cases for, we're adding back to the project itself.
B
I have a question for all of you, though: do you see any other integrations? Fluent Bit is a very important one, and of course logging — as logging becomes a major use case for the components on OTel. Are there other sources or formats that you see that should be supported? I mean, Elastic goes without saying — any other integrations that you see would be useful?

You don't have to respond to it now, but the project is constantly looking at what is most useful to end users and making sure that at least the basic pipelines guarantee support for that.
D
Fluent Bit covers most of the features that Fluentd provides, but given that Fluentd has so many plugins compared to Fluent Bit, there are still some users — not a large number — still using Fluentd.
D
That support would be important for me also — especially when we started doing a lot of log stream pipelines.
D
There is clearly a need to also get observability on what's going on in your pipeline. How many streams are coming into the pipeline? Have they been ingested? I mean, any type of operation that could happen before the data goes out. I think there are a few solutions that provide visualizations of what's happening on the pipeline, but if we have something that processes logs, or transforms logs, or whatever it is — or similarly for metrics — we should have a significant level of detail on what's going on in the pipeline. Because when you design those pipelines, usually at the beginning you just try things out and then you cross your fingers that it's going to work, but you don't have much visibility into what's actually going on.
B
A very good point, Henrik. One thing that has definitely been discussed, but actually has not had enough engineering work done on it yet, is health metrics — and health logs, if you will — for the entire pipeline, and being able to make that data easily available for anyone.

There is some initial implementation of components on the collector itself to watch and observe the collector, and emit some of the health stats for each of the components within the collector. But once we have the processing pipeline stability done, then we will actually start working on and targeting those health metrics.
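For what exists already: the collector can emit its own internal telemetry, configurable under the `service.telemetry` section of its config. A minimal sketch — the levels and address shown are illustrative defaults:

```yaml
# Collector self-observability: internal logs and metrics about the
# collector's own components (receivers, processors, exporters).
service:
  telemetry:
    logs:
      level: info              # verbosity of the collector's own logs
    metrics:
      level: detailed          # granularity of self-metrics
      address: 0.0.0.0:8888    # Prometheus-format endpoint for the collector's own metrics
```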
B
It's a request from users — actually, every end user who's participating in the project actively, and other customers who have been providing feedback — so I definitely agree with you there. I mean, maybe I can reach out to you offline and we can file an issue to make sure those use cases are actually captured, so that the development work supports them. Like, what kind of metrics would you like to see?
D
I think we should have the flexibility to turn on a debug or normal mode, whatever it is, and get some sort of traces out. So when you're stuck in your pipeline, you can understand what happens in the log transformations. That could also be very helpful, because usually when you manipulate logs you're working in the dark a lot, and then it's very time-consuming to create an efficient pipeline. And I think, yeah, the same for metrics.
D
How is the health of the pipeline — number of streams coming in, coming out; anything that could send a signal to, let's say, the team that manages all those pipelines that there is one component that is almost dying, or that is not ingesting as many logs as it is supposed to ingest. So I think anything that gives extra visibility and helps to proactively detect problems.
B
Definitely — I agree totally. Again, each of the data streams has its own complexity. So I think that having a standard set of metrics and logs that are useful, and don't affect performance when turned on — like, if you turn on debug mode, the thing should not slow down significantly in terms of emitting more data — but really being able to have that, plus a standardized visualization, a dashboard that's available for visualizing that data, is super useful. So I'll create an issue, and then maybe we can work together on adding some more detail there; even the cases that you called out were pretty good. Matt, sorry — your hand was up.
C
Some of these are questions, because I'm not super deep into OpenTelemetry, and I know that there are quite a few groups and SIGs and so many workstreams — so apologies if some of these already exist, but if not, I think it might make sense to at least propose that we do stuff like this. On the Fluentd part: as stated, there's a huge ecosystem of vendors and other projects that all leverage Fluentd. So is there today, in the OTel logs bits, CI and CD to confirm that there aren't breaking changes that might impact Fluentd or some of those scenarios?
B
I think you bring up a very good point, Matt. This is something that we haven't addressed yet, given we have been heads-down focused on metrics right now. We did the same thing with OpenMetrics and Prometheus interoperability, where not only have we made sure that, feature-wise, there is full compatibility, but also at the data-signal and data-protocol level, end to end: what's being emitted, and is it valid coming out of each part of the collector. The same thing doesn't exist for Fluent Bit or Fluentd yet, but it should as we pick up that compliance and interoperability area — and it's the same thing even with Elastic.
C
Yeah. And we had reached out to Eduardo Silva last year, actually, about potentially being a TL — a tech lead — so obviously I think we could reach out there and develop those ideas further on the log side. I'll try to be brief; I've got like two or three things here. One, in terms of scenarios around logs, two come to mind directly — or three, actually.
C
They come to mind from at least how I've used some of this stuff — we've used Fluentd and Fluent Bit, as well as Promtail, pretty extensively in various places. One is log-derived metrics. Is there a standard, or good practices — either a document or an opinion on the OTel side — about how to do this, or is that viewed as something downstream? Because the collector seems like a natural place to have log-derived metrics happen, from things happening in logs.

But perhaps more importantly: events from logs. There are a lot of legacy systems that output events, either in JSON or other formats, and having a well-formed way to turn things that are embedded in logs into events, if that becomes part of the ecosystem, is another scenario.
C
And then third: is there an OTel collector transformation, or other well-known best practices, around redaction for compliance? So many logs, sometimes for better or worse, can surface things like customer data or sensitive information or credentials, even though they shouldn't — that's life. Is there something for that, or is it just considered a transformation step in an open ecosystem?
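One building block that exists for this kind of scrubbing is the collector's attributes processor, which can delete or hash attributes on telemetry as it flows through a pipeline. A hedged sketch — the attribute keys here are purely illustrative, and this is not by itself a complete compliance solution:

```yaml
# Scrub sensitive attributes in the collector before export.
processors:
  attributes/scrub:
    actions:
      - key: credit_card       # illustrative attribute name
        action: delete         # drop the attribute entirely
      - key: user.email        # illustrative attribute name
        action: hash           # replace the value with its hash
```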
B
No, I think these are all very good use cases. What I can say is that even your first use case, log-derived metrics, is on the radar.

And I think that also leads into your second use case of events from logs, which is where the IoT and real-user-monitoring use cases kick in — and real user monitoring definitely is being addressed. There is a SIG discussion around how to process eBPF data, which is similar to the event-driven data coming in from logs, but real user monitoring and IoT use cases also have other specific types of events which are not necessarily derivable from logs.

So that's a gray area.
B
Yeah, exactly — and inevitably, in hybrid environments where somebody's running a huge network of such devices, that information will also be collected and collated back into a monitoring service or a visualization.
B
Somewhere you want to see the health or status of the different streams of data coming in — so that's inevitable as a use case, and it's something that's being looked at. Just to be very clear: that use case is known to us in OTel, but it is not being worked on actively yet, because of the immaturity of logs as an implementation.
B
So the first objective, as we look at logs, is to make sure that the server-based, cloud-native use cases are addressed first. But then it's definitely on our radar to support log-derived metrics first, and real user monitoring slash events-from-logs as well. Those are definitely two areas that are being worked towards, but they're still in development.
C
On the event side, it might make sense to collaborate with the CloudEvents project in the CNCF — their whole payload envelope is a nice, interesting architecture that could mate quite well and avoid a lot of duplication.

The third kind of thing I was curious about is scenarios around both encryption and compression in terms of transformations, and best practices around that. Again, anyone can hack on the collector and add transformations, but is there something tested and certified to be secure on the encryption side?
C
And on the compression side, those are things that can obviously be accelerated in hardware in various appliances, and other hyper-converged infrastructure vendors do have problems like that. So, similarly to how Kubernetes workloads can start to leverage things like GPUs through various plugins — for VMware and everybody else — the same might be the case for log processing at scale. It's a natural place to put hardware acceleration.
B
And that's where, I think, some of the work, Matt — it's not actively being done on the project today, but Elastic obviously has a lot of experience in some of these specific, complex use cases, as do other contributors.

And I'm hoping that the teams that have this expertise will actually work on the project together. Daniel, here, perhaps you can shed some more light on some of the work that has already been done on the Elasticsearch side, which actually plays to the more complex use cases of security, or redaction of security credentials, etc., and how that is handled.
C
Right, but briefly — just the last thing I'll say, then I'll come back to you: this is coming from the perspective of someone new to the CNCF.

You've been using your own thing — Graylog or whatever — on your own, and as part of your cloud transition you're saying: okay, well, we're going to adopt OpenTelemetry, because that is blessed and gives us a lot of access to this whole ecosystem. These are the questions that a CIO might ask.

So even though OTel might not implement them, I think it's worth having at least some sort of position statement — even if it's to say, hey, for this slice of the capability stack, this is a place where vendors compete and provide different loadouts of value and cost to customers, and we don't — or just some guidelines. Because again, I think—
B
I think maybe the TAG — our TAG Observability group — can actually help there. Because at the project level, unless it's absolutely an implementation-driven spec that fits into the requirements and what is being developed, typically there's some documentation, but then there are links to other additional references for good practices or implementation.
A
Real
quick,
I
cannot
honestly
talk
about
elastic
in
this
case
because
I
just
don't
know
at
this
point,
but
I
know
still
from
dynasty-
and
this
was
a
huge
topic
also
for
us
also
when
talking
to
customers,
to
be
frank,
because
of
course,
if
someone
now
decides
to
kind
of
start
logging
all
credit
card
data
that
can't
kind
of
run
through
it,
you
cannot
prevent
it.
Then
it's
hard
to
defend
against
it.
I
mean
usually
a
back
end,
can
do
that.
A
Of course, I mean, you can apply filters when the data comes in, and this is, I guess, the logic. I think the Dynatrace view was, if I get it right: okay, they can send whatever; we have to be defensive on the back-end side, to make sure that we don't at least log or show this data right away. So there is filtering in place, and you cannot trust what comes in there — and I think, from a vendor perspective, you will never trust it fully.
A
What I mean is: you never know. You have an open interface, so you cannot trust that the agent or the OpenTelemetry collector removes it, because someone creates a fork and there you go.
C
Yeah. But in terms of what we could do in the TAG, I do agree. We could even frame these scenarios and personas, articulate them in a TAG document, and say "here are the vendors that offer solutions" — vendors could make a PR to add themselves to the list — and just leave it at that.

If they're CNCF members, we can have sort of a rolodex — which is a really dated term now — of observability vendors and projects.
B
In one sense, you could also leverage the registries of large projects. OpenTelemetry maintains a registry of different third-party components, as well as components on the project, showing what provides what functionality. But I think it's also about going one step beyond that, to say, for each of these use cases, where—
C
Yeah, I'm pulling up an issue to link here. It's actually a "needs help" issue, for anyone watching this — I'm putting it in the chat, and I'll put it in the document too — to track just what we've been talking about. It's issue number 41.

Just to make something actionable. Okay — it's been a great conversation; I want to watch it later and maybe make some notes, but that might be one place that anyone who's interested could hop on and contribute, as you like.
C
Yeah, the last thing I'll say on this one: I think this also dovetails in nicely with a discussion at the TOC — I think it was in early December, one of the last open meetings — talking about the technology radars, some of the ambiguity from the CNCF end-user radars, and what could be done to provide a little more context around the giant eye chart that is the landscape, in the observability space at least.
B
Yeah, definitely. Is the TOC actively discussing that?
C
I believe it was the early December TOC call — I can put a link in the doc after — or late November, but I believe it was when they were discussing the end-user radars and some of the ambiguity around that.

Rather than "ambiguity," I want to say "opportunities to do better" — not to be disparaging.
C
I'll try to be brief. Over the last month and a half, some of us have been meeting around the observe-k8s working group. There are a couple of links in there. There's a draft PR for the charter, which is basically the Google doc we've been iterating on for the last six or eight weeks — it actually started three months ago.

The idea was just to have it there, low and slow, for anyone who wanted to jump in. It just takes what was in the Google doc and puts it into Markdown, so now we can start having it in a more well-formed place. I've got an action: we'll leave it in this PR state for a couple of days.
C
It's draft form, but it's enough that we can start and just get the working group formalized, which would just be an email. Feedback on it is welcome — it's still pretty rough, but at least it captures some of the initial ideas and brainstorming around: what would this be, what is it, who is it for, and why — and then the how can follow.
C
Second: Ken Finnigan and Michael Hausenblas have both been regularly showing up and helping to drive this over the last month and a half, and they've put their hands up to help steer and run the working group. So that's a secondary part of the mail to the TOC. Again, the meeting today was cancelled.
C
I had meant to bring it up there, but I think we'll do it next time around. We'll do an email in the next few days, and there's a very recently created project to track just the administrative stuff as we launch this.
C
So that's that. And then the second thing: Richi and I — I believe April or May, whenever the TAG was formed, will be two years, and the chair durations are two years. So we'll have chair elections coming up in the spring here for two of the seats. And, as always, we absolutely should have something more on the order of three technical leads for the TAG; we've just been working along with one. To really scale out efforts and frame these things out, there's a huge opportunity.
B
Yeah — thanks for calling that out, Matt, because I think we could definitely all help in getting more folks to be aware of the process and interested in participating. Many people actually participate in the TAG meetings; it's just that folks are not familiar with the process.
C
I agree. Some folks, such as Prabha — I believe from the New York Times — have reached out, and I was able to point out that there's actually quite a well-formed governance and process around both tech leads and chairs and how they work. So I'm sure there are others in the community that might be interested in leadership positions, who can really help drive it as domain experts versus practitioners.
B
No, definitely — I think that's a good call-out. I'll ping you and we'll figure out how we can further spread the word here, because most of the time I've seen that things are very last-minute in terms of announcements, or folks finding out about how to participate.
C
Actually, I've been trying to get Richi — I haven't been able to connect with him since before the holidays, but hopefully soon. He might remember, but there was a logo election — just getting the word out and having a brand around the TAG. I believe it was like a third round of logo elections. I had missed a few minutes, so I'm not sure what the current status is, but I asked Amy, and I have an action to go chase this down.
C
Yeah, the only other thing I will call out, which I should have put in the doc — I'm fetching it now. As you can see, here it is: issue number three in the observability repo.
C
Last meeting, when we talked to Scott Rigby from TAG App Delivery, there's the TOC issue 584 that's linked there. That's kind of the most recent time that Chris and the TOC worked with a working group — to launch the GitOps working group — and it was sort of the generic checklist of things that would be needed to actually launch ours. So while it's not a TAG-specific thing — it's more of a working-group thing —
C
— I think that could form a base template for future working groups for the TAG: if you're going to launch something, these are the places, these are the things to do. And there's a whole bunch of stuff in there around transferring domain names and trademarks — just all the logistics. So if anyone was curious about how the sausage is made, that's the model we're working from.
B
Cool. Anybody else have any questions? Otherwise, I think we can give back 10 minutes at least — let's do that.
C
Many of them don't take a whole lot of time, and it's a wide-open field of opportunities there.
B
That's good. Again, I think definitely reaching out to get more folks involved on a regular basis is useful.
B
But that said, okay, I guess we can close the meeting. Thank you, everyone, for joining in on the first meeting of the year, and I look forward to doing more together through the year. Thank you.