From YouTube: 2022-04-28 meeting
Description
No description was provided for this meeting.
B: Yeah, I think we just got three items, basically, for the agenda today.
B: If we want to... oh yeah, someone's typing something in, jump in. So, this meeting was extended to one hour, since we keep getting cut off at other times, especially with the lead-up to KubeCon. That was one of the items we've been trying to chat through but have not yet found the time for: KubeCon EU planning.
B: What do we want to chat about? I know originally we had suggested that, for the KubeCon EU sessions around OpenTelemetry, we could potentially have a QR code that people in the audience could scan and then take a survey.
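The QR-code idea amounts to encoding a survey URL per session, so scans can be attributed to the talk or booth where the code was shown. A minimal sketch with stdlib only; the base URL and the `source` parameter name are made up for illustration:

```python
from urllib.parse import urlencode

# Hypothetical survey URL; the real form's address would go here.
SURVEY_BASE = "https://forms.example.com/otel-survey"

def survey_link(session: str) -> str:
    """Build a per-session survey URL so responses can be
    attributed to the talk or booth where the QR code appeared."""
    return SURVEY_BASE + "?" + urlencode({"source": session})

# A QR library (e.g. the third-party `qrcode` package) could then
# render this string as an image for the slide deck.
print(survey_link("kubecon-eu-2022"))
```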
B: I think we probably have to make that decision really quickly in order to get the content into the decks before they get submitted. So should we start with that subject?
E: In my mind... well, I may have missed some of the talks, but we've talked about this in the past. I know we talked about one vertical, which is trying to get started with this thing, and the other vertical is what operating OTel over time feels like. I would say, maybe for this audience, just the getting-started one. Personally, that's the one where I want more feedback from people anyway, and it's the one that new users could fill out.
B: I wonder if that's the right survey, because really the intent of that survey is to capture feedback reasonably fast after somebody has gone through the experience, and I don't know that that's going to be the audience in these sessions, unless the sessions include "hey, go install and configure OpenTelemetry" during the demo portion. I see we've got Reese Lee on the line; she's giving a talk at KubeCon about OpenTelemetry tail-based sampling in the collector. Reese?
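For context, tail-based sampling in the collector is configured through the contrib distribution's `tail_sampling` processor. A minimal sketch of what such a configuration can look like; the policy names and threshold values here are illustrative, not a recommendation:

```yaml
processors:
  tail_sampling:
    decision_wait: 10s        # how long to buffer a trace before deciding
    policies:
      - name: keep-errors
        type: status_code
        status_code: {status_codes: [ERROR]}
      - name: keep-slow-requests
        type: latency
        latency: {threshold_ms: 500}
```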
A: On my talk specifically, there's an option to set the knowledge level, and I just set it at beginner. I can see the attendee list; I suppose I could stalk them and see what roles they have, but yeah, I don't really know otherwise.
F: I think that'd be a good first question: on a one-to-five scale, how much do you know about OpenTelemetry? The phrasing we can play with, but something like: how much do you know about OpenTelemetry? Are you currently using OpenTelemetry?
E: What are your pain points, essentially? And then maybe just a link to the install-OTel survey: if you are using OTel, could you fill out this thing explaining to us how installing it went?
F: I think we want to be specific about asking: when you have to create a trace or a metric or a log, what is the library you're using? But that might be such a specific question that people might not know, because they might just be using... it's like, oh, the platform team has a wrapper and we just call, you know, Contoso, what's the stupid Microsoft one, Contoso? Contoso.log, you know, and all of that. It's all through an abstraction that we don't care about.
E: Yeah, well, maybe the information we want is just people describing their current pain points or confusions, or what they wish was better about observability, and then there's a follow-up. But I think it's important to have some question like "what stack are you using?", just so we know how to map people's pain points and questions to the observability stack they're using.
E: I know it's kind of hard to ask a very generic survey and get actionable data back, right? I think that's the problem: if you want to get something that's more quantifiable or actionable, you have to get specific, and then you're making presumptions about whether the audience has tried OTel or something like that.
C: But I think, yeah, you mentioned that there are two. It's a bit tricky; I mean, we will find a technical solution for that, but we need to first ask, before we jump into the actual survey, what they have been able to do so far with OpenTelemetry.
C: If they have done nothing, then, yeah, "reach out to us later." If you're just starting, we want to show the getting-started survey, and if you're already using it and collecting and utilizing data, then we can show the other survey. I think we should start with a first question that leads to different links, a different survey, depending on their experience.
E: I think Google Forms does have that ability: you can basically put a switch statement into it, based on a select field.
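The branching described here is essentially a switch on the first answer, routing respondents to a more specific survey. A sketch of that routing logic; the answer labels and survey names are made up for illustration:

```python
# Hypothetical mapping from the first question's answer to a follow-up survey.
ROUTES = {
    "never used it": "getting-started-survey",
    "just starting": "getting-started-survey",
    "using it in production": "install-and-config-survey",
}

def next_section(answer: str) -> str:
    """Pick the follow-up survey for a given experience level,
    falling back to a generic survey for unrecognized answers."""
    return ROUTES.get(answer.lower(), "general-survey")
```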
B: Okay, I like where this is going. Fundamentally, our intent for the surveys is to help shape our own value messaging and understand awareness and, basically, the field at large, and then funnel people into a more directed survey if they're in the right audience.
C: So if we have the ability to build, say, a five-to-ten-minute sequence with a couple of sentences from real people using OpenTelemetry, that would be a way of advertising the project somehow, of saying "hey, there are real projects using it, and here is the story." I don't know.

F: I like that idea a lot. I think... actually, Morgan, like...
C: Beyond the QR code... because sometimes people, yeah, the QR code: if we are at the booth, do we want to have people say, "hey, do you have five minutes, so we can just fill in the survey together?" At least somehow engage and bring people into answering the questions. I don't know why we wouldn't.
F: I would agree with Henrik. I just think there should be guidance for whoever's at the booth at any given time: you have the standee with the QR code, and "hey, take our survey" is your CTA. When you're having conversations, that's what you're trying to drive people towards: go and take the survey.
C: Yeah, and by the way, just a side question on that, because I will have to be at the Keptn and Dynatrace booths, of course: do we have a plan or, I don't know, a list of people that should be present at the booth during certain times? Has that been agreed?

We have not put one together yet.
D: I will take action on trying to organize that. Sorry, I've been unofficially organizing a bunch of our KubeCon stuff, sort of, in addition to doing everything else.
E: ...to be there and man that booth as much as possible. Assuming Lightstep marketing hasn't completely screwed this up, we should have a bunch of copies of my O'Reilly book about OTel that we can give away, so we should have some nice tidy stuff, and if we can come up with swag to give away, then...
D: I will also be manning the booth and things, but to Henrik's point, we should probably try and work out a rotation amongst all the OTel attendees to make sure there are people there, yeah.
B: Oh, okay, I've got the Zoom chat off the side of my screen, so it didn't pop up. Thank you. Okay, cool. So I would say that we could continue the agenda, and if we have any time, we could start to actually collaborate on that survey content. But I would think that we want to get that survey content wrapped up in the next few days, so that we can also shop it to TAG Observability and get it embedded everywhere we can. So let's continue to the next thing.
B: General survey update: I put out an issue in the OTel community repo with links to the surveys and some of the intent behind them. As far as I know, I haven't gotten any feedback yet, but particularly if we're going to highlight the install-and-config experience, I think we should go through that survey here and just get it into a good place. I think it's 70% there; it's the one that I'm the least confident in.
B: So I will throw that link into the Zoom chat, and you can jump into that doc and we can collaborate on this, I'd say for the next 10 minutes or so.
C: On the instrumentation, do we have a specific survey for this one, though?
C: I was thinking more... I mean, it's very, very general: were you able to get enough detail out of the auto-instrumentation libraries provided by the community so far?
E: There's a survey about, yeah, data quality. That survey, I think, should really just be: is the data quality what you expect? Are you missing data? Are the pipeline tools and the collector appropriate for you to be able to, you know, massage the data the way you want, et cetera, et cetera.
B: So we call it installation and configuration, but then I realized I don't have any generic configuration questions. I do know a couple of the SIGs have very specific questions that are unique to them, and so maybe some of those configuration questions go into what I'm calling v2, which can be like: okay, you picked the C++ SDK, here are the particular questions from the C++ SIG.
C: Something for sampling, I guess, but the sampling decision is something you discover once you have the data, so it's...
E: To me, it kind of just falls under the general troubleshooting questions. I mean, one way to subdivide it is maybe saying...
E: Was it clear what the right way to set up OpenTelemetry was? Did you have a clear idea about what success looked like, or how you were supposed to configure it? In other words, did you understand what the goal was? And then the other question is about when things went wrong.
E: The debugging question is maybe already covered there under the free form, like "did you experience any errors?"
E: I think this is pretty good. I mean, I'm always super wary about making the surveys too long, and this looks really good to me; it feels short enough that you could actually get someone to fill it out.
B: I tried to capture that in questions seven and eight: what documentation did you use, and how satisfied are you? But it doesn't necessarily have the free-form "how should those docs be improved?"
C: What I'm afraid of is that if you have too many free-form text questions, it's more difficult, first of all, to retrieve statistics from them.
E: I think the free-form questions we have truly are free-form questions, where we will need to go in there and actually figure out the categories; part of the important data we're going to get back is understanding what the actual categories of problems are for people.
E: So I would be fine with just having one free-form field under docs and troubleshooting. I kind of like the way it's phrased right now: "did you experience any problems?"
E: "How could debugging be improved?", especially given they just answered two questions about docs right above it. I think they'd be sufficiently primed to say "the docs didn't tell me, you bastards," or, you know, "the docs talked about API stuff, but they didn't really talk about installation, or how to debug installation." I think people would give us that feedback. Maybe we could word the question better, but I would be fine if they would just give us any information about what was frustrating there.
E: And it would be interesting to know whether, when debugging, people felt "I didn't have the docs I needed" or "I didn't have the tools I needed," and I would like them to tell us that without being too primed to focus on one or the other. I would love to know what it was they were really grasping for, because devs are different.
E: So it would be interesting to see whether our audience is more like that, or more like "you don't have any installation docs for JavaScript or whatever; I didn't know where to start, and that was really frustrating," because I do think in some languages we have that problem for sure: the docs aren't there, or they're laid out so poorly that you really have to hunt to find the little copy-paste snippets.
B: That's awesome. Okay, so for this install-and-config survey, I guess it's just figuring out what that question number four is, and then we should be relatively good.
E: Oh, this one, the one with the timeline, yeah. I just don't know if "timeliness" resonates; I got a little confused. It reads like, did the beta release land when it was convenient? The reason I was suggesting it: maybe ask whether they felt it took too long, or whether they felt it should have been faster, so they can give us a sliding scale on whether it met their expectations.
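A sliding "did it meet your expectations" question is typically encoded as a Likert scale. A sketch of how the labels being discussed could map to scores for analysis; the exact wording of the options is illustrative:

```python
# Illustrative 5-point scale for "how did setup time compare
# to your expectations?"; labels and scores are assumptions.
LIKERT = {
    "much faster than expected": 1,
    "somewhat faster than expected": 2,
    "about what I expected": 3,
    "somewhat slower than expected": 4,
    "way too long": 5,
}

def mean_score(answers):
    """Average the numeric scores for a batch of responses."""
    scores = [LIKERT[a] for a in answers]
    return sum(scores) / len(scores)
```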
E: Yeah, yes, totally, a Likert scale, but one where they can express... yeah, okay, so "fast and easy", no one's going to select that, but then, like, "way too long." I'm actually genuinely interested to know what people's expectations are. At Lightstep we have this internal idea that it needs to take five minutes or less to get up and running, which is a great goal, and I can use that as a timer.
E: If I'm literally running a study where I'm timing people who've never done it before, which I would love to run again; I ran those in the past, but then I lost the funding to do it. But I feel like we just made those numbers up. I don't actually know how long someone would imagine it would take to set up a telemetry system.
B: That's not how they've phrased...
E: ...that, but, like, actually installing a service. That's really sad and unfortunate. This is also where I kind of expect expectations to differ based on language: all the Java people are so pampered with their Java agents that they're like, "I expected it to take zero time. Why did I have to think at all?" Versus...
E: No, it's serious. Go is so interesting. Go is the most manual of our setup experiences: you just have to do everything yourself, by hand. But we have found at Lightstep that the speed, the success rate, and the developer satisfaction are highest in Go, because for Go developers, that's just their life. Their life is, when they need to do anything...
E: They go look at the API docs and then they go write code to set it up, and that's just how everything works over there, so OTel doesn't seem weird or anything to the Gophers. It's just, "yeah, I go to, you know, the GoDocs and I just do it like normal," and it's not actually that complicated.
E: It's more the people who are in the middle zone, like Python, or Java back when the agent didn't work as well as it does today, who are like, "I don't know what I'm doing here." And we have weird monkey-patch helpers in Python, but it's not a Java-agent concept, they don't have that, so they're like, "I don't know what that weird thing does," you know.
B: Do you think more of that auto-instrumentation will, over the course of time, come to additional SIG language areas?
E: I think so. I have an idea for how to do it in Go, the Go way, which is not monkey-patch injection: Go lets you write preprocessors to rewrite your code. (Oh, we got Henrik back.) The Go way to do it is you add these little preprocessor comment lines and then you run a preprocessor that will write code for you, and so it should be possible to create a preprocessor that looks at your Go package...
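The preprocessor idea described here boils down to matching a project's dependency manifest against a registry of available instrumentation libraries, then generating setup code for the matches. A language-neutral sketch of that matching step; the dependency and instrumentation names are made up for illustration:

```python
# Hypothetical registry: instrumented dependency -> instrumentation package.
REGISTRY = {
    "net/http": "otelhttp",
    "database/sql": "otelsql",
    "github.com/gin-gonic/gin": "otelgin",
}

def match_instrumentation(manifest_deps):
    """Return the instrumentation packages whose target dependency
    appears in the project's manifest."""
    return sorted(REGISTRY[d] for d in manifest_deps if d in REGISTRY)

# A code generator would then emit the default SDK setup code plus
# one registration snippet per matched instrumentation package.
```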
E: ...manifest, looks up a list of all the available stuff, matches them together, and then spits out a template of code, along with the default SDK setup code. And if we added a config file to OpenTelemetry, which we don't have yet, then it could pre-configure that SDK code to just grab the config file or whatever. Anyway, I have a nifty idea for how that could be made, at least turned into the equivalent of a copy-paste solution, where they maybe have to fill out a couple of details, but they don't end up with "it's not working because I didn't install the HTTP instrumentation library, because why would I know I need to do that?", which is where people currently get stuck. And you could build something similar for Python and other languages.
E: We do have auto-instrumentation stuff for those languages, but it's kind of weird. It's "change the setup script for running your thing, to wrap it in this weird function that's going to do a bunch of weird..." and, you know, it works, but it's easy to see there would be plenty of production environments where they're like, "no, I'm not changing my setup script to do this; I don't even have access to that," or whatever. So, anyway.
E: I think it'll get better, but it's mostly about trying to match expectations in each language, and, in the languages that don't do everything automatically for you, making sure it's super, super clear which instrumentation packages you need to install. We have some things in Python, and maybe Ruby or some other languages, that will... there's a Python tool that will just spit out the manifest list, like the Python ini file: it'll do that thing where it looks at what you have, looks at what's available...
E: It says, "this is the list of instrumentation you need," but we just need to improve the docs around using that stuff, so people know that it's there and how to use it. To me, the three ways people get confused are: one, they don't know where to send the data, right, they don't know how to configure the exporters or the propagators, and so something's broken there; two, they fail to install instrumentation that they need; and three, there's missing instrumentation, not that they failed to install it, it just doesn't exist, and so their traces are broken. Or...
E: We have a little thing at Lightstep called dev mode, that Austin and I created ages ago, that I would love to port to the collector. All it is, is: set everything to not buffer anything, and as you spew information out of your process, it just prints it somewhere useful, and lets you know that you did start your SDK and connect it to the collector; that happened.
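A collector-side approximation of this kind of dev mode is a pipeline that prints everything it receives to the console. A minimal sketch using the collector's verbose console exporter (historically named `logging`, `debug` in newer releases); treat the exact field names as an assumption about the collector version in use:

```yaml
exporters:
  debug:
    verbosity: detailed   # print every span as it arrives, unbuffered
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: []      # no batching, so feedback is immediate
      exporters: [debug]
```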
E: You at least did that, you know, and you did send some data, but you can see that the traces are broken. So maybe some kind of tooling like that, as a collector plug-in interface or something; I've talked to some people who are interested in building that. That is the gold... yeah, I don't know, that's the holy land, if we could get there, because without it, it's really hard to write your...
E: It's just really hard, because you don't have feedback, and context propagation is this freaking ghost in the machine that is slippery to begin with. You know, if you lose your span and fail to end it, then it's just gone, right? We don't emit an event that says you at least started it.
E: Instead of just no data. Anyway, I'm ranting in this meeting, so...
B: I want... I want... cool.
E: Yeah, and we're getting a logging and an event API pushed into it, so we've got some active proposals now for that, and all of that is trivial to implement compared to the tracing and metric systems. So I predict, once it's in the spec, it'll just be out the door in most languages. And then there are all kinds of fun projects, I think, around debugging, config files, and installation simplifiers. I have, like...
B: So, question four is just a copy-paste. See if any SIGs are missing; I started going down the path of "oh, should we put Kubernetes operator and Helm charts in here?" and I was like, yeah, I don't know.
E: I was actually kind of amazed at how much... oh, why am I spacing on his name so badly... the PM from Microsoft, who set up the last one, he put a lot of complicated logic into that thing. I was impressed.
E: We had a difference of approaches, I think. I felt like he was doing a user survey, the kind of user survey that would be very effective when you promise someone, say, a $25 gift card if they'll sit down with you for a scheduled interview. I think that was more his background, or where he was coming from, for that one, yeah.
C: I have one minute before jumping to another meeting. What is the action plan for next week?
B: I still don't have access to the OpenTelemetry calendar, so if somebody wants to create a meeting for Thursday, we can really dig into the CNCF thing. I don't know when the next TAG Observability meeting is, but...
B: Yeah, I also don't recall, but I do think it'd be good just to go there and cross-check the survey, so if y'all could meet before, like...
B: Okay, yeah, I would love to get their opinions, because if we're going to put it to TAG Observability, there could be some really good stuff that comes through there. So I think we'll have to start a thread on Slack, and we'll figure out, like, 80% of the content before showing it off to them.
B: Okay, cool. Oh, there it is, on my calendar, thank you. Yeah, so I think, if you have any time Friday or Monday... or we can try to do it asynchronously.
E: Thanks for driving it over the finish line.

B: Sure, that's great. Thanks so much.