From YouTube: Extensions and Telemetry WG - 2021-04-21
A
So Mandar, you recorded a couple of decisions yesterday, I think?
B
Yeah, so we actually reached an important decision regarding how we are going to implement the API: specifically, the istio-agent will pull the OCI images. This particular piece was the plan of record before, but then some of the other things changed later. The istio-agent will stop writing images to the disk; instead, it will just populate the inline bytes and hand them over to Envoy, and istiod will distribute the secrets that the istio-agents will need to go fetch from OCI.
B
The third part was optional before, but since Red Hat has come back and said that their registry does not support cert-based authentication, we need to be able to distribute secrets, so this part may be non-trivial. There was some work done in Gateways to distribute secrets, and it has some security implications, so that part may take a while. And then the last part is that, for this particular approach, one concern was previously raised: if you have 10,000 proxies, all of which are trying to fetch from, say, some private OCI registry...
B
...at the same time, then it would create this thundering herd problem. However, we have now decided to just document that, and potentially use a node-local OCI registry as an interim kind of cache.
A
And who is owning that item for 1.11? We should just put that on the roadmap while I'm thinking about it.
B
So Daniel owns the item. However, I think Peter is also going to own parts of it as necessary, so you can put down Daniel and Peter.
C
Yeah, I wanted to understand what's happening with the Wasm API, and I can totally do this later, because I know there's a PR outstanding. There are quite a few comments, and there's a lot of back and forth going on there, and I haven't yet added my comments because I just don't want to complicate things. So is there already an out-of-band consensus that has happened?
B
Yes, so actually we just covered it. If you look at the first item, that is the summary of the specific things that we talked about in the SIG and reached consensus on for implementation. Of course, feel free to comment on the API. But yeah, we had a lot of back and forth about how it can be implemented and what's the best way to go about implementing it, and it turned out that it did not have that risk.
B
The
risk
was
that
because
implementation
may
force
us
into
either
an
api
or
make
the
api
unimplementable,
but
that
has
not
happened
so
then
it.
It
basically
meant
that
we
went
through
some
options
and
yeah.
We
went
through
some
options
and
then
this
option
seemed
most
kind
of
most
people
agreed
on
this
option
and
that's
the
plan
of
record.
C
Yep, yeah, that's fine! This is good for me.
A
Sure. Anything else? I know, Rob, you're still concerned about security issues around the secret distribution.
F
It would use the default pull secrets that you would configure in a Kubernetes instance, right? Users don't have access to those for pulling things; they're only used at the infrastructure level. Because there may be other concerns besides just "I can be malicious with this": organizations might be concerned about users using them...
F
To
pull
down
images
that
they
don't
have
licenses
to
and
those
kinds
of
things,
so
I
mean
those
aren't
really
things
that
are
obvious
when
you
first
look
at
it,
but
you
know
those
could
complicate
things
for
users
and
I
think
it
just
sort
of
ends
up
being,
you
know,
kind
of
bad
practice
to
be
passing
those
around
because
then
they're
not
really
secret
right.
You
lose
control
of
them.
B
I mean, either we address the concerns that Rob just described, or we loosen our promises quite a bit. And this is where the API discussion actually does come in, right? If we had our API separated into registration and use, which we have talked about before, then secrets and all of that are part of the registration: all it tells Istio is where you fetch images from and what the credentials are, and then the second part is use. Now, we have combined those two for simplicity.
B
We are in this world where we are even considering, or we have to have, secrets at every agent. If we split those two, then all we are saying is that someone in the system needs to be able to fetch images.
B
Yeah, so someone in the system needs to fetch images, and then how the images are distributed is decided separately. So I think we may have to reopen that discussion if doing the secret distribution correctly for this becomes a difficult problem.
A
Okay, is there anything else we should... I think that covers basically the state. Is there anything else anyone wants to ask or bring up regarding Wasm?
A
Okay, the next sort of item that's come up, and that there has been some discussion of, is the pilot-agent metrics publisher. The desire here is to have the agent metrics get published not just as Prometheus, but as a push to some other endpoint.
A
So there's this document that Aram has put together. He's proposing using prom2json as the format, to get JSON-formatted metrics, and using this bit of config to configure pushing them. I think there are a couple of open questions. I just want to circulate that with the group and make sure everyone has had a chance to take a look at this, and maybe raise some issues or propose alternatives.
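To make the prom2json idea concrete: the tool scrapes a Prometheus text endpoint and emits the same samples as JSON. The sketch below is a deliberately simplified stand-in (it ignores labels, timestamps, HELP/TYPE lines, and histograms, and the names are illustrative, not the real prom2json API) just to show the shape of the transformation being proposed for the agent's metrics:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

// Sample is one parsed line of Prometheus text exposition.
type Sample struct {
	Name  string  `json:"name"`
	Value float64 `json:"value"`
}

// promTextToJSON converts Prometheus text format (as scraped from an
// agent's /metrics endpoint) into a JSON array, roughly what prom2json
// produces. Simplified: labels, timestamps, and metric families are
// not modeled.
func promTextToJSON(text string) ([]byte, error) {
	var samples []Sample
	for _, line := range strings.Split(text, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blanks and HELP/TYPE comments
		}
		fields := strings.Fields(line)
		if len(fields) < 2 {
			continue
		}
		v, err := strconv.ParseFloat(fields[len(fields)-1], 64)
		if err != nil {
			continue // not a sample line
		}
		samples = append(samples, Sample{Name: fields[0], Value: v})
	}
	return json.Marshal(samples)
}

func main() {
	out, _ := promTextToJSON("# HELP pilot_agent_up up\npilot_agent_up 1\n")
	fmt.Println(string(out)) // [{"name":"pilot_agent_up","value":1}]
}
```

The open questions in the document (push endpoint config, credentials, retry) sit on top of this conversion step, not inside it.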
B
One more thing I would like to highlight is that this does go against the principle of keeping, or supporting, fewer things in Istio. So anyone reviewing it should consider that as well: not just the particular design, but at the same time whether we should do this at all.
B
I
I
just
I
just
want
kind
of
more
views
on
that
that
as
well
because
they're
yeah,
because
there
are
other
ways
of
getting
this
without
adding
new
code
into
histo.
So
so
maybe
we
should
add
some
of
those
details
from
the
the
issue
discussion
into
the
design
dock.
E
Yeah,
if
I
can
add,
when
you
push
the
matrix
to
the
target,
would
you
also
need
some
kind
of
credential,
so
the
url
might
not
be
just
sufficient.
E
This
yeah
and
add
some
other
issues,
so
at
least
from
the
the
the
the
definition
here
I
can
see.
You
probably
have
a
lot
of
more
things
see
regarding
the
security
aspect
of
this.
So
may
not
be
just
you
know,
username
password
pair
and
activity
certificate.
This
can
grow
pretty
complex.
A
I
forget
what
exactly
remote
service
offers,
but
you're
right.
I
don't.
I
don't
think
this
covers
all
of
the
all
of
the
concerns
around
authentication.
Yeah
remote
service
has
tls
settings,
but
I
think
that's
it
so.
B
Right
right
now,
yeah,
so
so,
yes,
so
that
that
is
that
is
in
this,
and
that
that's
why
I
said
we
need
to
bring
that
here
and
that's
why
I
kind
of
consider
if
we,
if
we
are
going
to
do
anything
here
at
all
as
well
right
we
like
on
on
the
issue.
We
had
a
soft
consensus
of
okay,
let's
do
it,
but
I
think
it
it
would
be
good
to
have
wider
agreement
on
that.
B
...metrics through that prom2json, which is also a Prometheus-defined API. And then the next question comes in: if we are going to push things from Istio, should we use the prom2json-style pushing, or should we use Envoy's metrics service, which already has pushing? So those are the considerations of whether to do this at all or not. This document only really talks about how to do this, if we are going to do it.
A
Yeah
I
mean
there
are
two
varieties
of
open
telemetry
in
some
sense
right.
We
use
the
istio
package
metrics
stuff
to
export
anyways
and
we
export
in
prometheus
format.
That's
scrapable.
We
could
add
a
one
to
that
package.
That
is
a
publisher
that
pushes
to
one
endpoint
or
we
could
say
the
solution
is
use
open,
telemetry,
but
use
it
as
another
sidecar
or
agent
or
somehow
and
scrape
from
our
published
into
some
other
and
translate
and
push.
C
I
I
see
I
mean
so
the
first
solution,
really
the
istio
community-
does
not
really
provide
any
guidelines
there
right
it's
for
everyone
to
do
it
on
their
own.
As
of
now.
C
The
second
solution
is
run
our
agent,
but
we
don't
really
have
docs
or
anything
related
to
it,
how
to
do
it?
What
to
do
how
to
enable
that
integration
right.
A
Oh sorry, yeah! When we wrote it, I think the agent uses the Istio pkg metrics library, and that doesn't require Prometheus; Prometheus is just the default exporter there. So we could add another type of exporter, and have configuration that allows you to select it, and it wouldn't require running anything else.
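The "add another exporter and select it by config" idea can be sketched as a small registry of exporters behind one recording API. Everything below is an illustrative assumption about the shape of such a hook, not the real istio.io/pkg monitoring API:

```go
package main

import (
	"fmt"
	"sync"
)

// Exporter is a hypothetical hook: the metrics library keeps
// Prometheus as its default exporter, but a config option can select a
// different one (say, an OpenTelemetry push exporter) without running
// any extra sidecar.
type Exporter func(name string, value float64)

var (
	mu        sync.RWMutex
	exporters = map[string]Exporter{}
	active    Exporter
)

// Register makes an exporter selectable by name.
func Register(name string, e Exporter) {
	mu.Lock()
	defer mu.Unlock()
	exporters[name] = e
}

// Select activates the exporter named in configuration, falling back
// to "prometheus" when the name is unknown.
func Select(name string) {
	mu.Lock()
	defer mu.Unlock()
	if e, ok := exporters[name]; ok {
		active = e
		return
	}
	active = exporters["prometheus"]
}

// Record forwards a measurement to whichever exporter is active.
func Record(name string, value float64) {
	mu.RLock()
	defer mu.RUnlock()
	if active != nil {
		active(name, value)
	}
}

func main() {
	Register("prometheus", func(n string, v float64) { fmt.Printf("prom %s=%v\n", n, v) })
	Register("otel-push", func(n string, v float64) { fmt.Printf("otel %s=%v\n", n, v) })
	Select("otel-push") // what a mesh-config option might toggle
	Record("pilot_agent_up", 1)
}
```

The point of the sketch is that instrumented code only ever calls `Record`; choosing Prometheus scrape versus an OpenTelemetry push becomes a deployment-time configuration decision.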
A
It
would
just
require
piping,
they
can
fake
through
and
having
that
work,
and
that
would
be
istio
fully
owning
the
solution,
whereas
we
could
say
hey
we're
not
going
to
change
any
of
the
seo
code
or
any
of
the
seo
bits.
But
here,
if
you
use
prometheus
here's
how
you
you
know
you
write
it
back
in
for
prometheus,
that
puts
it
somewhere
else
or
if
you
run
the
open
census
agent
as
another
piece
in
your
your
cluster.
C
That's
interesting
right
so,
from
an
ecosystem
point
of
view,
how
does
the
stack
driver
integration
work
currently
sto
provides?
A
stack
driver
wasn't
plugin.
Is
that
how
it
works.
A
Yeah
and
interestingly,
for
for
stackdriver
specific
builds
for
pilot
for
for
sdod
metrics.
We
do
use
the
one
that
is
built
into
istio
package.
We
have
an
export
to
export
to
stackdriver
bit
in
there.
C
I think we can put our eggs in the basket of the combined thing, OpenTelemetry now, and then say Istio will natively support it, like you're saying. Whether we do it in our packages or whether we deploy a new sidecar for a user, it's enabled by a configuration option. I think that will be sleek, right? So, the implementation...
B
But if you have 10,000 pods, or whatever, 50,000 pods, adding additional sidecars there is not feasible, which is why a sidecar is not an option. But we can still have a different kind of option with OpenTelemetry and just provide a guide on how to do it. And we already support Prometheus, and Prometheus is actually a fine integration point for many things.
C
I
think
if
mandar
you,
you
are
saying
adding
side
cards
is
just
not
feasible
on
a
large
customer.
I
can
agree
with
that.
So
why
not
provide
a
high
level
configuration
option
and
then
internally,
in
the
sdo
components
we
switch
from
prometheus
to
open
telemetry
or
something
like
that,
like
doug
was
saying
right,
that's
a
good
way
of
enabling
our
ecosystem
my
mind.
A
It all depends at what level you're talking about compatibility, right? Whether you're talking about reading the formats, or, actually, we publish directly in Stackdriver-native format using the OpenTelemetry APIs, which is what we're doing. But yeah, I think any of those options is possible. So I think what I'm hearing from you, Niraj, is that you think it is worth doing, and that for the implementation we should focus on making sure it's not just "hey, here's how you run a third-party utility and how you configure it".
B
So I think what we should do, then, is have an OpenTelemetry exporter available for all components, right: istiod, the istio-agent, and so on. And then just make sure that it actually satisfies Rama's use case. Even if it doesn't, I think this may end up being a fine thing to add, but we should just check whether it actually satisfies the requirements.
B
The
counter
argument
to
doing
actually
the
argument
to
doing
nothing
right.
No,
not
this
and
not
even
anything
else-
is
that
most
of
the
other
vendors
like
datadog
new
relic.
They
all
have
prometheus
integrations,
so
they
can
all
deploy
a
prometheus
scraper
with
scripts
and
then
then
sends
it
to
their
to
their
things.
C
I think there's a different problem. Think of it from a customer's point of view: they will now have to manage their own Prometheus, at scale. They are buying Datadog, but Datadog won't manage their Prometheus for this use case, right? So, as a user, you are basically getting the worst of everything.
G
How does a Datadog user scrape the Envoy metrics today? Do we provide first-class support? No.
G
Yeah, so my point is that the istio-agent and istiod are under our control, so looking at an exporter for those vendors is no problem, but that only solves maybe 20 percent of the telemetry use case. Most users still rely on the Envoy metrics, and the Envoy metrics are what's mostly important.
C
I think merging is an optimization, in my opinion, and an excellent optimization that John added, right? I would be okay with it not working, because it solves a Prometheus-specific problem. The issue is, aside from merging, you also have the fundamental problem of how you get access to the Istio metrics.
C
So,
let's
see,
how
do
you
get
access
to
sorry
onward,
metrics,
I'm
guessing.
If
for
any
istio
metrics
like
steer,
request
total,
we
will
be
able
to
create
some
sort
of
plugin
right
which
will
export
things
in
open,
telemetry
format.
C
Okay,
so
the
onward
problem
remains
right
when
that.
C
So let me help you, I think, Doug. The question here is: can we provide the Istio metrics, for all the Istio components including Envoy, in open tracing, sorry, in OpenTelemetry format, right?
A
Okay, the next topic. Is Lehman still here? She was here earlier. Are you still here?
B
She had to drop off in the second half of the call, but yeah, I think if you know the issue, you can just talk about it.
A
...you know, junk traffic into the mesh from the open internet, where the host header was set to completely random things, not matching the source of the traffic or the destination of the traffic. And so that was leading to a whole bunch of time series in Prometheus with junk service names, and that's because we're using host header fallback both at the gateway and inside the mesh.
A
You
know
because
there's
no
metadata
to
exchange,
but
still
we
needed
something
to
populate
for
service
names
and
so
the
the
I
think,
there's
a
question
that
came
out
of
that
work.
That
lehman
was
doing
about
whether
or
not
we
should
be
using
host
set
or
fallback
or
if
we
should
disable
it
for
prometheus.
A
And
so
I
want
to
sort
of
raise
that
issue
for
the
group.
So
we
can
make
a
decision.
A
Yeah
I'm
trying
to
I'm
trying
to
remember.
I
would
I
don't
remember
her
exact
setup,
but
I
remember.
B
There
is
there
a
yeah
yeah,
which
is
why
it's
not
enabled
at
the
gateway.
However,
the
the
issue
comes
into
play
if
you're
regular.
So
if
you're,
if
you're
doing
permissive
mode
and
your
even
what
we
call
just
side,
cars
are
just
accessible
from
other
places,
then
essentially
they
are
like
gateways
in
in
sense
of
the
exposure,
and-
and
I
is
that-
is
that
what
is
happening
here
because
yeah
like
peter
said,
we
turn
off
posted
or
fall
back
at
the
gateways.
A
Okay,
so
we
should
verify
that
it's
totally
off
the
gateways,
but
yes,
I
feel
like
she
was
seeing
traffic.
Yes
coming
into
side
cars,
not
from
you
know
not
traditionally
through
the
just
the
gringo's
gateway,
and
so
this
is.
B
Also
junk
traffic,
okay,
so
so
then,
in
in
that
case,
if
if
there
is
traffic
from
random
places
directly
coming
to
side
cars,
then
in
those
cases
that
deployment
should
disable
holster
fallback,
I
I
think
that
that's
a
that's
like
a
deploy,
that's
more
of
a
dock
and
deployment
type
decision,
at
least
that's
that's
what
I
feel,
because
the
assumption
was
once
you
protect
the
gateway
inside.
It's
all
controlled
traffic
and
there
you
can
have
posterior
fallback,
but
if
it's
not
controlled,
then
you
should
disable.
It.
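For reference, the deployment-level knob being discussed maps, assuming the telemetry v2 stats filter and its `configOverride` mechanism (which is how recent Istio releases expose it), to something like the following `IstioOperator` overlay. Treat the exact field paths as a sketch to verify against the current docs:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    telemetry:
      v2:
        prometheus:
          configOverride:
            # Turn off host header fallback for traffic arriving at
            # sidecars, so unmatched hosts don't mint junk time series.
            inboundSidecar:
              disable_host_header_fallback: true
```

Gateways already default to fallback disabled; the question in this discussion is whether the inbound-sidecar default should match.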
G
Do we want to revisit that decision? Or, I mean, do we want to disable it by default, or should we still just make it clearer in the documentation and ask users to pay attention?
C
I think this was added based on Jay's request at some point, for trying to show some visualization in Kiali for failed traffic, or something like that. I don't remember that use case, but I feel like I'm in your camp: we should just disable it by default.
D
I mean, if you use those configs, you should probably have a list of the allowed hosts, right? Unless you allow everything. Maybe we need to couple it to that option: if you would like to enumerate all the external addresses, then we use the host header, and if you don't enumerate them, then we just don't.
A
Okay, so it sounds like maybe we should have just a short design doc, make sure that we've covered all the bases, and then move forward. Does that sound right?
A
The other thing I added to the agenda, which maybe we can delay, is this roadmap discussion, just because I think we're supposed to present in three weeks, so I wanted to get a sense of what people think we should do for 1.11. But Niraj, did you add the Datadog tracer/metrics support item? Should we do that quickly?
C
Yeah, if we can; I hope it's brief, especially after the earlier discussion. My question was: what's the extensions working group's recommendation if someone comes and says, "I have Datadog and I want Istio to send tracing and metrics to Datadog"? Let's go with tracing first, because it looks like I've kind of gotten the answer for metrics. So, we currently provide a mesh-global and per-proxy mesh config option which enables the Datadog tracer in Envoy.
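The mesh config option mentioned here is, assuming the `ProxyConfig` tracing API, roughly the following; the agent address is an illustrative placeholder, not a recommended value:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    enableTracing: true
    # defaultConfig is the mesh-wide ProxyConfig; the same tracing
    # block can also be set per-proxy via annotation.
    defaultConfig:
      tracing:
        datadog:
          # Address of the Datadog agent the Envoy tracer reports to.
          address: "$(HOST_IP):8126"
```

This only wires Envoy's built-in Datadog tracer up; as the discussion notes, Istio is mostly passing the config through rather than owning the integration.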
C
Okay, so basically we are a config transporter at that point, but we don't know if it actually works, and we don't provide any of the customization options in Datadog either. Is that correct, or do we support customization options?
A
Not
I
mean
anything
that
anything.
That's
generic
and
envoy
like
custom
tags
or
you
know,
tag
length,
restriction
should
work
for
datadog
and
I
think
we
even
offer
the
ability,
at
least
in
the
old
style
of
providing
authentication
like
a
token
or
something
for
deadlock.
Although
it
has
a
look.
Maybe
it's
just
the
address,
but
but
the
standard
tracing
stuff
should
all
work
for
datadog.
C
Got it. So the second question, then, is: is this native Envoy integration recommended, or using the OpenCensus integration and then using the exporter? And maybe that's not a question that you have to answer, but I'm curious if you have an opinion on it.
A
For
tracing,
I
think
it's
just
native
native
envoy,
I
don't
believe
it's
with
open
census,
although
I
think
there's
nothing
stopping
you
from
using
opencensus
to
do
that.
If
you
want
it.
B
So
so
neither
I
have
a.
I
have
a
meta
meta
question.
So
are
you
specifically
only
talking
about
data
dog
or
data
dog
and
one
other
thing
or
like?
What's.
B
I think, then, we could actually take the opposite approach here, which is that we should reach out to the vendors. I mean, we would like to add tests and we would like to make sure it works, but I think the vendors are the most motivated to actually do that and to make it happen.
C
That's
totally
fair,
I
I
think
overall,
though,
as
a
community
like,
if
onward
tomorrow,
supports
new
relic,
we
will
expose
that
option
or
we
will
we're
going
to
say
no
use
the
open,
open
census
exporter,
like
I'm,
trying
to
understand
from
an
api
surface
point
of
view.
What
are
we
going
to
do
for
things
like
this.
C
That's not correct, right? We can always say, "hey, we have a new, better way of doing this; why don't you use that?" The issue is, I don't know if we have an answer like that. Does the extensions working group say, "this is the better way of doing it; if you use that, we will add docs and allow you to add tests", for example?
B
Okay,
that's
a
that's
a
fair
point,
however,
if
something
is
natively
supported
in
envoy,
exposing
it
and
then,
if
someone
wants
to
expose
it
and
make
sure
that
the
tests
work
and
and
then
kind
of
maintain
it,
I
I
still,
I
would
still
find
it
difficult
because
someone
can
say
that
hey
look
like
we
added
this
integration
code
into
envoy
and
it
is,
it
is
actually
a
better
integration
than
than
the
two-step
integration
common
integration
that
you
are
suggesting.
C
It
depends
from
what
point
of
view
it
might
be
better
for
a
vendor
who
is
a
telemetry
backend
provider.
It
might
be
worse
for
a
customer
who
might
want
to
switch
telemetry
providers
a
few
years
right,
so
they
don't
have
to
change
their
application
if
they
just
use
open
tracer,
for
example,
or
open.
A
I
would
say
the
counter
argument
there.
Madar
is:
we
have
pushed
back
on
using
the
dynamic,
open
tracing
bit
of
extension.
That
envoy
provides
right.
We've
said
we
don't
want
to
be
involved
in
trying
to
get
loading
of
the
the
driver
in
and
all
that
other
stuff.
So
we've
pushed
back
on
on
some
some
things
that
envoy
provides
before.
C
Okay, so just to summarize for tracing (sorry, I know it's taking longer than I thought): it looks like the current situation is to use the Datadog tracer, because it's natively supported; you can venture into the OpenCensus integration, but who knows how well that will work.
B
Yeah, they have Istio-specific instructions, and I hope they test them.
A
Yeah,
hopefully
those
are
up
to
date
right,
yeah,
okay,
so
we
have
well,
you
know.
Five
six
minutes
left
are
the
things
that
we
as
a
group
want
to
see
done
in
111.
Either
things
we
didn't
get
to
in
110
or
have
bubbled
up
and
have
since
become
important
that
we
should
sketch
out
here
that
I
can
sort
of
take
and
mold
into
a
more
official
roadmap.
Just
any
ideas
here,
don't
don't
hesitate
to
shout
them.
D
Are you talking about externally or internally?
B
The one that Niraj implemented in one line.
A
No, I think this is probably a pretty good list, given what I think is our constrained resourcing. And I think there's some maintenance work, in terms of tests and dashboards and things, that will bubble up. So I think this is probably a pretty good list.