From YouTube: 2021-09-15 meeting
B
C
The compliance thing, so should I run through that with the Prometheus developer community first? Because I want to make sure that, whatever the Prometheus compliance list is, I think the list is incomplete, if you ask me right now, yeah.
A
I think so too. Actually, what I would recommend is that, again, this exercise was to identify, you know, what gaps there are also in the current compliance tests, because the compliance tests right now are only for remote write.
A
Right, and so I think that the receiver tests will be additional in that list. So I would recommend at least we discuss that on our end first, in terms of this meeting, and then kind of have a more detailed discussion with the Prometheus community when we, you know, add more details to the doc. Do you think that would work?
A
Sounds good, yeah. And whenever you're ready, you know, let's kind of walk through it. We can all help, because David has been looking at it, we've been looking at it, so we can help. And then we have, you know, Brian as well as others from the Prometheus community, so that we can at least get good feedback.
C
A
All right, cool. Hi Grace, hi Ryan, thanks for joining. We were just waiting for you guys to join in.
A
Okay, are there any other items that you would like to add to the notes? And then I can get started with the collector update. So, as we had also discussed last week, what we have been doing is getting the collector to be able to declare stability for tracing on the core pipeline for OTLP, and that effort is almost, you know, complete.
A
What we are doing right now is doing weekly releases on the collector, so that, you know, the interim releases can be tested if there's anything breaking. You know, we've been making a fair bit of changes on pdata, as well as other configuration changes, and one of the things I did want to call out, because it is related to some of the work that we are doing, is the milestone which I just pointed to.
A
So that's the last part of what is pending in the collector to be trace stable, and that's really guaranteeing configuration stability for OTLP. Now, why is this related? It isn't directly related to Prometheus metrics, but we do need the collector to be stable to unblock the backlog that we have on merging PRs for any other items that we're working on, especially the Prometheus PRs that we've all been filing, right?
A
We are anticipating that we can finish these five issues in this specific configuration stability milestone by the beginning of next week. So hopefully we cut a release maybe the end of this week, if all the changes are merged, or early next week, and that's going to be, you know, a version that at least has all the code complete, and then it can sit and bake for some limited amount of time.
A
If there are any P0s found after that, we will fix them, but we're not expecting any major updates there. Anthony, did you have any other updates regarding that, or are you good?
A
Yeah, it's just a lot of stuff in progress, so again, bear with us. I know that things are very slow on the PRs being merged, and that's why I'm kind of asking you guys to, you know, note them down, so that I can make sure that they are reviewed as soon as we have the configuration, you know, components stable in the collector. The reason I highlight this is because this will also be useful for our metric stability later.
A
But, you know, once we are done working on this, then we will switch over to metrics stability. Also note this 4024 issue here as well.
A
These are the stability guarantees that we are providing initially, and of course we'll keep expanding on this list as we build out, you know, metrics. These are the trace stability guarantees.
A
The high-level guarantees that we're targeting the collector to support. David, feel free to add more details, you know, as we build out stability criteria for metrics, but that's something, again, we'll address later. It's just that I want to make sure everybody here is aware of what's happening there, and why we are sitting on PRs getting merged.
C
A
Okay, thank you. Okay, any other questions folks have? Again, this is just information for us, not really pertinent to the exact work of this group, but nonetheless we're all intertwined with the collector. It's only that all our PRs are backed up from being merged because of it.
A
Okay, that's all I had, but were there any other areas we wanted to discuss? Again, Vishwa, you brought up the, you know, area of the Prometheus receiver tests; is that something that we could actually discuss, in terms of what general tests we have in mind?
C
So, what I did: I just started working on that yesterday. As an initial pass, what I did was...
B
A
C
...all the bugs that we had in the receiver that are actually relevant to this. For example, the untyped metrics were not...
C
...handled per the requirements; you know, they should be flowing as a double gauge. So I'm just harvesting things like these to put up in a document, but I'm not sure if that document would be complete in the first pass, so I would like some feedback.
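As a rough illustration of the kind of receiver compliance check being described here, a minimal Go sketch follows. The harness pieces (convertExposition, MappedMetric, MetricTypeDoubleGauge) are hypothetical placeholders for whatever the compliance suite ends up providing, not an existing API.

    package compliance

    import "testing"

    // MetricType and MappedMetric are hypothetical stand-ins for whatever the
    // harness produces after the Prometheus receiver converts scraped data to OTLP.
    type MetricType int

    const MetricTypeDoubleGauge MetricType = iota

    type MappedMetric struct {
        Type  MetricType
        Value float64
    }

    // convertExposition is a hypothetical helper that feeds exposition-format text
    // through the receiver under test and returns the resulting metrics by name.
    func convertExposition(t *testing.T, exposition string) map[string]MappedMetric {
        t.Helper()
        // ... wire up the receiver under test here ...
        return nil
    }

    // An untyped metric (no "# TYPE" hint) is expected to come out as a double gauge.
    func TestUntypedMetricBecomesDoubleGauge(t *testing.T) {
        got := convertExposition(t, "some_legacy_metric 42\n")
        m, ok := got["some_legacy_metric"]
        if !ok {
            t.Fatal("untyped metric was dropped by the receiver")
        }
        if m.Type != MetricTypeDoubleGauge {
            t.Fatalf("untyped metric mapped to %v, want double gauge", m.Type)
        }
        if m.Value != 42 {
            t.Fatalf("value = %v, want 42", m.Value)
        }
    }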
A
Okay, so you are... I mean, there are specific categories of tests, right, that we could identify first, and then maybe also flesh out finer details. But Brian, it would be helpful to have your guidance here, because...
E
Sorry, I got a phone call. What, what exact tests do you want to know?
A
Receiver output and, you know, testing where we can verify, from a compliance standpoint, that a receiver meets the ingestion criteria for the Prometheus exposition format, as well as any other compliance requirements. And we want to add that, similar to what we did with the remote write exporter.
A
Okay, that sounds good. Again, I will also, you know, ask some of our engineers and others to kind of add more details. I think that, Carlos, if you could also ask Josh to provide some feedback on some of the criteria and the categories of, you know, testing that we could do for verifying the receiver behavior. Oh, there's Josh. Hi Josh.
A
Great. So we were just talking about the Prometheus receiver compliance tests that we could outline, and this is something, again, you know, we've been discussing with the Prometheus community, with Richard and others, to see how we can actually increase the number of compliance tests that exist today, which are primarily focused on the remote write exporter. And we were wondering, you know, what areas you would recommend, from an ingestion standpoint and a receiver behavior standpoint, for Prometheus specifically, beyond, you know, metric type validation.
D
It's a good question, though. It just caused me to bring back some memories from, like, six months ago. I should give you a more careful answer by looking into our sidecar and trying to remember some of the corner cases that we ended up studying, so I can try and bring back an answer.
A
Because I know that you had looked at some of these issues when you were implementing the sidecar implementation, and...
D
I have to page all that back into memory, but yeah, I'll take a look at that.
A
Okay, okay, cool, good. That would be great, because again, we're just collecting feedback right now, and then, you know, we'll make sure that we add these additional tests. You know, it's just easier on our end also to test compatibility, whether the receiver is functioning as expected, before we, you know, push all that data across the collector to export.
A
And is there...
A
Yeah, I think this was one of them, but this is a tracker issue. I think there is a specific issue; Vishwa, I think you had opened it. Let me see.
A
And there's a doc associated with this issue, which is basically capturing some of the requirements that we are outlining, and then we can work with the Prometheus community to kind of walk through them and see if we're missing any of the, you know, other criteria for testing. Again, the idea...
B
A
Yeah, very similar to what we did with the remote write specification and the remote write tests that, you know, finally were put together to support some of the requirements we were looking at on our end.
A
Okay. So, I think the only... I didn't have any other items on the agenda, other than, you know, asking folks to just list what is blocked for you from a PR review point of view, so that we can get these unblocked right after the collector is stable next week, hopefully.
A
Any items, Josh, that you wanted to bring up from proto?
D
So that means that now... well, I filed the issue, as I mentioned last week, to update the collector model, pdata, to include the staleness marker stuff. And then, coming around to your request for me, which is to look at the receiver compatibility stuff: we should spec out, in language that, you know, an implementer can take and use, how to handle staleness and such. That's probably one of the more important areas for us.
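For reference, Prometheus signals staleness with one specific NaN bit pattern, so a receiver cannot rely on a plain NaN check. Below is a minimal Go sketch of the detection; the constant matches the value used by prometheus/prometheus, but the helper name is illustrative, not an existing collector API.

    package staleness

    import "math"

    // Prometheus marks a series as stale by emitting a sample whose value is a NaN
    // with exactly this bit pattern (value.StaleNaN in prometheus/prometheus).
    const staleNaNBits uint64 = 0x7ff0000000000002

    // isStaleMarker reports whether a scraped sample value is the staleness marker.
    // An ordinary math.IsNaN check is not enough, because normal NaN samples must
    // not be treated as staleness.
    func isStaleMarker(v float64) bool {
        return math.Float64bits(v) == staleNaNBits
    }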
D
So I haven't followed the issue; I don't know if anyone's picked it up. It's not very hard. I'll take that, to figure out if anyone's looking at it, and if not, I'll promote the idea that someone should. This would be to get basic support in the collector and get the collector revved up to 0.10, which is necessary before we can make any of these changes.
A
Which is applicable for metrics. So I think that will be super useful as soon as we have gotten this release out. Sounds good.
B
E
A
And I think I don't have any other topics to cover today. Anybody else? Otherwise we'll obviously end early.
A
C
So, this week's collector release...?
A
Yeah, this week's release hasn't happened yet. We're hoping we can do a release late today or early tomorrow. Yeah, there are...
C
...a few PRs that we are waiting for, especially Grace's PR for untyped metrics, yeah.
A
So, Vishwa, anything that has been merged will be released. We are kind of focusing in on these configuration stability issues right now and getting those, you know, done, so we are just totally focused on getting those items done and then just releasing. And then, if it's not merged yet, it'll be out in the next release, as soon as those are merged. So it's just a question of a few days now. It's more that, I mean...
A
I know it's frustrating, because we also have a whole bunch of Prometheus PRs that we are waiting on, which we have completed on our end, and we're just waiting now for many days.
A
Any other updates, Carlos or Josh, from a metrics SDK standpoint? I know that we are moving towards stability on...
D
Yeah, we've frozen the feature, we've given it a feature freeze. There are a few pieces of SDK spec that are sort of still in flight, but even they are winding down. We are now really facing issues that are...
D
...tricky. The one that came up most recently was whether, when you have a callback-based asynchronous instrument, you're allowed to register it multiple times, and there are, like, lots of pros and cons to both directions there, and it affects the user either way. So the debate about that kind of detail is finally coming up, and I think that's good, because it's not a structural issue; it's not really a data model question.
D
It's really, like, a user interaction question, and you can find good answers in both directions, and that's an interesting kind of debate. It seems that we have entered the final stretch. You know, if we built what we have specced out, and had three or four of them, and things went well, I think we'd be more or less ready to call it. I'm sure we'll find and discover new issues on the way to 1.0, but we've definitely stopped making changes and are focusing on details.
A
Okay, I mean, that's good to know. Again, so I have a related question, Josh, about the Prometheus pull exporters that exist in some of the language libraries today. And again, you know, there has been discussion earlier also of whether those pull exporters should be supported, as well as whether remote...
A
Right
exporters
should
be
supported
at
the
language
level,
or
should
they
actually
just
you
know,
talk
pure
otlp
and
then
just
use
the
collector
to
use
the
you
know,
pull
exporter
or
the
push
exporter
for
prometheus
from
there
right.
So
what
again?
I
was
curious
about
people's
opinions
here,
because
some
of
the
maintainers
definitely
have
recommended
that
we
kind
of
remove
at
least
the
remote
right
exporters
from
all
the
language
implementations,
because
it
doesn't
make
sense
for
us
to
kind
of
have
that
you
know
remote
right
implementation
on
language,
scks.
D
Yeah, I think I agree. If I had known what I know today a year and a half ago, I probably would have discouraged us from ever writing those remote write exporters in the SDKs. And it's not to say that's a terrible idea or a wrong idea; it's just that it's pushing up against this convention that if you want to push data, you have a third party do it for you, because then you can observe staleness and so on. So I would definitely support removing the PRW exporters.
D
I mean, you can keep them in contrib, and I think there's a valid use, but I would prefer to see people move towards OTLP export. It sort of does the same thing in a more accepted way, I guess, since I don't think anyone's ever going to really like the idea that you're pushing PRW from a client library, at least not in this decade or something like that. All right.
D
So then, as for the Prometheus receiver... I'm sorry, the pull exporter being supported in all the SDKs: it seems like it's worth doing. I always thought we would, because it helps an incremental deployment for a user who would like to begin using the OTel collector alongside an existing deployment. So perhaps, to help migrate from Prometheus server to the OTel...
D
...collector, you'd want to have both working at the same time, and then you could deploy your SDKs with both the pull exporter and OTLP push, and then you can configure the collector to do the same and eventually turn down the pull export. It's also not very much complexity, in my opinion, to implement the pull export for Prometheus.
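A minimal sketch of the dual setup being described: one SDK MeterProvider feeding both a Prometheus pull endpoint and an OTLP push exporter during migration. It assumes the current go.opentelemetry.io Go SDK and exporter packages; those APIs have shifted over time, so treat this as illustrative wiring rather than a recommended recipe.

    package main

    import (
        "context"
        "log"
        "net/http"

        "github.com/prometheus/client_golang/prometheus/promhttp"
        "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
        otelprom "go.opentelemetry.io/otel/exporters/prometheus"
        sdkmetric "go.opentelemetry.io/otel/sdk/metric"
    )

    func main() {
        ctx := context.Background()

        // Pull side: a Prometheus exporter that an existing Prometheus server
        // (or the collector's prometheus receiver) can keep scraping.
        promExporter, err := otelprom.New() // uses the default registry by default
        if err != nil {
            log.Fatal(err)
        }

        // Push side: an OTLP exporter pointed at the collector.
        otlpExporter, err := otlpmetricgrpc.New(ctx)
        if err != nil {
            log.Fatal(err)
        }

        // One MeterProvider feeding both readers, so the same instruments are
        // visible over the scrape endpoint and over OTLP during the migration.
        provider := sdkmetric.NewMeterProvider(
            sdkmetric.WithReader(promExporter),
            sdkmetric.WithReader(sdkmetric.NewPeriodicReader(otlpExporter)),
        )
        defer func() { _ = provider.Shutdown(ctx) }()

        // Serve /metrics for the pull path; turn this down once the OTLP path
        // is fully adopted.
        http.Handle("/metrics", promhttp.Handler())
        log.Fatal(http.ListenAndServe(":2223", nil))
    }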
A
Right, right. It's a lot simpler, and again, from a usage point of view, I think that the main use case for the PRW exporter has been through the collector; it has not been, you know, through the SDKs, for sure, yeah. So, I mean, again, I just want to get, you know, use cases. Vishwa, you know, from the cases you guys are looking at, or David, what is your take?
D
And I guess, like, if we went back 18 months, the thinking was: we have a back end that accepts PRW, it's Cortex-compatible or something like that, and I just want to send all my data to that back end. And either that means a server pulling from the clients and then turning it into PRW, or it means the clients pushing PRW. And what we discovered, and I didn't know all this back then, is that you lose a lot of the semantic nature of the data, the way Prometheus describes it.
D
A
Totally, I agree. Agreed. David, any thoughts?
B
A
Okay, but do you use the pull exporters at all, in any use cases?
B
I mean, yeah. We, for example, in our GKE pipelines, use this receiver extensively, because it's baked into Kubernetes, right.
B
B
C
Then we have our custom conversion, you know, from OTLP to our custom store, and then Cortex, you know, to query the metrics back from the custom store by converting into raw Prometheus time series again. Okay, yeah, so it's very similar to what Google, or David, was actually saying: we don't use the remote write exporter.
A
I
see
I
see
I
mean
we,
we
of
course
on
our
end,
use
the
remote
right
export
heavily
because
of
having
a
cortex
based
service
endpoint.
So
but
that's
from
the
collector
that
we
are
seeing
that
you
said
you're
not
seeing
as
much
usage
from
the
sdks
so
again,
just
in
terms
of
use
cases,
I
think
that
having
the
remote
right
exporter
and
the
sdks
actually
at
this
point
I
think,
is
not
seeing
much
usage.
C
Yeah, and on that, are there any plans for push gateway scenarios through the collector?
A
Can you be more specific, Krishna? What do you mean?
C
So there are, you know, jobs, you know, that actually come up, do something, and then they push the metrics, you know, to the push gateway, and then they die, right?
A
I mean, that's a good question, because I think the last discussion I remember on this topic specifically was that that was functionality that needed to be explored on the Prometheus server side, and that was something that, you know, we'd have to work on and ask the Prometheus community about directly, to see, you know, what their plans were to be able to support remote write or push gateways.
A
I mean, as far as I know, I haven't seen any plans, but maybe Brian or others, maybe you guys have a better idea.
D
I mean, I think... I've been getting this question a few times. Prometheus remote write is not very different from OTLP push; there's not a great deal of difference there, it's very close. But one very significant difference is the way they're expected to be used, and PRW is meant to be a second leg in metrics collection. So that's not specced; the Prometheus system just doesn't use it for reporting directly from clients.
D
D
But if you push your own 'up' variable, you are lying in some sense, and that, like, breaks the model. And so I think, probably, when the dust settles on all this OpenTelemetry and OTLP and Prometheus stuff, there's an interest in having clients push metrics data and then get it joined with resources. The service discovery is the big part of what Prometheus does, and that we can't escape.
D
We have a lot of interest in this too, you know, just because we are maintaining services that receive...
D
...you know, metrics data from platforms like AWS, and we get this OTLP stream, and then we have to go reach out and find all the resources and do this joining ourselves. If there were conventions, and it could be done automatically by OTel, it would save us work, but it's, I think, too early to try and standardize.
A
Yeah, I think also some of the work that we have discussed around, you know, redesigning the metrics processors in the collector will actually help address some of this lack of, you know, richness, the semantic information that is not attached to the metrics at all today. Some of it can obviously be addressed there, but I think there is more spec work also that needs to be done in order to clearly be able to... yeah.
D
A
Exactly, exactly. So that may be an area that, you know, we will definitely look at digging into as we make progress, because I agree with you, Josh, that, you know, ideally, if you have metrics and you have, you know, the semantic information available with them, then it makes for a far richer experience for any endpoint to be able to take that and, you know, build upon it and process it further, whether you're applying anomaly detection algorithms, or correlating, or whatever, right? So, yeah.
D
We did a hack day project here last spring where we just took the Prometheus server and chopped it in half and sent all the service discovery data, you know, through a pipeline, and then, you know, you build up a simple map inside one of those processors that says, for any resource ID, here are all of your other attributes. And essentially that's not much more than we're asking for here, but doing that in a semantically specified way is way harder than doing it for a hack project.
A
Yep, it definitely requires a lot more work on the protocol, actually on the spec itself, the data model. So, yeah.
D
I feel like we put in some work in the data model to allow it to happen, but it still requires all kinds of semantic conventions to get off the ground. So the thing that's much harder in a push-based world, and that's not hard essentially in a pull-based world, is that you're asked to do a join at a moment in time that you essentially control as the Prometheus server: I know my service discovery state, I'm going to snapshot metric state, and I'm going to join them. In a push-based world...
D
...you have two timelines: you have the service discovery state timeline and you have the metrics timeline, and the idea is that you'd like to join those in a temporally correct way; otherwise you get a different result. So it means that if I delay my metrics by 10 minutes, I want to join them with 10-minute-old service discovery state, and that requires a time-based lookup structure, which I think we've put enough spec into the data model to do, but it's just a hard programming problem at that point.
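A minimal Go sketch of that time-based lookup: keep a per-resource history of service discovery snapshots and join a delayed metric point against whichever snapshot was in effect at the point's timestamp. The types and method names are illustrative, not an existing API.

    package sdjoin

    import (
        "sort"
        "time"
    )

    // sdSnapshot is one observation of a target's service discovery attributes,
    // valid from validFrom until a later snapshot supersedes it.
    type sdSnapshot struct {
        validFrom time.Time
        attrs     map[string]string
    }

    // sdHistory holds the snapshots for a single resource, ordered by validFrom
    // (oldest first).
    type sdHistory struct {
        snapshots []sdSnapshot
    }

    // record appends a new snapshot; callers are expected to append in time order.
    func (h *sdHistory) record(at time.Time, attrs map[string]string) {
        h.snapshots = append(h.snapshots, sdSnapshot{validFrom: at, attrs: attrs})
    }

    // attrsAt returns the attributes that were in effect at t, so a metric point
    // delayed by ten minutes gets joined with the ten-minute-old discovery state
    // rather than with whatever is current now.
    func (h *sdHistory) attrsAt(t time.Time) (map[string]string, bool) {
        i := sort.Search(len(h.snapshots), func(i int) bool {
            return h.snapshots[i].validFrom.After(t)
        })
        if i == 0 {
            return nil, false // no snapshot old enough to cover t
        }
        return h.snapshots[i-1].attrs, true
    }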
D
You end up having to spec out more details than you even wanted to when you start. So the question is: okay, I'm now sending 10-minute-old data, but it's a revision to 10-minute-old data, because, like, my service discovery data was delayed as well. So I want to be able to rewrite metrics data; as soon as I start doing this join, I want to rewrite data, and that's just asking for another leap forward in the data model.
A
Yeah, yeah, great, agreed. This is very good. I think we should just... I'll create an issue to track this, because I do think that we'll come back to this later, and, Josh, you know, we can kind of work on this area.
D
It's not actually how we're split; some of it is teams, and we have an OTel team, which I'm on right now. We also have another team that handles this type of question, you know, like how all those services are going to be joined at the ingress point, that would love to talk about this.
D
So I can, I can bring in more folks too.
D
A
Okay, I think then, let's call it a day, and we can end and give back 20 minutes to everybody. Thanks, everyone, thanks.