From YouTube: Ceph Developer Monthly 2022-02-02
B: Hey, I think I can summarize a bit. This is related to the testing that Paul has been doing on scale-up. You have the details in the linked document, but he has documented fairly thoroughly the scalability issues he has been facing, and apparently most of them come from the mgr Prometheus exporter, and specifically from the perf counters. He reported something like six hundred thousand different metrics per scrape interval for two thousand OSDs.

B: So that's huge, and it's causing a lot of issues. That would be the intro to the main problem. Apart from those two ideas, he's also proposing some shorter-term fixes, like creating an allow-list of the metrics that we really want to export from the mgr/prometheus module — so not exporting every metric, but only the ones that are actually used in Grafana and Alertmanager, basically. And he was also going to share this in today's meeting.
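To make that allow-list idea concrete, here is a minimal sketch — not the actual mgr/prometheus code — of filtering exported samples against a fixed allow-list. The metric names and the collect_all_samples() helper are hypothetical placeholders.

```python
# Hypothetical sketch of an allow-list filter for exported metrics.
# Metric names and collect_all_samples() are placeholders, not Ceph APIs.

ALLOWED_METRICS = {
    # metrics actually referenced by the Grafana dashboards / alert rules
    "ceph_osd_op_r_latency_sum",
    "ceph_osd_op_w_latency_sum",
    "ceph_bluestore_commit_lat_sum",
}

def collect_all_samples():
    """Placeholder for whatever produces (name, labels, value) tuples."""
    return [
        ("ceph_osd_op_r_latency_sum", {"ceph_daemon": "osd.0"}, 12.5),
        ("ceph_osd_recovery_bytes", {"ceph_daemon": "osd.0"}, 1024.0),
    ]

def export_filtered():
    # Drop everything that no dashboard or alert actually consumes.
    return [s for s in collect_all_samples() if s[0] in ALLOWED_METRICS]

if __name__ == "__main__":
    for name, labels, value in export_filtered():
        print(name, labels, value)
```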
A: Okay, thanks Ernesto. So, let's see — there are roughly three different proposed approaches. Maybe I'm conflating a couple of them, but they revolve around separating the metrics exporting out into separate places, so as to send the metrics directly from where they're being collected to Prometheus, bypassing the manager — possibly doing that within each daemon, or possibly with one daemon per host — and the third idea would be to continue doing it through the manager but trying to scale the manager out more.

B: I think the initial idea was to put this in every daemon, right — in every Ceph service or daemon, a modular subcomponent that would basically export this. Paul mentioned that he was talking with Casey, and they raised concerns that adding this kind of logic — basically an HTTP endpoint — might be a risk, security-wise.

A: I guess the question that comes to my mind around that idea would be: how do you know that you're going to be able to deploy it on every single host where things are running, and how do you set up the communication between all the processes on that host?

B: Yeah, that might be trickier. I think I remember Sebastian saying that they talked about that in the past — how to do the discovery, service discovery or some kind of discovery — I'm not sure if it was for rbd-mirror or something, but I remember it being mentioned. Is anyone here who knows about that conversation, about how to discover the exporter endpoints in case we go with this approach?
D: I guess, just for discovering the RGW endpoints — I think when we talked about it, it was going to be cephadm's job to configure that, depending on how we did it: whether we wanted to have it directly on the RGW, in which case we'd configure Prometheus to know where to look, or whether cephadm had its own thing scraping them, in which case it was going to configure whatever that is. So I think cephadm was going to handle it.
F: I just want to say it's not necessarily the amount of data that gets returned; it also takes time to collect the data, and Paul has looked into how the different parts of the collection are done and how much time they take, and there were some odd results.

F: He put it roughly like this: those parts would be victims rather than the cause of the issue. Parts of the manager took longer to collect the data than was reasonable, probably due to some locking inside the manager, which is probably already an issue in itself. So it's not necessarily only the amount of data that is gathered, but that the locking is so heavy that it could also affect other manager modules and cause issues on its own for the manager.
B: I think I can share my screen to display this doc that Paul has been working on. It's basically a chart of the scaling of the cluster and how the number of metrics grows with it. Let me try that.

B: Yep, that's the number of OSDs versus the number of metrics — how the number of metrics increases. As you can see, in this worst case it's more than 600,000 — not metrics exactly, but samples, which is what the perf counters can generate per scrape interval across, say, 2,000 OSDs. So that explains the amount of data. I think he later disabled all the perf counters —

B: — and we got just 30,000 metrics instead, so most of them are perf-counter related. He then enabled just the ones — he ran an allow filter to only let through the metrics that are actually used in the dashboards — and that accounted for something like eighty-odd thousand. So that's still a long way from the 600,000.
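As a rough sanity check of those figures — assuming a round number of roughly 300 perf-counter samples per OSD, which is an assumed order of magnitude rather than a number from Paul's document:

```python
# Back-of-the-envelope check of the numbers quoted above (assumed figures).
osds = 2000
samples_per_osd = 300            # assumed order of magnitude per OSD
total = osds * samples_per_osd   # ~600,000 samples per scrape interval
print(total)                     # 600000

# The test reportedly dropped to ~30,000 samples with perf counters disabled,
# and to roughly 80,000 with an allow-list of only dashboard-used metrics.
```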
F: By default we have most of them enabled but unused, and that causes issues. If we cherry-pick only those which we really need, it scales to, let me see, around four thousand OSDs.

F: Easily — well, probably not so easily. I've seen a chart where it was scaled up to 8,000 OSDs, I guess, but that was pretty much the limit for a single Prometheus instance, even though the perf counters were reduced.

H: We're trying to report on replication activity per volume or per bucket, and other things along those lines.

H: I missed the first few minutes of the call, but regarding the proposition — are we rejecting the proposition, which I thought had been agreed by consensus in previous meetings with Jason, that routing all metrics through the Ceph manager is not a tolerable topology? Because I don't think it is tolerable.
F: Yes — what we see here on the screen shared by Ernesto is a solution we can implement in the Ceph manager — sorry, in the mgr Prometheus module — and that will help a lot. But there will be a point where it might not be sufficient anymore, considering that the whole data set is going through one point; that's basically the bottleneck.

I: I think Paul's intent was that there are two problems. One is the ability to scale through a single manager instance, and the other is the fact that we have all these metrics and a lot of them aren't even needed. So he's showing two different things, and I think we have to fix both problems in the long run. That's how I would interpret my conversations with him.
A: Yeah, I believe the same, Matt. I don't think anybody's disputing that one Python process is not going to be able to handle a large scale of metrics; we all agree on that aspect of things. I think there are three different ideas outlined in the wiki, one of which would be to scale the manager out, to have active-active managers.

J: For anyone else — is there anything that prevents the manager from handling metric collection at different frequencies? Like, if there's data that we only want rarely — say every hour, or every couple of hours — is there anything that prevents us from doing that right now?

A: No, that's another option; it's an interesting thing to think about. I'm not sure if Paul's listening, but it might be a good idea to look at that in terms of individual metrics — there are probably a lot of counters that we don't necessarily need all the time. But a lot of these are things you would want in a modern monitoring system, where you need to access most of the system at a pretty frequent rate as well. So I'm not sure every hour would be sufficient for a lot of things.
K: It would just be a queue — a first-come, first-served queue — and then they'd be reported in order. The period would be relatively constant, but it might be, you know, seven point three minutes or something; it might just depend on the number of metrics to report.

F: That would mean having two of them, one with a higher and one with a lower priority, so that Prometheus could be configured accordingly — because Prometheus pulls the data from the exporters; it is not pushed to Prometheus.

F: So either we would be very selective about having a metric in the snapshot of the exporter, or we would need separate exporters, for Prometheus to be able to scrape them with different intervals.
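One way to picture that "separate exporters per scrape interval" idea is the following sketch using the Python prometheus_client library: two registries served on two ports, so a Prometheus server could be configured to scrape the high-priority endpoint frequently and the low-priority one rarely. The ports and metric names are arbitrary examples, not an agreed design.

```python
# Sketch only (assumed design, not Ceph code): two exporter endpoints so that
# Prometheus can scrape high- and low-priority metrics at different intervals.
import time
from prometheus_client import CollectorRegistry, Gauge, start_http_server

high_prio = CollectorRegistry()
low_prio = CollectorRegistry()

# Hypothetical example metrics.
osd_up = Gauge("ceph_osd_up", "OSD up/down state", registry=high_prio)
pool_bytes = Gauge("ceph_pool_bytes_used", "Pool bytes used", registry=low_prio)

start_http_server(8001, registry=high_prio)  # scraped often (e.g. every 15s)
start_http_server(8002, registry=low_prio)   # scraped rarely (e.g. every 5m)

while True:
    osd_up.set(1)
    pool_bytes.set(123456789)
    time.sleep(5)
```

The scrape intervals themselves would still live in the Prometheus server's configuration, one scrape job per endpoint.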
J: Yeah, you'd have to adopt graphing methods that assume the possibility of non-constant intervals, right.

M: Yeah, so the wording in this wiki document seems to be specific to rbd-mirror, cephfs-mirror and RGW.

M: So it's not clear to me if we're talking about perf counters everywhere, including OSDs and so on, or if we're just talking about replication metrics. But something I recall from discussions around RGW's multi-site metrics is that we want very fine-grained stuff, like stats per bucket, and with Ceph's perf counters all of the keys are hard-coded, so we have no way to have dynamic per-bucket metrics through perf counters.
N: All right, I'll just go ahead. I think this ties into the topology question, because I'm not sure why the case where each individual daemon would set up an HTTP endpoint and report metrics by itself got scrapped — that was the approach I thought was agreed upon a long time ago. With that approach we could cut out the perf counters entirely, right — the daemon can do —

N: The daemon has all the flexibility; it can do basically everything that Prometheus allows. We don't need to first package the data that we want to export into the perf counters framework and then get those perf counters sent over the network — whether that's, as it is today, to the manager or, as this proposal seems to suggest, to a local daemon which would then unpack it.

N: So I think whether the perf counters need to be extended to be more dynamic depends entirely on which of these — well, originally —
H: — if we wanted that. But everything Casey said is correct: people are asking us — RGW, and I think it's true of the other replication services too — to send more things, but more importantly to send more structured data, sparse data I think, unfortunately — replication counters per bucket being one example. We've talked to the Prometheus experts, including Paul, about that, and they've said, well, there are Prometheus data structures, data types, that probably cover that space. But yeah, we wouldn't need to put them into the perf counters.

H: We wouldn't even — even the original notion of a perf counter wouldn't necessarily need to be expanded. But of course the individual daemon — RGW in this case — would need to track that so it could present it to an exporter. But the idea that came up, as I understand it — and I don't know whether it has changed since —

H: — raised a couple of warning bells, and so, to increase security, you wanted to separate the job of the exporter from the job of the daemon. And then Paul's proposal was: well, the sidecar can be a component on the node which absorbs the counter data and then exports it in a Prometheus-compatible style.

H: Well — all the cards on the table, I was in a different camp: that's Casey's view, and apparently that's your view too. My position would have been that it is possible to safely have the exporter in an OSD and gate access to it to trusted parties.
E: I have a suggestion. A lot of this clearly stems from the Ceph manager module needing to re-encode all of the performance data that the manager is collecting from the reports it receives, and then organize that into data which can be queried through the command-line interface. That's clearly non-scalable, and the core issue here, I believe.

E: So my suggestion is that one option we could explore is having the C++ side of the manager store all this data in RADOS — probably on libcephsqlite — and there exist methods for having Prometheus scrape this data out of the SQLite database.

E: Well, I think the convenient part about this, Matt, is that we can separate the exporting of the data out from the manager, and storing it in SQL allows us to do some natural things, like building queries which Prometheus would then be able to use.

E: I just did a quick Google search and pasted a link in the chat for one such project that exports various SQL databases to Prometheus, and that should work with SQLite. That would be an option: the manager collects all the data, stores it in RADOS, and then lets Prometheus itself scrape it out whenever is convenient for it. We wouldn't even have any communication going on between the manager and Prometheus.
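As an illustration of that idea — and only an illustration, not the project linked in the chat — a generic scraper could read perf-counter rows out of a SQLite database and render them in the Prometheus text format. The perf_counters(daemon, metric, value) table is a hypothetical schema.

```python
# Illustration only: read perf-counter rows from a SQLite database and expose
# them as Prometheus gauges. The perf_counters(daemon, metric, value) schema is
# a made-up example, not an actual Ceph or exporter schema.
import sqlite3
from prometheus_client import CollectorRegistry, Gauge, generate_latest

def scrape(db_path="ceph_perf.db"):
    registry = CollectorRegistry()
    gauges = {}
    con = sqlite3.connect(db_path)
    for daemon, metric, value in con.execute(
            "SELECT daemon, metric, value FROM perf_counters"):
        if metric not in gauges:
            gauges[metric] = Gauge(metric, "perf counter",
                                   ["ceph_daemon"], registry=registry)
        gauges[metric].labels(ceph_daemon=daemon).set(value)
    con.close()
    return generate_latest(registry)  # Prometheus text exposition format

if __name__ == "__main__":
    print(scrape().decode())
```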
H: — which means, if you've changed — moved — the topology to speed it up: that was the original theory of what the scalability problem was. But then you moved on to the Python problem — the Python GIL executor problem — which I think is real, and this solves that. But no — it also up-levels the model, the information model, to one where the data is indexed in stable storage.

H: It's cool — that's more powerful, and you can build all kinds of interesting tools with it — but for me Prometheus then ends up kind of mirroring data that's already stable in RADOS. It changes the semantics quite a lot by doing that.

H: I was trying to say the same thing: it's turning ephemeral data that Prometheus stabilizes into data stabilized in RADOS.
A: Patrick, regardless of the persistence piece, which I think is a separate issue — what are you thinking about in terms of moving the processing, to make it more efficient, into the C++ code rather than the Python code? And do you have an idea of —

E: So if we organized all the performance data in C++ structures and then allowed Python to call into the C++ code to get whatever it needs to satisfy the CLI command, that would potentially be more efficient. How much so, I don't know. There's also the complexity of the Prometheus manager module setting up an HTTP server.

E: But I would — I mean, the double persistence sounds like a drawback, but I think there's a lot of utility in keeping the perf counters for all of our daemons in some central location that's queryable. This would have wide utility beyond Prometheus — for other manager modules too.
C: One thing I wanted to add: isn't this what we're trying to achieve with tracing and OpenTelemetry? We are deploying a sidecar, and then with the trace we can kind of plug in the metrics that we are interested in, in the form of logs, and this could be stored on the local host where our daemons are deployed, in a collector daemon, and then from these —

C: — those daemons can then export the metrics to Prometheus. The blog I linked describes something similar, and I think from previous discussions we learned that, yeah, OpenTelemetry basically creates a local-HTTP-server kind of architecture.

C: So would that suffice for our use case here, or are there some loopholes?
P: I would not try to force the tracing mechanism to be used for stats collection.

Q: I was saying that, actually, between a metric and a trace there is a link that you can have, which is called an exemplar, in the Prometheus exporting format — which is OpenMetrics — and it would be nice if the daemon spoke this format directly, because then we could have all these native performance data structures and we could easily add those links to traces.
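For reference, an exemplar is just an extra annotation on a sample that points at a trace. A minimal sketch with the Python prometheus_client follows; the counter name and trace id are made up, and exemplars only appear in the OpenMetrics exposition format.

```python
# Sketch: attach an exemplar (a metric-to-trace link) to a counter sample.
from prometheus_client import CollectorRegistry, Counter
from prometheus_client.openmetrics.exposition import generate_latest

registry = CollectorRegistry()
puts = Counter("rgw_put_ops", "PUT operations handled", registry=registry)

# Record an operation and point the sample at the trace that produced it.
puts.inc(1, exemplar={"trace_id": "4bf92f3577b34da6a3ce929d0e0e4736"})

# The OpenMetrics output contains a line roughly like:
#   rgw_put_ops_total 1.0 # {trace_id="4bf92f..."} 1.0 <timestamp>
print(generate_latest(registry).decode())
```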
O: Yeah, but we should separate tracing from metrics. The metrics are connected to alerting and other things, and usually they don't need to be as fine-grained as tracing; we usually leave them on, whereas tracing you can usually switch off. But on the idea of this exporter — it could be a per-daemon exporter, and we can think about the topology — in any case, like in Kubernetes, usually there are lots of exporters; they don't use a centralized exporter.

O: They have node_exporter, which takes the OS performance counters and stats and exports them, plus per-daemon exporters — different services have different exporters — and you don't have a centralized point, because basically Prometheus becomes the centralized point.
F: Yes, because it makes sense for Prometheus to have specialized exporters. It is also easier to scale Prometheus that way: you can eventually use one Prometheus instance to scrape all node exporters and other instances to scrape all OSD exporters, if I may call them that, and then RGW and whatever — and that way there would be no bottleneck, no single bottleneck, no single service. And, if I may add, I don't think the Ceph exporter has a performance issue because it runs Python and Python is so slow; I do agree that there's a locking issue.

F: What the Prometheus exporter does is only request the data from Ceph on all the different nodes — I don't know exactly how the data is gathered internally — and then it receives the data and it doesn't do a lot of processing; it just outputs it. So it's not very CPU-intensive; the problem is likely the locking.
I: And, well — every five seconds sending 54 megabytes of data to be processed by Prometheus is a lot, independent of anything else. That's what we've been seeing at scale — I don't know if Ernesto mentioned it earlier — and that's a lot of data every five seconds. So going through one point to get all of that, versus multiple points: multiple points are definitely going to scale better. You've got to go to some type of distributed architecture for the scraping.

I: We also have a reduction in the data that we need to do, and I think we need more PoCs. I think the interval thing is probably more of an enhancement as we fix all these other problems and move forward — but yeah, that's a good feature to have as well; couldn't agree more. We've got to figure out what that means and how it works, and I think a PoC is probably required for that as well.
A: Well, for the interval part — if it's controlled per exporter, that suggests it might be more useful to go with the per-daemon approach, if you wanted to have separate exporters, rather than per host, so that you could have pretty fine-grained control over what is being exported.

I: But even from a per-daemon perspective you might have different intervals for different metrics that you're collecting on a per-daemon basis. So I think we've got to do a little more investigation there, personally.

A: It may be, like Patrick's suggesting, that some of the overhead is in the processing — making everything go through JSON, or through Python, to transform it into a textual representation — and that could be one bottleneck whose removal gives us the next level of scalability. It's not clear to me whether that would be sufficient for supporting, say, an 8,000-node cluster with the current level of metrics, but I think it might be another avenue worth investigating in the short term.
I: It seems to me that 54 megabytes every five seconds for a 3,800-OSD cluster is a lot. I don't know — I guess I just view it differently.

H: I liked it when Jason first proposed it, I guess, but it was the topology change. I think it has a couple of discrete goods, and I think the proposals have changed — I think it's compatible with different ways of embedding the different exporters.

H: Whether it should be co-located with the OSDs or the RGWs, or run separately — there are legitimate arguments either way, but it works the same in both cases. The approach moves the data between the source, where it originates, and the targets, where it wants to be. It lets us arbitrarily scale, and the end-to-end relation cardinalities don't require any further software changes in the Ceph system to do so.
A: Yeah, I agree, Matt. I think we definitely need to do the scaling-out approach longer term. I was suggesting that the existing Python implementation could potentially be optimized for, for example, stable releases.

F: Yes — what Prometheus does, in simple terms, is an HTTP GET, and it fetches everything that the exporter returns, and the exporter is supposed to return all the data for the instant at which it is requested.

J: That could get us an improvement with compression.
K: That is why all this stuff is moving out of the mon. It can be scary, though — PGs can be shown as unknown. That's the main thing I've noticed with a slow manager: PGs show as unknown, and then state transitions are not updated quickly, if someone's watching ceph status, for example.

N: Yeah, well — the other thing is that the manager, even though it was supposed to be something that can fail and isn't necessarily always available —

N: — there are more and more things being added to it. One example, on the RBD side, is the deferred deletion of RBD images — the purging of the RBD trash.

N: I think the same, or something similar, goes for CephFS, where the entirety of subvolume management lives in the manager. So when a Kubernetes user requests a volume, that goes to the manager, and if the manager is down, then as far as the Kubernetes user is concerned nothing works — or at least new PVCs won't bind, and things like that. So the manager is becoming —

N: — more and more of a critical component, one that we cannot really allow to be slow; I guess that's the point. And that was one of the core observations behind Jason's proposal to move away from it — not just looking into short-term things that we can do.
N: There are probably a lot of things that can be done to optimize the existing Prometheus module based on CherryPy, whether it's moving more stuff into C++ or just drastically cutting down the number of perf counters that get translated into plain text and then exported.

N: That has to be discussed, but I think we should not conflate the short-term things that can be done with the long-term goal of basically getting rid of the Prometheus module in its current form. We would likely still need something in the manager —

N: — to list out all the endpoints, things like that, or perhaps some other auxiliary functionality. But the metrics themselves should really not go through the manager, because it has other responsibilities now. I guess that's the best way to put it.
J: Ilya, I've always been a little suspicious and worried that the manager has turned into the dumping ground for other things that we didn't want in the mon, and it's now almost becoming its own critical piece of infrastructure for clusters.

N: The being-based-on-Python part is not a problem as long as it's really about management — for example, creating a CephFS subvolume, or marking an RBD image for deletion and managing that deletion. That's totally fine, and there's nothing wrong with using Python for it.

N: But shoving dozens of megabytes per second through the Python interpreter — it can probably be optimized, but it's just not a good idea, especially given that there is no upper bound on the amount of data we're talking about, which right now is all of the data that goes through the exporter.
N: So, talking about measures — about priorities and whether we can maybe just export less — yeah, in the short term that may bear some fruit, but longer term we will be back to 50 megabytes, or 100 megabytes, just because there will be a lot more metrics that we want to make available — that we are being pressured to make available.

J: Yeah, I think what I was trying to get at is that it feels like we've almost recreated the exact same problem we had in the mons, now in the manager, right? We have these critical things that need to happen, that are impacted by the metrics data we're collecting, and we're right back to square one again.
N: Yeah, I've been saying that all along — the manager is becoming almost a monitor. At least if you consider Kubernetes environments, there are a lot of things —

N: — [inaudible] right now that depend on the manager. And not only things like image and volume management, but also disaster recovery: the entirety of snapshot scheduling for snapshot-based RBD mirroring, and for CephFS mirroring, which is always snapshot-based — that also lives in the manager.

N: From the point of view of the end user, while it's not strictly in the data path, from their point of view it almost is, because if the manager is down, your data stops replicating to the secondary cluster — if you've set up disaster recovery, like regional DR, in —
B: I would just like to add that, regardless of the final decision on whether to move this to a separate exporter or not, there are at least six modules that consume the perf counter mgr API — just for everyone to keep that in mind. With that said, I'm not sure whether what we are proposing here is that all these modules should scrape the data from Prometheus instead of from the manager API, or whether we still plan to leave this API available in the —

B: And additionally, I'm a bit fearful about the situation we are facing, because we currently have a single point of failure, which is the manager, but we are moving the same amount of metrics to different components that are going to generate not only that traffic but extra traffic, right? And that's going to be routed to another single point of failure, which is a single Prometheus instance — unless we are planning to vertically or horizontally scale that.

B: Yeah, but I guess it will be harder — I mean, we don't know exactly what the traffic and the throughput of metrics we're generating is, but if we distribute this across 2,000 OSDs plus all the other daemons, it will be harder to know where the traffic is coming from. And so it might be more scalable.
A: Yeah, I think at scale, regardless of the strategy used, we're going to be sending lots more data — from the mirroring metrics and from all the different components — in addition to the current data set. But keep in mind, we all need to think about scaling Prometheus itself; it's going to be in the mix regardless.

B: Yeah, I just grepped, and I saw basically all the metrics-related modules: the influx one, telegraf and the restful one — probably we could deprecate those — and then the dashboard and Prometheus ones, which are the big ones.

B: The dashboard displays the metrics on demand, so you only see them if you access a given daemon's details; it is not periodically prefetching the metrics like the Prometheus module. So it's easier, at least, to identify spikes in load due to that.
N: Yeah, I think we talked about the dashboard needing to display metrics when we were discussing the moving-out-to-the-individual-daemons approach, and I think our agreement, as far as I remember, was that since Grafana can basically be pointed at the same data stream, the dashboard would just have an embedded instance that would handle the scraping the same way a Prometheus server would. I'm not sure, like —

N: This was actually seen as a step forward, because today we have the same perf counter data being delivered to the manager and then interpreted by the dashboard and by the Prometheus exporter separately, and the idea was that here we would have —

N: — a single set of data and a single viewpoint, because there wouldn't be two different pieces of code that are essentially parsing out the same stuff. Again, I've only been tangentially involved in those conversations, but I think that was the idea.
F: On the other hand, for the Ceph dashboard I'm not sure how detailed the queried data can be, because the dashboard does not require as much data as Prometheus does — it doesn't collect it at all —

F: — it just requests it to be displayed.

F: The other thing is, I do know that different monitoring systems — and we have, for instance, the Zabbix module in the manager — have different models. So not every monitoring system pulls the data; some expect it to be pushed. But I think most do it over HTTP. So if the data can be queried from Prometheus, which is an HTTP server, it could be provided in a different format and it could even be pushed to the different monitoring solutions.
A: You're suggesting we could have something like a manager API to access the metrics, for the different modules that need to look at them — the dashboard, or maybe telemetry, or maybe other modules in the future that might want to analyze performance data from the cluster — and they would go to Prometheus, or potentially other backends, if folks wanted to implement other sorts of monitoring.

F: Not only because of storage, but also because of scaling: we'd use several Prometheus instances to scrape different types of exporters, and do that redundantly, so that they would be distributed. That is actually the way Prometheus is supposed to be used if it needs to be scaled, and there's the option of one Prometheus instance scraping other Prometheus instances, so you get the high-level view of things.
F: Yeah, there are some derivatives of Prometheus, which I do not know very well, but which claim to do some parts of what Prometheus does better. For instance, Prometheus does not claim to be long-term storage: the data is compressed, metrics are compressed, but the sampling rate is not reduced, and a system that does long-term storage would be expected to reduce the sampling rate.
A
All
right:
well,
we've
had
a
long
discussion
already
and
we've
kind
of
covered
a
lot
of
different
areas.
I
think
we
have
an
agreement
that
you
want
to
split
things
out
and
from
the
manager
in
general
and
make
that
scalable
sounds
like
we
still
need
some
more
investigation
to
figure
out
exactly
the
best
way
to
do
that.
What
do
folks
think
the
next
steps
would
be
that
jeff
would
like
would
follow,
or
ernesto
or
patrick,
would
you
guys
be
looking
into
more
more
experiments
here.
B
I
think
paul
is
driving
some
work
with
perry
and
avan,
I
think
from
dashboard,
but
I
don't
see
them
here:
nope,
okay,
so
yeah,
basically
yeah
everyone's
here
kevin.
Can
you
summarize
models?
What
have
you
talked
with
with
paul
about
the
actions?
The
short-term
actions
that
you
are
planning
for
for
this.
G: Well, he said basically we will send it by the end of this week; but to start, he just asked us to figure out the number —

G: — of metrics which Ceph is actually using, for example in the dashboard and in the alerts as well. So we figured that out, and it was around 130 or so unique metrics. I can also fetch the exact numbers, but it was around that, I guess. That was the initial step we were asked for.
I: So I'm just confused by the outcome and the decision. Are we going to move forward with the PoC? Are we going to distribute to daemons, are we going to distribute to nodes, or are we not going to do anything in that regard? I'm just confused. I sat through the whole thing and I'm just wondering.

A: Well, I think I'm a little confused about the daemons-versus-nodes distinction as well, but with the extra information about how exporters function and how they have their own frequencies and sets of metrics, I think we maybe need some more design around how that might look.

I: So it sounds like doing the prototypes that we're talking about is very important. We'll continue that work, in whatever direction we deem appropriate — per daemon for now, I guess — and we'll basically come back with the appropriate decisions as a result of that work, along with the interval periods and differing interval periods, how we can deal with that, etc. So that sounds reasonable, sure.
N: I think, as far as something short-term goes, there's the idea to only serialize performance counters that are either above or under a certain limit — depending on how you look at it, a certain priority.

N: Well, I imagine some of the priorities may not reflect the actual priorities, since we've never really — just as with log messages, these tend to be assigned semi-randomly, with not much thought put into them.

N: So if we were to pick a priority level as a cutoff and then do an audit — make sure that there are no bulky entries that generate a ton of data above that priority — then in the short term we could cut that 50 megabytes down to something more reasonable.
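A minimal sketch of that cutoff, assuming the per-counter priority field carried by the daemons' perf schema output; the threshold value and the dict shapes here are illustrative, not an official Ceph constant or API.

```python
# Sketch: keep only perf counters at or above a priority cutoff before
# serializing them for export. Threshold and dict shapes are illustrative.
PRIORITY_CUTOFF = 5  # example threshold, not an official Ceph constant

def filter_by_priority(schema, dump, cutoff=PRIORITY_CUTOFF):
    """schema/dump: nested {logger: {counter: ...}} dicts, as returned by the
    admin socket 'perf schema' / 'perf dump' commands."""
    kept = {}
    for logger, counters in dump.items():
        for name, value in counters.items():
            prio = schema.get(logger, {}).get(name, {}).get("priority", 0)
            if prio >= cutoff:
                kept.setdefault(logger, {})[name] = value
    return kept
```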
N: So if you have resources to look into that and to allocate to it, I would definitely do it, because it seems to me like a low-hanging fruit that would provide a significant benefit.

N: But other than that, continuing the experimentation with either per-daemon or per-node exporters still seems like the best long-term approach to me. The point in the wiki about bringing true active-active, load-balanced scalability to Ceph managers and making them truly highly available is definitely also worth looking at, but even if the manager were highly available today — which I don't think it is — I don't see how that would solve the metrics issue. Even if you had, say, three managers that were active at the same time —

N: — how would you partition the work that Prometheus needs the manager to do among them, given the requirement that when Prometheus asks for data you need to provide all of it, and it needs to be consistent — each entry needs to be consistent with respect to all other entries for that instant of time?

N: So I don't see how you could partition that among multiple active-active Ceph managers, even if that were supported today. It feels like moving out to either per-daemon or per-node is really the only realistic option that we have.
A: Yeah, I agree. I think the manager scalability and active-active pieces are somewhat orthogonal to the metrics collection. I think we want to move that out of the manager regardless.

N: Yeah — I'm not sure who wrote this wiki entry; it might have been Paul. This last point here, about deleting or undermining the purpose of ceph-mgr — I would actually disagree with that, because while it is super convenient to dump a piece of Python there, it wasn't intended for this sort of thing. The original motivator was the PG stats, right, and all of the related data.

N: There's no upper bound on that, because you could have a ton of tiny images, each of which would have its own set of metrics that again needs to be collected, depending on the interval, which we don't really control. We can say that we don't support anything less than some number of seconds, but still, that's not a very good solution in my opinion. So it needs to be —

N: — it needs to be decentralized. I think that is something we just can't argue with.
B: I wrote the second part of that; the first one, I think, was Paul's. Well, about the undermining part: it's about the feeling that — you actually mentioned the PG stats, and the serialization of those has caused issues in the past, and the same for the OSD map. So even if those structures are fairly bounded, under some circumstances serializing such a big chunk of data has a cost. So it's probably not just the metrics alone —

B: — it is the combination of the metrics plus everything else, all of it basically contending for the GIL and the locks and so on. Part of the active-active suggestion was that decoupling this interaction might also improve the overall performance of the manager. So it is not horizontally scaling the perf counters, but maybe just pinning different modules to different managers might improve the overall performance.

B: In the end, if we have to install an exporter on every node, what are we going to call it — a ceph-mgr-something? It's kind of evolving into a new daemon, a sidecar component that is going to live together with the other ones, so it feels like we're reinventing the manager.

B: So I was wondering if we could build that new component out of the manager — from the ashes of the manager, or whatever — if we can somehow tweak it to fulfill this requirement. Because in the end this is going to be a new Ceph service, more or less; it's basically going to interact with the perf counters of every daemon, which is partly what the manager is already doing.
P: I think the idea is that each collector would collect from its own daemon, so you don't have a centralized thing that needs to know about the stats of all daemons. The whole point is that whatever collects data from the RGW knows the counters of the RGW, whatever collects data from the OSD knows the counters of the OSD, and that's it — it just sends them to Prometheus; it doesn't do anything else.

P: So if you run it as a sidecar, for example, and it uses something that is specific to the daemon, like a Unix domain socket, then it would be a specific exporter for each daemon, because it would run in its own pod, right, and it couldn't just pull from other pods, because they'd be in a different namespace.
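A rough per-daemon sidecar could look like the sketch below: poll the daemon's admin socket via `ceph daemon <name> perf dump` and re-expose the flattened counters on a local HTTP endpoint for Prometheus. The daemon name, port, polling period and name mangling are all illustrative assumptions, not the proposed implementation.

```python
# Illustrative per-daemon sidecar (not Ceph's implementation): poll the daemon's
# admin socket via "ceph daemon <name> perf dump" and re-expose the counters.
import json
import re
import subprocess
import time
from prometheus_client import Gauge, start_http_server

DAEMON = "osd.0"   # example daemon name
gauges = {}

def poll_once():
    out = subprocess.check_output(["ceph", "daemon", DAEMON, "perf", "dump"])
    for logger, counters in json.loads(out).items():
        for name, value in counters.items():
            if not isinstance(value, (int, float)):
                continue  # skip avgcount/sum style sub-structures in this sketch
            key = re.sub(r"[^a-zA-Z0-9_]", "_", f"ceph_{logger}_{name}")
            if key not in gauges:
                gauges[key] = Gauge(key, "perf counter", ["ceph_daemon"])
            gauges[key].labels(ceph_daemon=DAEMON).set(value)

if __name__ == "__main__":
    start_http_server(9999)  # example port for Prometheus to scrape
    while True:
        poll_once()
        time.sleep(15)
```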
A: Yeah — I'm not sure we have the information to make a final decision on the architecture right now, but I think we can experiment with a few different ways of doing this kind of sidecar approach and see what works well and what scales, with Prometheus as well — it could end up being a scalability problem for Prometheus, or we might need to scale Prometheus more if we have thousands of exporters; I'm not sure.

K: Does there exist a proxy on the endpoint side, so that we could still have one exporter for the entire cluster? It would run as a new daemon, it would know about all of the thousands of exporters on all the daemons, and eventually your Prometheus would only query one endpoint. It would aggregate, and maybe cache, or whatever.
K: It needs to be one — okay, yeah. I guess I'm also just worried that, right now, the OSDs push their stats when they decide to, when they have time, but if we start pulling the OSDs we have to make sure that this can't disrupt the OSDs' important operations.

H: There is a decoupling there. They probably push or pull either way — either it's a super-low-overhead Unix domain interface, like the perf counters we have now — maybe it is the admin socket — or else it goes the other way, right.
M: Yeah, that's true. Mark suggested that we could use some kind of delta — only sending a delta of metrics instead of sending everything out each time. At least between the daemons and the per-node exporter we could send only the deltas, and then Prometheus would query the whole thing from the exporter.
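The "send only deltas" idea mentioned here could be as simple as the following sketch, assuming flat {metric_name: value} snapshots on both sides; it is not an actual Ceph protocol.

```python
# Minimal sketch of delta reporting between a daemon and a per-node exporter.
def delta(previous, current):
    """Return only the entries that changed since the last snapshot."""
    return {name: value for name, value in current.items()
            if previous.get(name) != value}

prev = {"osd_op_r": 100, "osd_op_w": 50}
curr = {"osd_op_r": 120, "osd_op_w": 50, "osd_recovery_ops": 3}
print(delta(prev, curr))  # {'osd_op_r': 120, 'osd_recovery_ops': 3}
```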
A: But even today I think it's not an issue. We're effectively, like Arthur is saying, using the same interfaces that the `ceph tell` commands use to get the perf counters, and that hasn't had any noticeable impact on the data path.

K: There was one issue in the past — I don't know if this has been optimized since — but on large clusters you had to decrease the mgr stats frequency. We use 15 seconds everywhere on our one-to-two-thousand-OSD clusters, because the default of five seconds was too much and kept the mgr too busy. But yeah, in that case it did not impact the OSDs themselves; that was just load on the mgr.
J: I do remember seeing in Crimson, in the reactor thread, at least with BlueStore, that that work was actually a moderate contributor — in the reactor, on the order of five or ten percent — so in that kind of a model it might be something we need to watch carefully.

A: Yeah, I think the Crimson piece is a separate issue — probably something to track on its own — but we can figure that out later. For now I think we have a lot of good ideas here, a lot of interesting things to explore in terms of how the different exporters will scale.

M: Well, yeah — I just want to remind people that the use case for mirroring and replication wants finer-grained metrics than the existing perf counters support. So if we're just talking about rewiring how perf counters work, we also need to figure out how to extend them to cover these new use cases.

A: Yeah, I agree — definitely. Maybe we can come back to this at the next CDM, which would be more friendly to Paul's time zone, and hear more about that Prometheus architecture aspect and what he's figured out at that point.