From YouTube: Ceph Developer Monthly 2022-08-03
Description
Join us monthly for the Ceph Developer meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A: And I think we have a good amount of people here, so I think we can start. So welcome to the August CDM, everybody. I've shared the agenda in the chat for everybody to have this time. We've divided the topics into four different categories, to kind of make sure that we're touching all the sides of Ceph that we can. The first topic area we have today is concerns and usability: enhancing the Ceph user experience.

A: The first topic on there was from Yuval, but I see he won't be able to present today. I know he had a really great talk, so perhaps we could move that to the next CDM. This next one is accessibility.

A: Accessibility improvements in the Ceph dashboard. Is Cedric on the call today? Not seeing him, so we may have to come back to this topic; I'll check in with him. But let's see.

A: Thanks, thanks Ernesto. We'll move to the next topic for now and come back to the first one. This next category is about quality in Ceph: improving Ceph's testing and release processes. This topic is from Patrick.

A: I actually asked him to present this because it seems really interesting, a PR he introduced just recently. So let Patrick take that away.
D: Thanks. Sometimes my video is just static and I have to restart the browser. Okay, let me pull up that PR real quick; I had it up a moment before we got to this topic.

D: So, to kind of go over what teuthology was doing before this PR was merged: we construct a matrix of YAML fragments, which kind of lets us turn various knobs on or off and select different distributions to run Ceph on. We have a set of tasks we run for all the different configurations, and it just produces this gigantic matrix of all the jobs.

D: All these fragments are merged together into one single YAML file. So teuthology had these methods for actually constructing the matrix: you could create sub-matrices, which get multiplied with the larger matrix, or you could even add all of the YAML files together. So if I had a folder of overrides that I wanted to apply to all of my jobs, I would just stick them in a folder and then add a little plus-sign file.
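For illustration, a fragment directory using those conventions might look roughly like this (directory and file names invented; as I understand teuthology's qa suites, a "%" marker file makes a directory's subtrees combine as a matrix product, while a "+" marker file makes every fragment in that folder merge into each job):

```
suites/example/
├── %                  # marker: take the matrix product of the subdirectories
├── distros/
│   ├── ubuntu.yaml
│   └── centos.yaml
└── overrides/
    ├── +              # marker: merge every fragment here into every job
    ├── debug.yaml
    └── log-rotate.yaml
```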
D
You
know
a
ton
of
different
configurations
of
of
jobs
to
test
stuff
and
it's
fairly
unique
technology,
and
that
it's
it's
a
very
you
know
in
in
the
way
where
we're
actually
defining
the
jobs
that
we
want
to.
We
want
to
or
the
qa
jobs
we
want
to
run
against
steph,
it's
more
of
a
you
know,
something
more
programmatic
that
might
be
created
to
create
a
workflow
of
jobs.
D
But
what
was
missing
was
sort
of
a
way
to
have
finer
control
over
which
fragments
you
actually
want
to
merge
into
a
job
which
you
do
not
so
you'd
have
to
do
kind
of
awkward
things
like
if
I
wanted
to
apply
an
override
to
the
set
configuration,
but
only
if
I'm
running
on
ubuntu,
you
had
to
co-locate
those
two
yag
yamo,
fragments
together
and
potentially
like
copy
all
the
the
links
to
the
common
yemo
fragments
we
have
in
the
fqa
repository
for
all
the
different
distributions
we
use.
D
You
have
to
construct
your
own
folder
and
then
you
have,
to
you
know,
lay
out
the
matrix
in
a
way
that
it
would
only
apply
the
override
to
that
particular
to
jobs
that
use
ubuntu
as
an
example,
and
it
could
be
quite
disruptive
and
sometimes
even
impossible
to
actually
statically
define
your
matrix
in
terms
of
the
number.
You
know
this,
the
the
number
of
files,
the
ammo
fragment
files
that
you
lay
out
for
it
in
order
to
get
the
jobs
that
you
want
and
what
we're
missing
is
it.
D
What
we
were
missing
was
just
a
way
to
to
write
scripts
and
say
you
know,
should
I
merge
this
fragment,
you
know,
given
that
I
have
these
other
fragments
that
are
already
merged
so,
for
example,
coming
back
to
the
ubuntu
one
with
the
particular
configuration
override.
So
if
I'm
not
running
ubuntu
I'm
running
centos,
then
I
don't
actually
want
a
yaml
fragment
included,
so
let's
say
drop
it
or
modify
it.
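As a sketch of what that per-fragment check can look like (the `teuthology:` dictionary with a `premerge` string and the `reject()` helper are described later in this talk; the `yaml.os_type` accessor and the override contents here are my own illustrative guesses, not taken verbatim from the docs):

```yaml
# Hypothetical override fragment: merge this fragment into the job only
# when the job runs Ubuntu; otherwise only this fragment is dropped and
# the job proceeds without it.
teuthology:
  premerge: |
    if yaml.os_type ~= 'ubuntu' then reject() end
overrides:
  ceph:
    conf:
      osd:
        debug osd: 20
```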
D
Likewise,
sometimes
you'd
also
have
configurations
where
you
just
you
don't
want
to
run
the
job
at
all,
because
it
doesn't
make
sense
like
in
in
one
of
the
examples
for
the
in
the
in
the
documentation
was.
I
have
an
upgrade
test
with
16,
which
upgrades
from
two
different
versions
of
pacific
1624,
the
tagged
version
and
whatever
the
latest
specific
is,
but
to
do
the
specific
tests
I
want
to
do.
D
So
I
can't
do
it
so
it's
useful
to
to
continue
to
do
to
to
just
throw
in
this
knob
which
which
I
can
turn
on
or
off
to
to
do
this
particular
test,
but
it
won't
work
with
1624
and
it's
not
really
possible
to
statically
define
the
matrix
so
that
I
only
turn
that
knob
if
I'm
running
16
210..
D
So
we
kind
of
needed
this.
This
ability
to
script
it
so
that,
if
I'm,
if
the
knob
is
turned
on
and
I'm
upgrading
from
16
to
4,
I
just
dropped
the
job
entirely,
which
is
a
lot
simpler
to
do
than
like.
Maybe
copying
the
entire
matrix
into
another
file
just
to
test
that
one
knob.
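A sketch of how that could look (the `postmerge` list and `reject()` are described later in the talk; the key names under `yaml` are invented for illustration; in a post-merge script, `reject()` discards the entire job rather than a single fragment):

```yaml
# Hypothetical fragment: skip the job entirely when the feature knob is on
# but the upgrade starts from 16.2.4, which does not support the test.
teuthology:
  postmerge:
    - if yaml.test_feature_knob and yaml.initial_release == '16.2.4' then reject() end
```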
D
It
keeps
everything
nice
and
compact
and
neat
and
just
use
a
script
to
drop
those
jobs.
So
that
was
the
the
kind
of
idea
behind
it.
So
now
you
know
we,
instead
of
just
a
static
matrix
of
jobs,
it's
a
little
more
dynamic.
I
shouldn't
say
it's
now
dynamic
matrix
of
jobs,
because
it's
not
really
true.
The
matrix
is
static.
D
It
continues
to
be
static
because
the
first
thing
technology
does
is
it
constructs
a
matrix,
and
you
know
we
don't
we're
not
adding
jobs
to
the
matrix,
we're
not
adding
ammo
fragments
to
the
matrix.
D
D
You
can
also
just
drop
a
job
from
the
matrix
entirely,
which
is
actually
equivalent
to
filtering
it
out.
Now
all
the
filtering
on
the
command
line
for
technology
is
now
implemented
using
scripts.
So
it's
it's
actually
equivalent
to
filtering
out
it.
The
filter,
command
line
option
is,
is
basically
the
same
as
what
we're
doing
here
and
that
the
description
you
might
define
in
your
yaml
fragments
can
just
drop
a
drop
a
job
entirely.
D
Just
the
exact
same
way,
the
filtering
does
so
again.
The
matrix
the
matrix
is
still
static,
but
you
know
now
we
have
a
little
more
dynamic
control
over
over
what
it
looks
like
you
can
even
now
use
scripts
to
generate
yaml
dynamically.
D
You
know
I
don't
actually
have
a
use
case
in
mind
for
it,
but
it
was
easy
enough
to
to
plumb
that
that
mechanism
in
so
you
can,
even
I
don't
know,
create
a
100
exact
tax
or
something,
but
you
know
it
just
because
you
can
do
something
doesn't
mean
you
should
it's
probably
easier,
just
to
run
a
bash
script
to
do
the
same
thing,
but
you
know
that
that
ability
is
also
there
and
you
also
edit
the
ammo
as
well.
D
Not
just
add
it
to
it,
but
I
would
also
caution
that
you
know
at
some
point,
like
you
know
again
as
they
say
in
jurassic
park.
Just
you
know
we
got
so
hung
up
on
what
we
could
do.
We
didn't
stop
to
think
if
we
should
so
just
because
you
can
edit
the
yaml
file
in
the
script
doesn't
mean
you
should
because
it
could
get
really
confusing
really
fast,
but
you
know
for
simple
things:
it
may
may.
E
D: I could go through the code or something if folks want me to. I don't want to force it on anyone, so let me know what we should do next.

A: Yeah, I was actually just going to bring up the use case I was thinking of with telemetry, and run it by you to see if it would make sense with this configuration. Yuri and I have been working to develop an upgrade test for telemetry, since, with the new opt-in flow that was introduced in Quincy, we would like to test things before and after an upgrade to make sure that certain features are working as expected. But with this upgrade test we also want to test different scenarios. So, for instance, if telemetry is opted in and then we upgrade, we want to test certain things; but if telemetry is not opted in, then after the upgrade we would want to test for different scenarios.

A: There are two, or multiple, different scenarios that we would want to test for. So, for instance, if there were some way to trigger: this condition happens only if telemetry opted-in is true, or this condition happens only if telemetry opted-in is false.

A: So do you kind of see how that could work? Would that make sense for the scenario we're looking to test?
D: I'm sure. I mean, yeah, it's kind of similar. You would either turn on telemetry, or opt in or opt out, ahead of time before doing the upgrade, and then afterwards you could have your YAML fragments that are either included, or you actually run the job, only if you want to run it with telemetry opted in, or you just drop it entirely if it's opted out. You can have all those knobs to turn on, and you might end up dropping like half the jobs in the matrix, but you effectively get what you wanted.

D: Sometimes it's easier to look at something if it's a static matrix without any of the scripting involved, but that can quickly get unwieldy, so it can make total sense to have a mix of both: use the scripting to filter out the things you don't actually care about. But as you describe it, it sounds like it would be a good fit.
A: Cool, yeah. We're thinking of going the static route first, just to get one scenario right, but then this scripting could be added in to test for alternative scenarios. So yeah, this is a really good thing for us to consider, because to my understanding there aren't many tests in teuthology right now, like, for instance, the parallel tests, or even stress-split, that really handle this kind of conditional situation.

D: Yeah, I mean, oftentimes what we end up doing is we have to run those conditionals in some kind of Python method that actually does that for us, which can obviously get awkward quickly, because then you also have to dynamically detect the feature within the Python, and it can get messy. So there's some of that in teuthology.

D: I think largely we've avoided trying to get into those kinds of messes, and instead maybe just not done as much test coverage as we would like; we've tried to avoid it that way and sort of limited our testing based on the limited abilities of our tools. But hopefully that becomes less of a problem with this.
A: Yes, yeah. And I see a lot of potential here. I'm sure others have considered writing a conditional test and have limited themselves to not go that route, so I think this is a great feature to be added. And in general, for the upgrade suite, there's not a lot of documentation around it or how the processes work, so any awareness that goes into this helps. And the pull request, as you've shown, is already merged.

A: So that's great, and I'm glad we have CDM to kind of raise awareness that this is a feature available to anyone who wants to design a test, in particular in the upgrade suite.

D: Yup. Hopefully it sees wider use, especially with upgrade tests; I think that's one of the cases where it can be the most useful. And, broadly speaking, our upgrade testing in Ceph can be improved a lot, especially for scenarios like newer clients talking to older clusters, and minor-version upgrade testing, and that type of thing. So yeah, that's something that deserves more attention, and hopefully this feature helps with that.

A: Thanks so much, Patrick. Did you want to go through any parts of the code? It's totally up to you.
F: I had a question. You mentioned that the filtering options also got some massaging with this change, but I don't see anything mentioning the filter options in the fragment-merging doc.

D: No, you're correct. Because it's more of an implementation detail, I didn't bother including it in the doc. So yeah, it's not in the doc, but we can talk about it if you like.

F: Well, I was just curious, because I haven't actually run into this case of having a matrix that is just plain impossible to construct using the existing tricks that we've all learned: arranging directories and lexicographically ordering the fragments by prefixing them with digits from one to nine and so on.

F: So this PR finally got that in place.
D
But
to
just
because
you
brought
it
up
so
yeah
I'll
link
to
the
code
which
handles
the
filtering
in
the
lua
script.
It's
mostly
just
a
transplant
of
the
python
code
into
lua.
D
Logically,
it
actually
probably
takes
up
most
of
the
space
required
and
like
handling
the
filtering
that
was
already
existing
in
topology,
takes
up
more
space
in
the
low
file
than
anything
else
to
set
up
the
actual
scripts,
but
that
that's
all
that's
all
it
is.
D
There
is
one
missing
piece
which
may
may
be
more
relevant
if
you
want
to
do
dynamic,
filtering
like
before
running
like
the
rbd
suite
you
want
to
filter
out,
I
don't
know
something,
but
so
we
can
you
know.
Obviously
we
have
the
command
line
options
to
say
I
want
to
filter
out
all
the
ubuntu
jobs
and
that
that's
easy
whatever.
D
But
if
you
wanted
to
do
something
more
complicated,
don't
ask
me
what,
but
you
know,
let's
say
we
did
and
you
wanted
to
say,
write
a
lewis
script
for
it
and
then
say
put
it
in
a
yaml
fragment
that
you
passed
to
the
topology
suite
command
at
this
time.
The
this
pr
doesn't
handle
that,
because
we
don't
actually
add
the
base
that
extra
fragment
you
pass
on
the
command
line.
We
don't
add
that
until
later
on,
but
it
would
be
totally
feasible
to
just
also
add
that
in
so
that
it's
always
supply.
D
You
know
it's
it.
It
goes
with
the
merging
process,
but
at
the
time
I
didn't
have
a
need
for
it
and
it's
just
if
anyone
really
wants
it
but
yeah
right
now,
if
you
add
like
filters
on
the
to
a
fragment
that
you
pass
on
the
command
line
that
won't
work.
F
Okay,
I
see
what
you
mean
thanks
thanks
for
clarifying.
A: Does anybody... I can't imagine how anybody would have objections to Patrick going through the code.

A: Give us a high level; you don't have to go into all the details, because Patrick has also linked documentation for this, which will help anybody that has further questions.

D: A little bigger? One second. Is that all too big?
D: All right, so the place to start would be here, in this run.py. Let's see. So here's this main method, and what's called after creating the jobs is this collect_jobs method. It's fairly obvious what it's doing: it's just joining all of the fragments, reading the files into a single string, and then passing that to the YAML loader, and it's doing that for every job in the matrix. So this is that lexicographic merging of all the fragments into one set of YAML.

D: And so that code is basically how it worked before, and the changes required started here. We needed to do the merging elsewhere, and you'll see that now I have this extra piece of the tuple for these job configs, which also includes the parsed YAML; that's just the text representation of the actual merged YAML that we'll generate in a different part of the code. So that's now included in the configs, and this collect_jobs method doesn't need to worry about merging the YAML anymore.

D: And so this is now the... where's the name of this method... the schedule-jobs method.

D: No, I apologize, it's actually hard to read code in GitHub, surprise surprise: schedule_suite. So this is eventually called by the teuthology-suite command, and one of the main things that happens that's of interest to us is this build_matrix. This method actually loads all the YAML fragments, looks at the directory structures, and creates the matrix, and what's returned to us is this configs.

D: Normally, again, the collect_jobs method we just looked at would read those YAML fragments and then pass all of it together as one string to the YAML loader, but we're not going to do that anymore.
D
So
now
we're
just
adding
another
method
called
config
merge
which
takes
all
those
configs
is
going
to
return
back
to
us
a
new
set
of
configs,
which
you
know
some
of
the
jobs
may
be
filtered
out
whatever,
and
you
can
see
here
we're
also
passing
in
these
command
line,
filter
and
filter
out
filter.
D
All
to
this
to
this
method,
where
it's
gonna
get
past
the
little
code
to
actually
do
the
filtering
so
we'll
get
into
config
merge
in
a
moment,
but
you'll
see
that's
actually
what
gets
passed
to
collect
jobs.
Now
is
this
this
new
merge,
configs
array
and
so
from
this
side
that
you
know
the
changes
required
to
modify
how
our
merging
configs
was
pretty
minimal.
D
That's
just
this
generated
is
the
number
of
jobs
that
the
matrix
actually
created
and
you've
got
to
keep
in
mind
and
so
well.
Prior
to
this,
this
change.
D
You
know,
you
may
see,
like
you
know,
rados
we
generated
a
hundred
thousand
jobs
and
you
filtered
out
like
fifty
thousand
well,
hopefully
more
than
that,
you
didn't
actually
schedule
fifty
thousand
jobs,
but
you
know
you
would
you
would
see
a
message
about
that
that
that
some
amount
of
jobs
is
filtered.
If
you
didn't
use
any
of
the
filter
options,
you
would
never
expect
to
see
anything
filtered
out.
D
You
would
always
see
all
the
jobs
scheduled,
but
that
may
not
be
the
case
going
forward
because,
again
with
the
lua
scripting,
some
things
will
always
be
filtered
out.
That's
just
the
way
this
the
scripts
were
written,
but
the
job
still
existed
in
the
static
matrix,
so
the
generator
is
the
number
of
jobs
in
the
static
matrix
and
then
after
we
pass
it
to
config
merge.
D
This
configs
will
have
fewer
jobs,
so
you
may
always
see
some
jobs
filtered
out,
even
though
you
didn't
specify
any
command
line,
option
to
filter
out
jobs,
and
that's
just
the
way
it
is
with
the
lua
scripting
now,
okay,
so
the
next
thing
to
look
at
is
this
merge
dot
pi.
So
this
is
a
brand
new
file
which
handles
how
to
merge
the
ymo
fragments,
here's
the
main
method,
but
before
we
get
there,
there
is
here's
where
we
just
load
the
lua
runtime.
D
So
this
is
using
a
python
library
called
lupa,
which
is
a
derivative
of
the
original
lunatic
part
project
lunatic
was
just
the
original
library
that
allowed
lua
and
python
to
kind
of
co-exist
and
talk
to
each
other,
mostly
to
just
allow
you
to
embed
lua
in
in
python
and
not
the
other
way
around,
and
so
there's
just
some
there's
a
few
things
that
allow
louis
to
be
able
to
talk
to
python.
Sometimes
it
can
look
a
little
ugly.
D
Usually
it's
pretty
simple,
though,
and
here
we're
actually
loading
this
fragment.lua
that
we
glanced
at
earlier
and
then
we
execute
that
script.
So
we
load
all
the
functions
in
that
file
into
this
lua
runtime.
D
Sorry
coming
back
up
so
here
we
get
the
configs
array,
suite
name.
This
is
used
for
the
filtering.
We
don't
really.
It
is
it's
just
to
reuse
the
same
logic
that
was
existing
in
the
filtering.
I
don't
really
understand
what
the
option
does,
but
it's
there
and
then
the
keyword
ours
is
just
whatever
filters
may
be
passed
in.
D
Comments
so
here
we're
getting
we're
going
through
the
list
of
job
configs
that
was
generated
from
the
static,
build
matrix
method
and
so
again
it's
an
array
of
arrays
leave
us
an
array
of
tuples.
Sorry,
I
forget
which
does
it
matter
here.
We
got
the
description,
that's
just
the
you
know
what
you
see
in
a
technology
config
job
config.
The
description
is
just
like
the
pretty
pretty
eyes
version
of
all
the
ammo
fragments
in
the
job.
D
D
Here
we're
actually
going
to
go
through
all
these
pads,
then
we're
going
to
open
it.
So
one
interesting
thing
that
we're
doing
that
is
new
in
this
pr
is
we're
actually
maintaining
a
cache
of
yaml
objects
associated
with
each
fragment.
This
was
just
a
small
optimization.
I
did
that
had
a
somewhat
big
impact
in
that,
if
you
notice
from
the
old
code,
we
we
read
all
the
yaml
fragment
files,
joined
them
into
a
single
string,
and
then
we
passed
that
to
yml
load.
D
While
the
parsing
ammo
is
expensive,
it
actually
slows
down
technology
suite
command
a
lot.
So
what
we're
doing
now
is
we're
just
loading
the
fragment
generating
the
ammo
associated
with
it
and
we're
catching
it
because
it
never
changes,
and
we
just
do
the
merging
manually,
which
we
already
had
a
method
for
in
python
for
doing
deep
merges.
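A minimal, self-contained sketch of that optimization (names are illustrative, not teuthology's actual code): parse each fragment once, cache the parsed object, and build each job's config by deep-merging deep copies of the cached objects, instead of re-parsing the concatenated text for every job.

```python
import copy

_fragment_cache = {}  # path -> parsed YAML object (parsed once, reused across jobs)

def load_fragment(path, read_and_parse):
    """Return the parsed object for a fragment, parsing it only on first use."""
    if path not in _fragment_cache:
        _fragment_cache[path] = read_and_parse(path)
    # Deep-copy so a script can later mutate its fragment without
    # corrupting the cached copy shared by all the other jobs.
    return copy.deepcopy(_fragment_cache[path])

def deep_merge(base, update):
    """Recursively merge `update` into `base`: dicts merge, other values replace."""
    for key, value in update.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_merge(base[key], value)
        else:
            base[key] = value
    return base

def merge_job(fragment_paths, read_and_parse):
    """Build one job's config by deep-merging its fragments in order."""
    complete = {}
    for path in fragment_paths:
        deep_merge(complete, load_fragment(path, read_and_parse))
    return complete
```

The deep copy on each lookup is what later makes it safe for a pre-merge script to modify its own fragment without polluting the cache.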
D
This
is
actually
it's
documented
in
the
commit,
but
actually
has
a
significant
impact
on
the
runtime.
That
was
actually
I
ate
up
a
lot
of
cpu
for
a
topology
suite,
especially
if
you're
doing
the
rato
suite
scheduling,
but
so
that's
all
this
is
is
we're
just
maintaining
a
cache
of
the
animals
for
each
fragment
and
then.
D
Deep
copy
of
the
fragment,
so
if
you
know,
if
I'm
running
a
lewis
script
that
modifies
its
own
yaml
fragment,
I
remember
I
said
earlier
that
you
know
you
could
actually
use
lua
to
change.
What's
in
the
ammo
or
in
order
to
avoid
changing.
What's
in
the
cache
we
create
a
copy
of
it,
then
we
get
what's
called
a
pre-merge
script,
so
this
is
run
before
a
gamble.
D
Fragment
is
actually
merged
into
the
little
monolithic,
ammo
config
for
a
job
and
again
it's
done
in
lexicographic
order
as
as
normal
as
it
was
done
before,
but
the
primer
script
gives
you
the
option
to
completely
drop
a
gamo
fragment
prior
to
merging
it.
It
just
doesn't
get
added
to
the
to
the
actual
yaml
config.
D
So
here
we're
going
to
load
that
script.
If
that
exists
is
specified
in
the
fragment,
we're
going
to
be
looking
for
the
technology
dictionary
and
then
a
pre-merge
string.
D
If
it
exists,
then
we're
going
to
create
a
lua
script
for
it
by
passing
that
pre-merge
script
and
then
some
helper
methods
that
need
added
to
the
script,
namely
the
the
log
framework,
the
deep
merge
method
and
the
safe
load,
yeml
method,
which
is
at
this
time
not
used
and
then
just
some
environment
variables
for
the
script
to
be
able
to
access
like
the
suite
name.
D
The
a
complete
yaml
object
up
to
that
point.
So
everything
before
it's
already
been
merged
into
the
into
the
the
jobs
yaml.
So
it
has
access
to
that
in
this
yemel.
D
Dictionary
and
then
there's
the
cmo
fragment
itself,
which
is
just
a
copy
of
its
own
yaml,
which
you
could
modify
if
it
wants
to,
but
at
this
time
none
no
job,
none
of
the
scripts
that
I'm
aware
of
use
it
and
some
other
things
like
the
description.
D
So
we
add
all
these
things
to
the
then
we
also
here
at
the
keyword
arcs
remember,
I
said
if
we
pass
like
filter
and
filter
out
to
this
fake,
merge
method.
It
also
adds
it
to
the
environment
and
that's
done
here
and
then
finally,
we
actually
run
the
script
and
if
it
returns
false,
then
we
don't
actually
merge
this
fragment
and
we
actually
note
in
the
this
new
pathology
dictionary
with
this
fragments
dropped
array
that
this
particular
fragment
decided
not
to
be
merged
into
the
job
yaml.
That
way,
it's
not
a
mystery.
D
D
If
you're
looking
at
a
pathology
job
after
the
fact,
otherwise
we
deep
merge
the
yaml
fragment
object
associated
with
that
fragment
into
the
complete
object
associated
with
the
job,
so
that
we
do
so
again.
We
do
this
loop
for
all
the
fragments
in
the
job
and
then,
finally,
when
that's
done
now,
we
do
the
pulse,
merge
step.
D
So
every
fragment
that
is
going
to
be
merged
into
the
jaw
into
the
object
complete
object
is,
is
has
done
so
so
now
we're
running
the
pulse
merge,
and
this
is
where
the
normal
filtering
command
line
filtering
would
happen
and
that's
included
in
the
the
post,
merge
step
and
I'll
show
you
that
in
a
moment,
so
here
we're
we're
getting
the
post
merge
or
it's
an
array
documentation.
Just
oh
you-
and
this
is
what
it
would
look
like
in
a
yamo
fragment.
D
Some
one
of
the
fragments
would
add
to
the
post,
merge
array
like
some
lua
code
to
be
executed,
and
now,
since
I
have
this
open,
here's
the
pre-merge,
the
notable
difference
is
pre.
Merge
is
just
a
string,
whereas
post
merge
is
an
array
you
might
have
multiple
fragments
all
adding
to
the
post
merge
script.
Whereas
for
a
pre
merge
script,
it's
each
each
yemel
fragment
only
runs
its
own
primer
script.
It
doesn't
run
the
other
fragments.
D
I
mean
back
here
so
we're
gonna
join.
All
of
these
pulse,
merge
scripts
together
and
then
finally
run
the
post
merge
job
again
some
environment
configuration.
If
that
all
looks
good,
then
we
run
the
script
and
it
returns
false.
Then
we
drop
the
job.
Otherwise
we
yield
it.
So
here's
the
complete
yaml
object.
D
So
that's
that
is
I
like
to
believe
it's
fairly
simple.
We
can
now
talk
about
the
lua
script,
which
generates
this
new
script
function
that
we
saw
being
called.
Where
is
that
right
here.
D
D
So
it
has
a
very
small
subset
of
the
lua
built-ins
and
it
also
has
access
to
some
python
methods
which
are
exported
by
lupa,
which
will
allow
you
to
do
things
like
create
dictionaries
or
enumerate
arrays
or
iterate
through
a
dictionary
things
like
that,
but
a
pretty
but
doesn't
have
access
to
anything
else.
D
So
here's
an
accept
and
reject
methods.
That's
talked
about
the
documentation.
These
are
pretty
simple,
except
just
stops
the
script
by
calling
code
routine.yield
all
scripts
are
running
code
routines,
and
this
just
allows
us
to
make
sure
that
you
know.
If
I
have
a
nested
set
of
functions
that
the
scripts
the
technology
scripts
are
running,
I
can
actually
stop
it
anywhere
and
enforce
it
to
return
true
back
to
the
back
to
this
call.
D
So
that's
why
all
scripts
are
our
code
routines.
So
I
can
jump
multiple
stack
frames,
so
this
causes
it
to
be
accepted.
Otherwise
it's
rejected
false
again.
This
is
just
all
of
the
code
that
came
from
elsewhere
to
implement
filtering.
I'm
not
going
to
talk
about
that.
It's
not
particularly
interesting
and
then
here's
the
function
which
generates
a
new
script
here,
we're
creating
a
new
environment.
With
some
of
these
helper
methods,
I
talked
about
earlier
logging,
deep,
merge,
yaml
load,
these
are
included
in
the
environment,
plus
access
to
the
allow
list.
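Lua's coroutine.yield can suspend execution from anywhere in the call stack, which is what lets accept() and reject() end the script from inside nested helper functions. A rough Python analog of that control flow (illustrative only; teuthology's real implementation is the Lua being shown on screen) uses an exception to unwind any number of frames and carry the verdict back to the caller:

```python
class _Verdict(Exception):
    """Unwinds any number of stack frames, carrying the script's decision."""
    def __init__(self, value):
        self.value = value

def accept():
    raise _Verdict(True)   # plays the role of coroutine.yield(true)

def reject():
    raise _Verdict(False)  # plays the role of coroutine.yield(false)

def run_script(script, env):
    """Run a filter script; default to True if it returns without a verdict."""
    try:
        script(env)
    except _Verdict as verdict:
        return verdict.value
    return True

# A "script" with nested helper calls, as described in the talk: reject()
# works from deep inside helpers without threading a return value back up.
def _check_distro(env):
    if env.get("os_type") != "ubuntu":
        reject()

def example_script(env):
    _check_distro(env)
    # ...further checks could run here...
```

The point of the design is the same as in the Lua version: a verdict taken anywhere inside the script immediately ends evaluation for that fragment or job.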
D: Oh, I'm sorry, here it is. So here's where we create the function; I should probably start down here. So load is how you turn Lua text into a Lua function.

D: It takes an iterator or a generator of strings to load into the Lua parser. So we have a standard header; then, if there's a script, we load that; and then here's the footer. We give the script a name, we only allow it to load text, not binary data, and there's the environment associated with the script. And then, once we've created that function, we're going to turn it into a coroutine, and then we're going to pass it along.

D: And finally, after that's done, it's going to run this main function, which includes all the code that came from the fragments themselves.

D: I think that's all there is to talk about. There's various plumbing elsewhere in the other parts of the PR, but it's not particularly interesting. So, anyone got questions about the code?
A: Very good overview, Patrick, thank you. It's good to know the internal workings here, but I think the most important parts for people wanting to use this feature will be the premerge and postmerge pieces that you would specify in the YAML. Is that correct? And that's all in your documentation. Yeah, cool. It's of course important to see how that works internally, so thank you, Patrick; this is a really great overview of all of this.

D: Yeah, and since we spent some time here: if someone had the need for it and we wanted to go to dynamic matrix construction (God help you), you would want to be replacing...

A: Patrick, well, if not: this meeting is of course recorded, so once the YouTube link is available, if anybody is curious about how this process works, the video will be available.

A: This almost turned into a code walkthrough, so it's fantastic to have this on file, and I think this will be really helpful for people to know for the upgrade suite. Thanks again, Patrick.
A: Awesome, cool. So in terms of the next topic: I've heard from Cedric that he's experiencing some network issues, so we may have to postpone his topic to the next CDM. But if anybody's interested in learning about the accessibility improvements that have been going on in the dashboard, Cedric has linked his Etherpad, with an explanation of the work he's done, on the agenda. Hopefully this topic will be covered more in depth, and perhaps with a demo, in a future CDM.

A: The next topic we have on our agenda is regarding performance, and this is all about boosting Ceph's speed and memory usage.

A: The topics we have on here today are from... I'm not sure exactly who put this on, but I know it's from the dashboard team. There's been some refactoring going on around the perf counter priorities, and of course perf counters concern every component involved in Ceph.

A: So this is why this makes for a great CDM topic, and I'll let whoever put this on the agenda take it away.
G: Well, hi, hi guys. As Laura said, the topic here is about perf counters, and, as you might be aware, scalability of the manager is tightly coupled with the number of perf counters, pretty much in big clusters where you see thousands of OSDs, mainly because, for example, I think with OSDs there are like 20-plus perf counters with "useful" priority.

G: So that means that 20 times the number of OSDs, that many metrics are sent to the manager and then passed on to be serialized on the Prometheus side, and also most of them are sent to the telemetry module, so that's another module that's affected by this. What we want to do is change the priorities in most of the components themselves, mainly OSDs.

G: And, to be honest, I don't know why developers would want the priority, because you can still call perf dump or perf schema. Also, I'm aware there is one command (Laura, you might know that one), the ceph daemonperf command, which takes into account the "useful" priorities from the OSD and builds a table of counters and all.

G: You could easily create a layout for these perf counters and forget about the priorities there. And yeah, well, I've opened that PR regarding the changing of priorities, and the first thing that I did on the OSD side is defaulting the priorities to, I don't remember, I think "uninteresting". I did that mainly because we don't want to be sending all metrics to the manager, and I think that setting the perf counter priorities to "useful" as the default is not the best way to do it.

G: Well, two things. The first thing is changing the priorities within this PR; I think we can all argue that most of the priorities in all the components don't have the priority that they should. And if we wanted to keep the priorities for developers to use, we might benefit from having tags in perf counters.

G: Also, I think Laura in the comments said that maybe there should be a configuration option to enable or disable this change once it's done. Maybe we could modify the perf counter priorities dynamically from the CLI. And I think that's pretty much it, but I want to see what you guys think, and maybe, if you guys can get involved in the PR, we can move that forward.
A
So the way I see it, the biggest questions with this change are that we're having to modify every single perf counter that has a priority assigned to it, if we want to change that priority, and the potential impacts of this. Say this was included in a future release — and this is just concerning the manager; this would not affect the admin socket command that dumps perf counters.
A
So the command is perf dump, and that would not be affected, as that comes directly from the daemon you're calling it on itself. But say we're changing some perf counters to not be sent to the manager module.
A
If some people using the manager module are used to seeing a particular perf counter, but then in the next release they're not, that would be an issue — that would be considered a regression if they're not seeing a perf counter that we somehow decided isn't interesting, but which perhaps to one user is. So how...
A
How would we — that was why I suggested, if we do end up doing this, having some kind of config option associated with it, so that if a user decides or realizes "oh, I'm missing a perf counter", they can turn this option off and still, you know, get that perf counter. But what are their options, or how would you address that?
A
Oh, I know — could you explain? I sort of know, but just so everybody can follow: can you explain how the perf counter priorities are functioning right now, and how they're all getting sent to the manager but only filtered out on the manager side? Can you kind of explain that process?
G
And so then, on the manager side, we gather all the perf counters, and there is one specific call that gets all the perf counters on the manager side. So within a Python module you can call that method, and you can specify which priorities you want to filter out on the manager side. I mean, in the Prometheus module we filter by useful priorities, and that's where the problem comes from, since a lot of the useful-priority counters are not needed; and then I think there were some in telemetry also — I think just useful.
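The priority-based filtering described here can be sketched in plain Python. This is an illustrative model only, not the actual Ceph mgr code; the priority names and numeric values mirror Ceph's PRIO_* levels but should be treated as assumptions:

```python
# Illustrative model of priority-based perf counter filtering,
# loosely mirroring how the mgr applies a priority threshold when
# handing counters to Python modules. Not the real implementation.

PRIO_CRITICAL = 10
PRIO_INTERESTING = 8
PRIO_USEFUL = 5
PRIO_UNINTERESTING = 2
PRIO_DEBUGONLY = 0

def filter_counters(counters, prio_limit=PRIO_USEFUL):
    """Keep only counters whose priority meets the threshold."""
    return {name: c for name, c in counters.items()
            if c["priority"] >= prio_limit}

# Hypothetical counter names, for illustration only.
counters = {
    "osd.op_r": {"priority": PRIO_CRITICAL, "value": 120},
    "osd.op_r_latency": {"priority": PRIO_USEFUL, "value": 3},
    "bluestore.kv_sync_lat": {"priority": PRIO_DEBUGONLY, "value": 7},
}

kept = filter_counters(counters)
```

With the default useful threshold, the debug-only counter is dropped while the critical and useful ones pass through — which is exactly why "a lot of useful-priority counters" reaching the manager is the problem being discussed.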
A
Right, right. Ernesto, from what I remember, you had suggested some alternative ways of filtering — is that right? And could you potentially go into some of those alternative ways, other than changing all of the priorities?
B
Sure, yeah. Well, I think Perry has already covered one of them, which is the tag. And also having a kind of allow list or deny list could work; that would be configurable, so if, for example, a user finds that a perf counter has been removed, they could easily add it to the allow list, if we want to take it that way. I think those would be the three possible approaches. Relying on the priorities, however — what we found there is that priorities are used for different purposes.
B
One is for developers to basically filter out some specific perf counters, and that might conflict with the other use case, which is the operators' — basically exposing the counters as metrics. And yeah, so I guess developers might be unaware of the impact that increasing or decreasing the priority of a perf counter could have in the manager. So that's why the proposal of using a tag, like "manager" or something like that, would be useful here.
A
And yeah, with the allow list it would be, sort of, by default everybody on the manager gets this new filtering — everything that we've decided gets filtered — but then they can opt in to, or specifically allow, certain perf counters to come through. Is that what it is?
B
I think currently that hasn't been, I mean, proved to be a bottleneck; it's been more on the Python API side, with the modules. But anyway, given that we are cleaning this part up a bit, we could also put this improvement there, right? And also in the interface between the manager modules and the C++ part of the manager — the manager daemon — that's probably where we should put this, or strengthen this limitation, right?
B
So right now, probably most of the manager modules use this get-all-perf-counters call, which basically dumps all the perf counters, and then go through that list getting all the data for the different counters. That clearly doesn't scale — that was our experience with the scale testing, right? So anything beyond a few thousands of OSDs will be a clear bottleneck.
B
So probably we should have these two different barriers, and also, with the ceph-exporter approach, which basically will start exporting the metrics from the specific daemons: probably anything that's not cluster-wide — so OSD, RGW, MDS, everything — should be filtered out from the manager if not required, right? So if a specific manager module requires that — and that's the alternative of using an allow list — probably that will be easier, because, you know, once this is deployed we might find that we needed a specific perf counter.
H
I have a question related to the ceph-exporter piece, also related to the priorities. So the perf counter priorities right now just filter what gets sent to the manager. Are there any plans to use priorities in ceph-exporter to control what gets sent on to Prometheus?
G
A
So yeah, to clarify about the allow list piece: that would be more like a filter between the manager daemon's C++ part and the Python side. It would be like adding a filter in just one spot, instead of this current PR, which is changing, one by one, all of the priorities of the perf counters that we don't want. Is that right?
A
What I'm seeing with this PR that you've been working on: having to change all of the perf counters in every single spot just seems like a lot of changes to me, both on your part and on the side of the people who are reviewing it.
A
So I almost think it would make more sense to make a change in just one spot and have this allow list option — you know, putting the filter in one place instead of changing the filter in a bunch of places, if that makes sense.
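The single-spot allow-list filter being suggested could look roughly like this. It is a hypothetical sketch — the function, option semantics, and counter names are made up for illustration; no such helper exists in Ceph today:

```python
# Hypothetical sketch of an allow-list filter applied in one place
# (e.g. between the mgr's C++ side and its Python modules), instead
# of editing each perf counter's priority individually.

def apply_allow_list(counters, allow_list=None):
    """With no allow list configured, pass everything through
    (today's behavior); otherwise keep only allowed counters."""
    if allow_list is None:
        return dict(counters)
    allowed = set(allow_list)
    return {name: v for name, v in counters.items() if name in allowed}

# Illustrative counter names and values.
counters = {"osd.op_r": 120, "osd.op_w": 80, "bluestore.kv_flush_lat": 5}

filtered = apply_allow_list(counters, allow_list=["osd.op_r"])
```

A user who notices a missing counter would add its name to the list (or disable the option) rather than waiting for a priority change in a release, which is the opt-in behavior described above.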
G
Yeah, sounds good to me. We could start with this, since it's the pragmatic one. But yeah — I just created these PRs where I changed the priorities of the metrics that I know are being used in Prometheus.
A
Yeah, it's good — it was a good way to start discussion, and I think that's always a good thing to do. But the issue is, I don't see this being merged anytime soon, just because it's so many tiny changes, and I'm starting to think that the allow list option would make more sense in the long run.
A
I think the priorities currently affect what's getting sent to the manager, and the intention is to align with the ceph daemonperf command, which shows the highest-priority counters first. So I think their functionality has certainly gotten muddled, from what I see.
A
Yeah, and whatever method we decide to go with, I think it's going to be really important to weigh the pros and cons, to reduce whatever potential regressions might occur, and to have it be the most streamlined choice rather than a bunch of tiny changes. I think those are the most important things to consider.
A
I think a proof of concept would be great, just so you're not, you know, working on a solution that almost reaches completion and then people have issues with it. I think a proof of concept sounds great, even if it's closer to just a Google doc — and then you can, you know, make it closer to code — but just having some solution that we're all agreeing on, and that we agree...
A
...won't have a negative impact on future releases, and then moving forward from that proof of concept to completing the solution — I think that'll be the best way to get a solution finally decided on.
A
Seems like not — but if anybody has any comments, please feel free to speak up and let us know. And thanks, Perry.
A
Okay. The last topic that we have here on our agenda falls under ecosystem: building the involvement with the Ceph community. So this will be a lighter topic for us to...
F
Yeah, so there is one more, which is closely related. I put it in the...
F
Yeah, sure. So it's closely related, but it's different. In this previous discussion it was mentioned a couple of times: what happens if, due to a bunch of these priority changes, a particular counter — a particular metric that was there — just disappears, and someone who relied on it... their automation, or whatever they had built on top of it...
F
...just ceases to work. And there's a larger concern here, because I don't think we ever promised any kind of stability guarantees for performance counters — and not just for a particular performance counter...
F
...being there or not being there, but also as far as them being renamed, or the semantics just getting changed in obvious or even non-obvious ways. This just recently came up in a PR that Radoslav put up. In the messenger there's a bunch of perf counters that are basically just a sum of bytes being either sent or received, and with the addition of the messenger v2 secure mode, these bytes can represent encrypted traffic. There's a desire to distinguish plain-text traffic from encrypted traffic, and so Radoslav added a perf counter for encrypted bytes — but at the same time, the existing perf counter, which has always represented plain bytes, or at least historically...
F
...this perf counter is also incremented. And you can kind of see the problem here: ideally, we would change the semantics of the existing perf counter, so that if a particular daemon is configured to talk in secure mode, then the existing, kind of, plain counter would just remain zero.
F
And then the new encrypted counter would be continuously incremented, instead of having both incremented, right? But that is a semantic change, and this kind of ties into the discussion of whether a disappearing perf counter is an issue — this is just going one level deeper.
F
This has come up in other projects — for example, in the Linux kernel, where the equivalent of perf counters was initially also just something that was there for developers and for people closely familiar with the respective kernel subsystems, but later on it basically became part of the kernel API. And this actually scared some — like, some subsystems still don't have any counters exposed whatsoever, because they don't want to maintain them forever, right? Because if the implementation changes — if a particular function that incremented a particular counter just gets removed — then you would need to...
F
...you would need to somehow maintain the semantics of the counter going forward still.
F
So I think this is a fundamental guarantee that I don't think we've ever provided, and with increased use and increased export of the perf counters into various environments — whether it's Prometheus, Grafana, or something else —
F
...this is bound to become an issue going forward, because I don't think that, when we are reviewing PRs, we pay close attention to maintaining this kind of stability. So that's a decision that we need to make, and kind of abide by, going forward.
C
A question: we have some in-project consumers of the perf counters.
C
I guess the dashboard, some monitoring stuff. And in this particular case I would feel better if, instead of redefining the current behavior of the already existing counter, we stripped them entirely — removed and, you know, replaced them with something new, something having plain semantics: there would be plain counters for unencrypted traffic and, let's say, "secure" or "encrypted" ones for the secure traffic.
F
A rename, right — that semantic change, but via the rename. Yeah, so...
C
I don't want that. But what about backward compatibility? Do we have, in-project, any consumer that really expects to have all the counters? I'm asking especially about the upgrade sequence.
F
Yeah, so that's exactly the question that I'm raising. I don't want to get hung up on this particular example of plain traffic versus secure traffic.
F
Yes, exactly — and whether we can, you know, somehow massage the existing perf counter and just, you know, leave it there. Whatever — in this particular case there are multiple ways to kind of address this issue or work around it, but there is a fundamental question of whether the perf counters are considered stable, because, at least as far as I know — as far as I'm aware — we've never made any promises.
F
The concern is that, with multiple external users kind of starting to rely on these perf counters being there — and oftentimes this is, you know, different projects, and sometimes it's a project that is using a project, so that's like two projects away from Ceph —
F
...that's when these things — us changing something — are bound to bite. So I think we need to decide, as a project, whether we are going to guarantee stability, with the understanding that it's going to come at a great cost, or whether we just, you know, kind of maintain the status quo and don't give any promises.
C
Well, personally, I don't know the answer to your question. I don't know whether the perf counters interface is part of our public API or not, and that's the only reason why I'm so conservative in the PR.
C
To be honest, I would love to not have such a guarantee, and maybe, if nobody pops up here with a concrete answer, we should fire an email — basically, emails to ceph-users and ceph-devel, just in case — maybe even claiming: you know, guys, even if we ever made any such claim, we would love to take it back.
F
Right. So one option might be to actually — you know, similar to priorities — add a designation for each perf counter as to whether it's considered to be stable. Because things like, for example, the number of bytes transferred by the messenger — no matter how the messenger is implemented, and no matter how many times we rewrite it, right —
F
...such a basic counter would always be there, so it would make sense to mark it as stable and have it be relied on by external projects. But on the other hand, we have a ton of — for example, take BlueStore.
F
There is a ton of internal counters that many users wouldn't even be able to make sense of, because they are closely tied to the implementation details — to the way that the kv sync thread works.
C
I see your point: you are saying that our perf counters aren't homogeneous — they are heterogeneous when it comes to potential stability guarantees. Some of them are clearer, it looks like, while others really look like part of somebody's developer features, basically.
F
Yeah, that's exactly what I'm saying. And the way I see us going forward with this is putting the perf counters into these kind of two groups, because with Prometheus and Grafana — and, you know, all these dashboards and so on — we are bound to run into issues.
F
If we just say that counters are unstable — you know, "don't rely on them" — people will rely on them anyway. So it would be much better if we could just designate the things that would more or less always be there —
F
...that are high-level, to the point that it makes sense to have a generic name for them and have them exported to all kinds of dashboards; and then, on the other hand, still maintain the possibility that a developer can just add a perf counter to the function that they're adding, just because they want to, you know, see how it behaves — and not worry about someone relying on it as soon as it ships and then claiming stability on it going forward.
H
So it would be nice, if we are going to offer stability for certain things, to have those documented — and maybe "documented versus not" is how we say what we're promising to keep stable.
F
I think there needs to be — like, I would prefer if this was, you know, similar to how the configuration options work: there is a description, which is kind of a freeform string, and that is separate from a level, or whatever it's called.
F
There is "dev", there is — you know, I forget, but there is a bunch of levels that one can choose from for each configuration option. And I think this should be separate, because at some point this would need to be automated, and we could just say, you know: if the description string is empty, it's unstable; if there is even a single word in the description string, it's stable, right?
F
So I would prefer for these to be, you know, two separate dimensions, with the rule that, if something is marked stable, a description should be provided in the same commit — in the same change set.
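The "two separate dimensions" idea, with the rule that a counter marked stable must carry a description in the same change, could be modeled like this. This is a hypothetical sketch — Ceph's perf counter definitions have no stability field today, and the field and counter names here are illustrative:

```python
# Hypothetical model: stability is a dimension separate from
# priority, and marking a counter stable requires a description,
# enforced at definition time (mirroring the proposed commit rule).

class PerfCounterDef:
    def __init__(self, name, priority, stable=False, description=""):
        if stable and not description.strip():
            raise ValueError(
                f"{name}: a stable counter must ship with a description")
        self.name = name
        self.priority = priority
        self.stable = stable
        self.description = description

# A messenger byte counter is generic enough to mark stable.
ok = PerfCounterDef("msgr.send_bytes", priority=8, stable=True,
                    description="Total bytes sent by the messenger")

# An implementation-detail counter marked stable without a
# description is rejected by the rule.
try:
    PerfCounterDef("bluestore.kv_sync_lat", priority=0, stable=True)
    rejected = False
except ValueError:
    rejected = True
```

Keeping `stable` and `description` as two fields, with the cross-check above, matches the preference stated here: the dimensions stay explicit, and the rule can still be automated.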
B
Yeah, I was going to say that this is also a point in favor of, for example, using the tag approach for the perf counters, because it could be a way to put some hints in place, right? Because if, for example, we go with the allow list or the deny list approach, that would probably live in a different place in the code.
B
So a developer will have to double-check: if they are changing a perf counter — renaming it, or changing its semantics, or whatever — they will have to double-check with a different part of the code whether that's being used by the manager or the dashboard or whatever, right? So if we expose these tags within the perf counter definition, that would probably make it easier for developers to know: okay, I'm modifying this perf counter, and it's being consumed by the manager, the dashboard, or whatever, right?
F
Yep, I agree. Separate from whatever allow lists at the manager level — or, you know, at whatever higher level — the information as to whether a perf counter is stable must live, you know, at the same place where the perf counter is defined, together with a description string. This needs to be...
F
...you know, a single place — a single header file where these are defined — so that they can be referenced. And in the future I do see some automation built on this, because, you know, just similar to how we have the configuration options defined in a declarative fashion, and then...
F
...using those definitions, one can programmatically come up with a list of, you know, configuration options that are, for example, at a certain level, or configuration options that belong to a certain subsystem — I see the same kind of scheme applied to perf counters, because these are very similar entities.
H
So, moving away from the discussion about stability guarantees, I want to go back to Radoslav's specific PR about the messenger.
H
So I think the most natural way to do this messenger thing would be to have a label for encrypted traffic, but both encrypted and non-encrypted would use the same counter itself. So in Prometheus you could look at that counter and see the accumulated stats of both, but you could also filter by encrypted or not, to see the breakdown that way.
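The suggestion maps directly onto Prometheus's data model: one generic counter name, with the encryption mode carried as a label. A minimal sketch of the exposition format, hand-rolled rather than using a client library (the metric name below is illustrative, not an actual Ceph metric):

```python
# Sketch: one generic counter, with plain vs. secure traffic
# distinguished by a label, per the Prometheus data model.

def exposition_line(name, labels, value):
    """Render one sample in Prometheus text exposition format."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

lines = [
    exposition_line("ceph_msgr_send_bytes_total", {"encrypted": "false"}, 24),
    exposition_line("ceph_msgr_send_bytes_total", {"encrypted": "true"}, 42),
]
```

Summing over the label gives the accumulated total, while filtering on `encrypted="true"` gives the breakdown — the two views described above, from one counter name.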
C
It boils down — like almost everything in this PR — to the target audience. To my knowledge, this will be for security engineers, for people who are not so terribly interested in graphical interfaces but rather in very low-level stuff. I believe I can draw some assumptions in this area: I can say that those people will be low-level programmers, system engineers, security specialists — that will be the target audience here.
C
So even if we have any machinery to make claims about the stability of those new counters, I will definitely say: well, they are not so stable; they are not part of our public interface; they are not even for regular operators — they are mostly for developers slash security engineers.
C
So any fancy presentation layer, I think, will be unnecessary, to be honest.
F
You know, Casey was asking something slightly different, because this is irrespective of stability guarantees, right? Any perf counter can — assuming a user defines or sets the priority option to a value that is equivalent to exporting everything — any perf counter can reach Prometheus, whether it's considered stable or not stable. And Casey's point was about the kind of fundamental issue, or general issue, that we have with our perf counters: there is a mismatch between how we historically defined perf counters — which is, for every new value we just chose a different name, or for every metric we just, you know, picked a new, unique name —
C
...so that, basically, we got new value types for those already existing counters. We could say that the same counter is carrying some kind of general value — like: I recorded 42 bytes of encrypted traffic and 24 of unencrypted.
F
At the Prometheus level, like Casey said, there is this concept of labels, and the names — the names are supposed to be really generic. So if it's a count of transferred bytes, then it's just transferred bytes, and then the kind of those bytes — such as whether they're plain or encrypted or whatever — is generally communicated via labels. And the problem is that the perf counter interface — like, the C++ stuff in Ceph — does not support that.
F
My understanding is that there is an ongoing project, which, you know, the RGW team has kind of taken on — and I think Ali in particular is involved with that — to add support for these key-value tags to perf counters at the Ceph level, so that they can very straightforwardly — like, in a straightforward fashion — be mapped to Prometheus labels. But that doesn't exist yet.
C
I see — so this will be for the future. And also it would change semantics: I guess it will change some already existing counters, like the ones for read and written bytes.
F
Currently, some of this exists in the manager, just to kind of work around the fact that our perf counters, you know, don't have any support for labels — instead, these labels kind of get embedded in the names. We're working around that in the prometheus module in the manager, where there's a method that just, you know, performs some text-based processing.
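The name-embedding workaround can be illustrated with a toy example. This is purely illustrative — the actual text processing in the prometheus mgr module differs in its details, and the counter names here are made up:

```python
import re

# Toy illustration of the workaround: without native label support,
# a would-be label ends up embedded in the counter name, and the
# exporter recovers it afterwards with text processing.

def split_embedded_label(name):
    """Split e.g. 'send_bytes_secure' into ('send_bytes', 'secure')."""
    m = re.match(r"^(?P<base>.*)_(?P<mode>plain|secure)$", name)
    if m:
        return m.group("base"), m.group("mode")
    return name, None  # no recognizable embedded label

base, mode = split_embedded_label("send_bytes_secure")
```

With first-class key-value tags on perf counters, the `mode` would be attached at definition time instead of being parsed back out of the name, which is why that is described as the proper solution rather than a workaround.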
F
We already have this workaround. Part of the integration of this feature would be getting rid of it and applying some of it at the Ceph level — although it wouldn't be a workaround; it would actually be a proper solution at that point.
C
Well, if we are already talking about the PR, there is another dimension to the discussion: it would be about the support for connection modes. At the moment, well, we have a few of them, but my impression is that only a very limited subset of our connection modes — connection types — is really being used.
C
Just for the sake of completeness, I've introduced dedicated counters for all of them, because, well, when security folks are taking a look — are ensuring whether we adhere to our guarantees about what is encrypted and what is kept plain — this actually depends on the particular connection modes.
C
Well, they need counters for all of them, which, as you pointed out during the review process, is complex — in many situations it looks like overkill.
C
If we care about it, that would suggest introducing counters for all of them — or maybe, a better way to say it, limiting the flexibility of configuring encryption, compression, etc. for particular modes. Maybe we could strip some of them; I'm not sure...
C
...because, well, not all of them look useful.
F
Yes. So — let's not hijack the perf counters discussion here, but just to, you know... I very much agree with what you said about not further exposing the insanity that is our existing connection modes, which I don't think anyone can, you know, explain — or why it is the way it is.
C
I fully agree: they are confusing; they are hard — well, they are hard to understand, even for developers. I would love to strip them off the project, really. The only thing is that it could take some time, because — maybe, maybe not — I feel we might need to go through the full-blown deprecation process, so it might even be a multi-release effort.
C
These different topics are related to counters by the single junction point we have in this particular PR. But yes, it's a topic for another, maybe even longer, discussion.
F
Yeah, I think this would definitely need kind of wider input, and this is ultimately a core issue: deprecating the myriad of configuration options that we have, and replacing that with something much simpler.
F
That would just allow configuring either a cluster that is using the plain mode throughout, or the secure mode throughout — or the third option would be using the secure mode just for the monitors.
F
But that's — yeah, that's a different topic, and that would need, you know, to be closely investigated by the core team; and it is not related to the perf counter discussion at all.
C
But I guess, exactly like with the perf counter stability, we will need to keep everything explicit and well communicated to our Ceph users.
H
Yep — once he has something more, something to show, I'll make sure that it gets shared with the list.
A
Okay, so the way I see it, in terms of reducing the load of the perf counters on the manager — which is what Perry and Ernesto and the dashboard team are working on — we first need to answer the question of stability, which will help out a lot when the dashboard team does decide to implement an allow list or change priorities.
A
As long as they're not changing any stable perf counters, then it will be...
A
...then we're not guaranteeing any perf counters that do get changed. So certainly, I think we need to answer the stability question first.
A
And I wrote a couple of bullet points down based on our discussion, such as that we need to reach out to the user mailing list, and that we have a lack of documentation around this that needs to go in — because I agree: any perf counter that is counted as stable should be better documented than it is now.
A
You know, Casey — the work that Ali is doing, is this going toward perf counter stability, or did I misunderstand that?
F
Just one final kind of idea on this: maybe, to avoid kind of the friction of introducing an additional field to the perf counter definition...
F
...maybe we could reuse the priority — just perhaps rename it — and basically say that everything above a certain priority... well, we would need to do a full audit, of course, but basically say that everything that is currently marked as "important" and above is stable, because, you know, it makes sense for something that is designated as important to stay important going forward, and therefore be stable.
B
But I feel like that's adding something more implicit — more meanings — to this, which is something that we want to avoid, right? Yeah, because "useful" basically meant "something exported to the manager", and that's why we were considering adding a tag; but now that also means a stability commitment — that's adding, like, a side effect, right, an implicit side effect, to that...
B
...to that meaning. So I'm not sure — I think it's better to make things more explicit. And if we really want to express the stability of the perf counters, I mean, a tag — I'm not sure if that, you know, requires so much overhead, but I personally prefer to make things explicit and not add more hidden meanings, or implicit meanings, to the priority. And if we find that no one's actually using the priority for anything, then let's get rid of it and use these other meanings, right —
B
...the manager one, or whatever, or the stability one.
F
Yeah, that makes sense — it was just a thought. So yeah, having this kind of two dimensions and keeping them separate is also completely fine, and obviously more straightforward.
F
I think the priority would still be useful — maybe not as many priorities as we have today, but some definition of priority would still be useful because, fundamentally, that's a gate for what gets exported — what gets sent to the manager — and that translates to physical overhead that is controlled by that option. So that option would probably remain there, no matter what.
A
Yeah, and about the priorities — whether they have a use or not — the only other way I see them being used, aside from deciding what gets sent to the manager, is the ceph daemonperf command. As we've said, the priorities decide what order the counters show up in when you run the ceph daemonperf command — the ones with the highest priority show up in the list first, and then going down. But I don't know who really uses that, beyond the daemonperf command — I don't...
A
In anything I've seen with the perf counters and the priorities, the meaning of priorities is not super clear, and I don't think it was meant to have a super important meaning in the beginning, when it was introduced — and then it's kind of become something that we are depending on.
A
So aside from the effects on the daemonperf command, I think that having a more explicit tag, as Ernesto said, makes a lot of sense, since the meaning of the perf counter priorities is kind of muddled at this point.
A
And then just one more thing before we — unless anybody has any other topics, or questions, or comments to make about this — I always like to end these kinds of things with an action plan, a clear action plan. So how do we want to start this work? Do we want to start out by reaching out to the user lists or the dev lists, or how do we want to start?
C
I think the stability is the dependency for everything else. I would start from that — I would start from reaching out to the Ceph users and...
A
Good — I think that's a good place to start, just to get some initial feedback from the users, and I can take on sending out that email. And those of you on this call — Radek, Ilya, Ernesto — I can check with you guys first, before sending out the email, to make sure it sounds good.
A
Thanks a lot. Yeah, of course — and whatever happens, we should just always have a clear plan going forward, and I don't want this to end up — because this is so important — I don't want this to end up as just another conversation we had on a CDM that didn't go anywhere. So I like this idea where we're starting out by mailing the user lists, and we'll just keep going forward with clear intentions.
A
Sounds good. Does anybody have any wrapping-up comments for this topic?
A
E
Yeah, I just want to be clear about the implementation for this, like sending all the perf counters to the mgr. I want to make clear what exactly that means, because there are two types of things here. One, which one could mean, is the mgr module side, where we fetch all the perf counters through a call on the Python side. The other is what we actually see on the C++ side, where all the perf counters are sent here. So I just want to make sure what the proposal would be based on.
E
If we follow the allow-list thing, I mean, you can share my screen, I just want to make clear how there are two different problems here. One problem is sending perf counters to the mgr; basically, that's one problem, how to reduce that. Another problem is all the different modules consuming those perf counters. That's kind of a different problem, because they are using the mgr module call, and if somebody wants to reduce that, they can use the allow list.
E
Also, I guess, an allow-list dictionary where a module says it just wants these types of counters; they can also filter those out from the perf counter call. But that's a different problem, which I just wanted to explain, yeah.
E
I can quickly share my screen just to clarify. So basically, I first want to make clear how the perf counters are actually coming from the different sets of daemons. These are just examples, rgw, etc. Each daemon has an mgr client; the mgr client collects all the perf counters, and it's kind of a report which is sent to the mgr. This is sent on a stats-period basis, which is by default around five seconds or so. So every daemon you have will send to the mgr client and then on to the mgr.
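The flow just described can be sketched roughly as follows. This is a hypothetical illustration of the shape of the mechanism, not Ceph's actual C++ mgr client; class names, the report format, and the `stats_period` default are assumptions taken from the discussion above:

```python
# Illustrative sketch: each daemon's "mgr client" gathers that daemon's
# perf counters and ships a report to the manager on a fixed stats
# period (around five seconds by default, per the discussion).
import time

class MgrClient:
    def __init__(self, daemon_name, get_counters, stats_period=5.0):
        self.daemon_name = daemon_name
        self.get_counters = get_counters   # callable returning {name: value}
        self.stats_period = stats_period
        self.sent_reports = []

    def build_report(self):
        # Bundle the daemon's current counter values into one report.
        return {"daemon": self.daemon_name,
                "counters": dict(self.get_counters())}

    def send_report(self):
        # In the real system this would go over the wire to ceph-mgr;
        # here we just record it locally.
        self.sent_reports.append(self.build_report())

    def run(self, iterations, sleep=time.sleep):
        # Periodic loop: one report per stats period.
        for _ in range(iterations):
            self.send_report()
            sleep(self.stats_period)

client = MgrClient("rgw.0", lambda: {"req": 1}, stats_period=5.0)
client.run(iterations=3, sleep=lambda s: None)  # skip real sleeping in the demo
```

With every daemon in a cluster doing this every few seconds, it is easy to see how the aggregate becomes a heavy load on the single mgr, which is the point made next.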
E
So it's a huge load on the mgr right now. This is basically on the C++ side, how it's being done. So if we really want to reduce the load on the mgr, then things need to be done here, somewhere in how this report is being sent. Else, if we just want to go with how the modules consume it, then, well, here...
E
Basically, this is the perf counter fetch which is being done in the Prometheus module, for example, and similarly in the telemetry module. So basically, we could just have a kind of filter which says: I just want to fetch only these counters, or this type, or any regexes that we can have, just these counters.
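The consumer-side filter suggested here could look something like the following sketch: a module declares an allow-list of exact counter names plus optional regexes, and only matching counters come back. The query interface is an assumption for illustration, not the actual mgr module API:

```python
# Illustrative sketch of module-side filtering: fetch only counters that
# are explicitly allow-listed or that match one of the given regexes.
import re

def fetch_filtered(all_counters, allow=None, patterns=None):
    """Return only counters named in `allow` or matching a regex in `patterns`."""
    allow = set(allow or [])
    regexes = [re.compile(p) for p in (patterns or [])]
    out = {}
    for name, value in all_counters.items():
        if name in allow or any(r.search(name) for r in regexes):
            out[name] = value
    return out

all_counters = {
    "osd.op_r_latency": 3.1,
    "osd.op_w_latency": 4.7,
    "osd.cache_bytes": 512,
    "mds.request_rate": 120,
}

# Keep only OSD op-latency counters plus one explicitly named counter;
# everything else (osd.cache_bytes here) is dropped.
wanted = fetch_filtered(all_counters,
                        allow=["mds.request_rate"],
                        patterns=[r"^osd\.op_.*_latency$"])
```

Note this only reduces what a consuming module keeps; as the speaker points out, it does not by itself reduce what the daemons send to the mgr, which is the separate, harder problem.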
E
So that's a different problem, if we just filter out from all the perf counters, and I guess there has already been a PR in which Ernesto tried to do something similar, so I'll show that as well. So yeah, basically, you filter out some of the counters, which can be useful on the Prometheus module side. We can do something similar, and so this is each module solving its own problem, whatever it wants.
E
So this is a separate, I guess a different, problem, where you mean to solve the perf counter load, either through priorities or through an allow list or whatever. And the one which I showed before is different, because that is kind of a complex problem to solve, because right now every daemon's perf counters are getting sent to the mgr, and in the CLI that may be useful if someone wants to fetch some value. So yeah, I guess it's maybe necessary or needed for a developer or a user, whatever.
A
Evan, that's really insightful; this is a great visual to have. Is there any way that you could link this, or at least the PR from Ernesto, on the agenda? I think that would be great for us to have, yeah.
A
A good discussion there, I'm glad we actually got somewhere. I was not sure if we would reach any conclusive thoughts, but we seem to have, so good job, everybody. And of course, I'll take care of sending out an email to the user list to get the ball rolling there.
A
Unless anybody has any last comments on this topic, I can move to the last one, which is pretty short, so we won't be stuck in CDM for too much longer.
A
Okay, it doesn't seem like it, but if so, just please speak up. So this last topic on the agenda is about the Ceph ecosystem: building involvement with the Ceph community.
A
So, as we've done in the past, Ceph is participating in the Grace Hopper Open Source Day event. The Grace Hopper Celebration is mainly an event to encourage women to be more involved in technology, and Open Source Day is for people from this event to contribute to Ceph for a day. So essentially, what I wanted to bring up to you guys is in preparation for the event, which is happening in September.
A
We need to have some low-hanging-fruit issues available for the participants to handle. We need to have at least 30, I believe. So, in this link...
A
I've begun tagging low-hanging-fruit trackers, and we have about 60. So it's a lot, but some of them, I noticed, are old or outdated, or, you know, may still be a little bit difficult for first-time users or first-time developers.
A
So I essentially want to make sure that we have a good selection. We have a lot of low-hanging-fruit issues from RADOS and the dashboard, and some manager issues as well, but we are lacking issues from CephFS, RBD, RGW, and even teuthology is a contender there. So if you are aware of any kind of low-hanging-fruit issues, and by that I mean things like code cleanups, or "clarify this logging here", or "there's a compiler warning popping up, can you please fix this?"...
A
Those kinds of smaller issues: please feel free to tag those with the low-hanging-fruit tag, or you can create any new issues that pop up. Those are always kind of hard to create, but I'm sure we have a lot out there that just aren't tagged.
A
So please, if you have a moment, it would be great if you could go through, and if you know of any trackers that would be good contenders for low-hanging-fruit issues, go ahead and tag those. That'll really help out on Open Source Day, and you might get, you know, somebody taking care of these little issues for you that would otherwise take a while to get completed.
A
That's all I had there, and the link I shared: if you tag it correctly, it should show up on that list. But yeah, that's it for me, unless anybody has questions about that, but that's pretty straightforward. I think that's a wrap on this month's CDM. Good discussion, everybody, and I hope you all have a wonderful rest of your week and rest of your month, and I'll see you at the next CDM.