From YouTube: What's New in Pacific Ceph Release - Sage Weil
Description
Sage Weil shares updates in the upcoming Ceph Pacific Release in this Ceph Tech Talk episode.
https://ceph.io/ceph-tech-talks
In the latest Ceph release, Pacific, which is due out in the month of March, just a few weeks away if all goes well. All right, so there are just a few introduction slides here that we'll go through quickly, for those of you who don't know what Ceph is: it's software-defined storage, it's unified storage, scalable, distributed storage. These are sort of the ways that people describe it. Ceph is free and open source software, which means it's free to use, free to modify, and free to share; you're free from vendor lock-in, and you're free to extend and innovate using the system. And Ceph is designed to be reliable, to build a reliable storage service out of unreliable components.
Now, in the month of March, we are almost coming up on the Pacific release date, so in the next few weeks we'll be releasing that. We backport bug fixes to the most recent releases, so Nautilus will reach its end of life shortly after Pacific is released, and you're allowed to upgrade two releases at a time. So you can either jump a single release, or you can jump from Nautilus to Pacific, or Octopus to Quincy.
Whatever it may be. So what is new in Pacific? We're going to break this down into sort of five topical groups, starting with usability; these are, at a high level, the focus areas that we're looking at. On the usability front, most of the work here is around cephadm. cephadm now knows how to deploy automated HA for RGW, so haproxy and keepalived, and there are some details behind the scenes too. You can mark a host in maintenance mode if you know you're going to take it offline, and it'll stop complaining about it. And there's a new exporter/agent mode for cephadm that runs an agent on each node and improves scalability and performance, although that isn't turned on yet by default.
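A rough sketch of what those cephadm pieces look like from the CLI (the hostname and service names here are made up, so treat this as illustrative rather than exact):

    # Put a host into maintenance mode before taking it down, then bring it back:
    ceph orch host maintenance enter node3
    ceph orch host maintenance exit node3

    # ingress.yaml: haproxy + keepalived in front of an existing RGW service
    service_type: ingress
    service_id: rgw.myrgw
    placement:
      count: 2
    spec:
      backend_service: rgw.myrgw    # the RGW service to load-balance
      virtual_ip: 10.0.0.100/24     # keepalived-managed VIP
      frontend_port: 8080

    # Apply the spec:
    ceph orch apply -i ingress.yaml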
On the dashboard, the SMART diagnostics are surfaced, and disk replacement workflows are integrated now. There are some multi-site capabilities in the dashboard as well, so you can monitor the status of RBD mirroring across clusters and of RADOS Gateway multi-site sync, plus improved integration with cephadm and, to some degree, Rook. There's now an official REST API that's fully documented, versioned, and supported; it's actually based on the dashboard's internal API. And there are a bunch of security-related features around setting RBAC policies for logins, secure cookies, sanitized logs, and all that stuff.
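As an aside on that REST API, a hedged illustration of how the versioning surfaces (host, port, and credentials are placeholders; the Accept media type pins the API version):

    # Get a token from the dashboard REST API:
    curl -k -X POST https://mgr-host:8443/api/auth \
      -H 'Accept: application/vnd.ceph.api.v1.0+json' \
      -H 'Content-Type: application/json' \
      -d '{"username": "admin", "password": "secret"}'

    # Use the returned token for subsequent calls, e.g. listing hosts:
    curl -k https://mgr-host:8443/api/host \
      -H 'Accept: application/vnd.ceph.api.v1.0+json' \
      -H 'Authorization: Bearer <token>'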
On the RADOS front, lots of small usability improvements here too. We've improved the hands-off defaults you get when you install a cluster: we use the upmap balancer by default now, which is sort of the most reliable mode and does the best job of keeping utilization even across all your OSDs, and the PG autoscaler has some improvements too.
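On an existing cluster, the equivalent of those new defaults can be enabled by hand; a minimal sketch:

    # Use the upmap balancer (the default on fresh Pacific clusters):
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status

    # Check what the PG autoscaler wants to do, per pool:
    ceph osd pool autoscale-status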
There's also tracing work. This is an ongoing work in progress, but it allows developers and sort of power users to understand better where latency is being introduced in the data path, integrating with the Jaeger tool, which can help with end-to-end performance analysis. On the CephFS side, a bunch of good stuff: multi-file-system support has been around for several releases now, but it's finally been marked stable; all the testing is in place to give everyone confidence around that. And there's sort of a new, cleaned-up entry point for creating new file systems, called volumes.
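A minimal sketch of that entry point (the file system name is made up):

    # Create a file system plus its pools, with MDS daemons deployed by
    # the orchestrator, in one step:
    ceph fs volume create myfs
    ceph fs volume ls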
Although the current client implementation there is still a work in progress. A few other items in the usability and new-feature bucket: on the RBD front, there's a new ability to create an instant clone of an RBD image that's stored somewhere outside of Ceph, for example at an arbitrary URL. Maybe it's sitting in S3; you can clone that URL to an image and immediately start reading and writing from it, and it'll do the copy-on-write flattening stuff in the background.
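If I recall correctly, this rides on the image live-migration machinery; a hedged sketch (the URL, pool, and image names are invented):

    # Prepare an import-only migration whose source is an external raw image:
    rbd migration prepare --import-only \
        --source-spec '{"type": "raw", "stream": {"type": "http", "url": "http://example.com/disk.raw"}}' \
        rbd/newimage

    # The image is usable right away; copy the data in the background, then finish:
    rbd migration execute rbd/newimage
    rbd migration commit rbd/newimage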
There's built-in support in librbd now for LUKS encryption, which is useful for many cloud applications where you don't necessarily want to put the encryption inside the VM, but just want to have it implemented by librbd itself. There's also a native Windows driver now that, I guess, is sort of in a beta phase.
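A short sketch of the librbd-level encryption, assuming Pacific's rbd encryption subcommand (the image name and passphrase file are placeholders):

    # One-time format of an existing image as LUKS2:
    rbd encryption format rbd/myimage luks2 passphrase.bin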
We expect to have signed and pre-built drivers available soon, hopefully on ceph.io, but initial testing shows that it has been stable and performant. There are also some improvements to the rbd-nbd daemon: you can restart it now. Previously a restart would break the mapped devices; now you can restart the daemon without breaking the images that are currently in use.
For RADOS Gateway, there's some integration of Lua scripting into the RGW request path, so you can add custom caching logic and redirect logic; lots of work going on there. And D3N is a caching capability that is under heavy development by the RGW folks; some initial implementations are available now in Pacific. On the quality front, lots of work, as always, is going into RADOS to make the system more robust and reliable, including PG deletion.
The ceph-mgr is doing a better job managing CPU: progress module usage issues have been resolved, and some efficiency problems have been fixed. Also, simple but useful, we've added the ability to view the wear level, as reported by the SSDs in the system, in the device listing output, so you can very easily get a bird's-eye view of what the wear level is on those devices.
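For example (device IDs will vary per cluster):

    # List devices known to the cluster; Pacific includes SSD wear level here:
    ceph device ls

    # Drill into the health metrics for one device:
    ceph device info <devid>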
On the CephFS side, lots of stuff here too. There's the ability to turn feature bits on and off for certain file system features, so you can prevent older clients from connecting if the file system is using newer features.
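I believe the knob for this is the required-client-features command; a hedged example (the file system and feature names are illustrative):

    # Refuse clients that lack a given feature bit:
    ceph fs required_client_features myfs add metric_collect
    # ...and relax it again:
    ceph fs required_client_features myfs rm metric_collect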
The online scrub has been improved so that multiple MDSs can scrub in parallel in a collaborative fashion, there's improved support for messenger v2.1, and on the kernel side there's support for recovering mounts after they've been blocklisted.
So if a client has been kicked out of the cluster because it was disconnected, it can then reconnect and try to recover its session, without having to reboot the box or redo the mount. And there's been an improvement of test coverage, expanding the test matrix.
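Going back to that blocklist recovery for a moment: on the kernel side it's opt-in at mount time, if I remember right (the monitor address and client name are placeholders):

    # Allow the kernel client to recover its session after being blocklisted:
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=myclient,recover_session=clean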
On the reliability of the system: the telemetry and crash reports project has been going on for a while, mainly focused on learning more about Ceph clusters that are deployed in the wild and what bugs users are encountering. There is a whole series of public dashboards that have expanded immensely over the past year; I encourage you to take a look. You can learn all sorts of things about what versions people are deploying, how big clusters are, and so on.
Your cluster will only send data to the developers if you opt in, and if we ever change the information that's being sent, you have to re-opt in; we definitely want to make sure that we're respecting everyone's privacy here and avoiding any concerns. Most of the initial work on analyzing that data has been around the crash reports.
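Opting in, and seeing exactly what would be reported, looks roughly like this:

    # Preview the telemetry payload, then opt in (accepting the data-sharing license):
    ceph telemetry show
    ceph telemetry on --license sharing-1-0

    # The crash-report channel can be toggled separately:
    ceph config set mgr mgr/telemetry/channel_crash true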
Most of that crash-report work has been about mapping crashes back to bug trackers, and there's work right now to try to put that data into Sentry. Then, of course, there's also the device dashboard that I mentioned, which gives you all kinds of insight about what kinds of storage devices people are using. On performance, lots going on, as always, to make things faster. So on the BlueStore front, the primary storage backend for OSDs, lots of work landed in Pacific.
On the QoS front: we've been talking about and working on QoS in Ceph for a very long time, and we've finally landed pieces of it that can be used in production systems now in Pacific, which is exciting. So, phase one, we're using the mClock algorithm to schedule between recovery and background work and client I/O.
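A minimal sketch of turning that on, per the Pacific options (the profile name is one of the built-ins; check the docs for your version):

    # Switch the OSD op queue to mClock and bias it toward client I/O:
    ceph config set osd osd_op_queue mclock_scheduler
    ceph config set osd osd_mclock_profile high_client_ops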
Eventually, we want to use the same algorithm to also be able to do QoS between the different clients that are using the system, and not just between background work and client workloads, so we still have our eyes looking forward there. The Crimson project has been going on for several years now; this is essentially a high-performance rewrite of the OSD data path.
Lots of progress there: recovery and backfill are now implemented, which is very exciting, and scrubbing is mostly there. The code has been refactored on the classic OSD and is now being integrated with the Crimson OSD. And then there's SeaStore, which is going to be the new storage backend for Crimson, targeting both ZNS SSDs and traditional SSDs.
On the CephFS front, several performance changes there. Ephemeral pinning is sort of a policy that pins certain subtrees to particular metadata servers, so they don't bounce around and they stick where you want them to go, making it easy to distribute the workload across the servers; that's in place now, and it's working and improving performance and scalability. There's improved cache management by the MDS for large clusters, based on experience with large clusters like those at CERN, and some throttling for client workloads to make sure that works correctly.
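Ephemeral pinning is driven by extended attributes on directories; a hedged example (the paths are made up):

    # Distribute the immediate children of /home across the MDS ranks:
    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home

    # Or pin subtrees probabilistically:
    setfattr -n ceph.dir.pin.random -v 0.1 /mnt/cephfs/tmp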
And then the support for asynchronous unlink and create operations, so that the client doesn't have to wait for a full round trip to the MDS, has been ongoing for a while now; there's been partial MDS support since Octopus, with ongoing fixes and improved testing. Upstream kernels now have support for it, and the CentOS 8 based kernels do as well, so there are big improvements for rm -r type workloads and for untarring lots of small files.
On the multi-site front, the big item here is that CephFS now has a snapshot-based multi-site mirroring feature. You can set up replication targets on remote clusters for any directory, and a cephfs-mirror daemon will take periodic snapshots on the source cluster and then migrate them to the remote cluster. Those daemons are managed either by Rook or by cephadm.
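Roughly, setting that up from the CLI looks like this (file system, peer, and path names are invented; consult the Pacific docs for the peer bootstrap details):

    # On the source cluster:
    ceph mgr module enable mirroring
    ceph fs snapshot mirror enable myfs
    ceph fs snapshot mirror peer_add myfs client.mirror_remote@remote myfs
    ceph fs snapshot mirror add myfs /projects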
For RGW: the current RGW multi-site supports lots of things. You can federate multiple sites, you can have a global bucket and user namespace, and you can do asynchronous data replication at zone granularity. In Octopus we added bucket-granularity replication as well, but it was sort of in an experimental state because the code was pretty new; with Pacific we've added all the testing in QA to move that bucket-granularity replication out of experimental status. So you can say, for this individual bucket,
I want to link between this site and that site. Also, primarily in this release cycle, there's been lots and lots of refactoring and code cleanup, building the foundation to support bucket resharding in a multi-site environment and a whole host of other features that we have planned around multi-site.
A lot of these changes are sort of stacking up and didn't merge in time for Pacific, but most of them don't add user-visible features yet anyway; it's mostly just laying the groundwork for future work. So a lot of that is going to merge in sort of the first phase of the Quincy cycle, enabling more new stuff in Quincy. And the last category: integrations, ecosystem, and so on. So, on the Rook front:
there were sort of corner issues to deal with there, with asymmetric network failures and so on, but that is now present in Pacific, which is exciting, and supported by Rook. And CephFS mirroring, as I mentioned, is new in Pacific, and Rook has full support for it out of the gate, so you can configure it via CRDs.
On the RBD front, there's support for dm-crypt encryption with Vault key management integration. PVs can be snapshotted and cloned, and that drives RBD snapshots and clones, so that's fully integrated with CSI. Topology-aware provisioning is new, and there's also integration with the snapshot-based mirroring; that's still in progress, but coming soon. And some other miscellaneous items:
There's been some effort this year to remove instances of racially charged terms from the code base, the CLIs, and so on. So things like "blacklist" and "whitelist" are replaced with "blocklist" and "ignorelist". There weren't very many instances of master/slave, but we've pulled them out. In a few cases APIs and CLIs were affected; the old calls are deprecated and will be removed in the future.
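For example, on the CLI side:

    # Formerly "ceph osd blacklist ..."; the old spelling is deprecated:
    ceph osd blocklist ls
    ceph osd blocklist add 192.168.0.10:0/0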
There are ongoing documentation improvements. As many of you are probably aware, the Foundation has hired a documentation contractor working full-time just improving and cleaning up the documentation, so there are ongoing improvements there. And there's also a website redesign project that's been ongoing for the main Ceph website, moving away from WordPress to a static site generator built out of a repository in git, which we hope is going to launch this spring.
The design is together; we're just sort of finalizing the information architecture and filling in content, but we're excited about that. Other sort of ecosystem things: we've been talking about arm64 for years, but we had yet to achieve reliable builds with arm64 packages for all of our releases. We finally have a bunch of hardware that is actually fast enough, and enough of it, to do that, both on the release side and also for CI, so all of our CI builds are producing arm packages and containers now, and we're excited about that.
I think Pacific will be the one where we actually have all the proper artifacts for arm, so fingers crossed. We still have limited capacity to do QA and regression testing on arm, just because we have a lot of x86 testing hardware and we can't do all the same things on arm that we do on x86; but we have some, and so there'll be some testing there. And that's pretty much it for Pacific.
I can't really tell you what the next release is going to be yet, because we haven't done the planning, but we do have some virtual events planned for the spring. The Ceph Developer Summit, of course, is going to happen sometime shortly after Pacific is out, so we can do that Quincy planning and figure out what we're going to build next. That will have the traditional format that we've been using for years: scheduled topical sessions, all online, video chat, remote, virtual, whatever.
It's a chance to compare notes about what's going on, so we're hoping to make that a useful, meaningful way for Ceph users to interact with each other. We'll probably have lightning talks scheduled in there somewhere, and this would all, of course, be fully virtual, video chat, and recorded for people to look at later. And not too much time at once, so not going past, you know, a couple hours a day; we don't want to overwhelm people. And that's it, thanks!
Thanks. Somebody asked a question in the chat: what's the issue with rbd-nbd restarting in prior versions? My understanding is that if the rbd-nbd daemon stopped, then the block device would go away and you would have to start up a new one.
If you restarted the daemon, it would create a new block device, but the thing that was using the old block device would be hung. The change is that, as I understand it, you can now restart the daemon and the block device that you're already using will continue to function once the daemon is back.
Yeah, there's so much angst and confusion around the whole CentOS change, and I think it's, I don't know, a little bit out of proportion. From our perspective, we just need a stable operating system image to put inside the container, and CentOS Stream is going to fit that bill just as well as the latest point releases did.
So, let's see: what is the status of cephadm, and how big of a cluster is it designed for? So, cephadm has been supported since Octopus; in Pacific it's got lots of improvements, and we've also been backporting stuff. It works pretty well.
The main issue is that when your clusters get really big, it can be kind of laggy in terms of hoovering up the status of whether containers are running, for example, across all the nodes, because it goes node by node. So I'm not sure I would use it on a cluster with more than, like, 30 nodes currently; I don't know, somewhere in there. But there's the cephadm exporter work, which is like 90% done.
That basically runs an agent on every host that does all the local scraping, and then it's a very fast process to just query all the agents and get the latest status, so that will vastly improve the scalability, and I think there won't be any practical limits; it should be fine. There are just a few issues that we need to deal with,
as far as making sure things work completely correctly with that mode, before we switch it to be the default. I'm not certain it's going to be backported to Pacific; it probably will be as soon as it's done, but yeah, we'll see once it's done.
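For the curious, the agent mode appears to be gated behind an mgr option along these lines; treat the option name as an assumption and check the docs for your exact version:

    # Hypothetical toggle for the cephadm agent (not yet the default):
    ceph config set mgr mgr/cephadm/use_agent true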
On Crimson, the timescale is, I think, the consensus is something like next year for an early prototype, and then it will be more time as features get added. For instance, at that point it will only support replicated PGs, and some elements of CephFS or RGW may be partially supported.
We would hope to have at least some kind of SeaStore next year, though.
Any plans on deduplication? Yes. That didn't make it into these slides because nothing really landed in Pacific; except, actually, one thing did land in Pacific. There's a tool called ceph-dedup-tool.
I think it's in the ceph-test package, one of the sort of extra packages. You can basically give it the name of a pool, and it will do a bunch of reads and sample the data that's in that pool, and then it'll estimate what dedup ratios you would be able to achieve with different chunk sizes and chunking parameters. So there's sort of some underlying work around the chunking algorithm, but one of the next steps is to figure out how effective it would be on actual data deployments.
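A hedged example of an estimate run (the pool name is made up; flags per the tool's help output):

    # Sample a pool and estimate dedup ratios for a given chunking setup:
    ceph-dedup-tool --op estimate --pool mypool \
        --chunk-size 8192 --chunk-algorithm fastcdc \
        --fingerprint-algorithm sha1 --max-thread 4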
So it would be great if you ran that dedup tool on the actual data set that you're thinking of deduplicating, and you can sort of see what kind of ratios you would get. But it's a trade-off: if you have chunk sizes that are smaller, you get better ratios, but then the read I/O is much slower, because you're reading all these little pieces that are spread around the cluster; whereas if your chunk sizes are large, read I/O is much better,
but then your ratios aren't as good. So that tool will basically tell you what that means and what you'd get out of it. As far as actually storing deduplicated data, there are sort of two paths forward. One is to make RADOS Gateway directly store data in a deduplicated RADOS pool, and there's a relatively modest amount of work that would need to be done in RADOS Gateway to support that. Then there's a separate track,
which is basically making an arbitrary RADOS pool, with any consumer, be deduplicated: sort of the new, fresh, active data would be stored in the normal replica-3 standard format, and then, as the data gets cold, it could be pushed out into a content-addressed, deduplicated pool. That work has been ongoing for a while, but it's still not stable and needs additional work.
Yeah, that was the first part I mentioned. It's been sort of on my mental roadmap for a while; I'm actually not sure if anybody has written any code yet, but I think it's a relatively small amount of code to write in order to support it. So, as part of the Quincy planning, we should have a conversation with the RGW folks and decide.
I don't anticipate problems with wanting to split a hard drive into multiple OSDs. I'm not sure, though, that it would be a good idea, because I don't think we're gonna...
I think, actually, I don't know, it hasn't been tested, so maybe it wouldn't work, but in theory you could have a mix of v4 and v6, though a single OSD can't have both currently. A lot of the foundational framework is now in place, so you could conceivably have OSDs binding to both v4 addresses and v6 addresses to support both v4 clients and v6 clients. I don't know how important that really is.
I haven't actually heard a real user say that they need it, and so I haven't worried about it, but it could be done. I think it's mostly a not-too-extensive amount of development and then a whole bunch of testing to make sure things still work, but that hasn't been done.
Is there news regarding Ceph mon backups? They hold the dm-crypt keys, in case of power outages. So, in general, you can't really back up the mons, because then you have stale mon information, and stale data isn't especially useful. But things like the dm-crypt keys are easy to back up: you can just run the config-key dump CLI command, dump them all out to a JSON blob, and put that wherever you want.
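That backup amounts to a one-liner (the destination path is up to you):

    # Dump all mon config-key data, dm-crypt keys included, as JSON:
    ceph config-key dump > config-key-backup.json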
I don't think so. Yeah, I think we can cover some of these topics that people are interested in, but that we haven't delivered in Pacific, in our CDS sessions coming up for Quincy, and then we can prioritize based on that.
Yep, yeah. And maybe now's a good time to mention: if there are specific things that you need that we don't do yet, this would be a good time to make your voice heard, by emailing the dev list or asking on IRC or whatever, so that we know what's important to users when we do our planning.
Oh, that's not the right one, sorry. And also, if you haven't turned on telemetry on your cluster, you should do that.