From YouTube: CDS Pacific: Orchestrator, cephadm, rook
A
Perfect. So if you have any discussion items, feel free to write them down.
A
So, one — cephadm is, of course, the one with experience with this. The first item is: can I describe the cluster in just one file and use it to create the complete cluster? I think at least for cephadm all of this is already working using the pull request, which is kind of generic and should also work for Rook, at least partially, with a very reduced feature set.
B
Actually, yeah. Somebody has a point here that we also need the host add stuff — I wonder if you'd want something like a host map first, or a host list.
E
Yes, to put some context on this: I have been talking with the OpenStack people this past week, and this is one of the features that they would like to have — to have only one file to describe the cluster, the whole cluster. And what they even wanted to do is to have this file as a declarative description of the cluster, in order to always keep the cluster in the same state.
B
It makes sense for it to be declarative. Would it be just with the spec, except — I wonder about all the labels. If you do, what is it, "ceph orch apply -i"? Is that the thing? Yeah.
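To make the single-file idea concrete: the kind of spec being discussed is a multi-document service-spec file fed to "ceph orch apply -i". The sketch below builds such a file and applies it; the host names, labels and counts are invented for illustration, and the exact spec fields should be checked against the current cephadm documentation.

    import subprocess
    import yaml

    # Hypothetical single-file cluster description: host entries plus a few
    # service specs, written as one multi-document YAML file.  All hostnames,
    # addresses and counts here are placeholders.
    cluster_spec = [
        {"service_type": "host", "hostname": "node1", "addr": "10.0.0.1", "labels": ["mon", "osd"]},
        {"service_type": "host", "hostname": "node2", "addr": "10.0.0.2", "labels": ["mon", "osd"]},
        {"service_type": "mon", "placement": {"label": "mon"}},
        {"service_type": "osd", "service_id": "default_drives",
         "placement": {"label": "osd"}, "data_devices": {"all": True}},
        {"service_type": "rgw", "service_id": "myrealm.myzone", "placement": {"count": 2}},
    ]

    with open("cluster.yaml", "w") as f:
        yaml.safe_dump_all(cluster_spec, f)

    # Apply the whole cluster description in one step.
    subprocess.run(["ceph", "orch", "apply", "-i", "cluster.yaml"], check=True)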
B
I think then cephadm would have to be all changed up, right? Because it would have to — like, if we get rid of that container as the sort of generalization point. It is a little bit slow to generate the container — it takes a minute to copy the binaries around — but I think that's still a reasonable turnaround time. What it would buy you is the ability to run something like RGW, specifically like running multiple RGW zones and so on.
B
I think the main caveat here is that it works on my machine because I just put CentOS Stream on my dev box, and so it matches the upstream container, and so everything just works. If you're on a different distro, you'd need builds inside a container — if we figure out that part of the workflow, maybe that's a reasonable price to pay. In order to do this, we'd have to offer CentOS, or get an upstream image that you can base it on that's built on whatever your distro is. That would be neater, sorry.
B
Haven't tried it yet, in which case yes — but the way that cpatch works is it takes the upstream image, it just pulls it from Quay, and it just copies stuff from your build directory on top of it. But it just has to match, basically.
A
So, are there any other cephadm-related discussion items? One could think about teuthology, for example — improving the teuthology integration. But what would be the point?
A
Okay — services for backport.
B
Okay, so I was just thinking: NFS is a work in progress, but once it's done we should backport it. iSCSI is also in progress, and it kind of feels like any of these things that we add that it doesn't yet support, you might as well backport them, because we're sort of —
B
It's because you basically have to mount a filesystem that's on the device, and you have to mount it at the location that's also the container's bind mount or whatever, and so it's just awkward to figure out where that should happen. All the other daemons have a directory that exists in /var/lib/ceph, then the FSID and then the daemon ID, and that is sort of the container's home and it's bound into the right location and so on — and for a FileStore OSD it just doesn't work quite right.
B
Well, because BlueStore doesn't work that way. With BlueStore there's a directory that just has like a handful of flat files, and when you start the container it puts those in the right location, and then you start up and there are symlinks to the block devices and the metadata. With FileStore, though, there's — yeah, it's just a filesystem that's already mounted and exists on the host, and you bind it into the container.
B
But the directory already exists and has to have stuff in it — the directory has to exist and have the bits that it actually runs with. So maybe that could work, but then we'd have to mount it over itself. Whatever — it just mounts in the wrong place; I think it doesn't quite work, so you have to figure that out. Yeah, it's awkward.
F
I think there is an activation sequence being done by a container. It is actually done by — I have to remember, so bear with me. I think it is activating the device through an activation init container, and then the OSD directory is already mounted, so when you start the container you just use that bind mount. Wait — well, I'd have to look, sorry, I don't remember exactly how it's done, I think.
B
I mean, cephadm basically has this assumption that that directory is the data directory, that all containers sort of behave in a similar way, and it maps that directory to that spot — and it just happens to be the same location that you also have to mount on top of. So it was just a bad choice, in that sense.
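For context on the directory layout being discussed: cephadm keeps each daemon's data under /var/lib/ceph/<fsid>/<daemon-type>.<id> on the host and bind-mounts it to the path the daemon expects inside the container. A minimal sketch of that mapping, with an invented fsid and OSD id, assuming the conventional in-container path /var/lib/ceph/osd/ceph-<id>:

    # Sketch of how a cephadm-style daemon data directory maps into the
    # container.  The fsid and OSD id are placeholders; the in-container
    # path follows the conventional /var/lib/ceph/osd/ceph-<id> layout.
    def osd_bind_mount(fsid, osd_id):
        host_dir = f"/var/lib/ceph/{fsid}/osd.{osd_id}"       # BlueStore: a few flat files + symlinks to block devices
        container_dir = f"/var/lib/ceph/osd/ceph-{osd_id}"    # where the OSD expects its data dir
        return ["-v", f"{host_dir}:{container_dir}:z"]

    print(osd_bind_mount("9f2c4e1a-ffff-4444-8888-000000000000", 3))
    # ['-v', '/var/lib/ceph/9f2c.../osd.3:/var/lib/ceph/osd/ceph-3:z']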
H
And one of the things we talked about for FileStore was doing reformats — reformatting to BlueStore — and it seemed like that kind of sequence would work here as well. Does that require actual FileStore support in cephadm first, before the reformat? Yeah, I think so, yeah.
B
Well, maybe not — which just kind of leads into the adoption topic. Originally the idea was that adopt would take any sort of existing daemon, it would just basically rename the directories, and when you restarted it, it would start up in a cephadm-style container. But the reality is that you only really need to do that for the monitors, because they have actual data, and for the OSDs, because they have actual data, and maybe for the first manager, just so —
B
— you actually are running the right manager code and the cephadm module. But all the other daemons are stateless, so you could just delete the old ones and provision new ones. In fact, you probably want to do that anyway, so that the naming is in the new style and consistent with everything else. And the inventory that cephadm fetches will show you all the legacy stuff, so it could be that you could automate it.
B
— except that logging into every host and running the command manually is just sort of silly. So if that were automated, cephadm could be aware that the FileStore OSDs are there, and it could delete them one by one and reprovision those disks as BlueStore, I guess — do the conversion that way without actually supporting FileStore explicitly.
E
What I mean is: if the user input stays the same — for example the spec file or placement file for a cephadm service — what is going to happen? And the other thing — there are a lot of questions about that — is the resilience of the system: what is the target? For example, if we specify an RGW service with a certain placement, okay, on a certain node —
E
Okay, but it doesn't matter whether it is RGW or another service. What I mean is: when you have defined the specification of the service, with one or more services, okay, with placement on certain hosts, okay — what is going to happen if some of the containers die? Is cephadm going to restart them again, continuously or not?
B
Right. So my personal cluster at home had a bunch of early, early, early BlueStore OSDs that were created in like Kraken, or possibly even pre-Kraken, and I got cephadm adopt to adopt them.
B
It's a little bit fragile, because basically when you do the adoption it copies stuff out of the /etc/ceph ceph-volume — or ceph-disk, whatever, the "simple" mode, that weird ceph-volume flat-file thing — and puts it in the OSD directory. So it works: once you adopt it, it'll work and it will start. But if you were to delete that directory — say you have a new disk, or you take that same disk and plug it into a new host —
B
I'm not sure that there is a working procedure to rehydrate or recreate a container, whatever, so that it will start. It might be a two-stage process where you have to do the ceph-volume simple scan, whatever it is, on the host and then adopt it again, or something like that. I think it's fine, because these are pretty rare — very early BlueStore — and since then almost everybody who deployed BlueStore used ceph-volume, so I think it's okay.
B
So, if you're adopting an arbitrary existing cluster: a lot of old clusters — almost all old clusters — were generated with ceph.conf files on every host to configure all the random stuff for the daemons, and when you do the adoption it basically sort of ignores that, skips it completely, and assumes that everything is already in the cluster config database. Oh — it needs to, I'm —
B
The first step is probably to import all of this. If you look at the adoption document or whatever, the first step is to get all your configs into the cluster and then go and adopt all the daemons. I think this whole thing makes the whole idea of an automated adoption sound nice, but it's also a little bit scary, because you can have clusters of like any vintage, with such a wide variety of deployment tools.
B
Well — see what doesn't work well and what the pitfalls are, and then once we have some confidence that the procedure works, then look at automating it. But I think that documentation could still use some simplification and streamlining, because there's still this thing where you can adopt all the old stateless daemons like MDSes and RGWs and so on, but it's probably better just to delete them and deploy new ones, and it's hard to write out the procedure to do that.
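For reference, the sequence described here — import the legacy ceph.conf options into the cluster config database, then adopt the stateful daemons — could be scripted roughly as below. The "ceph config assimilate-conf" and "cephadm adopt" invocations follow the documented adoption procedure; the daemon names in the loop are placeholders.

    import subprocess

    # Rough sketch of the adoption flow discussed above: pull legacy config
    # into the central config database first, then convert each stateful
    # daemon (mons, mgrs, OSDs) into a cephadm-managed container.
    def assimilate_conf(conf_path="/etc/ceph/ceph.conf"):
        subprocess.run(["ceph", "config", "assimilate-conf", "-i", conf_path], check=True)

    def adopt_daemon(name):
        subprocess.run(["cephadm", "adopt", "--style", "legacy", "--name", name], check=True)

    if __name__ == "__main__":
        assimilate_conf()
        for daemon in ["mon.node1", "mgr.node1", "osd.0", "osd.1"]:  # placeholder daemon names
            adopt_daemon(daemon)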
A
Okay, telemetry: does it report on the specs? As far as I know, one of the things telemetry reports is which of the settings were changed, right?
J
And you're asking the question: do we want to make it work so that drive groups create OSDs on PVCs? Because — I mean, I guess the way I see it, to get that working, the drive group request comes in to the orchestrator, the orchestrator now has to go generate PVs — like local PVs — based on the request, and then generate the correct request, or CR, that will go consume those PVs.
J
So, I mean, it could work, if the manager has an opinion about how to create those PVs and is able to translate from drive groups into PVs — I mean, it's a bit complicated, but it could work. I wonder if it's not worth making that whole thing work on PVCs, because then the manager has to be opinionated about how to create those local PVs. I don't know — it needs a lot more thought.
J
Probably — well, I guess PV creation: if you're in a bare-metal environment, Rook's current position is that you create the PVs before you go tell Rook how to consume them. So if we can make that same assumption with drive groups, I think it'll fit nicely into Rook's model and we'll keep it simple. If we try to bring the local PV creation into Rook — or into the orchestrator — that's where I would worry more.
B
Then, in this general case, we have an existing storage class and you just want to provision OSDs on top. But then I think we need to decide what the drive group looks like, because it's not — I think drive groups are currently imagined so that the devices already exist and you're just picking and selecting them, whereas here we have to say something like "create this many per host", I think, or something like that.
J
The thing with using storage classes and PVCs is that all you need to tell Rook is basically which storage class you want to use to create the OSDs, and then it goes and requests it. So at that point you wouldn't really define the properties of the devices we're going to go look for; you just have to tell us what the storage class name is, and —
F
But I feel like the OSD spec could be really similar — maybe even identical — to what we do with device sets already in Rook when we provision on PVCs, because at the end of the day the translation will have to be really simple. It's almost a one-to-one mapping at this point, whether it's a new section of the drive group spec or the OSD spec.
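As a rough picture of that near one-to-one mapping, a PVC-style OSD spec could translate into a Rook storageClassDeviceSets entry along these lines. The field names follow Rook's device-set format as I understand it; the input spec shape (storage_class, count, size) is invented for the sketch.

    # Hypothetical translation from a PVC-style OSD/drive-group spec to a
    # Rook storageClassDeviceSets entry.  The input keys are made up for
    # illustration; the output mirrors Rook's volumeClaimTemplates layout.
    def to_device_set(spec):
        return {
            "name": spec.get("service_id", "set1"),
            "count": spec["count"],  # number of OSDs to create from this set
            "volumeClaimTemplates": [{
                "metadata": {"name": "data"},
                "spec": {
                    "storageClassName": spec["storage_class"],
                    "accessModes": ["ReadWriteOnce"],
                    "resources": {"requests": {"storage": spec.get("size", "10Gi")}},
                },
            }],
        }

    print(to_device_set({"service_id": "ssd-osds", "storage_class": "local-ssd", "count": 3}))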
B
It might confuse the current dashboard stuff, because the generator code in python-common and the dashboard sort of assume that you feed in an inventory, you apply the drive group spec across it, and then it tells you what OSDs it would have used. But this is different.
J
Yeah, well — now that the audio is working — I'm definitely curious about this item. In Rook today we don't support taking over Ceph clusters, or we haven't tried to figure out how to document it or anything, just because it's such a challenging problem to go install Kubernetes on all the nodes and then start Rook, with all the assumptions we make about how we create an opinionated Ceph install.
B
The external cluster support removes a lot of the pressure, I think. Actually, cephadm will remove a lot of the pressure too, because people will get most of the capabilities of Rook — like the orchestrator abstraction and so on — and so that really will be good enough, I think, for most cases. It also means that if we ever do want to do this conversion to Rook, we can do it only from cephadm to Rook, and then everything else converts to cephadm first.
B
On the flip side, the way the adopt stuff works, it has a "style" of deployment for anything that's not cephadm, and currently "legacy" is sort of the traditional /var/lib/ceph sense, whatever. It may be pretty easy to add support for rook — wherever Rook is putting the data directories — to the adopt, so you could take a Rook-deployed cluster and adopt it into cephadm.
B
Probably the hardest part would just be getting the initial — the initial keys out of it.
A
So I think — the Rook orchestrator —
A
Kubernetes pods can be run on specific hosts, right, and we could manage that from the orchestrator — though I don't know if you'd want to do that. And the other option is to actually add new physical hosts to Kubernetes itself.
J
So what I was going to say is that the way I see this meaning for Rook is that you've got a label for where you want to deploy all your daemons, or your OSDs, or whatever. So you can have the labels on the nodes, and adding a node to Rook — or removing it — means adding or removing the label that's being used there, and then we would, for example, add OSDs onto the node where the label was just applied.
B
So I wonder — it seems like if you have a case where a Kubernetes cluster has 100 nodes and Rook is running on four of them, there are sort of two ways to look at it. One is where you do "ceph orch host ls" and you see those four nodes: that's the entire world from Rook's perspective, and in that case adding and removing hosts would mean adding some magic label that expands the subset of hosts that Rook views as its world. The other way would be, when you do "orch host ls" —
B
I'm not sure which one — not sure one is strictly better than the other; they're just different ways of viewing the same situation. But it might make sense to think about it in the context of how we want to think about host labels in general. I did spend a little bit of time last week trying to figure out how to do this, and I finally got to the point where I could poke the Kubernetes API in the right way.
B
The way the orchestrator API views labels is that a label doesn't have a value — it's just a name, like a sticker, I guess. The Kubernetes labels have values; they actually match based on a value, and it's not a set. So you couldn't have, you know, a "ceph-labels" key equal to a list of comma-separated labels or something like that — you'd have to have one per label.
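To make that mismatch concrete: orchestrator labels are bare names, Kubernetes labels are key/value pairs, so one plausible encoding (purely an assumption for this sketch, not anything Rook currently does) is one Kubernetes label key per orchestrator label, plus node-affinity expressions that only test for the key's existence.

    # Sketch: encode name-only orchestrator host labels as one Kubernetes
    # label key each (the "ceph-label/" prefix is invented), and build the
    # matching node-affinity expressions.
    PREFIX = "ceph-label/"

    def to_k8s_node_labels(orch_labels):
        # ["mon", "osd"] -> {"ceph-label/mon": "1", "ceph-label/osd": "1"}
        return {PREFIX + label: "1" for label in orch_labels}

    def to_match_expressions(orch_labels):
        # Select nodes that carry all of the given labels, ignoring the value.
        return [{"key": PREFIX + label, "operator": "Exists"} for label in orch_labels]

    print(to_k8s_node_labels(["mon", "osd"]))
    print(to_match_expressions(["mon"]))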
J
So when you have a host that you're adding — so you say "ceph orch host add", I guess — what does that mean conceptually? Does that mean I'm adding a host that I expect any of the Ceph daemons to be deployed on, or is there a way to say "this host is for OSDs and that host is for mons"?
E
Really, do we have a need to manage hosts? Because, in truth, that is something that is provided by the Kubernetes environment, okay — it is something that I think we shouldn't do in the Rook environment. And for the labels, what I'd say is that in this model it's easy, for example, to get all the pods that are running RGW by using labels, okay? It's not the same as in cephadm, where the label concept was only added a few weeks ago.
B
Yeah, so I started to try to write this function and I just didn't really know enough about how to properly construct the simplest one, because there are so many different ways to specify things — sometimes they're strict affinities and sometimes they're soft, and there's "required during scheduling but not during execution", and I have no idea what that means. There's just all this weird stuff. But this seems like step one; then step two would be to basically take whatever the Kubernetes CR is — which could be who knows what — and try to fabricate a placement spec —
B
— that reflects what it means. And it seems like the initial step would basically be to make sure that any Rook CR that's produced by this generate function will give you back the same placement spec that you started with, so that at least, if you go and use your orchestrator API to say "use this label", and then you go and look at it, you'll actually see it meaning what it meant. And then maybe you could try to be more clever with other common things that Kubernetes users do.
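A tiny sketch of that round-trip property, using an explicit host-list placement as the example: generate node-affinity match expressions from the placement, then parse them back and check you recover the same hosts. The well-known kubernetes.io/hostname label is the only real convention assumed; everything else is illustrative.

    # Round-trip sketch: placement spec -> node-affinity expressions -> placement.
    HOSTNAME_LABEL = "kubernetes.io/hostname"

    def generate_affinity(hosts):
        # placement {"hosts": [...]} -> matchExpressions selecting those nodes
        return [{"key": HOSTNAME_LABEL, "operator": "In", "values": sorted(hosts)}]

    def parse_affinity(match_expressions):
        # recover the host list from a previously generated expression
        for expr in match_expressions:
            if expr["key"] == HOSTNAME_LABEL and expr["operator"] == "In":
                return sorted(expr["values"])
        return []

    hosts = ["node1", "node3"]
    assert parse_affinity(generate_affinity(hosts)) == sorted(hosts)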
A
So, to draw a conclusion — I think it doesn't make sense to have it, right, if we really want to support clusters from the —
E
Yes, okay — okay, we need that in the orchestrator CLI, okay, yes. And it's something that has been removed in the Rook path, okay, and I think that the reason for removing this functionality was only just to stay safe — not to allow operations that could end in data loss, okay. So I think that that is not enough.
B
So I think "upgrade start" is trivial to map onto Rook: you would just go update the CRD — change the image property of the CR — and then you'd be done; Rook would go and trigger an upgrade, right. On the other hand, there's also "upgrade stop", which will cancel an in-progress upgrade and just sort of leave the cluster right where it is, and there's a status you can query, and I think those don't map onto what Rook gives us.
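A minimal sketch of the "upgrade start" mapping being described — patching the image in the CephCluster CR so Rook triggers the upgrade. The namespace, object name, and the spec.cephVersion.image path follow Rook's usual conventions but should be treated as assumptions here.

    from kubernetes import client, config

    # Sketch: map "ceph orch upgrade start --image X" onto Rook by patching
    # the CephCluster CR's image field.  Namespace/name are the common
    # rook-ceph defaults and are assumptions for this example.
    def rook_upgrade_start(image, namespace="rook-ceph", name="rook-ceph"):
        config.load_kube_config()
        api = client.CustomObjectsApi()
        patch = {"spec": {"cephVersion": {"image": image}}}
        api.patch_namespaced_custom_object(
            group="ceph.rook.io", version="v1", namespace=namespace,
            plural="cephclusters", name=name, body=patch,
        )

    # rook_upgrade_start("ceph/ceph:v16")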
B
Getting everything back onto the same version again — it would be nice to actually have a progress item too.
B
At some point, for both of these, we need to support the major version upgrade, but I think we can kind of kick that can down the road until the end of the Pacific cycle, when we actually need to upgrade from Octopus to Pacific — that'll be the first one.
B
The existing thing just starts the daemons, okay, unless — unless we want to... At least for cephadm that works; for Rook it's going to be trickier, because Rook is going to have CRDs to express this stuff and we need to figure out how to map it — map it along, whatever — to keep these things working. I guess there's more to do there, but at least for now, if you can still do it, you should be able to do it.
A
Yes, but what about more advanced scheduling algorithms and resource limits? Is this already something for Pacific, or something after Pacific? It depends a bit on how well the current algorithm works, I think.
B
Yeah, I mean, it's purely random right now. It could be improved slightly so that it just picks the server with the fewest daemons, or something like that. I don't think it'll be too long before people want —
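A toy sketch of the two placement behaviours mentioned: the current purely random choice versus picking the host with the fewest daemons. The daemon counts are an invented example input.

    import random

    # Toy placement: either pick a host at random (the behaviour described as
    # current) or pick the least-loaded host (the suggested improvement).
    def place_daemon(daemons_per_host, random_choice=False):
        if random_choice:
            return random.choice(list(daemons_per_host))
        return min(daemons_per_host, key=daemons_per_host.get)

    print(place_daemon({"node1": 5, "node2": 2, "node3": 4}))  # -> "node2"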
A
I would do it only if someone really demands it; it's not worth it otherwise. Yeah, exactly — it's really in maintenance mode. It works right now; the question is for how long, and are we going to head into trouble if we don't do anything?
B
There isn't, but you can look in /var/lib/ceph/<fsid>/<daemon-name>: there is a unit.run script, which is just the podman command that runs the container. So you can just cat that file and edit the line with whatever extra options you want — like getting rid of --rm, maybe, or whatever you want to do.
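A small sketch of that manual tweak — rewriting a daemon's unit.run so the podman command no longer passes --rm, which keeps the container around after it exits. The fsid and daemon name are placeholders, and the file would normally only be edited with the daemon stopped.

    from pathlib import Path

    # Sketch: strip the --rm flag from a cephadm daemon's unit.run so the
    # container is kept after exit for debugging.  Placeholder fsid/daemon.
    def keep_container_after_exit(fsid, daemon):
        unit_run = Path("/var/lib/ceph") / fsid / daemon / "unit.run"
        unit_run.write_text(unit_run.read_text().replace(" --rm", ""))

    # keep_container_after_exit("9f2c4e1a-ffff-4444-8888-000000000000", "osd.3")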
F
For people doing troubleshooting, the point is to get into the exact same environment the container was in just before crashing — for example, they might have no idea what the bind mounts are or things like that — and even if you remove that --rm, the container is already gone. So the idea is we need to keep it alive.
B
The most frequent problem I have with this is that on my developer box my root partition is not that big, for some stupid reason, and it keeps filling up — either with container images that don't get deleted automatically, because I'm constantly relaunching the master container, which updates like every day or every other day, or because I leave a test cluster running and it's logging all this crap to standard error, which ends up in some file under /var/lib/containers; I think there's a file where all those logs accumulate.
E
Okay, and now what we have is — yesterday we would have started on the testing, using the integration tests and the method that we have in the tracker, okay. But I think it is important to have all the integration tests, with coverage for all the functions, okay, because there are some things that are not working, and that's because they are not tested, okay. So we need to have this in place as soon as possible — that's my opinion.
B
It seems like, for the OSDs, we should make it so that OSD removal works like "ceph orch osd rm" — implement just enough in Rook so that it can properly deprovision an OSD — and then we should also implement zap, it feels like. If we do those two things, then everything else will be built on that, right: we can create new OSDs, we can destroy old OSDs, and we can swap devices.
B
It's a little bit weird — I think there's this sort of loophole right now where, if you go and delete an OSD — well, there are two cases, like: if you remove an OSD and then you go list the inventory, you'll see it's still there, because it wasn't zapped, and Rook will recreate it, I think, right? It just sort of instantiates pods whenever it sees devices.
B
But that might be one thing, and then there's a similar thing with cephadm: if you have a drive group that says "create OSDs out of every hard disk", and then you go and delete an OSD and you zap it, because it's a dead drive and you want to replace it, then we'll go and create a new one immediately. But maybe the answer there is just not to zap it.
F
It feels like, in order to get that prepare pod running again, you would have to go through a new orchestration. Is that orchestration triggered automatically? It's only triggered if you have that special mode where we look at udev and search for "add" events and then trigger an orchestration. If you don't activate that, you don't need it — it's the hot-plug setting in the operator settings, and it's false by default. So yeah, it's not —