From YouTube: 2019-12-04 :: Ceph Developer Monthly
A: You guys see that? Yes? All right. So, the orchestrator: there's a common API in the manager, that's the key part, and a bunch of common code that sits on top of it, and it calls out to implementations that go and deploy daemons, basically. That can be either rook or SSH. There are other implementations too, but they're in various states of partial completion and are unlikely to be maintained, at least by the core team, going forward.
A: So with rook and SSH, we're focusing on rook for Kubernetes environments and SSH for non-Kubernetes. The goal is to have something that's really simple, that works out of the box, so that you can have a complete cluster install in as few steps as possible. The instructions are simple, and we also want to have a full set of day-two operations, like replacing disks, adding nodes, removing nodes, all that stuff.
A: We want to limit the dependencies on external tools, just so that it is easy to deploy and build and maintain, and ideally we want this to be the documented default install process for octopus. Right now the docs talk about ceph-deploy, sometimes ceph-ansible or rook, but it's all very confusing, and most of the instructions on day-two operations sort of wave their hands at restarting daemons and such. So the idea is to have sort of a normalized way that you can do this, mostly through the orchestrator API.
A
So
what
actually
matter
whether
it's
work
or
stuff
or
SSH,
but
by
the
way,
so
the
idea
is
to
fully
replace
SEP,
deploy
ansible
deep-sea
puppet
all
that
stuff
with
something
that
sort
of
is
a
rel
canonical.
The
bull
way
to
do
everything
uses
containers
using
the
SIP
container
image
and
conveniently
the
containers
avoid
all
the
complexity
of
packaging
tools,
so
the
instructions
owner
to
say
if
you're
on
Debian
type
apt
this.
If
you're
on
these
five
distributions
type
rpm
this,
it's
all
replaced
by
single
string
in
it.
A: That eliminates that one step, and it's all the same regardless of what your distro is, which is good. Then the idea is, basically, you do a bootstrap that brings up the monitor and manager, a very simple, single-node cluster, and then everything else is a day-two operation: adding more monitors, adding OSDs, and so on. ceph-daemon is the low-level tool that actually deploys the containers on the localhost; it's sort of the low-level command-line tool.
A: So yeah, it does bootstrap, and then it has some management stuff for humans. There's an enter command that will let you launch a shell or run a command inside a running container. There's a shell command that just runs a shell or runs a command in a new container, and there's something to tail the logs, basically the same as podman logs or whatever, that passes all the right arguments for you. And there's cleanup stuff that will remove all traces of daemons. It's a single Python script.
A
So
it's
easy
to
eat
just
curl
and
run
it.
So
it's
designed
to
be
real,
easy
use,
a
very
relies
on
system,
D
works
with
Python,
2
or
3.
You
can
use
pod
men
or
docker,
and
your
changeably
those
function
more
or
less
though
technically
and
it
needs
LVM
for
set
volume,
but
otherwise
the
host
doesn't
really
need
anything
else.
The
base
Python
install,
so
you
can
just
curl
it
and
run
it,
and
it's
going
to
work
almost
all
the
time,
except
for
installing
docker
or
a
patent
it.
Usually.
The
bootstrap
is
really
simple.
A
You
just
curl.
It
utrom
attacks
it
and
you
run
it
the
only
if
there's
only
one
required
argument,
that's
the
monitor,
IP
to
use
and
we
could
actually
teach
it
to
pick
one
automatically
if
it
can,
although
that
requires
extra
Python
library.
So
if
we
didn't
do
that
and
at
the
very
end
it
gives
you
a
URL
for
your
dashboard
or
you
can
run
this
command
to
watch
a
shell
that
you
can
do
stuff
see
like
against
super
quick
means
that
you
get
up.
A: So you can do shell, ls lists daemons on the machine, logs will tail logs, and you can delete clusters. By default everything logs to standard error, so it goes into journald, but we also set up all the log directories in sort of the traditional style. So if you want to use traditional logs, you can set a config option and those get turned on, and all the paths are set up regardless of which you choose, which is also convenient.
A: The paths are a little bit different inside the container; it always uses the usual log locations and path locations there. Outside, we put the FSID in the path name, so that you can have multiple daemons from multiple clusters coexisting. It keeps things nicely isolated, and you can easily test-bootstrap lots of clusters on the same host, all that stuff.
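The FSID-in-the-path isolation described here can be sketched as a couple of small path helpers. This is a simplified illustration, not the actual ceph-daemon code; the exact directory layout is an assumption based on the discussion above:

```python
from pathlib import PurePosixPath

def daemon_data_dir(fsid: str, daemon_type: str, daemon_id: str) -> PurePosixPath:
    """Host-side data directory: the cluster FSID is part of the path,
    so daemons from multiple clusters can coexist on one host."""
    return PurePosixPath("/var/lib/ceph") / fsid / f"{daemon_type}.{daemon_id}"

def daemon_log_dir(fsid: str) -> PurePosixPath:
    """Host-side log directory, likewise namespaced by FSID."""
    return PurePosixPath("/var/log/ceph") / fsid

# Two clusters on the same host stay isolated because their FSIDs differ:
cluster_a_mon = daemon_data_dir("4e5fa1b2", "mon", "host1")
cluster_b_mon = daemon_data_dir("9c0dd3e4", "mon", "host1")
```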
A: As long as you have enough monitor IPs, that is. The unit names are also a little bit different, because they're tagged with the FSID, and there's an adopt function that will take old daemons and just rename the directories, basically, so they're in the new style: take an existing old cluster and just convert it to the new style. It's basically a big for loop; it's pretty straightforward. There's going to be an ansible playbook, I think, that'll do this. That's the state.
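The "big for loop" adoption could look roughly like the sketch below. This is hypothetical and heavily simplified: real adoption would also have to move keyrings and rewrite systemd units, and the legacy directory naming here is an assumption.

```python
import shutil
from pathlib import Path

def adopt_legacy_daemons(fsid: str, root: Path) -> list:
    """Move legacy daemon dirs like <root>/mon-a into the new
    FSID-namespaced layout <root>/<fsid>/mon.a (the 'big for loop')."""
    newbase = root / fsid
    newbase.mkdir(exist_ok=True)
    moved = []
    for old in sorted(root.iterdir()):  # snapshot before renaming anything
        if old.name == fsid or "-" not in old.name:
            continue  # already converted, or not a legacy daemon dir
        dtype, did = old.name.split("-", 1)
        new = newbase / f"{dtype}.{did}"
        shutil.move(str(old), str(new))
        moved.append(new.name)
    return moved
```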
A
Registration
just
runs
this
over
the
network,
so
SSH
could
basically
do
exactly
one
thing:
it's
run
set
daemon
on
a
remote
host
and
that's
the
only
way
that
ever
actually
uses
SSH
is
to
do
that.
One
thing
and
it
does
that
by
running
the
Python
interpreter
and
piping,
the
script
do
the
Python
interpreter.
So
it
does
it
set
theme.
It
doesn't
have
to
be
installed
anywhere.
It
can
just
as
long
as
you
have
an
SSH
key.
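The pipe-the-script-into-a-remote-interpreter trick is easy to demonstrate. The sketch below pipes a script into a local interpreter via stdin; the orchestrator does the same with an ssh command prefix (the `["ssh", "root@node1"]` form is a hypothetical illustration, not the real invocation, which also handles sudo and argument framing):

```python
import subprocess
import sys

def run_script_via_interpreter(script: str, remote_cmd=None) -> str:
    """Run `script` by piping it into a Python interpreter's stdin.
    Locally this is `python3 -`; over SSH the same thing becomes
    `ssh <host> python3 -`, so nothing needs to be installed on the
    remote host beyond Python itself."""
    if remote_cmd:                      # e.g. ["ssh", "root@node1"] (hypothetical)
        cmd = remote_cmd + ["python3", "-"]
    else:
        cmd = [sys.executable, "-"]     # local stand-in for the demo
    result = subprocess.run(cmd, input=script, capture_output=True,
                            text=True, check=True)
    return result.stdout.strip()
```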
A
You
can
just
run
the
command
on
a
remote
host,
there's
a
separate
mode
that
requires
the
set
demon
package
to
be
installed
and
it
does
sudo
to
run
it.
So
you
can
sort
of
look
at
the
security's
cook
a
little
bit.
Both
are
tested
and
supported,
but
the
root
as
the
sage
mode
is
the
easiest
one
to
set
up,
and
the
shell
gives
you
a
shell
pretty
convenient.
You
can
add
new
hosts
so
in
this
example,
I'm
adding
just
City
key
to
the
root
user
and
after
you
do
the
bootstrap.
A: After you do that, you can deploy more managers with one command; it'll go add more managers. You can list all the services, all the devices on all the different hosts, and you can create all these services. This shows all the running containers, what hosts they're on, what version of ceph they're running, and so on. There are commands to restart daemons, redeploy daemons with a new container image, stuff like that. As I mentioned, there are two security models. Upgrades are currently being worked on.
A
That's
what
I
was
working
on
this
week,
basically
to
automatically
go
and
redeploy
containers
round
up
for
demon
in
the
right
order,
and
so
on.
The
other
thing
is
right.
Now
the
dashboard
is
initially
only
going
to
know
to
do
a
few
different
things
with
your
consider
like
replacing
those
DS,
but
eventually
we
want
to
do
more.
Paul
Kooser,
in
particular,
wants
to
talk,
wants
to
figure
out.
A: So we have a bunch of stuff. There's a Trello board to track all this stuff that I can put up; I think it should be public now, I hadn't clicked the button before. There's a line, basically, of all the stuff that I really want to get in for octopus, which I think we'll manage, and then there's a bunch of stuff that's optional, like NFS gateways and iSCSI and so on.
A: MDS affinity. This came up because of the way that the orchestrator API deploys the MDSes: it's grouped by file system, and it's done that way because it maps basically to the way that the rook CRD works. When you deploy a file system CRD in rook, there's a number of metadata servers that go with it. But we recently changed it so that the standby metadata servers aren't actually assigned to a file system; they can sort of float between whatever file system needs them.
B: Maybe bigger MDSes are on a production file system and smaller MDSes on a testing one, so I think having some kind of file system affinity makes sense. Then there's another dimension we wanted to consider, which is availability zones. If we have a stretch ceph cluster, we want to be able to have an MDS in each availability zone, and then maybe use something like subtree pinning to pin certain subtrees to those MDSes, so that the clients in that zone only talk to them.
B: The next thing we need to do is look at mapping a file system to an availability zone, or an availability zone to a rank. I have an example command of what that would look like; it would just be another piece of metadata we have in the FSMap, noting for that rank which MDSes from which availability zones should be used. As far as standby replacement, I think it'll prefer MDSes in the same availability zone, then the file system affinity, and then finally it'll take any available standby.
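That preference order (same availability zone first, then file system affinity, then any available standby) can be expressed as a simple sort key. This is an illustrative sketch only; the field names are invented, not the actual FSMap fields:

```python
def pick_standby(standbys, want_zone, want_fs):
    """Pick the best standby MDS: prefer same availability zone,
    then matching file system affinity, then anything available."""
    def score(mds):
        return (
            mds.get("zone") == want_zone,       # strongest preference
            mds.get("fs_affinity") == want_fs,  # then fs affinity
        )
    return max(standbys, key=score) if standbys else None

standbys = [
    {"name": "a", "zone": "az2", "fs_affinity": "prod"},
    {"name": "b", "zone": "az1", "fs_affinity": "test"},
    {"name": "c", "zone": "az1", "fs_affinity": "prod"},
]
best = pick_standby(standbys, want_zone="az1", want_fs="prod")  # -> mds "c"
```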
B
One
I
think
we
should
depart
from
previous
behavior
where
we
never
needlessly.
We
never
go
over
a
running
rank
deliberately,
but
I
think
we
should
change
that
so
that
if
we
are
using
a
standby
in
some
other
availability
zone
or
just
some
generic
standby,
if
another
one
becomes
available
that
is
as
stronger
affinity.
We
should
just
do
the
fell
over
so
that
it
corresponds
to
what
the
operator
wanted.
A: Cool. Mostly the orchestrator is staying out of the game of setting configuration options; I'm trying to push everything into the mon config. But in the RGW case it does do one thing, and that's just set this option.
A: That would be done by the manager. Well, right now, in the RGW case, the SSH orchestrator is doing it when you create it, just so that when it actually runs the daemon it doesn't have to pass an option that says "by the way, you're in this zone" or whatever; it sort of implicitly gets set just from the way that it's named. I think we'll just do the same thing here, and if we think of a better way to do it, then we can always change it later.
A: Well, the way that the manager creates MDSes in rook is to create the CRD, but I think what we probably want is for the CRD to only have the daemon definitions and not the information about the file system, except what its name is, with the file system already created. And conversely, in rook, when you create a file system CRD, the CRD is just there for rook to control it.
A: Yeah, I think there's a general pattern with rook, where all these CRDs are going to have a mode where they're just managing the daemons (when you're managing the cluster through the dashboard, say) or they're doing the whole thing, if you're managing everything through rook with the complete CRD APIs.
A: Okay. For AZ affinity, can we just use crush location for that? The MDS basically reports its crush location, and then the monitor decides whether it should pay attention to that. This is similar to what clients do: clients can set a crush location and it will drive the affinity for the localized-reads stuff, but that's not actually normally enabled. There's one other case where I think we're doing this, but I'm blanking on exactly what it is.
A: Because I think rook is already doing this in some cases, where it basically is just specifying the crush location on the command line when it runs things, if it knows it. And in many cases it knows, for example, what region and zone it is running in on AWS, because kubernetes has the node labels set appropriately, and on bare metal.
A: I was wondering about that, because, yeah, I think it makes sense only because you're pinning ranks already. But what you really probably want to do is pin some subtrees to locations, not to a rank, and then pin that rank to a location. So the rank is sort of this annoying number that's the go-between, yeah.
A: I mean, ultimately that's what mechanically will have to happen, right: a particular rank is matched to a zone, and then either way we map daemons for that rank, and subtrees, to that rank. But it might be that the user interface that we eventually want is actually a little bit more friendly, so you don't have to think about which rank is which.
A: Yeah, there's sort of a secondary issue also, which implicitly means that we want to be able to schedule: not just control the number of MDSes through the orchestrator interface, but where they're distributed, where they're placed. Right now you can pass down labels, but it's for the whole group; it's not for subsets of MDSes within that group. So before this will be fully functional, we need to figure out how to make that a little bit more granular, I guess, yeah.
B: I have someone on my team who's going to work on probably another manager plug-in which will spawn MDSes in response to demand. And then one of the things we should look at is the availability zones on the ranks, making sure that there's an MDS in each availability zone and sufficient standbys, all of that.
A: We were just talking about this. You mentioned that the ceph fs volume create, or whatever, would only have to pass max_mds, the size; we're basically trying to get to that point with the SSH orchestrator.
A: Initially, the scheduler in that pull request just picks a random set of nodes, but eventually it can get smarter, like picking the node with the fewest services on it, and so on. But there are a bunch of questions there, because there are, I think, a couple of different things: you can specify the number of daemons, you can specify a label, or you can specify a list of nodes, and if you specify more than one of those, how should those things intersect?
A
So,
for
example,
if
you
say
I
want
one
MDS,
but
I
have
five
notes:
labeled
as
MDS
notes,
I
think
it
should
just
pick
one
of
those
out
of
those
five
using
whatever
the
scheduler
intelligence
is.
If
you
pass
an
explicit
node
list
of
five
notes-
and
you
say,
I
want
to
I
think
it
should
do
the
same
thing.
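The intersection semantics being proposed here might look roughly like this. It's a sketch of the semantics under discussion, not the actual SSH orchestrator scheduler, and the inventory shape is invented:

```python
import random

def place(count=None, label=None, hosts=None, inventory=None):
    """inventory: {hostname: set_of_labels}. Returns the hosts to deploy on.
    Candidates are the intersection of the label filter and the explicit
    host list; count then picks a subset of those candidates."""
    candidates = set(inventory)
    if label is not None:
        candidates &= {h for h, labels in inventory.items() if label in labels}
    if hosts is not None:
        candidates &= set(hosts)
    if count is None or count >= len(candidates):
        return sorted(candidates)
    # placeholder "scheduler intelligence": random choice among candidates
    return sorted(random.sample(sorted(candidates), count))

inv = {f"node{i}": {"mds"} for i in range(5)}
inv["node9"] = set()  # unlabeled host, never chosen by the label filter
chosen = place(count=1, label="mds", inventory=inv)  # one of node0..node4
```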
E: We had this role thing too, and you could apply those labels in various ways, like picking a host name, or a host name glob, or something like that, and we had various other ways. But it always ended up being a host name glob that applied the roles, and then everything that has an MDS role gets an MDS deployed. That worked quite well, although we had it more complex, just as a point of experience.
E
Yeah
I
mean
they
could
they
could
either
do
that,
but
they
so
basically
we
would
give
them
a
way
to
say.
Okay,
this
host
name
glob
is
all
role:
MDS,
okay
and
yeah.
Like
sure,
like
you
know,
enterprise
customers
would
have
much
control
over
their
host
name,
so
they
would
just
include
MDS
and
the
host
name
and
have
an
easy
way
to
crop
this.
But
there
are
certainly
other
ways,
but
it
was
rarely
that
we
saw
people
trying
to
match
on
other
things
than
host
names
and
to
have
like
wacky
host
names
where
they
go.
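The hostname-glob-to-role mapping described here is essentially a one-liner with fnmatch. This is an illustrative sketch of the idea, not DeepSea's actual implementation:

```python
from fnmatch import fnmatch

def hosts_for_role(role_globs, hostnames):
    """role_globs: {role: [glob, ...]}. Return {role: [matching hosts]}.
    e.g. everything matching 'mds*' gets the MDS role, and an MDS
    is then deployed on each matching host."""
    return {
        role: sorted(h for h in hostnames if any(fnmatch(h, g) for g in globs))
        for role, globs in role_globs.items()
    }

hosts = ["mds1.example.com", "mds2.example.com", "osd1.example.com"]
roles = hosts_for_role({"mds": ["mds*"], "osd": ["osd*"]}, hosts)
# roles["mds"] -> ["mds1.example.com", "mds2.example.com"]
```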
A: I'm wondering, so, we haven't really contemplated doing globs for SSH, except for that one discussion the other day. I think partly it doesn't make sense to me because I'm not imagining that this is really written down somewhere, like if you're writing a manifest file or whatever you call it.
A
Then
the
globs
are
convenient
because
you
know
to
type
out
all
the
host
names
and
copy
and
paste
a
list
of
hosts
around
and
the
different
subsections,
or
something
like
that.
But
in
this
case
the
way
the
hosts
are
at
it
is
like,
via
an
imperative,
CLI
command.
This
is
at
host
or
I
would
imagine
if
you're
doing
it
through
the
dashboard.
As
you
add
a
host,
you
would
like
to
check
the
boxes
for
which
labels
you
want
to
apply
to
that
host.
A
More
more
likely.
Probably
what
I
would
love
to
see
would
be
like
a
dashboard
change
that
lists
all
the
hosts
and
lists
all
the
labels
in
like
another
column
and
then
has
like
easy
check
boxes
or
something.
So
you
can
easily
like
relatable
nodes
like
if
it
were
our
column
for
each
label,
for
example,
or
something
that
so
just
makes
it
really
easy
to
like
select,
multiple
and
just
like
apply
a
label
to
them
and
to
it
so
yeah,
that's
visual!
You
do
it.
C: In the dashboard, for OSD creation, you get a table that lists all the hosts and the disks attached to those hosts, and then you can create your drive group specification by simply selecting the criteria that should match. Then you get a result table that lists all the disks and hosts this might apply to, so you can then proceed. Something like that would be suitable. Okay.
B: Yeah, so in response to your question earlier, Sage, about whether you should deploy based off of labels, that is, if I add a host with given labels, like with an MDS label, shouldn't an MDS automatically be deployed there: I think that's kind of approaching the problem in the reverse direction.
B
Deploying
services
based
off
of
those
labels
I
would
think
we
would
want
to
only
have
the
orchestrator
always
drives
or
whatever
is
adding
services
to
via
the
orchestrator
should
drive
where,
when,
when
we
deploy
new
demons
so
like,
if
we
yeah
as
far
as
how
many
there
are.
So
if,
if
I
increase
Maxentius
and
this
new
and
yes
module,
would
would
go
out
and
deploy
in
England,
yes,
and
eventually
that
NDS
node
would
be
used
or
maybe
not.
Okay,.
A
Okay,
one
thing:
that's
this
occurs
to
me.
Actually
one
thing
that's
missing
right
now
from
this
is
actually
no,
though
right
now
you
the
count
to
the
orchestrator
MDS
update
to
adjust
the
number
of
demons
and
that's
independent
from
that
maximum.
Yes,
so
if
you
want
some
number
of
stand
lights
and
you
just
use
a
larger
number,
then
what
Maksym,
DSS
I
think.
A: Oh, I see, I forgot about that. I mean, we could just make the rook orchestrator report the existing labels, at least all the ones that are prefixed with rook, so that you don't have to modify them through kubernetes; you could do it through the dashboard and CLI. I think you'd at least get visibility in terms of what's there, or we could have an optional switch or something like that.
F: Yeah, I think it does get a little complicated, because rook also lets you have multiple ceph clusters in a kubernetes cluster, and so "I want this node to be able to run monitors" might really be "I want those nodes to be able to run monitors for this specific cluster, or for this other specific cluster." So I don't know; we want to make sure that in ceph that label matching is configurable, with some kind of sane default.
A: My assumption would be that we would make the kubernetes labels be prefixed by the rook namespace or something, so that effectively those labels would be private to that particular cluster, and then they could even be edited by the time they reach ceph, so you'd only see the suffix and not the prefix. Something like that, yeah.
A: And I guess the way I'm thinking about this is that the node labels are just one way of controlling where things are scheduled. If you leave them off entirely, then the scheduler would basically just work with the set of all nodes for every scheduling decision, sort of like kubernetes normally works today, where things are just randomly spread everywhere. The SSH one can work the same way.
A: Alright, we're going kind of out of order here. Unfortunately Paul Cuzner's not here, because it's like 2:00 or 3:00 a.m. or something in Australia, but he sent out an email, I think earlier this week or last week. Basically: downstream, for the last release, he wrote this whole thing for cockpit, a little GUI-based install thing that drives ansible. He literally just did this for nautilus and ansible.
A
Look
at
reimplemented,
basically
the
same
thing
in
the
dashboard,
so
that
after
G
at
that
bootstrap,
you
can
click
the
link
and
you
can
go
into
a
guided
wizard
thing
that
lets
you
walks
you
through
adding
new
nodes
and
setting
up
your
SSH
keys,
deciding
which
nodes,
Gatos,
T's
and
all
that
stuff
and
like
starting
that
work
immediately.
Hopefully,
in
time
they
have
something
that
falls
shortly
after
octopus,
or
maybe
there's
an
initial
version
for
octopus
in
my
dock,
for
you
to
run
or
whatever
I
try
to
get
that
done
quickly.
C: I replied to him and am waiting for his reply at this moment. He contacted me about two weeks ago and I couldn't get around to answering because I was on sick leave for a few days, but back then he was basically in a fact-finding phase, trying to figure out whom to talk to. So I replied and gave him some pointers about people who are also looking into this and interested in how we can put this together. But I don't know what he has been up to since then; we haven't been in contact.
A: Okay. I should mention, most people might not be aware, but we did add an orchestrator standup: a daily standup, every day, 15 minutes before the core standup. So for all this orchestration stuff we're meeting daily, and we just added one on Wednesday afternoon that will work for him; that's coming up in like four hours. Just a quick note: Paul is basically kind of redesigning what he did in the cockpit integration, this time based on the SSH orchestrator.
A
This
time,
basically
yeah.
The
idea
would
be
that
a
similar,
a
similar,
guided
workflow
but
implemented
in
you
know,
angular
in
the
dashboard
driving
York,
City,
okay,
yeah
well
forward
and
I.
Think
one
of
the
things
that
we
should
be
sort
of
thinking
about
as
we
do.
That
is,
since
this
is
all
sitting
on
top
of
the
or
consider
API.
C
Well,
maybe
just
need
to
be
a
bit
more
clever
on
how
we
do
this
on
the
UI,
but
we
should
make
sure
that
it's
not
adding
overhead
by
clicking
around
too
many
times
well
and
the
combined.
Then
you
would
just
have
a
wild
card
for
the
same
operation.
Just
to
give
you
an
example,
so
I've
seen
the
work
that
he's
done
on
the
cockpit
installer.
C
C
Our
agenda
anyway,
right
now
or
so
far,
our
focus
from
a
dashboard
perspective
has
been
looking
into
how
we
can
we
make
de
to
operations
based
on
the
orchestrator
disk.
Osd
maintenance
was
our
first
step
because
that's
usually
the
stuff
that
you
are
working
most
of
the
time,
while
the
installation
is
a
one-off
thing,
our
time,
yeah
yeah
so
but
really
the
caveat
and
the
trade-off
that
you
have
here
you're
putting
a
lot
of
energy
and
resources
into
something
that's
used
once,
but
then
again.
C: I mean, with what we have in the dashboard already, basically starting from the single mon/mgr node and having the SSH orchestrator already configured with SSH access to all the other nodes in your cluster, you could basically at least do the OSD deployment in the dashboard already, by using the OSD add function that we have. And that's how it is done. Well, it's more about the other services, RGW and NFS, those kinds of things, yeah.
A
And
those
ones,
fortunately,
are
easier
OS.
These
are
sort
of
the
hardest
one
because
you
have
to
map
to
physical
disks.
So
I
haven't
been
following
the
drive
group
stuff
super
closely,
but
based
on
what
I
was
told
yesterday,
it's
actually
pretty
close
to
what
you
need,
because
you
can
have
you
have
device
filters
and
they
also
have
a
host
filter,
and
so
you
can
apply
a
group
to
a
whole
collection
of
notes
or
even
all
notes.
So
you
can
certainly.
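A drive-group-style filter (a host filter plus device filters applied to unused devices) can be sketched like so. The field names here are simplified assumptions for illustration, not the actual drive group schema:

```python
from fnmatch import fnmatch

def match_devices(drive_group, host, devices):
    """devices: list of dicts like {"path": "/dev/sdb", "rotational": True,
    "available": True}. Returns the device paths the group would consume."""
    if not fnmatch(host, drive_group.get("host_pattern", "*")):
        return []  # host filter excludes this host entirely
    spec = drive_group.get("data_devices", {})
    out = []
    for d in devices:
        if not d.get("available"):
            continue  # skip disks that are already in use
        if "rotational" in spec and d.get("rotational") != spec["rotational"]:
            continue  # device filter mismatch
        out.append(d["path"])
    return out

dg = {"host_pattern": "node*", "data_devices": {"rotational": True}}
devs = [
    {"path": "/dev/sda", "rotational": True, "available": False},
    {"path": "/dev/sdb", "rotational": True, "available": True},
    {"path": "/dev/nvme0n1", "rotational": False, "available": True},
]
# match_devices(dg, "node1", devs) -> ["/dev/sdb"]
```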
C: Yeah, all right. The next step from our end, until we have synchronized with Paul about additional features, is being able to remove OSDs; once that's done, replacement is another one that we have here, and Kiefer is the primary guy working on that. That's the current lay of the land, and our hope and expectation of things that we can realistically get in before the official octopus feature freeze.
A
Yeah
I
mean
to
your
earlier
point
that
the
goal
of
all
this
is
to
make
sure
everything
is
sitting
on
top
of
this
common
orchestration
API,
so
whether
you're
driving
it
the
dashboard,
might
have
a
nice
cute
wizard,
but
you
can
also
do
it
all
when
they
CLI
and
so
as
far
as
making
sure
that
we
capture
the
city
tzer's,
who,
like
just
say
like
this,
is
my
whole
cluster.
Go
deploy
the
whole
thing
all
at
once.
I
mean
that
could
be.
A
That
could
be
a
30
line
basket
that
just
loops
over
hosts
and
discs
and
there's
some
work.
That
needs
to
be
done
to
basically
paralyze
the
the
work
queue
and
yes,
it's
Orchestrator,
so
that
you
can
have
a
whole
bunch
of
those
tubes
being
created
in
parallel,
asynchronously
I
think
see
if
we
structure
to
do
that
is
there
and
that
there
is
a
there
is
a
work
you
already
it's
just
to
see.
Oh
I
can
answer
all
kind
of
blocking,
and
so
they
don't
beat
it
more
than
one
item
at
once.
I
think
that
yeah.
C
How
can
we
take
a
long-running
task
because
the
dashboard
man
well
it
fires
over
an
an
API
call,
but
it
can't
just
wait
for
it
to
return
and
because
it
would
run
into
a
timeout
so
right,
basically,
integration
within
the
long.
The
progress
notify
reference
would
be
nice,
yeah
already
visualize.
All
the
progress
is
tracked
and
captured
by
the
progress
module.
So
that
would
be
one
way
to
integrate
that
so
basically,
dashboard
probably
needs
to
get
a
handle
of
where
this
job
is
being
tracked
and
then
be
able
to.
It
was
a
clear
yeah.
A: And I think with the drive groups, probably one of the next steps, what I would like to do at a high level, is have the equivalent of rook's useAllDevices and useAllNodes. Not useAllNodes, because you already have to add the hosts explicitly to the cluster, but useAllDevices: it'll just hoover up any device that is unused and matches some basic filtering, and provision OSDs on them.
A
It
automatically
so
probably
getting
something
like
that
to
work
with
one
of
these
drive
groups
that
has
applies
to
all
devices
and
have
that
trigger.
Like
a
parallel
background,
those
two
creations
would
be
one
of
the
next
steps
make
sure
this
kills.
Ok,.
A: Alright, yeah. I mean, I think there are sort of two ways to look at that. I've been approaching this from "how do we make it as easy and painless for the user as possible," and right now that means using sane defaults for all that stuff.
C: We are kind of circling back now, but I think the good part about that is that we really try to concentrate the very ceph-specific business logic, the things that only ceph knows best how to do, into the orchestrator, which then leaves the execution to a more slimmed-down execution path using the SSH orchestrator.
A: So I think that's pretty good. The next step is: I have a work-in-progress that basically implements upgrade start, stop, pause, resume, and status. The start, stop, pause, and resume are all pretty simple: they basically just record what the target version is that we want to upgrade to, and that we're in progress. Pause will just suspend whatever activity is happening, resume resumes it, and stop will throw it all away.
A
The
last
thing
that
actually
asked
me
implement
if
the
part
that
actually
does
it
we're
assuming
the
upgrade
is
ongoing,
then
it
it
looks
at
your
service
inventory
says
where
all
the
monitors
upgraded,
if
not
upgrade
the
monitors
are
all
the
managers
have
waited.
That
just
goes
through
that
that
logic
sequence
I'm
implementing
pretty
much
what
the
work
one
is
doing
for
now.
I
was
just
implementing
this
in
the
SSH
vote
and
not
on
that
generic
Orchestrator
code,
so
that
we
can
basically
just
attach
these
same
commands
to
rook
as
we
do
to
SSH
later.
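That logic sequence (check each daemon class in order against the service inventory, upgrade the first out-of-date daemon found) is basically this loop. A sketch of the control flow only; the real implementation is asynchronous and persists its state:

```python
# Order matters: monitors first, then managers, then everything else.
UPGRADE_ORDER = ["mon", "mgr", "osd", "mds", "rgw"]

def next_upgrade_step(inventory, target_version):
    """inventory: {daemon_type: {daemon_id: version}}. Return the next
    (daemon_type, daemon_id) to redeploy, or None if fully upgraded."""
    for dtype in UPGRADE_ORDER:
        for did, version in sorted(inventory.get(dtype, {}).items()):
            if version != target_version:
                return (dtype, did)
    return None

inv = {
    "mon": {"a": "15.1.0", "b": "15.0.0"},
    "mgr": {"x": "15.0.0"},
}
# next_upgrade_step(inv, "15.1.0") -> ("mon", "b"): all mons go first
```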
A
We
can
think
about
whether
it
makes
sense
to
unify
all
those
into
a
single
implementation
that
sits
above
the
orchestration,
API
or
not.
I
think
it's
not
completely
obvious
what
whether
or
how
that
should
be
done,
but
the
one
of
the
main
questions
that
came
up,
though,
is
that
is
just
this
start/stop
pause/resume
thing
I
think,
makes
sense
from
user
perspective.
It's
what
I
would
want
to
be
able
to
control
if
I
was
doing
an
upgrade
on
a
big
cluster
I.
A: Not just stopping it, yeah. So I was thinking, for pause and resume, it might make sense to add a property to the cluster CRD that's just like pause = true, or suspend-version-updates = true, or whatever, and basically that'll just make rook stop updating deployments. Obviously, if a deployment has multiple daemons and it's midway through updating, then kubernetes has to finish doing that thing.
A: It might not be worth it, I guess, but I could imagine that you have a cluster, you start an upgrade, you're partway through, and you realize there's something wrong: you've got to wait, like, a week until there's a new update or something like that. And in the meantime, during that week, you might have new OSDs being deployed or things like that, and you'd want it to apply the old version, not the new version.
A: Yeah, one of the things on my to-do list is to look at some Python module that lets you interact with the docker registry directly. Right now, in order to, for example, figure out what the current hash is for the latest image, the upgrade just pulls the image and then inspects it on some random node in the cluster.
C: Right, because otherwise, I mean, if you have a cluster that has slightly different versions, "update available" is something that's too vague. You need to be a bit more specific: which nodes need the updates, things like that. Yeah, in a distributed system there's not just one single version that's current, really.
A
Okay,
all
right,
so
all
this
there's
one
item
here
on
the
list:
that's
just
being
more
declarative.
This
discussion
keeps
coming
up
in
the
in
the
daily
the
daily
calls.
We
seem
to
be
inching
toward
the
s
it
recruiter,
operating
in
a
more
declarative
fashion,
I'm
so
far,
everything
we've
done
is
imperative,
like
you
issue
a
command,
and
it
does
that,
but
with
drive
groups,
for
instance,
or
this
idea
of
being
able
to
say,
set
a
flag
that
says,
use
all
devices
and
all
notes
whatever.
A
Tell
it
to
go,
do
do
a
pass
to
do
an
orchestration
or
whatever,
or
you
could
do
it
automatically
periodically,
and
but
we
are
reducing
to
be
moving
in
that
direction,
particularly
with
the
labels
also
like,
for
example,
if,
if
the
labels,
change
and
demons
were
previously
deployed
based
on
labels,
then
at
some
point
probably
you
want
it
to
like
move
those
demons
around
based
on
the
change
in
the
labels.
That
sort
of
thing
I.
A
Think
the
the
nice
thing
is
that
so
far
this
seems
to
be
sort
of
a
spectrum
of
how
declare
do
we
want
to
be
and
how
about
declarative
and
those.
So
far,
everything
still
functions
in
an
imperative
way,
but
we're
sort
of
inching
our
way
or
towards
together
into
the
spectrum
just
something
to
to
keep
in
mind.
Well,
if
there's
any.
A: Yep, okay. So the last item is the monitoring one, which maybe is a Pandora's box, but there's a thread about this. Monitoring is sort of taken to mean Prometheus and Grafana, as the two main pieces that are necessary to make the dashboard fully functional. There's also something that goes with Grafana, node exporter, I think, or maybe that's alertmanager, which goes with Prometheus.
A
That one, I think, you would just deploy along with Prometheus. And then there's also node exporter, which is the per-node agent that's sending stuff to Prometheus; that includes metrics that the dashboard consumes. So I think, at a minimum, those are the things that we've identified as things that we want to deploy. The way I've been thinking about this is that this is —
A
At a minimum, it has to be the bare minimum, but I think it should also be as close to, if not fully, sufficient for a production system. But, I mean, I don't know if we want to bite off deploying, say, Prometheus in some highly-available configuration — I don't want to do that. And what is sufficient for a quote-unquote production deployment varies from person to person. Obviously you're probably going to want some sort of thing that's going to make your pager go off.
A
Maybe in some cases that's something that's built on Prometheus. In my experience it's always been built on something else, but I don't know how people are doing these things these days. So, in my mind, I'm not afraid to make the out-of-the-box, turnkey, opinionated one complete and functional.
C
One somewhat funny one, especially, is the node exporter, which is the service that scrapes your hosts for performance metrics and everything. We probably need one that is running on the actual hosts, to capture those metrics, and then one in each of the running containers to get the container-specific metrics.
E
Another Pandora's box. You can certainly get away with just having one node exporter per host. The thing is, however, you get more — or better-structured — information out of a container host if you actually run the node exporter in the containers too, often as a sidecar container, though I don't know if that concept actually exists outside of Kubernetes. The benefit you get from that is that, on the host level, from the node exporter you get the CPU usage broken down by mode.
E
So you get user mode, system mode, I/O wait and such, but you don't actually get insight into per-process CPU usage, for example. If you put it into the container, you do get, at least for certain modes, the per-container CPU usage, which I think is generally something that we would want — same for RAM and network, obviously.
E
And you could do this with a per-host exporter, but then you're kind of between a rock and a hard place trying to relate which container talks over which virtual interface, for example, to get a network graph in the dashboard — this kind of stuff. It's all possible, but it's very cumbersome if you just have the one exporter per node, I guess.
A
All of the containers that ceph-daemon is deploying are using host networking, and they all run exactly one process — that is, the one Ceph daemon. So I'm wondering if there's actually anything to be learned from the individual containers that you don't already know from the hosts — except, I guess, the memory usage inside that container.
E
Yeah — memory, CPU, and, yeah, the network is just easier to extract. And again, I haven't looked at what Podman has, or what actually goes into the concept of a sidecar container — I think in Kubernetes it's like inside of a pod. I don't know if we couldn't make something like that in Podman as well, but that might be a way to go. If it's not too high-hanging fruit, it would definitely be nice if we'd actually consider that. Okay, hey —
E
That plays a part here, where we really need to have some kind of primitive, or CLI call or so, to get all the exporters that the orchestrator deployed, so that we can export that to an external Prometheus and this external Prometheus can scrape those. And obviously it makes a difference there whether we deploy one node exporter per node — right, all these things.
A
I think this is the question in my mind. At one extreme, we're deploying our opinionated, out-of-the-box, preconfigured, all-in-one version of everything, and you don't have to think about it. At the other extreme, you have your own Prometheus already, you're installing your own node exporters, you're running all that stuff, and your orchestrator isn't doing anything — you're just pointing the dashboard at your own Prometheus.
A
It's not clear to me whether we should worry about any of the in-between space, and if we should, how much. What you just mentioned is sort of an in-between point, where it is deploying the node exporters but it's not managing Prometheus. Is that actually a useful thing? Is there a situation where you run your own Prometheus, but you don't want to install and run the node exporters yourself?
E
Yeah, I mean, no — not regarding the node exporters. But we at least have to have a way to expose the manager Prometheus module to an external Prometheus — yes, absolutely. And when we do that, you know, we think about: okay, we have a certain requirement of Grafana dashboards that the dashboard expects. That's a JSON blob; that wouldn't hurt.
A
I think there's one more implementation question I had, around the monitoring. There's Prometheus and Alertmanager and then Grafana. Paul had started to write a script, separate from ceph-daemon, that deploys all those things, but he did it in a different way than ceph-daemon does. The main thing is that it put everything in /etc, and then, when it runs the container, it —
A
— bind-mounts the things in your host's /etc into the container's /etc. The way I made all of the Ceph stuff work is that everything, basically, is in /var/lib/ceph/, then the fsid, and then the daemon name, so each daemon's config file and keyring is in its data directory.
A
Just so it's neatly compartmentalized in one place, and the host configuration is unpolluted by things that are particular to the cluster. The exception to that, of course, is that there's a systemd unit that starts the container, and that lands on the host in /etc/systemd/system.
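The layout described above, sketched out. The paths follow the description in the discussion; the exact unit-file name is an assumption about how a per-cluster systemd template might be named:

```
/var/lib/ceph/<fsid>/
    mon.<host>/            # each daemon's data dir holds its own
        config             #   config file and
        keyring            #   keyring
    mgr.<host>/
/etc/systemd/system/
    ceph-<fsid>@.service   # template unit that starts the daemon's container
```

Keeping everything keyed by fsid is what lets multiple clusters coexist on one host without polluting the host-wide configuration.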
A
Okay, and so then the second part is: should we just put this all in ceph-daemon? That's what I was originally assuming we would do, where, instead of doing "ceph-daemon deploy" with, you know, "mgr.x" or whatever it is, it would be "ceph-daemon deploy" with "prometheus", and then there'd be an argument that passes a JSON config, or something like that.
C
For reference, we do have a Docker-based development environment that we use for most of our dashboard development work, and we basically spawn a different Docker container for each of Prometheus, Alertmanager and Grafana, and then have some scripting around interconnecting them. So if you want to take a look at how that's done over there — I pasted the link in the chat here; ceph-dev-docker is the name. We do have a, what's it called —
E
That also makes sense, as, you know, Prometheus has had updates in the past that break the metric store — the format and stuff like that. You don't want to run into something like that just because you want to upgrade Grafana or something. Yeah, and they're also meant to be run that way — I mean, Prometheus has its roots in Kubernetes, right? Yeah, okay.
F
I do kind of agree that, just in conversation, when I hear "ceph-daemon" — like, when I don't see it written out — it's a little confusing. I don't know if I feel like "ceph-bootstrap" is a better name, but it may be a better name in that it would be less confusing in conversation.
A
There's one other thing that I'd really like to take somewhere, probably in the same tool. The one thing that ceph-deploy still does today that nothing else does is: it knows how to configure repositories for packages. So when it comes time to install all your client packages, or if you want to install packages on some arbitrary host, I still use ceph-deploy — there's nothing else that does it. It would be nice if that were built into this tool.
C
The thing is that the initial preparation phase is very distro-specific, yeah, and it may also depend on what deployment tools are already being used at the user's data center to get the OS out there. They may have their preconfigured PXE boot service, or OS images deployed somewhere. So, I mean, that's a small piece of the puzzle; I'm not sure if we should really get down to that level. Well —
A
We have a check command right now that just checks to see if the commands are there: is there a podman or a docker, and the LVM commands, are they present? And that's, I think, the bare minimum. But what I'm thinking of is this getting-started doc — you know, "these are the ten steps to follow to install the cluster." Right now it's like, "by the way, you need to have Podman or Docker installed," and I don't want to have a big thing like "if you're on Debian, run this;"
A
"If you're on openSUSE, run zypper blah blah blah." It may be nice to just say: this should do it — run "ceph-daemon prepare" and it will, kind of like our install-deps script in the source repository, do its best, or do nothing if everything's already present. I guess that was sort of my thinking. I imagine that for most downstream products there's usually some infrastructure that's being used to deploy your software, so all this stuff is already taken care of — so this actually matters mostly for us, the upstream users. Exactly.
C
And for what it's worth, while we're talking about deployment tools: since we do have that downstream requirement for our next product, we are planning on having a small Salt-based tool that does what you were just talking about — creating the SSH keys, setting up NTP, and making sure that Podman is installed.
A
Anyway, okay — that's everything I had, I think. Did I miss any other questions, comments, topics?
C
That's intended — planned — for SES 7, which is our Octopus-based product. Since we won't need DeepSea anymore, we basically needed to figure out something else. Yeah, okay. And it really just does the minimum: you use SUSE Linux Enterprise Server as the base OS, and then it just installs the required packages, configures NTP, the firewall stuff — just the basic OS preparation, making sure the services are running — and then we hand it off.
C
It's a Salt formula, so it's a different implementation compared to DeepSea — much trimmed down, because basically all the Ceph-specific logic that DeepSea used to have is now in the SSH orchestrator, or it's going to be there pretty soon. So this is freeing us from quite a ton of stuff. But we still need something that integrates with the deployment tools we have for other products, and also something that works as a standalone solution, in case customers just want to buy the Ceph-based downstream product and nothing else.