From YouTube: 2018-08-22 Ceph Testing Weekly
Description
Weekly collaboration call of all community members working on Ceph Testing.
http://ceph.com/testing
A: All right, so... wow, you asked for this, although it's also something that we've been interested in. So do you want to kick off with what you were thinking about, or...?
B: Yeah, putting me on the spot like that. That's not very nice of you. Now, in all seriousness, it's not a problem. The things we've been discussing have had a lot to do with how we're going to test Ceph once we move to a containerized environment: I mean testing at scale and making sure that things still behave as they are supposed to behave. Since forever we've been relying on teuthology for all of this, be it internally or in the Ceph lab.

B: That's where most of the Ceph testing coverage is: it's in teuthology. Nothing else compares to how thorough teuthology really is when it comes to testing Ceph, and so it makes sense to find a way to run teuthology against the containerized environment. And this is where the problem begins: what is the real scope on which we intend to use teuthology?

B: Is teuthology just going to be the testing framework, being supplied with a Kubernetes cluster and then just doing its thing, or will teuthology also have the responsibility to deploy the cluster, especially when we start having Rook involved? How does Rook fit into the whole thing? Is teuthology responsible for also deploying Rook and testing Ceph against different versions of Rook, should that be necessary? How will we test upgrades?

B: So this is where the complexity matrix starts to grow quite a bit. We've discussed a bit of that internally, but regardless of what we discussed internally, we always end up needing buy-in from the other stakeholders of Ceph, because it doesn't make sense to just throw resources into this and then eventually, potentially, diverge, which is basically wasting resources on all parties.
A: Yeah, that's about where I am. I do have a few more things. We've already identified as a critical gap that we don't test installation methods right now, so whatever we do with this, we're going to need to test that. You know, I think we do need to cover testing Rook, that it works, depending on how quickly we go to an all-containerized environment, if that actually happens. And we also need to test ceph-ansible and stuff.
C: We could maybe do something like having, say, a per-rack (or whatever) Kubernetes cluster, and instead of having teuthology lock nodes by doing all that existing node-wrangling stuff, it could lock nodes by applying some labels to the Kubernetes nodes. Then, when we schedule the containers within a particular teuthology run, we would say, you know, run on these labeled nodes. I think we do need some level of per-run isolation on top of Kubernetes.
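As a rough sketch of that node-labeling idea (purely illustrative: the label key and helper names are invented, and this is not an existing teuthology or Rook API), the flow with the Kubernetes Python client might look like this:

```python
# Hypothetical sketch: "lock" a node for one teuthology run by labeling it,
# then pin that run's pods to the label via a nodeSelector.
from kubernetes import client, config

RUN_LABEL = "teuthology.example.com/run"  # invented label key

def lock_node(api, node_name, run_id):
    """Mark a node as belonging to a single teuthology run."""
    api.patch_node(node_name, {"metadata": {"labels": {RUN_LABEL: run_id}}})

def pod_for_run(run_id, image):
    """Build a pod spec that may only schedule onto nodes locked for this run."""
    return client.V1Pod(
        metadata=client.V1ObjectMeta(generate_name=f"{run_id}-"),
        spec=client.V1PodSpec(
            node_selector={RUN_LABEL: run_id},
            containers=[client.V1Container(name="workload", image=image)],
        ),
    )

if __name__ == "__main__":
    config.load_kube_config()
    core = client.CoreV1Api()
    lock_node(core, "node-01", "run-1234")
    core.create_namespaced_pod("teuthology", pod_for_run("run-1234", "ceph/ceph:v13"))
```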
D: So if you look at the way we currently use OpenStack, it uses this libcloud backend that I wrote, I don't know, a year and a half or two years ago. Part of the concept of that backend is that you can, in your configuration, define machine types, sort of virtual machine types that correspond to a given OpenStack configuration, and so when you try to lock a node of that machine type, what it really does is...

D: It goes and creates an instance in the cloud that corresponds to it via that configuration. So we could do something similar, where, I don't know if it's appropriate to still look for individual machines that are inside of a Kubernetes cluster based on what you just said, John, but an additional, you know, provisioning backend for teuthology could feasibly say: alright, I want to either select...
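For context, the machine-type idea maps a named type onto provider-specific parameters. A Kubernetes-flavoured analogue of that (illustrative only; the keys and class names below are invented, not the real teuthology configuration schema) might look like:

```python
# Illustrative only: how a provisioning backend might map teuthology
# "machine types" onto either cloud flavors (as the libcloud backend does
# conceptually) or Kubernetes node labels.
MACHINE_TYPES = {
    "openstack-small": {"backend": "libcloud", "flavor": "m1.small", "volumes": 1},
    "k8s-osd-node":    {"backend": "kubernetes", "node_labels": {"role": "osd"}},
}

class Provisioner:
    """Common interface a new backend could implement."""
    def lock(self, machine_type, count):
        raise NotImplementedError

class KubernetesProvisioner(Provisioner):
    def lock(self, machine_type, count):
        labels = MACHINE_TYPES[machine_type]["node_labels"]
        # ...select `count` nodes matching `labels` and mark them for this run...
        return ["node-%d" % i for i in range(count)]
```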
A: We have a simple test framework for our integration testing that does use KVM to start up a one-node Kubernetes cluster, and when it's done it tears it down; it's just an AWS or GCE run. So it's simple, but if you actually have a full scale-out lab, I don't think we'd want to necessarily bring up Kubernetes every time we want to test something; Kubernetes is just there.

A: Hopefully there could be some external process, outside the test framework, which knows how to set up Kubernetes, but then, yeah, once that's going, we could certainly use the node labels and things to tell Rook where to run the mons and OSDs and other daemons. That's all definitely doable.
A: It takes a little bit of care to make sure you label your nodes correctly and then tell things where to run, so that you don't have them stomping on each other, but it is possible. I mean, the usual customer scenario is to just run one cluster, because you need one set of storage for your whole cluster, but it's possible to partition it like that.

A: As for testing Rook, are we going to need to be able to run it against multiple Kubernetes versions frequently? I mean, Rook's integration testing that's automated with every PR and every merge does run against the versions of Kubernetes we support, so we spin up Kubernetes 1.8, 1.9, 1.10 and 1.11 right now to run the tests against.
C: Yeah, we kind of need to model Kubernetes versions the same way we do operating system versions at the moment. Like, we used to have machines that were statically configured with different OSes and schedule onto them; we moved toward dynamically installing the OS we want, and the equivalent to that with Kubernetes would be to re-image some nodes with a particular operating system that had a particular version of Kubernetes on them. But I wonder if that's maybe something that we need to think about...
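A sketch of what that could mean at the job-config level, if Kubernetes versions were modeled alongside OS versions (written as a Python dict for illustration; the kubernetes_version key is hypothetical, not something teuthology understands today):

```python
# Hypothetical job fragment: os_type/os_version mirror what teuthology already
# schedules on, while kubernetes_version is an invented key showing how a
# Kubernetes release could be requested the same way an OS release is.
job = {
    "machine_type": "smithi",
    "os_type": "ubuntu",
    "os_version": "18.04",
    "kubernetes_version": "1.11",  # would trigger re-imaging/bootstrap as needed
    "roles": [["mon.a", "mgr.x"], ["osd.0", "osd.1"]],
}
```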
C: That fancy matrix testing, which might require having more automation to build up Kubernetes clusters on demand, is a can that we should kick down the road a little bit. So we can do the simpler thing to begin with (and not just to begin with, but for most of our testing most of the time), where we just have, like, one standing Kubernetes cluster.
C: Then it's almost a good thing for us to be taking that pain in our testing environment, so that that becomes the standard way of doing things, rather than for us to do all our testing on the assumption that we have a cluster to ourselves, and then have our users potentially run into the sort of practical realities of sharing.

E: That's likely going to be a quicker path forward in the short term, and then maybe that means that in the short term, for the next couple of Ceph releases, we don't officially support multiple Ceph clusters in the same Kubernetes cluster, until we have good testing of one Ceph cluster per Kubernetes cluster.
C: Yeah, I guess it comes down to which is less work: the setup and teardown and isolation of multiple Ceph clusters within one Kubernetes cluster, versus the setup and teardown and isolation of multiple Kubernetes clusters within an estate of physical hardware. I'm not sure I have an intuitive sense of which of those is easier, since I've never...

A: We also need to, like, maintain testing of not-Rook and not-Kubernetes. I've been told that it's possible to run systemd inside of containers, and so we could keep on doing the things we do with ceph-ansible and our usual systemd units and stuff, but I have no idea how plausible that is or how much it overlaps.
C: The stuff we're talking about now is a comparatively long-term project, in that we're sort of talking about things today (and yes, we need the initial versions quite soon) that might not be fully rounded out for a year or two. And in a year or two we hope that much more of the world will have, you know, joined us on the Kubernetes bandwagon. In the meantime, for people who want to test outside of Kubernetes: well, they already have their test framework, right?

C: They have the existing test framework, and the question is whether we need to build something new for testing ceph-ansible. My feeling about that (and this is all very opinionated, of course) is that we've gone like four or five years with this current situation of teuthology doing things its own way, and there also being these external tools that weren't in use in teuthology and either got tested separately, or got tested less, or tested manually, or whatever it is. And if the...
A: More than that: if you think that what we're doing with Kubernetes as the community's testing framework isn't going to exist for a year or two, I don't think we're going to get there if it's not incremental. I want to be clear: I don't actually have an opinion about much of these things. I just want to know what a path looks like, and that it covers the matrix of testing we need, and I don't see how that happens yet, mostly because I don't...
C: I'm not saying that this is what we're going to be using in 50 years' time and that we can never leave it, but when we decide we want to move to a different framework, we move, right? We rewrite for a different framework, rather than trying to maintain a sort of multi-headed thing throughout the whole process. Right? If we adopted Kubernetes today and then in five years we decided we wanted to adopt something else, then great: in five years.
B: And, to be honest, regardless of how much testing DeepSea deployments and upgrades and whatnot would be useful to us as well, I personally agree with John. I do think that, if it's in any way possible or feasible, it would be nice if teuthology were pluggable enough to just have people write whatever is their poison of choice and do their own testing with it. Maybe that's possible already, but I'm not familiar enough with teuthology's inner workings, actually.
A: When I was skimming through things, it's mostly the Ceph cluster install and setup tasks that deal with that, and I thought that we were doing a lot of raw OSD manipulations with the thrasher and stuff, but we actually don't. So it's really just a few places that you need to poke at to switch things right now, and...
B: If I recall correctly how things work, the idea I have is that teuthology has a bunch of primitives that allow us to just run, like, the OSDs, the monitors, stop them, start them, all of that. Well, assuming those are abstracted and there is a module or something, or black magic, Python black magic...

C: That's what I was about to say: the sort of interface that you're describing, to let teuthology drive different orchestrators, is going to look very similar to the orchestrator interface within the manager daemon. And, you know, it's not a trivial thing, right: we've spent a reasonable amount of time already trying to whittle down that interface into something which is reasonably minimal, but also general enough for the different backends, and we're already at the point where we have...
C: Some of them support groups of drives, some don't. And if we actually want to go down that route, then maybe we should be looking more at teuthology driving the Ceph manager. Right, so teuthology would know itself how to set up the manager and the monitor, but then for everything else, when it wanted to create OSDs and whatever, it would just do that via the manager, so that it was using the same paths that it would hopefully be using if we wanted this sort of multi-backend type environment.
E: I think the problem I see with that is that, I mean, deploying things and testing that the deployments and, like, adding OSDs and things get done is good. But there is also the case where you want to just, like, full-stop an OSD, or crash a monitor or something, and that has to look different for the different backends. For something that runs on hardware, you just kill it and make sure systemd doesn't restart it automatically.

E: That's assuming that's the testing that you even want to be doing, and I suppose, to start with, you know, asking what happens when a monitor has failed could be something entirely separate from that. But we're kind of, or at least as I see it, we're kind of at a place now where teuthology has to be semi-aware of whatever it's running on top of.
A: I mean, for instance, it could be that we fully buy into running on top of Kubernetes, but that we maintain the ability to test other things within that platform, so that we can use Kubernetes' job scheduling but we run VMs as containers, or whatever it takes to do other kinds of testing within that environment.
B: No, but I think that brings us to what Blaine was talking about, because as soon as we move to Kubernetes, there are a lot of operations for which the semantics may change. You cannot simply kill a monitor and expect it to be down, or even stop it, because, as far as I understand, Kubernetes, or maybe Rook, will restart those daemons or pods, because it wants Ceph to be as happy as possible.

B: So I think it was two weeks ago, when we first discussed this in this meeting, that John mentioned something that I actually agree with: ensuring that teuthology itself is nothing more than a testing framework, and removing a lot of the cruft. I think that was your point, right, John? I'm not misreading it, yeah?

B: And I agree with this. Then I got to talk, I think it was with Blaine, and Blaine mentioned this whole possible change of semantics, and I'm now just trying to figure out whether having teuthology as just a simple testing framework would be possible without some form of platform awareness, because I have the feeling that it isn't, really. Although it would be amazing if we could simplify teuthology enough, to the point that writing tests for it would be easy, running teuthology itself would be easy, and all the other things around teuthology could be there.
C: One way to deal with the issue of Rook might be to say that, rather than teuthology having to be aware of the different orchestrators, we just say: if you want to be an orchestrator, you need to expose the following set of standard hooks, right? That way teuthology doesn't have to be pluggable, and teuthology can just do whatever it needs to do with any compliant orchestrator. And it's all open source, so we can, you know, change the orchestrators themselves if we want, to bend them to expose those hooks.
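To make the shape of that concrete, a set of standard hooks might be nothing more than a small interface that each backend (ceph-ansible, Rook, bare metal) implements in its own way. The sketch below is purely illustrative; the class and method names are invented, not an existing teuthology API:

```python
# Minimal sketch of the "standard hooks" idea: every orchestrator backend
# exposes the same small set of operations, and the test framework only
# ever calls this interface.
from abc import ABC, abstractmethod

class OrchestratorHooks(ABC):
    """Operations a backend must provide to be driven by the test framework."""

    @abstractmethod
    def deploy_cluster(self, spec):
        """Bring up a Ceph cluster matching the given spec."""

    @abstractmethod
    def stop_daemon(self, daemon_type, daemon_id):
        """Stop a daemon (e.g. 'osd', '3') and keep it down until started."""

    @abstractmethod
    def start_daemon(self, daemon_type, daemon_id):
        """Start a previously stopped daemon."""

    @abstractmethod
    def kill_daemon(self, daemon_type, daemon_id):
        """Kill a daemon uncleanly, without the platform restarting it."""
```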
E: Whatever comes after Kubernetes, I think it makes sense to start planning for us to be able to separate the concerns out more distinctly: to make sure, you know, teuthology only tests Ceph, and that there are things that can plug into that for scheduling hardware, or for, you know, installing operating systems, or for, like, running a DeepSea test suite or an ansible test suite. To me those things do seem in addition to Ceph and not the same as Ceph.
C: Then what we had been discussing was having teuthology know both ways of killing a service, right, so it would have to know "here's an API call you make on Rook, and here is, like, a command you run under ceph-ansible", whereas the other way around to do that is to say that teuthology, when it wants to kill a service, will always, for example, run the following command, and it would be up to ceph-ansible or Rook to make sure that it provided an implementation of that command which matched teuthology's expectations.

C: No, the point is that you don't have one piece of software that has multiple ways of doing things, right? So Rook would have one way that it exposes its hooks for testing, and that would be the teuthology way, and teuthology would have one way that it calls into things to access the hooks, and that would also be the teuthology way.
A: So I get that, it just... it seems like a much more natural interface boundary to say: teuthology is going to expect a module named in this fashion that you drop into a directory somewhere, and then it invokes these functions on it in this way. And that's the interface, and that's all that the teuthology-to-orchestrator layer, or we when writing tasks, write against. But that seems much more natural than, like, specifying some call-out that we have to, like...
C: On this, just briefly: it depends on how complex the implementation of the hook is, though. Like, if the hook for taking down the service in Rook involves not just doing something inside Rook, but also, like, fiddling around inside Kubernetes, you might find that implementing that as a .py file inside teuthology would actually potentially be impossible, because it can't reach the internals it needs to reach, as opposed to the legacy environments where everything is just SSH, and you're root everywhere, and you can just go.
E: That can still be done with Rook. Like, I actually see that this is better, because, you know, for running on bare metal you just SSH in and kill -9, but for Rook you have to, like, find where that pod lives, you have to attach to that pod, and then you can kill -9 it. But you also have to make sure that Rook isn't going to restart that pod, and so, you know, maybe you have to simulate what that health check looks like and always just return: hey, my pod is healthy, while inside of the container the daemon is killed.

E: But to have teuthology know how to do that... If you take that to the extreme, you know, if you have, like, Rook and then something else and then something else and then something else, then teuthology has to know how to do that for every single backend, when really all teuthology wants to do is kill that one process. Yeah.
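A rough sketch of what "kill an OSD" might look like per backend, as discussed above. The label selector and namespace are assumptions about a typical Rook deployment rather than guaranteed values, and preventing Kubernetes or Rook from simply restarting the pod (for example by pausing the operator first) is the hard part and is deliberately not shown:

```python
# Sketch only: two backend-specific implementations of "kill this OSD".
import subprocess

def kill_osd_baremetal(host, osd_id):
    # Legacy environment: SSH in as root and kill -9 the process.
    subprocess.run(
        ["ssh", "root@%s" % host, "pkill -9 -f 'ceph-osd .*--id %d'" % osd_id],
        check=True,
    )

def kill_osd_rook(namespace, osd_id):
    # Containerized environment: find the pod hosting this OSD, then exec into
    # it and kill the daemon inside the container.
    pod = subprocess.run(
        ["kubectl", "-n", namespace, "get", "pods",
         "-l", "app=rook-ceph-osd,ceph-osd-id=%d" % osd_id,  # assumed labels
         "-o", "jsonpath={.items[0].metadata.name}"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    subprocess.run(
        ["kubectl", "-n", namespace, "exec", pod, "--", "pkill", "-9", "ceph-osd"],
        check=True,
    )
```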
D: Yeah, no, I hear that. But I want to touch on the idea of only supporting one installer method at a time. I think that even if Ceph upstream were solidly on Rook tomorrow, and we had rewritten the parts of teuthology that we needed to in order to support Rook exactly how we wanted, we would still, for several years at least, downstream, at least at Red Hat, you know, we would still have to test everything...

D: ...in all the different ways that it used to work, because we have customers on those systems, and testing with entirely different test frameworks upstream versus downstream is pretty far from ideal, in my view, even just from a resource standpoint. Is it already the case that...
C: Without wanting to divulge anything commercially sensitive: our customers in the field are mostly using systems that were installed using ansible or other means, but we are still doing most of our testing using teuthology's own idea of how to install Ceph. So it's already largely the case that we're doing most of our testing with different install mechanisms than customers actually use.
D: So, at least internally with our downstream testing, and I don't have nearly as much information as I ideally would about it, but yeah, it's a fork. It is not, like, extremely different from what's upstream, but it's different enough to cause complications. And so I kind of see that as, like, a preview of what might happen if we just jumped in upstream and rewrote something entirely, you know, different.
A: It's not just our downstreams. Like, we found out that there were three other people, other companies, running teuthology, and two showed up at Cephalocon. The more drastic the change, the harder it is for them, and the less likely we are to get them to cooperate with us instead of redundantly going off, doing their own customizations, and sitting on their own thing forever. I mean, there are already, what, four incarnations of teuthology out there, and so...
B: Then what I'm not getting, though, is what in this conversation seems to be a blocker for reconverging. If what John was suggesting actually comes into place, what we're talking about here is defining operations that have certain semantics, and whatever is using teuthology will just have to comply with those semantics. I get that there's already code implemented, and implementing that code in a different way seems wasteful, but the...
A: I guess those are fair points, so that would be good, but there are sort of operational issues, like, from a community perspective. One is, and that's something I'm trying to do, like, getting more of a Ceph testing and teuthology community going, so that, hopefully, all the different labs in the world can start working together. So it could be that, for instance, we can reduce the number of homegrown services we have as part of teuthology by switching to use Kubernetes as the provider for a bunch of that, entirely independently of whether we test Rook inside of it or not. But a lot of the manpower available for Ceph testing...

A: ...is not pulling in the same direction and isn't working together, because they have specific requirements about how they test that will certainly not be met by the Kubernetes vision that John has. And, I mean, that's fine, but it's something that we want to, like, make a choice about knowingly. When I say everyone, I mean...
A
You
eat
people,
the
suit
acuity
people
us
in
this
room,
flip
card
dream,
host
side
that
place
where
Robin
went.
I
forget
their
name
right
and
and
the
more
available
we
are
to
all
of
those
groups.
I
think
the
more
successful
will
be
in
the
long
term,
even
if
it's
a
little
bit
of
short-term
or
a
little
bit
of
like
local
coding
pain.
So.
C: If we talked about it more in terms of separation of concerns, rather than in terms of adding to teuthology as it exists today... So if we could think about these interfaces that you're writing for talking to different installers as something that would be separated out, and then you would have teuthology, and then some new service that sat externally to teuthology and provided that generic...

C: ...you know, "here are all the different knobs that teuthology needs, and here are a couple of implementations that talk to different backends", then I think I would be more comfortable with that. Because if you talk about it in terms of, like, adding things to teuthology, I just think teuthology is already too big. Yeah.
D
So
so
can
I
just
want
to
jump
in
here
and
something
that's
been
on
my
mind
for
quite
some
time
and
and
and
I
haven't
really
been
able
to
work
on
thorgy
in
a
long
time,
but
but
probably
the
first
thing
I
would
want
to
do
if
I
did
start
working
on
it
again
is
split.
It
up.
So
I
think
that
a
lot
of
you
know
the
lab
management
stuff
into
ecology.
You
know
we're
still
going
to
need
that
for
the
foreseeable
future.
D
D
Without
having
to
worry
about
about
the
rest
of
it
so
much
if
we
had,
if
it
were
split
up
into
into
sub
components
that
had
clearly
defined
interfaces
between
the
between
them,
I
think
making
changes
like
anything
that
we're
talking
about
here
would
be
a
lot
less
daunting,
because
as
it
stands
right
now,
you
have
to
have
much
much
too
large,
of
an
understanding
of
like
the
entire
behemoth.
That
is
technology
to
make
any
interesting
changes.
I
mean.
Does
that
sound
at
all
attractive
to
you,
John.
C: Yeah! For me it's about the conceptual separation rather than, like, whether they're separate projects as such. I mean, from a practical point of view we'd probably still want, like, one repository, for example; I know that can be a controversial thing. But even actually running things as separate services that potentially have RPC between them, rather than having Python libraries, I think would be a big step forward.

C: The trouble with the Python libraries approach is you never know what's calling what, whereas if we can actually wrap things up in services that use, you know, an RPC mechanism or something like that, then we can have not just that separation of concerns, but enforcement of it, because you have a process boundary rather than a big mush of code. I say that as a big Python fan, but, you know, beyond a certain line-of-code count it does turn into a big mush, yeah.
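As a tiny illustration of that process-boundary idea (stdlib XML-RPC used here purely for brevity; the service and method names are invented, not part of teuthology):

```python
# Minimal sketch: expose the "run a command on a remote node" piece as its own
# service, so callers go through an RPC boundary instead of importing a
# library directly.
from xmlrpc.server import SimpleXMLRPCServer
import subprocess

def run_remote(host, command):
    """Run a shell command on a node over SSH and return its output."""
    result = subprocess.run(["ssh", host, command],
                            check=True, capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    server.register_function(run_remote)
    server.serve_forever()

# A client (e.g. the test runner) could then call it with:
#   import xmlrpc.client
#   proxy = xmlrpc.client.ServerProxy("http://executor.example:8000/")
#   print(proxy.run_remote("smithi001.example", "ceph -s"))
```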
D: A couple of examples would be what exists in the orchestra sub-package: the remote classes, the things that we use to invoke processes on remote machines. Possibly even, you know, the power management related stuff could go into one service, and the provisioner could be its own service.
C: Those things really are just, like, commands, right? So you would have some generic thing that was like "go run a ceph binary with admin permissions", which would be what we would use to invoke all the tools. On bare metal that would probably do something like SSH into a monitor node, and on Kubernetes it would do something like executing in the Ceph tools container. But that's just one example, right? So it's not as general as "just run a command", but it's also not super specific.
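A sketch of that generic "run a ceph command with admin permissions" idea with two backends. The host name, namespace, and tools-pod label below are illustrative assumptions about a typical deployment, not fixed values from teuthology or Rook:

```python
# Sketch only: the same admin-command call, backed by SSH on bare metal or by
# exec-ing into a tools container on Kubernetes.
import subprocess

class BareMetalCephShell:
    """Run admin `ceph` commands by SSHing to a monitor node."""
    def __init__(self, mon_host):
        self.mon_host = mon_host

    def ceph(self, *args):
        out = subprocess.run(["ssh", "root@%s" % self.mon_host, "ceph"] + list(args),
                             check=True, capture_output=True, text=True)
        return out.stdout

class KubernetesCephShell:
    """Run admin `ceph` commands by exec-ing into a tools pod."""
    def __init__(self, namespace="rook-ceph", selector="app=rook-ceph-tools"):
        self.namespace = namespace  # assumed namespace
        self.selector = selector    # assumed pod label

    def ceph(self, *args):
        pod = subprocess.run(
            ["kubectl", "-n", self.namespace, "get", "pods", "-l", self.selector,
             "-o", "jsonpath={.items[0].metadata.name}"],
            check=True, capture_output=True, text=True).stdout.strip()
        out = subprocess.run(
            ["kubectl", "-n", self.namespace, "exec", pod, "--", "ceph"] + list(args),
            check=True, capture_output=True, text=True)
        return out.stdout

# Either backend exposes the same call to the test code, e.g. shell.ceph("osd", "tree")
```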
D
So
another
thing
another
thing
that
might
deserve
to
be
its
own
sort
of
service
is:
there
is
I'm
trying
to
remember
the
name
of
this
this
this
task,
it's
the
set
manager,
I
think
and
it
existed
before
the
manager.
Daemon
did
excuse
me
and
it
has
methods
that,
let
you
do
things
like
stop.
You
know
all
s
DS
and
whatnot.
If
that
were
its
own
service
it
could.
They
could
know
about
some
of
these
things
right
now.
It's
it's
a
thing.
A: Yeah, that was a good sharing of ideas. I gotta tell you, I'm, like, running a Red Hat downstream-focused but general "how we do testing" group; I don't know, it's called a DFG, which doesn't mean anything, a tiger-team sort of thing, and we're not on teuthology yet, although we will be. So, like, my personal interest is, I think, just going to be setting up internal interfaces, and it's like: I need to make it so we can test ceph-ansible in the lab. But, you know, those I think will be useful tools.
A: Whichever way we end up going with Kubernetes here, I do have one other thing I'd like us to think about, which I definitely don't have an answer to: what running on Kubernetes as the whole cluster manager would mean for people who aren't Red Hatters, or even who are, or even for us, in terms of testing the changes we make to the system. Like, would we have to start deploying Kubernetes inside of public OpenStack clouds if we don't have physical hardware? Or is it even remotely not insane to test it?
A: Yeah, okay, anyway. That was, I think, useful for me. I hope it was useful for other people. We'll do this...
B: Weird, you know, but anyway. I'm getting tired, though, so allow me... anyway.