From YouTube: CDS Jewel -- Ceph Mesos
A
Okay, hi, my name is Joey, I am at Intel Shanghai. I am going to introduce to you a Mesos framework that we have developed, called ceph-mesos. It is a Mesos framework to scale Ceph clusters on Apache Mesos, and I've developed this with my colleague, John Kuhn Duncan. Do you want to say hi?
A
Okay, so I'm pretty sure that you have all heard of Apache Mesos, but just to be sure I will do a recap of what it is. Then I will introduce to you our proof of concept, our deployment of Mesos, and explain why we need Ceph in our Mesos deployment, and then I will introduce you to what ceph-mesos is, how it works, what our goals are, our scope, and our current challenges and development goals.
A
Okay, so Apache Mesos. Apache Mesos is a distributed systems kernel. What this means is that it is built on the same principles as the Linux kernel, but at a different level of abstraction. The Linux kernel manages a single Linux box, but Apache Mesos, as a distributed systems kernel, manages the whole data center. So what Apache Mesos does is provide your applications with APIs to run on an entire datacenter. These APIs are for resource management and also binary scheduling.
A
If you have any questions, jump in anytime. Okay, so the architecture of Apache Mesos: Apache Mesos consists of three components. First there is the framework, then there is the Apache Mesos master, and then the Mesos slave. The framework consists of a scheduler and an executor. What happens in Apache Mesos is that the slaves periodically report their resources to the Mesos master. If there is a framework, then the Mesos master also periodically sends resource offers to the framework, and if the framework needs resources to run its tasks, it accepts the offer and sends back the task info and the required resources. The Mesos master then redirects this task info to a slave, and the task is run there. How these tasks are run is that the framework sends its executor to the slave, and then the executor runs the task on the slave. Okay.
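A minimal, illustrative Python sketch of the offer/accept cycle described above. This does not use the real Mesos SDK; every class and method name here is invented just to mirror the flow of slaves reporting resources, the master forwarding offers, and the framework accepting with task info.

```python
# Toy simulation of the Mesos offer/accept cycle described above.
# It does NOT use the real Mesos API; all names here are illustrative.

class Slave:
    def __init__(self, host, cpus, mem_mb):
        self.host, self.cpus, self.mem_mb = host, cpus, mem_mb

    def report_resources(self):
        # Slaves periodically report their free resources to the master.
        return {"host": self.host, "cpus": self.cpus, "mem_mb": self.mem_mb}


class Framework:
    """Scheduler side of a framework: receives offers, answers with task info."""

    def __init__(self, pending_tasks):
        self.pending_tasks = pending_tasks

    def resource_offer(self, offer):
        # Accept the offer if a pending task fits into it, otherwise decline.
        for task in list(self.pending_tasks):
            if task["cpus"] <= offer["cpus"] and task["mem_mb"] <= offer["mem_mb"]:
                self.pending_tasks.remove(task)
                return task   # "accept": task info goes back through the master
        return None           # "decline"


class Master:
    def __init__(self, slaves, framework):
        self.slaves, self.framework = slaves, framework

    def offer_cycle(self):
        # Forward each slave's resources to the framework as an offer and
        # redirect any accepted task info back to that slave for execution.
        for slave in self.slaves:
            task = self.framework.resource_offer(slave.report_resources())
            if task is not None:
                print(f"launching {task['name']} on {slave.host} via the executor")


if __name__ == "__main__":
    framework = Framework([{"name": "osd-1", "cpus": 2, "mem_mb": 4096}])
    master = Master([Slave("host-a", 8, 16384), Slave("host-b", 4, 8192)], framework)
    master.offer_cycle()
```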
A
So that is what Apache Mesos is. In this slide I will talk about what we are currently doing with Apache Mesos. We are trying to build a big data platform as a service based on Mesos.
A
The frameworks that we use in our POC are Myriad, which scales node managers (that is, YARN nodes); HDFS-Mesos, which scales an HDFS cluster on Mesos; something called CDAP, which is the PaaS layer of our big data analytics platform; and Marathon and Mesos-DNS. And then we've added Ceph into our POC, because we needed Ceph to run as a big data storage backend, our alternative to HDFS-Mesos. I'm sure most of you know about the RADOS Gateway file system that my colleague is developing, so we are trying to integrate the RADOS Gateway file system and Ceph into our POC.
A
Okay. So our motivation is that we are trying to integrate the RADOS Gateway file system, which is an HCFS, a Hadoop-compatible file system. We are currently using Myriad to manage our node managers, and we want to guarantee data locality when we use Ceph as the storage backend for our big data analytics. So we needed a framework to manage Ceph in the same cluster where we manage our node managers. Also, Apache Mesos is becoming an application deployment platform. When this project started in '09 it was only for running distributed workloads like Spark, but currently the Mesosphere DCOS installs Cassandra, MySQL, all of those long-running services. So why not Ceph? We thought that a deployment framework for Ceph would also be needed, so that is why we started this project.
A
We will provide Ceph as a service in a Mesos cluster, and our current main target is to support the RADOS Gateway; the block device and the file system will also be supported in future releases. So our scheduler will schedule Ceph binaries, and these binaries will run on the slaves inside Ceph-on-Docker containers.
A
The architecture is like this: at the very top there is the ceph-mesos scheduler, and there are three threads running inside it. There's a file server, there's a REST API, and then there are the callbacks implemented with the Mesos SDK. And then there are the Mesos master, the slave, and our executor.
A
So what happens is that once you run the ceph-mesos scheduler, it deploys a monitor, and after the monitor is deployed, the configuration files that are in /var/lib/ceph are copied to this file server. Then, if you want to scale out an OSD node, you send a flexup request to the REST API, and through IPC this request goes to the callbacks. When the Mesos master sends a resource offer, the callback accepts the offer and sends the task info to the Mesos slave through the Mesos master. The Mesos slave knows the URL of the file server, so it downloads the ceph-mesos executor and the configuration files from that file server, and then the executor launches OSDs, RADOS gateways, or other binaries like that.
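As a rough sketch, a flexup request from a client might look something like the call below. The scheduler host, port, endpoint path, and JSON field names are assumptions made for illustration, not the documented ceph-mesos interface; check the project's README for the real API.

```python
# Hypothetical "flexup" call to the ceph-mesos scheduler's REST API.
# URL, port, and payload fields below are illustrative assumptions only.
import json
import urllib.request

payload = {
    "instances": 1,    # how many new daemons to launch (assumed field name)
    "profile": "osd",  # which Ceph role to scale (assumed field name)
}

request = urllib.request.Request(
    "http://scheduler-host:8889/api/cluster/flexup",  # assumed endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode())
```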
A
Okay, so our version is currently 0.0.1. Currently we can only deploy one OSD on each host, but as you know, OSDs need to be run on multiple disks on a single host, and also the journal is best written to an SSD. Currently we don't support these two features, but they are in development, so they will be implemented soon.
A
Also, for the data traffic, we want to make it go through the 10 GbE or 40 GbE network, and the NIC selection, or the CIDR selection, will also be implemented in the future. Also, we want to implement flex down with consolidation, we want to integrate MACVLAN into our framework, and so on.
A
Our source code is currently on GitHub, so if you're interested in ceph-mesos, you can fetch the code and try it out yourself.
C
Yeah, so you were just talking about how it's restricted to only one OSD per host. (Mm-hmm, yeah.) And typically, an OSD deployment will end up sharing: everyone has a single SSD for many different OSD journals. How do you envision implementing that with this framework? Because it looks like, without knowing too much about Mesos itself, it's running one container that's kind of isolated for each OSD. So how will they be able to, how are you going to implement partitioning journals that get shared among different containers?
A
Well, the journal thing right now is that in the Docker container provided by the Ceph organization, it simulates a journal by creating a file inside the Docker container. So, well, I'm not that clear on how we are going to solve that problem, because the journal performance is very critical for production environments.
D
I have another question, sort of, in this area. This is Sam, sorry. In the general case with Mesos, the applications that are running are usually stateless with respect to the local machine, right? (Yes, correct.) Okay, Ceph OSDs are super duper not stateless, right? There's a disk full of big fat data. So how do you plan on controlling that? I guess there are about two pieces. One: you can't rapidly expand and contract a Ceph cluster without really messing with client I/O. Or it might make sense to dynamically deploy and undeploy temporary Ceph clusters as working space for a job. Is that what you're looking for, or what are you talking about?
C
Something that comes to mind is: is it possible for Mesos to schedule bringing up several monitors at once, rather than doing a single one first and then adding more later? (Yes.)
A
Yes, that is also possible. Oh, I haven't shown you an example of our REST API. In our REST API you can specify how many instances of a binary you want to expand. So let's say instances is 10 and the profile is OSD; that means you want to add 10 OSDs. And also in this REST API you can specify a particular rack or a particular host. So it's very flexible.
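A rough sketch of what such a request body might look like, based on the fields just mentioned; the exact field names ("instances", "profile", "rack", "host") are assumptions for illustration rather than the documented schema.

```python
# Hypothetical flexup request body based on the fields described above.
# Field names are assumptions; consult the ceph-mesos README for the schema.
import json

flexup_body = {
    "instances": 10,     # launch ten daemons in one request
    "profile": "osd",    # which Ceph role to scale: osd, mon, radosgw, ...
    "rack": "rack-2",    # optional placement constraint: a particular rack
    # "host": "node-07", # ...or pin the new daemons to a particular host
}
print(json.dumps(flexup_body, indent=2))
```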
C
Okay. It's just an idea for the future: it might be nice to be able to, instead of starting with one monitor, start with, say, three, and have that set as the number of monitors when you boot it up. But at that point you need to have the IPs of all of them before you can start up, so that might be problematic. (Yeah.)
A
Yeah, so we will make it possible to specify in the config file that you want three initial monitors, and then, when it spawns the three monitors, it will first spawn the first one and then copy the configuration files to the second, and the information will be merged into the file server. So yeah, the IPs will eventually be shared among the three monitors.
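A toy Python sketch of that sequential monitor bootstrap, under the assumption of a simple shared config store standing in for the scheduler's file server; none of the names below come from the actual ceph-mesos code.

```python
# Toy sketch of the sequential monitor bootstrap described above: spawn the
# first monitor, then reuse its accumulated config (monitor IPs) for the next
# ones via a shared "file server". All names here are illustrative.

def spawn_monitor(host, shared_config):
    # In ceph-mesos this would be a Mesos task launching a monitor container;
    # here we only record the new monitor's address in the shared config.
    shared_config.setdefault("mon_hosts", []).append(host)
    print(f"mon on {host} started, peers so far: {shared_config['mon_hosts']}")
    return shared_config

def bootstrap_monitors(hosts):
    shared_config = {}        # stands in for the scheduler's file server
    for host in hosts:        # strictly one after another, as described
        shared_config = spawn_monitor(host, shared_config)
    return shared_config

if __name__ == "__main__":
    bootstrap_monitors(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
```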
A
Yeah, yeah, it is intentional, because, you know, a host might go down, might have a problem with the power. So if you reboot the host, you still want the data to be persistent on the same host. So there you go: you just restart your container and you have your data over there.
A
Well, currently we haven't done any big-scale testing. Well, there is this one problem: where you have a single host with multiple OSDs, there's no problem with that, but if you have multiple hosts which each have multiple OSDs inside, then the whole system goes crazy, and there are many reports on the web that say they have had the same experience. Currently we don't know what the problem is. Even without Mesos, if you manually launch multiple OSDs on multiple hosts, the monitors just hang at a certain point. It's probably...