From YouTube: Ceph Month 2021: Ceph On Windows
Description
By Alessandro Pilotti
Slides: https://www.slideshare.net/Inktank_Ceph/ceph-on-windows
Ceph Month 2021 schedule: https://pad.ceph.com/p/ceph-month-june-2021
A: All right, okay, seven minutes on why we are talking about Ceph on Windows. So, Ceph is the most popular distributed open source storage solution out there. This is without discussion, right? And at the same time, Windows Server has a relatively large market share, especially in the enterprise. You cannot enter a large company these days, and not even a medium-sized one, without finding a mixture, of course, of a lot of Windows servers and Linux servers and so on. So having something that will bridge the gap between them when it comes to storage is quite ideal in this context.
A: Of course, this is not the first time this comes up. Companies actually tend to use the Ceph iSCSI gateway a lot for that, but there have always been a lot of complaints related to its performance and scalability. Okay, so that's something that we've been hearing and discussing for a long time, and finally, recently, I mean approximately one year ago, we actually started doing all this work. How did it happen?
A: It happened thanks especially to SUSE, so big kudos to SUSE for that, especially Lars and Mike, for helping a lot in this effort. Working with them was absolutely phenomenal for us, and thanks to this partnership we managed to release, to bring to the community, let's say, this particular effort.
A: So what were the architectural goals we were looking at? To begin with, we wanted to have a user experience on Windows which was as close as possible to the Linux one. What does that mean? It means that we want people who have Linux experience with Ceph to be able to just log in to a Windows server and find themselves at home as much as possible, without having to go through a completely new learning curve.
A: At the same time, we wanted to make sure that the Windows side of the equation would be well integrated too, meaning that if somebody is a sysadmin or DevOps, let's say, on the Windows side, they could easily start working with Ceph by finding it natively integrated, and not like something that doesn't feel at home at all.
A: Okay, so those were some of our goals in terms of user experience. In terms of performance, we definitely wanted to outperform the iSCSI gateway, otherwise it wouldn't necessarily make too much sense, and to get as close as possible to the Linux native performance; I will add some more info on that soon. Security: anything which is cloud related, of course, has to take security into consideration.
A: In terms of platform support, we wanted to have Windows Server 2016 and 2019 as the default, and of course it will also support the upcoming 2022 version. It also works on Windows 10 for development purposes, and it could theoretically work on previous versions as well, but that's not something we necessarily plan to support at the moment. A non-goal: porting the OSDs directly to Windows. So right now, the way it works is that the Windows Server nodes connect to external OSD nodes to access the storage itself.
A: This is a very common question, especially when people ask how it compares, let's say, to native distributed storage options. Okay, a quick look at the architecture. Here we have the Windows side on the right, with separate user space and kernel space parts.
A: In user space, we have a process called rbd-wnbd, which contains the Ceph libraries, specifically librbd and librados, plus an additional component which is called WNBD, as we call it. That has a DLL in user space and connects through a fast channel, a so-called device I/O control channel, to the WNBD kernel driver that we developed specifically for this project, and from there we have access to the mapped disk.
A: So, talking specifically about user space: we have librbd, librados and all the relevant CLI components, which have been ported, and by the way, all this effort has been merged into Pacific, thanks a lot to the community for this. We are of course still actively working on potential bugs that we might find and potential regressions as we move forward with the development. RBD mappings are managed by a process called rbd-wnbd, which is very similar in scope and experience to rbd-nbd on Linux.
A: Actually, that's where the experience, let's say the inspiration, for it came from. There are three typical commands that we use: rbd-wnbd map, unmap and list, applied specifically to RBD volumes, okay, images. Mappings are managed afterwards by a Windows service, meaning that if you reboot the host, you will of course find your mappings in the same identical place. From a user perspective and from an operating system perspective, the RBD images will just look like regular Windows disks, like you would expect with iSCSI or anything similar.
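Concretely, those three commands look roughly like this; a minimal sketch, where the pool and image names are placeholders:

    # Map an RBD image so it shows up as a regular Windows disk
    rbd-wnbd map rbd/demo1
    # List the current mappings
    rbd-wnbd list
    # Remove the mapping when it is no longer needed
    rbd-wnbd unmap rbd/demo1

Since the mappings are persisted by the Windows service mentioned above, they come back in the same place after a reboot.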
A: The rbd and the other CLI tools themselves, of course, work exactly like on Linux. What we don't have, for example: there are a bunch of Python scripts which would all work with some very minimal changes on Windows itself, but, you know, on Windows you don't automatically associate a Python script with an executable, so they need to be wrapped, which is something that was out of scope for what we needed. So you cannot just install Ceph and have every one of those commands, let's say, work out of the box as an executable.
A: Okay, WNBD. That was the hardest part of the work that we did, so big kudos also to the team for that.
A: It's a Windows kernel driver, a so-called virtual storport miniport type of driver, which, in short, without getting too much into the details, is what basically allows you to create storage which is backed, in this case, by a networked storage origin and shows up as disks within the operating system.
A: It implements the NBD protocol; that's where we started from, with two communication options. The first one is via TCP, which is compatible with any Linux NBD server. Actually, when we designed this and worked on it, we wanted it to be able to work, let's say, with any NBD server, so not only Ceph, and it actually still allows us to do that. But performance was really not where we wanted it to be.
A: So what we did is that we developed a separate user space to kernel communication channel, which proved to be significantly faster and allows us, of course, to reach the performance goals that we wanted. It's open source, like everything else that we did here, licensed LGPL 2.1 to also be consistent, let's say, with the majority of the Ceph ecosystem, and it currently resides at the GitHub repository that you can see on the link below.
A: Now, let's get a bit into how the configuration works. You know, on Linux you will find everything in a default location, which would be /etc/ceph, while on Windows the equivalent of /etc will be ProgramData. So, for example, if you have your operating system and everything on C:, that would be C:\ProgramData\ceph. So very similar, very consistent, let's say, with how it works on Linux.
A: The typical workflow consists in getting the ceph.conf and keyring files over from a Linux node, installing the Windows binaries, and just starting everything. So, very simple.
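A minimal sketch of that workflow in PowerShell, assuming the built-in OpenSSH client is available and that ceph-mon1 is a placeholder for one of the Linux nodes:

    # On the Windows node, after running the installer
    New-Item -ItemType Directory -Force C:\ProgramData\ceph | Out-Null
    # Copy the configuration and keyring over from an existing Linux node
    scp admin@ceph-mon1:/etc/ceph/ceph.conf C:\ProgramData\ceph\
    scp admin@ceph-mon1:/etc/ceph/ceph.client.admin.keyring C:\ProgramData\ceph\
    # The usual CLI tools should now be able to reach the cluster
    ceph -s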
A: Besides RBD, the second goal that we had was to get CephFS ported as well.
A: For that you need a FUSE equivalent on Windows; there are a bunch of them in the community, and after we looked around at the various options, we decided to go with Dokany, which is a very stable one, works extremely well, and we are very happy with it. So that's a prerequisite that needs to be installed the moment we choose to use CephFS on Windows. Besides that, of course, we implemented all the integrations with Dokany, and it works like you would expect on Linux.
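As a sketch of what that looks like once Dokany is in place (the drive letter is arbitrary; ceph-dokan is the ported mount helper):

    # Mount the Ceph filesystem under the drive letter X:
    ceph-dokan -l x
    # It can then be used like any other Windows drive
    Get-ChildItem X:\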
A: It can be fully automated, so if you want to deploy this as part of, for example, an Ansible playbook or whatever else, it just works out of the box.
A: It's open source, as I mentioned before, and there are also continuous builds available at the link that you see below, meaning that whenever something merges in Ceph or in any of the dependent components that we looked at before, like WNBD and so on, there is an automatic build job that will just build a new version of the installer. So you're always up to date with the latest version that we have, which is based on Pacific at the moment.
A: Okay, talking about Windows, it's also very important to mention virtualization. Hyper-V and Ceph were something we were really, really looking at integrating properly. It's worth mentioning here, and I honestly forgot to add a bullet point for it, that we also added OpenStack support.
A: So if you have OpenStack with Hyper-V, you can now also have Ceph with Cinder and just make it work seamlessly on Windows, as you would on Linux. Hyper-V can now access the RBD images in the form of disks, of course, and that means that you can also boot a VM directly from an RBD image.
A: As you can see, there are a bunch of PowerShell commands that you can use just to create a VM, add a hard disk drive that comes from there, and start the VM. I will actually do a demo of that right now.
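A sketch of those PowerShell commands, with hypothetical names, assuming the RBD image was already mapped with rbd-wnbd and shows up as, say, disk number 2:

    # A passthrough disk must be offline before Hyper-V can attach it
    Set-Disk -Number 2 -IsOffline $true
    # Create a VM and attach the mapped RBD image as a physical disk
    New-VM -Name "demo-vm" -MemoryStartupBytes 2GB -Generation 1
    Add-VMHardDiskDrive -VMName "demo-vm" -ControllerType IDE -DiskNumber 2
    # Boot the VM directly from the RBD-backed disk
    Start-VM -Name "demo-vm"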
A: Let's talk quickly about performance. We had a very, very significant improvement over the iSCSI gateway, and in our tests we managed to outperform the Linux native RBD scenario as well. Please note that that was not originally what we wanted, and it's not a matter of contest; I mean, we don't want to say "hey, we are faster than Linux", okay? That was really never the goal.
A: The reason why it happened is that, since we don't have the TCP overhead on top of the, let's say, user space to kernel communication part, we have a faster communication channel, which also means that we have improved the I/O worker thread allocation. That's why this happened. Again, we tried to be as objective as possible.
A: We published a blog post with the performance results, and of course all the methodology is in the open there, and if somebody else wants to do additional tests, we are more than happy to validate them and so on. We are obviously extremely happy about how things went.
A: Okay, demo time. I'm going really fast because, of course, combining both the demo and the slides here is not necessarily too easy in the short amount of time that we have, so let's start directly.
A: And it's finished now. Let me... it will set the binaries directly in the PATH for you, but this way I don't have to, let's say, start a new session.
A: So this is the openSUSE machine where we have the Ceph cluster.
A: As you can see, right now I have only one image, which is called hyperv1, that I will use in a bit. Here I should have it as well... yes, so I can run the same command from here, and I see the same identical details. This is Windows, and this is Linux. All the data... sorry, I mean all the configuration files are already in place; I already put them there, so we don't have to spend time on that.
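The commands in question are just the standard rbd ones, identical on both sides, for example:

    # Works the same from the Windows node and the Linux node
    rbd ls
    rbd info hyperv1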
A: Created, there you go. And then let's switch to the Windows side... well actually, first let's mount it locally. So now I have mounted it on the Windows side and also on the Linux side, and that should be under /mnt, the demo1 one, and I am just very quickly writing a file into it.
A: Or I can terminate this mapping, since I don't need it. On the Windows side specifically, there is already an RBD image called hyperv1, which I created with this qemu-img command, which I don't want to run again, but just to show you: I basically converted a qcow2 image directly into the RBD image, which I called hyperv1. I'm not running it again just because it takes two to three minutes, and we don't want to spend those two to three minutes just waiting for that. Okay, so it's already there.
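The conversion being described is along these lines (the source file name is a placeholder; qemu-img can write directly into an RBD pool via its rbd protocol support):

    # Convert a qcow2 disk image straight into an RBD image named hyperv1
    qemu-img convert -O raw image.qcow2 rbd:rbd/hyperv1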
A: I want to put it offline. As you can see, it's already partitioned, because it already contains the operating system and everything. I will start a PowerShell shell...
A: And we have just... Mike, do we have a minute or two for questions?
C: So we got a few questions here. Can you clarify the CLI Python comment?
A: Yeah, sure, let me go here.
A: Okay, so these are all the binaries that we have installed at the moment. As you can see, there are the executables and a bunch of libraries we depend on, and so on.
A: As you can see, we didn't copy over, as part of the installer, any of the specific Python scripts that are often used, let's say, in the management of Ceph clusters. The reason is simply that for those we would also need a Python environment, I mean a Python interpreter, which is not available by default on Windows. The way this typically works, just as an example from when we ported and did all the work in OpenStack, which is entirely Python based, is the following:
A: If Python is installed, and the Python scripts and modules don't have anything Linux specific in them, like signals and stuff like that, then it will just work. Cool.
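For illustration, the kind of wrapper that "wrapped" refers to could be as small as this; the script name is hypothetical:

    # some-ceph-tool.ps1: tiny wrapper so the Python script next to it can be
    # invoked like a native command (Windows has no shebang handling)
    python "$PSScriptRoot\some-ceph-tool.py" @args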
C: What is the current status for the installer inclusion?
A: You mean for upstreaming it, or...?
B: ...
A: As you saw, I installed it basically as part of the demo that I did just now.
A: Well, we are already using it production-wise. We haven't encountered any bug reports recently that hinted at stability issues and so on. So this is, of course, the first release.
A: We see some interest from the emails that we're getting from users who are interested in everything, but I think that the most important thing would be to return to live events and be able to have the typical interactions that we have at, you know, the worldwide conferences and so on. So hopefully at the next Cephalocon and so on we can really take the pulse of the situation, because right now, you know, with the pandemic and everything, I don't think we can clearly assess how much interest there is, because larger enterprises especially don't just adopt something off the internet.