From YouTube: Ceph Month 2021: The go-ceph get together BoF
Description
Led by: John Mulligan
Ceph Month 2021 schedule: https://pad.ceph.com/p/ceph-month-june-2021
A: Okay, that's probably sufficient, I'll start. So hello, my name is John Mulligan. I've been one of the maintainers of go-ceph for over a year now, and I think this is our first public discussion of the project. It's been around for a while, but we've kind of tried to revitalize it, and that leads me into my first slide, which is a brief history of go-ceph.

A: According to git history, the project was started in 2014 by Noah Watkins. Me and the other maintainers joined the project and started our activity there in October of 2019. A few months later, we started using Go modules, which was a somewhat recent development in the Go community.

A: I'd say a significant chunk of the work we do is on the RBD APIs; there are quite a lot of them. One of our most important consumers is the CSI project, and a lot of the activity around those APIs has been driven by that project and their needs.

A: A couple of the most recent important topics include RBD mirroring and snapshots, as well as some recent work around getting proper thick or thin image provisioning. Again, that was driven by the CSI team as well.

A: Beyond the libraries that wrap librados and librbd, we've recently developed various sub-packages I've been calling the admin packages. The first was cephfs admin, and this uses the equivalent functions that the ceph command does. It uses, for lack of a better word, the JSON API, our command API. Then we've recently grown admin packages for rbd as well as rgw. Rbd and cephfs work similarly, using that JSON API, while rgw uses an HTTP REST-style API. Those last two are new in our release this week.

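The command-style calls in those admin packages ultimately boil down to JSON payloads handed to the cluster. A minimal stdlib-only sketch of what such a payload looks like (the struct and helper here are illustrative, not go-ceph's actual types; the `prefix`/`format` fields follow the general shape of Ceph's mon command interface):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// monCommand models the kind of JSON command Ceph's command
// interface accepts: "prefix" names the command and "format"
// selects the output encoding. (Sketch only; go-ceph builds
// and dispatches these internally.)
type monCommand struct {
	Prefix string `json:"prefix"`
	Format string `json:"format"`
}

// marshalCommand encodes a command, asking for JSON output.
func marshalCommand(prefix string) ([]byte, error) {
	return json.Marshal(monCommand{Prefix: prefix, Format: "json"})
}

func main() {
	buf, err := marshalCommand("fs ls")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(buf)) // {"prefix":"fs ls","format":"json"}
}
```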
A: So what are we talking about coming up in the short term? We have a few RBD mirroring functions that still need to be completed, and then, after that, we'll be looking into CephFS mirroring functions. I believe these are new and specific.

A: There are still significant chunks of the Ceph APIs that we do not wrap. Well, I don't think we plan to wrap everything, you know. We probably need a good measurement, or understanding, of what we do or don't want to wrap explicitly. There's always stuff going on behind the scenes as well.

A: Recently, one of my teammates, Sven Anderson, was working on performance enhancements around sharing buffers between Go and C. It's a very interesting development. And there are just other general background tasks as we've grown. I've come to think of us, and this kind of fits in with the recent addition of the rgw stuff, that go-ceph wants to be the place for library code for when you're interacting directly with a Ceph cluster and you're writing in the Go language.

A: The standard is that you can have API-breaking changes as long as you're v0. It basically means you haven't really released, which isn't really true, and there are libraries in perpetual v0 state. We're kind of there right now, but I don't want to be, so at some point we really ought to go to v1. But when we do that, we have a compatibility promise to keep, so we need to do some planning around deciding what deprecated APIs we really ought to drop or clean up. And if there are APIs that are very important and we want to experiment with them, we should do that sooner rather than later, so that if we have to make an API-breaking change, we do it while we're still in v0. So that's the end of my slides. Hopefully I didn't take too long; I think I took longer than I should, so I will stop talking, and this can be a proper BoF session following this. Thanks.

B: Hey John, question for you. Hello, hello. Would you be interested at all, in go-ceph, in having some sort of access to the admin socket for the daemons?

A: I have a rough idea of what that is. I know there's some interaction with the JSON API in that segment, but maybe fill me in a little bit more on the technical details.

B: Yeah, to give detail: the admin socket is used primarily for low-level debugging or poking at the various daemons. We use it quite a bit internally to expand on the number of metrics that we can export from the low level, especially OSD stats.

B: So I mean, we have a rudimentary implementation of an asok wrapper that we use internally, and I've been kind of toying with the idea, at some point, of trying to contribute something to go-ceph that would be better than the implementation that we have internally, and I wasn't sure if there would be interest in that.

A: Yeah, that sounds very interesting. I probably need to look at it a little more to understand exactly what you mean, but yeah, generally, if it's something that interacts with the Ceph cluster directly... The distinction I'm trying to make, maybe I should clarify, is that, you know...

A: Obviously you can write Go code that talks to a FUSE mount that's mounting CephFS, or you can use the S3 APIs in Go to talk to rgw. But for the stuff that's very Ceph-specific, I'm very open to hearing suggestions for what go-ceph can take in and make use of, kind of treating the project and the various sub-modules as: here's how you make administrative changes to CephFS; if you want to use Go to set up RBD mirroring, use these API calls. So, in general, yeah.

C: Sorry, this is Sven. I'm also one of the developers of go-ceph, and I'm really curious how many users of go-ceph are actually online here at the moment. Can we find that out, maybe with a chat, or... I don't know? Because probably many people are just interested in general, but I'm curious if there are people who actually use it already.

A: I know of one or two others. Offhand I don't remember exactly, but we've been contacted a couple of times through issues about this or that thing. I'm not sure how mature those projects are, but there are a couple of other ones out there. A couple were using very old versions of go-ceph when I looked at the repos on GitHub. So there's some stuff out in the wild, but yeah, the main ones I know about are the ones that have been mentioned.

F: So I just want to add to what Michael said. So we plan to use it, and apart from that, I am trying to develop the COSI driver for rgw, which also plans to use go-ceph for those admin commands. So yeah. But the driver is at a pretty initial stage; nothing much, or nothing productized, at the moment.

E: I was about to ask. I didn't click it. Okay, good.

A: Since I see Sven's thumb, I'll throw this out there: Sven, you've been working on getting go-ceph working on Mac recently. You want to talk a little bit about that?

C: It all started with the header files on Mac missing when I wanted to compile go-ceph, and so I worked together with Kefu, who is also part of the Ceph team. On Mac we have a kind of packaging system which is called brew, and there once was a brew package for the Ceph client, at least, and so I just asked him if we could get that running again. One thing led to the other, and now there's actually a package again on brew.

C: So you can install the Ceph client just with brew install, and it also installs the headers. So you can just build go-ceph as you would be used to on Linux. I didn't do extensive testing with it, but at least it compiles, which was already a big step, because there were a couple of Linux-specific things which we had to remove. But now it seems to work. It's quite nice.

C: Yeah, one of the topics I think is really interesting here, in this forum, would be how we can get a better idea about the priorities: what to implement first, and what the demands are from the people actually using it. Because, yeah, first of all, often we don't know who is actually using it, and then we also don't know what they need most.

C: So that was really interesting from Joshua, that hint about the socket stuff. That would be interesting to have on a regular basis: feedback from which we can build a priority list of stuff. Besides CSI, where we have insight, what are the demands in the community?

A: Yeah, the other day a colleague was showing me a new tool from Google which can see module dependencies, yeah.

A: I can throw the link in the chat after I dig it up later. And then I've noticed that sometimes the package documentation tool from the Go project can show you some things that are consuming your library, but I don't know how reliable these are. If there's not enough feedback, we can always kind of peek, using these tools, at what people might be using. It's kind of the curse of being a library: you don't...

C: But also, I don't know if there would be, you know, enough feedback that it's worth setting it up at all. Maybe direct communication is enough.

A: I agree, I agree about the sense of it. Fortunately, no one's shouting out "I want this" in this meeting; so please do if you're here and you want to yell. Let's see.

A: Yes, yeah, sorry, I should be more specific. You don't know what I'm looking at.

A: All right, so I'll take this opportunity to talk a little bit more about the road to v1.0. Part of it's on me. One thing I keep planning on doing, and then finding the time to not do, is to actually document our existing deprecated APIs.

A: We've marked most of them with godoc, but there are still a few rough edges that we should probably deal with: places where there wasn't maybe as much API design, and it kind of grew together organically.

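For reference, marking an API as deprecated in godoc uses the standard `Deprecated:` comment convention. A tiny sketch with made-up function names (these are not go-ceph APIs):

```go
package main

import "fmt"

// OpenImage opens an image by name (hypothetical example).
//
// Deprecated: use OpenImageWithOptions instead.
func OpenImage(name string) string {
	return OpenImageWithOptions(name)
}

// OpenImageWithOptions is the hypothetical replacement API; tools
// like godoc, pkg.go.dev, and staticcheck all recognize the
// "Deprecated:" marker on the old function above.
func OpenImageWithOptions(name string) string {
	return fmt.Sprintf("opened %q", name)
}

func main() {
	fmt.Println(OpenImage("test"))
}
```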
A: CephFS has different error handling than rados and rbd, which is kind of annoying, because Ceph, ultimately, for all the API calls, is returning errnos. We kind of have a generic framework for that, but it's partly hidden and partly obfuscated, which was originally done for some good reasons, but it turns out you get some odd results. One I saw the other day was a call that has nothing to do with an RBD image. It's an rbd function call, but it was saying that an RBD image doesn't exist.

A: Oh right, because we coded it that way. So those are hard things to change in Go without kind of breaking your API. Traditionally, some calls scrape the error text. There are better ways to do that: Go provides errors.Is and errors.As nowadays, but I'm always nervous about breaking things that could easily be consumed by people.

A: So I want to get it all together in kind of a document, and then we can start making rational decisions about: you know, this is a priority, we should probably fix this or clean this up sooner rather than later. I have in my head the dream of maybe going to 1.0 early next year, but this is fantasy; I don't know how real it is.

A: We can add things. So growing the API, and marking, you know, function call X or Y as deprecated, is totally fine within the v1 series. We just can't take anything away. That's the compatibility promise that the Go community expects. You can create a v2, but when you do that there are some extra steps, so that the versions are very distinct and projects can continue to use v1 as long as they want to consume it.

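For Go modules, those extra steps include giving the v2 its own module path suffix, roughly like this (a hypothetical go.mod, shown just to illustrate the convention):

```
module github.com/ceph/go-ceph/v2

go 1.16
```

Because the v2 path is distinct, consumers importing the v1 path keep building unchanged, and the two major versions can even coexist in one program.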
A: Yeah, without using those exact terms, that's what we kind of did for the admin modules as we added them. So in the release notes, however many releases ago, when cephfs admin was new, I wrote: hey, you know, we're going to reserve the right to kind of tweak the API over the next couple of releases. And I did the same thing for rgw admin and rbd admin for the latest release. So we have some flexibility, I figure.

A: Speaking of documentation, let me complain a moment about godoc, because it's great at documenting your API, but I find it very frustrating because there isn't really a framework for writing prose and explaining things. I'm used to Sphinx from the Python community, and I find it much better in that you can actually write docs and then have it incorporate API documentation automatically.

A: If we can move to that, that would be good for the previous topic, the compatibility stuff, yeah.

A: Well, we still have a few folks who joined the meeting. Anyone just feel like saying something? I feel like we're doing most of the talking. It's all right, but I'd love to hear more from people we don't often hear from.

A: Well, all right, back to my topics. Yeah, everyone's being quiet today.

A: Well, one thing we can talk about, since Sven and I are doing most of the talking... One thing that we've talked about one-on-one a couple of times is improving the adoption of, or velocity of, our PRs. For a while we've had kind of a two-reviewer requirement. We've been thinking about relaxing this a little bit for non-API changes.

A: Sven, the other day you were talking about Ceph's own processes, which I'm not particularly familiar with. You want to ask the audience if there are some people more familiar with Ceph itself who can talk a little bit about, you know, what the requirements are for a PR merge?

C: Like picking new things, like doing the actual work, picking some, maybe, yeah.

C: And then also, of course, after you did it, the velocity to get it merged more quickly. Regarding Ceph, I only talked about the second part, the velocity. I just found it interesting that it was kind of quite agile; they don't have a strict rule like two reviewers, or at least it appeared to me like that. I didn't check into the details, but it looked like, okay:

C: If it is a small thing, it gets merged quite fast. But if it is something that goes deeper, like recently I made a PR redefining some defines for macOS or other platforms, then they start to have a deeper review and also run CI. It's interesting, because Ceph only does...

C: Apparently, and I was surprised by that, Ceph only does the unit tests in the CI. The integration tests, or, you know, I'm not sure if they're called integration tests, but the bigger tests with a whole functioning cluster, are done kind of semi-manually: they batch a couple of PRs together and then test them. So that's probably also a reason why they have to be more dynamic, because they cannot test everything anyway.

C: But in general, I think we can also be more agile: if a change is just touching the Makefile, then it doesn't matter too much and you can merge it more quickly.

A: Yeah, I was thinking about some classification tags, where we basically say, I don't know, a tag that says api or non-api or something like that, meaning: as a developer, I'm telling you none of the changes in my PR affect the API. I don't know if we could tell Mergify about the tag and have it change the requirements, but at the very least a human being can say: oh, I don't need to wait for Mergify for this; you know, it's already pre-tagged as non-api.

C: Yeah, anyway, it's more a matter of how we agree, because technically we can already merge with one review at the current settings.

H: But still, personally, I see that two reviewers is better than one or zero, because you might have a bad day and you slip in some code where there is a comma in the wrong place, or a hashtag or something, which actually breaks things.

H: And if you don't have any pre-flight checks that this is valid code, or that this affects our API, for example, so that it will change the output for some query, then that shouldn't just be approved with: yeah, this is fine, commit to master, done. I've done it many times, but still, I feel that dropping the second reviewer is not good, unless you apply some pre-flight checks or rules that fill that need.

A: Yeah, that's a really good point. We do try to have a fair amount of automation using GitHub Actions. We have the checks, which are the fast-running ones, you know, go vet, and we use a tool called revive, which checks for common Go gotchas and stuff like that. And then there's the test suite, which, for those who aren't aware, runs with kind of an all-in-one Ceph cluster within a container, and then our Go binary test suite runs. It's not a lot of unit tests.

A: It's mostly what I would call integration tests, because the whole point of the library is integration. Most of our functions are fairly short wrappers around the C APIs of Ceph. But that said, we can certainly always be on the lookout for improved automation; anything around that entire process would certainly help us.

A: How do I put it? To have a good feeling about the code before we start reading it with our eyes.

C: Yeah, I just want to add: of course, that's very important, and we're talking only about dropping the second review after all the automatic tests have succeeded. It's more about things that you can more easily repair afterwards, compared to API changes which, once they are out, are basically hard to change. If you break things, you can still repair them before the next release. So yeah.

A: And we do things like generating coverage, but we don't actually analyze it as part of the PR process. So you could try to say something like: you have to have 90 percent net new coverage, or whatever. But the automation around that is a little hard to write. That's something I actually looked into in the very first months after I started working on go-ceph, and went: oh, this is too hard right now, I'll look at it later. And similarly, I wrote a tool called "implements", which actually tries to look at what parts of the Ceph API we're covering, but one of its dependencies turned out to be very hard to run in the container base images we were using.

A: So unfortunately I had to turn it off, but I've been talking with some of the others about trying to get that tool back into the CI, so that for every run we have a rough idea: oh, we cover, you know, 70 percent of the non-deprecated APIs in librbd, or whatever. So there's coverage for our tests of the code we implement, and then coverage for how much of the Ceph API we actually cover. That doesn't count things like the JSON APIs, because for those we wouldn't even have any idea; we'd have to analyze the code that generates those. I've looked at it, and I cannot claim I understand it, yeah.

A: Well, I think we're just about at the end. I'll stop talking once more and give anyone a last chance to say something; otherwise I'll hand it back to Mike.

G: All right, thanks everyone, thank you, John. This concludes our Thursday for Ceph Month; we'll be continuing on Friday with some other talks. The talks that we had today were recorded and will be posted up on the Ceph YouTube channel as well.