From YouTube: Ceph Code Walkthroughs: RADOS Snapshots
Description
Every month the Ceph Developer Community meets to discuss one aspect of Ceph code, spread knowledge of how it works and why it works that way.
This month we're joined by Samuel Just on RADOS Snapshots with Q/A.
Find future Ceph Code Walkthroughs: https://tracker.ceph.com/projects/ceph/wiki/Code_Walkthroughs
Ceph Code Walkthrough Playlist: https://www.youtube.com/watch?v=nVjYVmqNClM&list=PLrBUGiINAakN87iSX3gXOXSU3EB8Y1JLd
A: Hello, everybody, and thank you for joining us for another Ceph Code Walkthrough, March 23rd, 1700 UTC. There's a lot of daylight-saving-time-to-standard-time changing going on, so hopefully people make it into this one. There will always be a recording as well after we produce these, so that will be posted to the YouTube channel. Today we are joined by Samuel Just, who has been nice enough to take the time, even during the development for Pacific, to explain RADOS snapshots to us.

I do believe, if I remember correctly, Sam, that a lot of this, back in the day, at least when I was looking at RADOS snapshots, revolved around a snap context that gets pushed around in relation to other I/O activity that is happening. I don't know if snap context is still a thing. I think that was back in 2009, but anyway, I will shut up now and let the expert speak. So, everyone, here is Samuel.
B: I've got some code snippets, but I've probably only got 20 minutes of content, so please, by all means, interrupt me if you want me to go deeper on something. If not, there will certainly be time at the end for questions and such. All right, so RADOS snapshots. At a super high level, there are three Ceph projects: RGW, RBD, and CephFS.
Two of these use snapshots heavily: both RBD and CephFS make heavy use of them, RBD at a block-device level and CephFS at a subtree level. But just as they both use librados, or the Objecter in CephFS's case, to mediate I/O, they use the same underlying machinery in librados to do snapshotting as well. So for CephFS this looks something like this, which I stole from Greg's presentation.
You snapshot a subtree, which you can then access as a read-only copy of that subtree. For RBD snapshots, we have rbd snap commands that create, list, and roll back snapshots, and you can mount snapshots read-only. So the two common features here are that in both cases we have a large, related set of RADOS objects that share the same snapshot lifecycle and creation time.
So, at a really high level, clients maintain and arbitrate snapshot metadata for object groups externally. CephFS keeps it in its internal metadata, with ordering via the MDS and capability system, and RBD maintains its snapshot information on the head object of each RBD block device, with ordering via watch-notify and locks.
OSDs maintain a per-object snapshot-to-state mapping, updated with snapshot information piggybacked on writes, and I'll talk about that in exhausting detail. Rollback must be performed per object, so if we go back to this RBD example, rbd snap rollback requires time linear in the number of objects in the RBD image. Snapshot deletion is fast, but the space reclamation part is lazy: it's a lazy background operation that happens across the cluster.

All right, so the intuition here is, let's say we have, in this case, an RBD image with five objects set up. So I guess this is a 20-megabyte RBD block device with five four-megabyte objects up at the top. Here we have the client state, which currently has no snapshots, so the snapshot sequence is zero and no snapshots have been taken.
So let's say we take a snapshot at this point. The client state now has a snapshot sequence of three, and its little snaps vector has just three in it. And why three and not one? Because snapshot IDs are unique and increasing, but not necessarily dense, for reasons we'll get to later.
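As a rough illustration of the client-side state being described here, this is a minimal sketch, with illustrative names rather than the actual Ceph types, of a snap context: the highest snapshot ID issued so far plus the vector of live snapshots, which the client sends piggybacked on its writes.

```cpp
// Conceptual sketch only: field and type names are illustrative,
// not the exact structures in the Ceph source tree.
#include <cstdint>
#include <vector>

using snapid_t = uint64_t;

// What the client carries and sends along with every write: the newest
// snapshot id it knows about (seq), plus every snapshot id that is still
// live for this group of objects, newest first.
struct SnapContextSketch {
  snapid_t seq = 0;                 // highest snapshot id issued so far
  std::vector<snapid_t> snaps;      // live snapshots, newest first
};

// Taking a snapshot: the metadata owner (RBD head object, CephFS MDS, ...)
// picks the next unique id and prepends it to the vector.
inline void take_snapshot(SnapContextSketch& ctx, snapid_t new_id) {
  // ids are unique and increasing cluster-wide, so they may skip values
  // (for example 3 and then 10), exactly as in the slides
  ctx.seq = new_id;
  ctx.snaps.insert(ctx.snaps.begin(), new_id);
}
```

Starting from the empty state above (seq 0, no snaps), calling take_snapshot with 3 and then with 10 reproduces the states used in the next few slides: seq 3 with snaps [3], then seq 10 with snaps [10, 3].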
B: The head object here should say three colon three; that hopefully gets fixed in a subsequent slide. So let's say we take another snapshot, 10, and in this case we perform a write. The left one here says head, three colon three; that happened with the initial write, I just messed up this slide. So, same deal here, except that in this case, instead of there only having been one snapshot, two of them happened, so we create a clone at snapshot sequence 10, covering snapshots 10 and 3.
B: And we fill in the op context's snap context to track the pending snapshot operation, if it exists. Note that this bit mainly only happens on writes.
Later, after we've already gone through the op itself and had a look at all the individual operations, one of the last things we do before we complete the transaction is to call make_writeable. What this does is compare the snap context we constructed just a moment ago to the structures already on disk, as in the diagrams I showed you before.
If the snap context is newer than the object, that is, newer than the sequence already on disk, and has new snapshots, we create an object context with an object_info_t for a new clone, and we add to the transaction an operation cloning the object, in make_writeable. You'll also notice the clone gets its own PG log entry. The PG log isn't actually aware of snapshots as such; it directly represents each new clone created, with a special operation code for clone.
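A condensed sketch of the decision just described, again with illustrative names (the real logic is in make_writeable, as mentioned above, and works on the SnapSet and object_info_t): compare the incoming snap context with the object's recorded snapshot sequence and, if snapshots have been taken since the object was last written, preserve the current contents as a clone before applying the write.

```cpp
// Illustrative sketch of the make_writeable decision, not the real code.
#include <cstdint>
#include <optional>
#include <vector>

using snapid_t = uint64_t;

struct SnapContextSketch { snapid_t seq = 0; std::vector<snapid_t> snaps; };

// Simplified per-object snapshot state kept with the head object.
struct SnapSetSketch {
  snapid_t seq = 0;               // snap context seq as of the last clone
  std::vector<snapid_t> clones;   // existing clone ids, oldest first
};

struct CloneOp {
  snapid_t clone_id;                  // clone is named by the context's seq
  std::vector<snapid_t> covered;      // snapshots this clone will serve
};

std::optional<CloneOp> maybe_clone(const SnapContextSketch& snapc,
                                   SnapSetSketch& snapset) {
  std::vector<snapid_t> newer;
  for (snapid_t s : snapc.snaps)
    if (s > snapset.seq)
      newer.push_back(s);    // snapshots taken since this object last changed

  if (newer.empty())
    return std::nullopt;     // nothing to preserve, just overwrite head

  // Preserve the current contents as a clone before the write is applied.
  snapset.clones.push_back(snapc.seq);
  snapset.seq = snapc.seq;
  return CloneOp{snapc.seq, newer};
}
```

With the slide's example, an object last written before snapshot 3 (snapset seq 0) that receives a write carrying {seq 10, snaps [10, 3]} gets a clone 10 covering snapshots 10 and 3, and a later write with the same context falls into the no-clone path.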
B: We also encode into this log entry the set of snapshots that are stored on that clone, and if you stop and think for a moment, at the time that the clone is created we already know all of the snapshots that that clone will be used for, because you can't go back in time and create snapshots that happened before this point. So that set will only shrink; it will not grow.
The OSD map contains these removed-snaps queue structures, which encode the total set of snapshots that are pending removal and the ones that were newly removed as part of this OSD map. Each OSD, as it consumes these OSD maps, updates each PG with a snap trim queue of snapshots to remove. There's a tricky component here, though, because snapshots apply to many objects, not just one.
To do that, we maintain a lookaside index, persisted on disk, called the SnapMapper; see src/osd/SnapMapper.h for more information. It gets updated online as part of the PrimaryLogPG log operation, update_snap_map.
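To make the shape of that index concrete, here is a small in-memory stand-in, purely illustrative and not the persistent, omap-backed implementation in src/osd/SnapMapper.h, for the two mappings it maintains: snapshot to objects and object to snapshots.

```cpp
// In-memory stand-in for the SnapMapper index; illustrative only.
#include <cstdint>
#include <map>
#include <set>
#include <string>
#include <vector>

using snapid_t = uint64_t;
using object_t = std::string;

class SnapIndexSketch {
  std::map<snapid_t, std::set<object_t>> snap_to_objects;
  std::map<object_t, std::set<snapid_t>> object_to_snaps;
public:
  // Called when a clone is created: record which snapshots it covers.
  void add_clone(const object_t& clone, const std::vector<snapid_t>& snaps) {
    for (snapid_t s : snaps) {
      snap_to_objects[s].insert(clone);
      object_to_snaps[clone].insert(s);
    }
  }
  // Called by the trimmer: hand back up to max objects touching this snap
  // (the real method the talk mentions is get_next_objects_to_trim).
  std::vector<object_t> next_objects_to_trim(snapid_t snap, size_t max) const {
    std::vector<object_t> out;
    auto it = snap_to_objects.find(snap);
    if (it == snap_to_objects.end()) return out;
    for (const auto& o : it->second) {
      if (out.size() == max) break;
      out.push_back(o);
    }
    return out;
  }
  // Called when a clone is removed outright.
  void remove_clone(const object_t& clone) {
    auto it = object_to_snaps.find(clone);
    if (it == object_to_snaps.end()) return;
    for (snapid_t s : it->second) snap_to_objects[s].erase(clone);
    object_to_snaps.erase(it);
  }
};
```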
B: Which is how the asynchronous snap trimmer works. As the OSD advances its OSD map, it populates this PG snap trim queue on each PG, and asynchronously, each placement group, as long as its snap trim queue is not empty, will go through each of the objects in the SnapMapper via get_next_objects_to_trim, and for each of those objects it will check to see which snapshots are still alive, because the removal of the snapshot does not necessarily mean the clone gets removed.
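That last point, that removing a snapshot does not necessarily remove the clone, can be sketched in a few lines (illustrative types only): a clone is only deleted, and its space reclaimed, once no remaining snapshot maps to it.

```cpp
// Illustrative only: clone object -> set of snapshots it still serves.
#include <cstdint>
#include <map>
#include <set>
#include <string>

using snapid_t = uint64_t;
using object_t = std::string;
using CloneMap = std::map<object_t, std::set<snapid_t>>;

// One trimmer step for one clone and one removed snapshot.  Returns true
// only when the clone serves no other snapshot and can really be deleted.
bool trim_one(CloneMap& clones, const object_t& oid, snapid_t removed) {
  auto it = clones.find(oid);
  if (it == clones.end()) return false;      // nothing to do for this object
  it->second.erase(removed);
  if (!it->second.empty()) return false;     // other snapshots still need it
  clones.erase(it);                          // now the space can be reclaimed
  return true;
}
```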
B: PG reads are relatively simple. do_op, when it is trying to work out which object to work on, calls into find_object_context with the snapshot ID of the op that got sent, and it maps the snapshot requested onto either the head or a clone, depending. Note that the behavior here is not dependent on the snap trimmer: a snapshot that has been removed will be detected as removed whether the snap trimmer has gotten to it or not, because find_object_context will check the current OSD map for whether the snapshot is actually still alive.
Recovery has a couple of wrinkles. The first is that we need the snapset attribute on the head in order to work out which clones are supposed to exist, among other things, so before we recover a clone to the primary, we first recover the head object; you can see that in ReplicatedBackend's recover_object.
Secondly, the clone operation in ObjectStore, which we use when we get a write on an object that needs a clone created, is meant to have copy-on-write semantics, where the underlying persistence shares extents that haven't been modified between those two clones.
For that reason, it's fairly important that recovery preserve that data sharing, or else after recovery you would find that a PG uses up more space than it should. So for that reason, ReplicatedBackend, and I believe there is analogous code in the erasure-coded backend, uses calc_clone_subsets and calc_head_subsets to compute ranges of the object that can be cloned from adjacent clones that are already present.
A: I don't see any questions listed here, but I had one in particular for myself. Originally I was wondering, from a client perspective, if somebody was to actually be interacting with it: is this API actually available directly in RADOS itself, to be making any sort of snapshots like this?
B: So there are two versions of that question. The first is: no, self-managed snapshots, as you saw, require a great deal of cooperation from the application code. So, hypothetically, you could create an application that uses librados directly, in which case yes, you can use self-managed snapshots, but the way you interact with self-managed snapshots normally is through the RBD snapshot mechanisms or the CephFS snapshot mechanisms.
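For anyone curious what that direct librados route looks like, here is a minimal sketch using the public librados C API's self-managed snapshot calls. It assumes a reachable cluster, a pool named "mypool" (a placeholder), default config locations, and it omits most error handling; it is a sketch of the workflow, not production code.

```cpp
// Self-managed snapshots through the librados C API (sketch, minimal error handling).
#include <rados/librados.h>
#include <cstring>

int main() {
  rados_t cluster;
  rados_ioctx_t io;
  if (rados_create(&cluster, nullptr) < 0) return 1;
  rados_conf_read_file(cluster, nullptr);            // default ceph.conf search
  if (rados_connect(cluster) < 0) return 1;
  if (rados_ioctx_create(cluster, "mypool", &io) < 0) return 1;   // placeholder pool

  // Write version 1 of an object.
  rados_write_full(io, "obj", "version-1", strlen("version-1"));

  // The application, not RADOS, allocates a snapshot id...
  rados_snap_t snap;
  rados_ioctx_selfmanaged_snap_create(io, &snap);

  // ...and must attach the snap context (seq plus live snaps, newest first)
  // to subsequent writes on this ioctx.
  rados_snap_t snaps[] = { snap };
  rados_ioctx_selfmanaged_snap_set_write_ctx(io, snap, snaps, 1);

  // This write carries the new snap context, so the OSD preserves the old
  // contents as a clone before overwriting: the make_writeable path above.
  rados_write_full(io, "obj", "version-2", strlen("version-2"));

  // Reads against the snapshot see version 1; reads of head see version 2.
  char buf[32] = {0};
  rados_ioctx_snap_set_read(io, snap);
  rados_read(io, "obj", buf, sizeof(buf), 0);

  rados_ioctx_destroy(io);
  rados_shutdown(cluster);
  return 0;
}
```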
B: The second version is pool snapshots, and if you look at the Ceph documentation, I think these are even still represented. The trouble is that they aren't useful for very many things. You certainly wouldn't use them: they're mutually exclusive with self-managed snapshots, so a single pool can't use both.
So if you do use a pool snapshot, you can no longer use most features of CephFS or RBD on that pool. But, more importantly, even with RGW or some other application, they don't behave the way you would expect a feature that has the word snapshot in it to behave, essentially because they aren't point-in-time snapshots. They're somewhat complicated, and their state is gossiped along with the OSD map, so in general that feature is fairly deprecated, I believe. Greg, is that right?
A: Sounds good. Well, if anybody else has questions while Sam walks through the code, feel free to throw them in, and I'll try to catch Sam at the right time so he doesn't have to monitor the chat window at the same time. Thanks, everyone.
E: Can I add a quick thing? We can also cover it when you're walking through the code: are there any quirks about snap trimming which make it different, or is there anything special about snap trimming versus when we do regular PG deletions and stuff?
I am pointing this out because we've recently seen some reports of performance degradation with our BlueFS buffered I/O change, which only show up in the PG deletion case and the snapshot trimming case.
B: This maps snap ID to clone; it's a special path for cache sharing. Unfortunately... this part models the behavior I showed you before, where, if the snapshot sequence is smaller than the sequence being requested, it's necessarily a request for head.
This is the portion where we walk through the snapset to find the correct clone for the object. It will be, essentially, the first clone with an ID greater than or equal, I believe, to the snapshot requested, and this portion here at the end validates that the snapshot in question actually maps to that clone by checking the clone_snaps field in the snapset.
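Putting the whole read-side mapping together, here is a compressed sketch of the lookup being described (illustrative types; the real logic is in find_object_context, as mentioned above): a request newer than the object's snap sequence resolves to head, otherwise we take the first clone whose ID is at least the requested snap and confirm that snap is actually in the clone's snap set.

```cpp
// Illustrative sketch of mapping a requested snap id to head or a clone.
#include <cstdint>
#include <map>
#include <optional>
#include <set>
#include <string>
#include <vector>

using snapid_t = uint64_t;

struct SnapSetLookup {
  snapid_t seq = 0;                                     // newest snap this object has seen
  std::vector<snapid_t> clones;                         // clone ids, ascending
  std::map<snapid_t, std::set<snapid_t>> clone_snaps;   // clone id -> snaps it serves
};

// Resolve a read at snapshot `want`; an empty result models the ENOENT case.
std::optional<std::string> resolve_read(const SnapSetLookup& ss, snapid_t want) {
  if (want > ss.seq)
    return std::string("head");       // object unchanged since that snapshot
  for (snapid_t c : ss.clones) {
    if (c < want) continue;           // first clone with id >= requested snap
    if (ss.clone_snaps.at(c).count(want))
      return "clone_" + std::to_string(c);
    break;                            // snap never mapped to this object
  }
  return std::nullopt;                // behaves like ENOENT
}
```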
B: This could happen if, for instance, you asked for a snapshot that was never mapped to this particular object; then it won't fall into that range and you potentially get ENOENT instead. Now let's take a little bit more of a look at the recovery code.
Oh, a question got asked: do clients need to know about snapshots to talk to RADOS? How do RBD and CephFS clients get the correct snapshot info after a snapshot is created? So those questions are slightly different for RBD and CephFS. I'm going to tell a story about the way RBD works; if Jason wants to correct me, that would be great. At a super coarse level, here's what RBD does when it creates a snapshot.
There is an authoritative updater, I believe, which is the holder of that advisory lock; the other holders will be RBD clients mounting read-only snapshots of that image, I believe. Does that answer your question? CephFS has a rather more complicated mechanism involving the cap system, but either way, the application that is using self-managed snapshots is responsible for making sure that the snapshot information gets propagated, though in both cases they use librados mechanisms to do so. Does that make sense?
That's one of the reasons why there are no high-level command-line ways to access this. The assumption, or the key to this system, is that there must already be some kind of write-ordering or metadata system overlaid on top of librados, so attempting to create separate snapshot metadata at that level would be wasteful; it's more efficient to integrate it into whatever metadata already exists.
If a client does not know about a snapshot that does exist, then what will happen? Well, they obviously can't be trying to read that snapshot, because they don't know about it, right? So the OSDs will see snap contexts come in that are out of date. They'll go, well, I'm going to ignore your snap context and simply give you the snap ID that you actually want, and in that sense it will still be completely correct.
If I ask for snapshot 5, and in fact we're up to snapshot 30, the OSDs still know how to map snapshot 5 onto the relevant object, even if I don't know that the later snapshots exist. So for reads it's a non-issue. For writes, there must be an authoritative writer; essentially, you won't kill the cluster with stale client state.
You still get a well-defined answer, even if you have two different authoritative writers, if you have two different RBD writers that are both trying to create snapshots on the same block image and sending writes to the same image.
That was confusing, but I think the answer is: generally speaking, the OSDs won't care; for reads, the application won't care either; but for writes, the application is responsible, if it wants to maintain its snapshot system, for making sure that there isn't divergent snapshot metadata floating around.
Let me go find calc_clone_subsets here. Okay, so this is a little tough to read if you're not following along, but the idea here is that the snapset contains these clone_overlap fields, which are interval sets, where any offset contained in the interval set for clone i is an offset shared with clone i plus one, or head if it's the last one.
Yeah, so this code doesn't really lend itself to going through line by line, but the notion is: let's say we have three clones, one, two, and three. If we already have one and three locally but we're missing two, then we can look at the clone_overlap field and infer, oh, clone two shares its first megabyte with clone one and its third megabyte with clone three.
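The following sketch shows that idea in code form, with a tiny stand-in for Ceph's interval_set and the assumption, matching the description above, that clone_overlap[i] holds the ranges clone i shares with the next newer clone (or head). It is illustrative only, not the actual calc_clone_subsets implementation.

```cpp
// Illustrative sketch of the clone_overlap reasoning used by recovery.
#include <algorithm>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Sorted, disjoint (offset, length) ranges: a stand-in for interval_set<uint64_t>.
using Interval = std::pair<uint64_t, uint64_t>;
using IntervalSet = std::vector<Interval>;

// Intersect two interval sets (both assumed sorted and disjoint).
IntervalSet intersect(const IntervalSet& a, const IntervalSet& b) {
  IntervalSet out;
  size_t i = 0, j = 0;
  while (i < a.size() && j < b.size()) {
    uint64_t lo = std::max(a[i].first, b[j].first);
    uint64_t hi = std::min(a[i].first + a[i].second, b[j].first + b[j].second);
    if (lo < hi) out.push_back({lo, hi - lo});
    if (a[i].first + a[i].second < b[j].first + b[j].second) ++i; else ++j;
  }
  return out;
}

// Ranges a missing clone `target` shares with an older clone `source` that is
// already present locally: the intersection of every overlap field between
// them.  Those ranges can be cloned locally instead of pushed over the network.
IntervalSet shared_with(const std::map<uint64_t, IntervalSet>& clone_overlap,
                        uint64_t source, uint64_t target) {
  IntervalSet acc = clone_overlap.at(source);      // overlap(source, source+1)
  for (uint64_t c = source + 1; c < target; ++c)
    acc = intersect(acc, clone_overlap.at(c));
  return acc;
}
```

With the example above, where clone_overlap[1] holds the first megabyte and clone_overlap[2] holds the third megabyte, shared_with(overlap, 1, 2) says the missing clone 2 can take its first megabyte from clone 1, and clone_overlap[2] itself says its third megabyte can come from clone 3.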
B: Those are worth going over, yeah. So when you actually do a snapshot removal, let's say in RBD: if you remove an image snapshot, RBD internally is going to remove that snapshot from its own metadata.
It will then issue a command to the monitor removing that snapshot. The monitor is going to update its pg_pool removed-snaps information, and that goes out, I believe, in the next OSD map as a broadcast to all OSDs that the snapshot is now removed. Those OSDs will eventually receive an OSD map, in the way that OSDs always do.
And this happens on activation; there's a corresponding bit of code any time we process a new OSD map. I'm forgetting the location of where it initializes the snap trim queue and the purged-snaps set, but as long as these things are not empty, the trimmer has work to do.
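As a rough sketch of that propagation path, which is how a removal requested at the monitor eventually reaches each PG's trimmer, the types below are illustrative stand-ins for what actually lives in OSDMap and pg_pool_t.

```cpp
// Illustrative sketch: newly removed snaps in an OSD map feed each PG's trim queue.
#include <cstdint>
#include <map>
#include <queue>
#include <set>
#include <vector>

using snapid_t = uint64_t;

struct OSDMapSketch {
  uint64_t epoch = 0;
  // per-pool snapshots newly removed in this epoch (stand-in for the real
  // removed-snaps bookkeeping gossiped with the map)
  std::map<int64_t, std::set<snapid_t>> new_removed_snaps;
};

struct PGSketch {
  int64_t pool = 0;
  std::queue<snapid_t> snap_trim_q;   // consumed by the snap-trim state machine
};

// What each OSD does as it advances from one map epoch to the next.
void consume_map(const OSDMapSketch& next, std::vector<PGSketch>& pgs) {
  for (auto& pg : pgs) {
    auto it = next.new_removed_snaps.find(pg.pool);
    if (it == next.new_removed_snaps.end()) continue;
    for (snapid_t s : it->second)
      pg.snap_trim_q.push(s);         // a non-empty queue wakes the trimmer
  }
}
```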
B: The main states: it's structured the same way the peering state machine is, with one of these Boost statechart state machines. The initial state is, unsurprisingly, NotTrimming, and the other states are Trimming, whereby it's going one object at a time through... and let's see if I can get to the more important ones.
There's an artificial delay between trim operations, to push this work into the background; I believe that's modeled by this WaitTrimTimer state.
We generate log entries and send PG transactions and log entries to the replicas, so recovery and failure work the same way as with a normal write. I'm actually not sure what AwaitAsyncWork does; I believe AwaitAsyncWork waits for the local operation to complete.
WaitScrub is the relation between this and scrub, where, if we're currently scrubbing the PG, we need to wait for the scrub to complete before we can continue.
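For orientation, here is a heavily reduced sketch of the state machine being described (the talk notes the real one is a Boost statechart machine, like the peering state machine, with more states and events), so the enum and transitions below are illustrative only.

```cpp
// Illustrative reduction of the snap-trim state machine, not the real statechart.
#include <cstdint>
#include <queue>

enum class TrimState { NotTrimming, Trimming, WaitTrimTimer, WaitScrub };

struct SnapTrimmerSketch {
  TrimState state = TrimState::NotTrimming;
  std::queue<uint64_t> snap_trim_q;   // snapshots pending removal for this PG
  bool scrubbing = false;             // is the PG currently being scrubbed?
  double trim_sleep = 0.0;            // throttle, like osd_snap_trim_sleep

  void tick() {
    switch (state) {
    case TrimState::NotTrimming:
      if (!snap_trim_q.empty())
        state = scrubbing ? TrimState::WaitScrub : TrimState::Trimming;
      break;
    case TrimState::WaitScrub:
      if (!scrubbing) state = TrimState::Trimming;   // scrub finished, resume
      break;
    case TrimState::Trimming:
      // trim one batch of clones for snap_trim_q.front() here; pop the snap
      // once it is fully trimmed, then throttle or go look for more work
      state = (trim_sleep > 0.0) ? TrimState::WaitTrimTimer
                                 : TrimState::NotTrimming;
      break;
    case TrimState::WaitTrimTimer:
      state = TrimState::NotTrimming;                // timer fired, resume
      break;
    }
  }
};
```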
A: So another question that gets brought up, at least at the Ceph booth at various events, is around RBD mirroring, and I would like to understand: this is probably an after-the-fact thing, after all these snap contexts are said and done, correct? RBD mirroring would just be pushing out, to whatever other sites would be available, by snap context?
B: The way snapshot-based mirroring works is that RBD periodically takes snapshots and it mirrors the snapshots over. It then uses the RBD intent log to update the current version of the block image on those sites up to the current state of the image. So it uses snapshots as one of its components, but most of it is at the RBD level. Does that make sense? That is, RBD itself maintains, I believe, a write-ahead log of sorts for mirroring purposes, and the remote cluster consumes that log to bring itself up to a point-in-time-consistent state, time-delayed by some amount. But it's not directly a function of snapshotting.
A: And Greg, I guess: amongst all the different components that Ceph has, are there any limitations, from one component to the other, in terms of snapshot features that you are aware of?
B: Big yes; like, CephFS snapshots don't really resemble RBD's, right? It's built on the same substrate, but they're really different.
As an example, all of the objects within an RBD block image will have the same snap context, but that's not true of all of the objects in CephFS. A file in CephFS can be contained in multiple snapshot domains, I believe they're called, Greg? That is, if you snapshot its directory, its parent directory, and its grandparent directory all at different intervals.
From the point of view of the way self-managed snapshots actually work, this is irrelevant; it doesn't actually matter. But the way the feature is used makes CephFS's snapshot features wildly different from RBD's.
They are integrated based on the way the component actually works, or based on the way the external interface actually works.
C: I think the more interesting one is when you move a file across directories that have completely separate snapshot patterns within them. You have a file that was in my directory getting snapshotted for, like, a month, and then I gave it to you, and so now it's in /home/sjust and it's getting your snapshots, but it's also still in my old set, but not my new set.
B: I forgot about that one, but the point is that, yes, the features are different, because, you know, everything about CephFS is quite different from RBD.
C: In terms of snapshot shortcomings, I think the big one is that these are read-only. There are some storage systems where you can take snapshots but they're writable, so you can actually just have different branches of versions of the file, and people ask us for that one pretty often; we'd have to completely redo it.
A: I think another thing that we could talk about: one of the moments for Ceph was, of course, when OpenStack was first coming aboard and there was a need for shared storage, having a way to give storage to different virtual machines. Maybe you could kind of talk about the cloning and snapshots with RBD, as an interesting point.
B: Say you start with a base RHEL image. You take a snapshot of the RHEL image, and then you create a new block image with that RHEL snapshot as a base. What that means is that RBD maintains a bitmap of all of the blocks that actually exist within the child image itself, and anything that is not in that bitmap it passes through to the parent image, in this case the original RHEL image. And every time you do a write, it copies up; I believe that's the operation in the librbd source code.
It copies the parent image's four-megabyte block up into the child image's layout, and updates the bitmap to ensure that all future reads will hit that block instead of the parent one. In that way you get this kind of thinly provisioned behavior, where you don't actually create a new image.
You just create new blocks corresponding to the things that have been mutated since you created the image. So, just as with the mirroring thing, it makes use of snapshots, but most of the behavior is actually at the RBD level.
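A toy model of that clone behavior, purely illustrative and nothing like the real librbd implementation, might look like this: reads of blocks the child has never written fall through to the parent snapshot, and the first write to a block copies the parent's data up into the child and marks the block as existing.

```cpp
// Toy model of an RBD clone backed by a parent snapshot; illustrative only.
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct ImageSketch {
  std::map<uint64_t, std::string> objects;    // block index -> block data
};

struct CloneSketch {
  const ImageSketch* parent;                   // read-only parent snapshot
  ImageSketch self;                            // blocks written since cloning
  std::vector<bool> exists;                    // "object map": block present in child?

  CloneSketch(const ImageSketch* p, size_t blocks)
    : parent(p), exists(blocks, false) {}

  std::string read(uint64_t block) const {
    if (exists[block]) return self.objects.at(block);
    auto it = parent->objects.find(block);     // fall through to the parent
    return it == parent->objects.end() ? std::string() : it->second;
  }

  void write(uint64_t block, const std::string& data) {
    if (!exists[block]) {
      // copy-up: pull the parent's block into the child first (this matters
      // for partial overwrites; shown unconditionally here for simplicity)
      auto it = parent->objects.find(block);
      if (it != parent->objects.end()) self.objects[block] = it->second;
      exists[block] = true;
    }
    self.objects[block] = data;                // then apply the write
  }
};
```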
A: Okay, Sam, were there any last things that you wanted to share in terms of RADOS snapshots?
B: No, I think that's all I have. I hope that was useful.
A: All right, great. Thank you, everybody, for joining us for another Ceph Code Walkthrough, and, as I mentioned before, this will be posted up on the Ceph YouTube channel. While I have everybody's attention as well, I wanted to make a note that the Ceph release shirts for Octopus have finally gone out, and the design for Pacific is on the way, for everybody's hard work on the next Pacific release this month. And lastly, we have the Ceph user survey that's ending April 2nd, so please fill out that survey.
Also, this coming week we'll be having a presentation specifically on bucket persistent notifications, so I invite everybody to join us for that. That's at 1700 UTC, and there will be updates on the Ceph mailing list and Twitter as well. So thank you, Sam, again, for taking the time to walk through RADOS snapshots with us and answer our questions, and we'll catch you all next time for the next Ceph Code Walkthrough next month.