From YouTube: Ceph Orchestrator Meeting 2022-04-05
Description
Join us weekly for the Ceph Orchestrator meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A
So, to start with, we have the HA documentation. I see somebody put a couple of links in there.
B
Yeah, that was me. So one of them is a link for the RGW HA doc, which has a really great diagram and probably a really nice description of a lot of the ways to set up an ingress service for RGW. The other link is the NFS HA doc, which duplicates this to some degree. The service spec is similar, but it doesn't have a diagram, it has less of a description, and, interestingly, it describes the monitor port config option where the RGW doc does not. So looking at these, I guess the proposal is, or the suggestion is:
B
How do we want to maintain these moving forward? Should we have just a dedicated ingress doc? It would be nice to have more complete documentation in one place.
A
Yeah, yeah, I think ideally we would have one, because we might add it to more services later as well, but I do really like this diagram and everything. But it's, like, RGW specific.
B
Well, that may be okay; it's just an example. I mean, it's using RGW as an example.
B
Yeah, I think conceptually it makes sense to, you know, use RGW as an example service, perhaps, but fundamentally an ingress just, you know, fronts a back-end service, so it really comes down to just that one property in the YAML that differs between services.
A
Yeah, that could be an idea. I guess where we start is — well, I guess we are in agreement: we want a generic ingress section, so we don't have to worry about having copies of this in two places.
A
I'd be in favor of that. I think there's a chance we add this to other services in the future as well, and I don't know that we want to keep having a section in every single one, even one like this. We'd probably still have a section, but it would be a really small one, mostly just links to the main one.
C
Yeah, it's kind of interesting — I mean, for the lazy, you could just remove the RGW word from the diagram and then just link the diagram in two locations, unless it's purely rendered, in which case we could just change the text on the fly. But I'm assuming it's like a PNG that's saved in the source tree somewhere.
A
I think it's just an image; I don't know for sure, I'd have to check. I think it's an image, but again, even like I was saying, we could totally use that and just say this is an example of it with RGW, because I guess the setup should be similar across the daemons.
A
Yeah, I'm kind of in favor of just sort of lifting most of the RGW one out, putting it somewhere where we say this is the ingress section, and then having the NFS and RGW ones sort of link there. And then, if there is something specific to one of them, we can put it in that daemon's section, like the RGW one or the NFS one. But most of the information is probably common.
B
For a different service, yeah. So, like, by way of example, for RGW, what you would do is you would change the port to something like, say, you know, 8180, and then you set the ingress with a port of 8080.
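For reference, a rough sketch of what that looks like as service specs (the field names follow the cephadm RGW/ingress docs being discussed; the service IDs, placement counts, ports, and virtual IP are illustrative):

    # RGW daemons listen on one port; the ingress service fronts them on another.
    cat > /tmp/rgw-ingress.yaml <<'EOF'
    service_type: rgw
    service_id: myrgw
    placement:
      count: 2
    spec:
      rgw_frontend_port: 8180        # back-end RGW daemons listen here
    ---
    service_type: ingress
    service_id: rgw.myrgw
    placement:
      count: 2
    spec:
      backend_service: rgw.myrgw     # the service this ingress fronts
      virtual_ip: 192.168.1.100/24   # VIP managed by keepalived
      frontend_port: 8080            # clients connect here
      monitor_port: 1967             # haproxy monitoring port
    EOF
    ceph orch apply -i /tmp/rgw-ingress.yaml

The point being made is that, for another backend such as NFS, only the backend_service (and that backend's own port) would change.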
A
All right, do people agree with what's in the etherpad? We're going to sort of lift the RGW one out and make its own ingress section — like, mostly lifted out, probably a few details that aren't lifted out — but with that, each one would just have, like, very specific things in its own daemon section.
A
Does anyone else want to say anything on that topic? Are we sort of in agreement on where we're going to go with that?
A
I believe I need to find where the schedule is. This is, I believe, when our planning is supposed to be done. Let me actually go real quick and find the Reef schedule.
A
That was a link to the Reef schedule, so we don't need to put it in the pad; that's just the schedule. But the reason I bring it up is because we probably want to have a couple of things planned that we want to talk about in there. One of the obvious ones is: we want to have some stuff about the agent in there, trying to get that stabilized over the course of Reef. I guess right now it's still sort of untested and everything.
A
But I want to sort of open it up again: does anyone have things they think, for sure, we definitely want to try to do in Reef, or, you know, secondary ideas?
A
The big topic that I wanted to sort of work on was transparency with the serve loop and everything. We have all these times where, like, something will get stuck, or it'll be hanging, or you won't necessarily know what the serve loop is doing, which makes it hard for people to do any debugging or know what's going on.
A
One of the things I want to do is find some way to have serve-loop transparency. Yes, it's sort of a more general topic than, like, a specific thing we'd be adding, but I think it's one of the more important things we could do there. Also, maybe just in general, a way to more easily find the errors that happen with the daemons and the services and things.
A
So I know we have our events and all that, but I find sometimes people don't know where to look for those, because we don't raise a health warning or anything. Sometimes it'll just be a failure attached to, like, a service; you won't see it unless you specifically go look for it, and so people won't find that stuff.
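For context, roughly where that information currently surfaces (none of it is promoted to a health warning on its own):

    ceph orch ls --format yaml       # service descriptions, including recent service events
    ceph orch ps --format yaml       # daemon descriptions, including recent daemon events
    ceph log last 100 debug cephadm  # recent cephadm log entries kept by the mgr
    ceph health detail               # only shows what actually became a health warning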
A
I think that part works. The problem I was talking about before was that sometimes things will go wrong and we won't actually raise any health warning; we'll just...
A
Yeah, I think, upgrade history — we already have the tracker for this. That's another thing I wanted to get in; that's something it'd be nice to have transparency on. Right now, when the upgrade ends, the upgrade just says that it's not in progress anymore — it's just cleared — and so I can't know for sure exactly what happened there. I mean, an upgrade history would probably be pretty nice.
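The behavior being described, roughly: once an upgrade finishes or is stopped, the only thing left to inspect is the live status, which just reports that nothing is in progress — there is no record of past runs:

    ceph orch upgrade status
    # {"target_image": null, "in_progress": false, ...}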
A
Yeah — and with the staggered upgrade about to be a thing soon, it could be even more useful to have something like that, you know, to see, like, oh, is this exactly what they tried to do when they upgraded?
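A sketch of what a staggered upgrade invocation looks like (flags as proposed for the feature at the time; the image, hosts, and counts are illustrative), which is exactly the kind of detail an upgrade history would want to record:

    ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.0 \
        --daemon-types mgr,mon --hosts host1,host2 --limit 2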
A
Anyway, I mainly wanted to bring up that we do have the planning next week, and so there are some topics I've just put down that I want in there. But if people have some time, maybe think about what things could be useful to add in. I'll probably add some things to the doc here as I come up with stuff over the course of the week as well, but this is supposed to be, like, the big sort of Reef planning thing.
A
I think one other thing we should bring up is that we still never finished the compiling-the-binary thing — to get the package, the cephadm package, yeah.
A
That PR's been open for a while — the one that compiles the binary; you, like, split it up and then compile it back into one sort of thing. Yeah, and then we have to push whatever that compiles into somewhere so people can pull it off — yep — like download.ceph.com, or wherever they're pushing it. There's a lot of work that has to go into making that happen, and then, once that's possible, we can actually refactor and split it into different files and all that, yeah. So what is this, a refactor?
C
Yeah, yeah, yeah — I've looked at it. I think you were working on it directly, like, a few months ago or so. I do remember it, but I wasn't sure what kind of things were wanted there, so yeah. If it's just — well, anything that's a big effort like this seems definitely reasonable.
B
Could be a good one. One other minor — oh, sorry — another minor idea is that we're kind of abusing the config store and the config keys. I think some of those are kind of difficult to find occasionally. Like, for example, when I want to set the global container image, I have to do this "config set global container_image" thingy.
A
We could actually even extend that: we could even schedule a redeploy if they did that, because right now we end up doing two steps — you have to change the image, then we have to redeploy after. We could make one command that, like, changes the image and then schedules a redeploy for you, and you could also add some validation for the inputs, if it doesn't already have it. Yeah, right now, if you set it manually, you can give it pretty much any string — you could just put garbage in there.
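A sketch of the current two-step flow versus the single command being suggested (the one-step command is hypothetical and doesn't exist; the image and service names are illustrative):

    # Today: set the image, then separately redeploy whatever should pick it up.
    ceph config set global container_image quay.io/ceph/ceph:v17.2.0
    ceph orch redeploy rgw.myrgw
    # Suggested (hypothetical): validate the image string, set it, and schedule the
    # redeploy in one go, e.g. something like:
    #   ceph orch set-image quay.io/ceph/ceph:v17.2.0 --redeploy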
B
And then I think the only other real big idea that has been floated around for quite a while — but it's a task — is maybe dealing with air-gapped environments: like, should we deploy a private container registry through the orchestrator and aid some of those types of things? So maybe place that on the admin nodes for those types of setups.
A
Yeah, we should definitely check on it — at least make sure that all the issues are there, or at least make it easier to work with a private registry and everything.
C
Yeah, I mean, for test scripts, or if someone wants to write a blog like "here's how you run the Docker registry with cephadm, with the generic container launching options" — great. I just don't think that should be, like, advertised heavily or supported downstream, because all those people are already going to have something else they probably use, whether it's, you know, Quay or a Docker registry or Satellite or eight million different options.
B
That, and there's a difference between podman and docker — you have to set, like, some specific config files on the host. I don't know if we want...
A
All right, yeah — I think in general, though, it's a good thing to at least clean up our side.
D
...the services in the binary — in the, yeah, in the binary. Like, for example, the new services the dashboard team had, like, one month ago — I don't remember the names, some services related to Prometheus — when I was reviewing the PR, honestly, it's very difficult to know that you are touching all the places where you have to add the new services, because we have, like, different places.
A
Yeah — I've been trying to think: one thing with the cephadm refactoring is that, after it happens, it's going to be extremely hard to make further changes and then backport them without also backporting the chopping-up of the binary like that. So we'll probably have to do that closer to the end, closer to when Reef releases, I'm imagining, so that — since we're not going to backport it — we don't have to backport other changes that we make to the binary after that point. So...
A
If there's something that's built on top of that, it'll probably be pretty late in Reef; maybe it'd be something that comes in a minor release of Reef, once we have sort of the big-picture refactoring done.
A
I think it's a good starting list of things, and I'm sure other people — I know people who aren't just us are going to show up to that meeting — may have their own ideas for things they want to do. I think some of them might ask...
A
That'll probably be done earlier, actually — we'll see if people want other things automated, or if they want other stuff done in cephadm. Then, with our own list of things that we want to do as well, we'll be able to put a pretty good list together. I think it's a good start.
A
Yeah, just feel free to open the etherpad and add to that list over the course of the week. When we go to the actual meeting for the planning, it might have its own other etherpad — I don't know how it's going to work — but if it does, I can just copy-paste this list over there as a starting point.
A
The schedule I was mentioning — they might make an etherpad for this, because I think there might end up being, like, a Reef planning etherpad, or maybe there'll be a Reef planning pad for each component specifically. I don't know how it's going to work; I don't remember how it worked last year. So it's possible it won't be in the same orchestrator weekly pad where they're doing all this — where they're writing all this stuff down. I'm not 100% sure.
A
All right, yeah — like you're saying, just feel free to add to the one in the weekly pad, at least for now, and then, if it does end up being somewhere else, we'll bring it all over for the planning next time. All right. So, I had two other smaller topics on here. The things I was going to clarify are our positions on certain things — like, where we actually stand on these.
A
So the first is multi-cluster support for cephadm — as in, multiple clusters on the same host. And, I mean, the first bullet point in there, whether it works — I think it does. I'd have to — I don't remember how you have to set it up exactly, but I'm pretty sure it does.
A
I guess the more important question is the second one, which is: if people do that, what are the actual criteria that we follow — as far as, you know, separating the clusters and differentiating them properly — that we have to keep track of? Because I know sometimes things will come up and I'll say, like, "oh, you can't do that because it would break multi-cluster support," but we don't really ever have a clear, defined statement of what that means.
A
What are the things we have to keep track of to have multiple clusters on the same host? I guess this is another conversation-starter kind of thing.
D
Does anybody remember anything about how this works? Just to understand how it works right now: when we have multiple clusters, do we have different daemons for each cluster, or the same daemons? Like, the most important ones, the manager and monitor — are they a single service shared by all of them?
B
Yeah, yeah — they're totally separate Ceph clusters that have no knowledge of each other, just co-located, which I personally think is kind of a neat idea, because, I mean, supposedly, I guess, if you're hyper-converged, maybe you would want to do something like this; perhaps it's an interesting idea. But from recollection, what I remember is that we tried really, really hard to keep everything out of the /etc directory — so, like, /etc/ceph/ceph.conf and the keyring — and placed most of those under the data directory, so they were denoted by the FSID, and so this would be /var/lib/ceph/<fsid>.
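Roughly what that separation looks like on disk (the FSIDs and hostnames below are placeholders):

    ls /var/lib/ceph/
    # 11111111-aaaa-.../    <- first cluster's fsid
    # 22222222-bbbb-.../    <- second cluster's fsid
    ls /var/lib/ceph/11111111-aaaa-.../
    # mon.host1/  mgr.host1.abcdef/  crash/  ...
    # (each daemon directory carries its own config, keyring, unit.run, and so on)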
A
Yeah, that was the one thing I was going to worry about, because it makes sense that, with the different FSIDs, it would all handle itself. As long as there's a — I think there's a couple of cases I remember happening with inferring the conf, where it would try to infer config files from other clusters — other monitors, or monitors in other clusters.
A
There were some gaps there, but generally it made sense. But as soon as you talk about /etc/ceph/ceph.conf — I know you can pass in, when you do bootstrap, a certain directory where you want to put those files and all that, like the client conf and the keyring, but I don't know if we respect that properly later on, or what we do there.
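The bootstrap options being referred to, roughly (flag names from the cephadm CLI; the IP and directory are illustrative):

    cephadm bootstrap --mon-ip 192.168.1.10 \
        --output-dir /etc/ceph-cluster2 \
        --output-config /etc/ceph-cluster2/ceph.conf \
        --output-keyring /etc/ceph-cluster2/ceph.client.admin.keyring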
A
For some reason — I don't remember exactly why — there was a...
B
The cephadm shell command uses that, but it sometimes mounts something in.
A
Yeah, I know some people still do it. I know downstream, people I've seen will still have stuff from ceph-common installs, RPMs, yeah — especially...
A
I mentioned that earlier — I think the couple of PRs I put in about that should fix it. I think it now checks that the FSID matches for the monitors that you're inferring from, so hopefully that one works.
A
So I think, if we wanted to properly support everything, we'd have to fix that — we'd have to have a way to provide a general argument, almost make that argument you allow in bootstrap, for where to put all the conf stuff, sort of generic. You'd have to somehow pass it to the manager module and keep track of it and everything.
A
At the least, cephadm would keep a copy; that would handle all the automation, and all the single-cluster stuff should not break, given that most people aren't manually modifying it.
A
It would split them up for multiple clusters, and we know the FSID, so we know which one to reference as well. We can fix all those hard-coded spots.
A
If you had multiple clusters, and this cluster was not the one that was named "ceph" — the default one — then it would just get some random other cluster's config mounted in. I know NFS works like that; I think iSCSI also works like that, off the top of my head.
A
I think where we try to keep track of it for people is where we, like, write /etc/ceph/ceph.conf to all the admin nodes and all that — our handling of it. If we put it in there, we know where it is, and we know it's connected to the right cluster and everything, and then we still have the defaults for the sort of default cluster.
A
Bootstrap actually blocks unless you put a certain flag: if it sees something in there at /etc/ceph/ceph.conf, it'll just stop the bootstrap. It'll tell you there's already one there, so you have to provide the override flag. Unless you say that, what you're supposed to do, if you actually want to put a second cluster in, is give us a different location to put the config file.
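In other words, roughly (the override flag name is from the cephadm CLI; the IP and directory are illustrative):

    # Either explicitly allow clobbering the existing /etc/ceph output files...
    cephadm bootstrap --mon-ip 192.168.1.20 --allow-overwrite
    # ...or keep the second cluster's files somewhere else entirely.
    cephadm bootstrap --mon-ip 192.168.1.20 --output-dir /etc/ceph-cluster2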
A
So we're saying: what we'd do on bootstrap is put it in whatever location — so either the default /etc/ceph, or whatever directory they told us to put it in — and then we also always put a copy in /var/lib/ceph/<fsid>, like in the data dir there, and then we'd always, at least for our internal usage, have one and wouldn't have to worry about it. What's in /etc/ceph/ceph.conf would be almost irrelevant to us at that point.
A
I guess we'll have to make a tracker to keep track of that one, but yeah — having it in both places and then favoring the FSID one.
A
So, is everyone okay with that solution? Although maybe this is the same solution we agreed on last time and I just kind of forgot.
A
This is — I don't know if we know for sure — whether we support this properly, like, at all. I was looking at something downstream related to this and whether it works properly. So if we look at this adoption stuff — that's that one link, the third link down, or the third bullet point, yeah, under the clusters-with-different-names bit in the etherpad.
A
So when we do an adoption, at least, we do allow you to specify the cluster name, but I think right now it doesn't properly work — like, you can't adopt something, or maybe there are only certain cases where you can't adopt something that's under a different name.
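For reference, the adoption path in question — if I remember right, cephadm adopt takes a cluster-name option for legacy clusters, though whether it actually works end to end is exactly what's in doubt here (the flag should be double-checked; the daemon and cluster names are illustrative):

    cephadm adopt --style legacy --name mon.host1 --cluster mycluster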
B
I think that was the intent, but, from vague memory, I don't think I've ever seen us test this with anything but "ceph" as the name, because I think it's the systemd unit files and some of the conf directories that are named that way — "ceph-" something, like the daemon ID with a "ceph-" prefix, et cetera, et cetera. But, I mean, these are, like, Nautilus-based clusters or earlier, and I'm not exactly certain how one might even deploy them using a different name; I've never tried that.
A
Yeah, the reason I bring it up is that the OpenStack team — they use different names for their clusters sometimes, and they were trying to do some adoption stuff, working with cephadm, and they're having a couple of issues. I wasn't sure if we — there was some effort in the past for this, but whatever it was, it doesn't seem like it works properly right now, at least with the systemd files and everything; they just kind of always assume the default.
A
We might just need to do some work there — either that, or, when we adopt the cluster... I don't know if we could even just change the name. I think you can change the name of stuff when you adopt the cluster, when you adopt the daemons.
B
I think that's what we've actually validated, as far as systemd units and file path locations.
D
By the way, talking about this Ceph configuration: I see in the code that the podman authentication is also stored under /etc.
A
I mean, you'd expect it to be the same one, but it wouldn't have to be. I guess you could have two clusters using different registries for the images. But, again, I don't know why you would, yeah.
D
Well, for example, maybe if you want to test an upgrade and you have multiple clusters.
A
In that case, yeah — you'd have to upgrade from an image on one registry to an image on another, change all that login stuff... In that case, yeah, I guess we could put that in /var/lib/ceph/<fsid> as well. I don't think there's anything stopping us — backwards compatibility or some other components and stuff — from putting it in /var/lib/ceph; that's something that we made entirely ourselves.
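For what it's worth, a sketch of how registry credentials get recorded per cluster today — the mgr stores them for that cluster, and cephadm then writes the login file out to hosts, which is where the /etc question comes from (the registry and credentials are placeholders):

    ceph cephadm registry-login quay.io myuser mypassword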
A
Yeah, it really sounds like we don't want to do it for multiple clusters, but I guess, for the naming, we still have to... I'll go look back at the adoption code and figure out what we support there. It just needs some investigation; I don't think anybody knows for sure what does and doesn't work in that area. That'll be a thing I'll look at over the next week or two.
A
All right, so that was it for topics and everything — it's been a pretty long time. Does anyone have any other topics they want to bring up here?
A
All right, yeah — sorry about the laptop difficulties I'm having today. I guess that's everything, so we can end here, and I will see you all next week in the Reef planning. All right, bye.