From YouTube: Ceph Science Working Group 2021-11-21
A: All right, I suppose we'll get started here. I'll just start with my little spiel in case there are people here who haven't been here before: every other month, some of us Ceph admins in high-throughput or high-performance computing and, you know, scientific areas get together and talk about big clusters more than any specific science. There are people from cloud providers and whatnot here as well.
A: We'll just chitchat about stuff and whatnot for a little bit. There's a link to the pad in the chat window if you want to add any topics, or to sign in otherwise.
B: I guess I'm a returning user, so long time no see, everyone. In the meantime I changed jobs and I work for another research institution, one that's new to Ceph as well. It's the first Ceph deployment for our institution here in Zurich, and we're actually still in an early phase: we are still waiting for the hardware, so there's not much...
B: ...we can tell you about the deployment yet. The main difference is going to be that it's mostly targeted at CephFS usage, more than RBD or the object store. We plan to have some small use cases for the object store, but mostly this is going to be a CephFS kind of setup, and it's not incredibly big either.
B: We are talking about 16 nodes, a little bit dense, so 24 spindles each, with the usual amount of flash drives for the journal and write-ahead log.
B: Our current storage system is end of life, so it was actually a good opportunity to at least test-drive the use cases with Ceph. So we are starting with a smaller system than the one that will probably be needed to take over all the use cases. But yes, we're trying to see if it's a good fit here as well.
B
It
has
all
the
the
right,
let's
say
on
paper,
it's
the
right
solution
of
course,
then
that
I
will
listen
in
detail.
So
we
see.
B
Performance,
wise
and
like
yeah,
mostly
performance-wise,
so
there
is
a
little
bit
of
question
mark
if
we
can
fulfill
all
the
requirements
there.
A
Cool
well
welcome
back
thanks
anybody
else
who
and
lana
share.
D
Hey,
I
can
share
something,
so
I'm
I'm
you
here,
I'm
an
actor
I'm
from
sean
and
when
we
agreed
most
of
our
clusters
to
octopus
and
materials
and
in
front
well
and
well,
we
disable
the
right
cache
on
most
of
our
drives
and
it
has
a
really
good
impact
in
terms
of
performance.
D: If you want to try it, there's a fio command in the pad here. The trick is to do the fio run with fsync, because that's the way BlueStore commits, and with fsync you see the difference the write cache makes.
D
I
think
it's
still
to
be
confirmed.
I
guess
there
are,
I
think
they
are
contacting
some
officials
or
some
drive
companies
to
make
sure
it's
a
nice
recommendation
for
everyone,
but
I
I
think,
if
you
just
run,
if
I
go
on
your
drive
and
take
this
out
like
if
it's
a
micro
performance
game
on
your
system
could
be
nice.
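[Note: a minimal sketch of the test described above. The device name is a placeholder, and writing fio directly to a raw device destroys its contents, so only use a scratch disk:

    # toggle the volatile write cache (SATA; for SAS drives use: sdparm --set WCE=0 /dev/sdX)
    hdparm -W 0 /dev/sdX    # cache off
    hdparm -W 1 /dev/sdX    # cache on

    # 4k sync-write latency, fsync after every write -- the pattern that
    # approximates how BlueStore commits its journal/WAL
    fio --name=fsync-lat --filename=/dev/sdX --rw=write --bs=4k \
        --direct=1 --fsync=1 --iodepth=1 --numjobs=1 \
        --runtime=30 --time_based

Compare the completion latencies reported with the cache on versus off.]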
A
Yeah,
I
think
the
regulars
of
this
group
will
find
this
particularly
interesting
and
could
definitely
be
worth
changing
among
some
clusters.
If
the
latency
to
commit
latency
is
four
times
three
to
four
times
slower,
it's
impressive.
A
I
did
see
that
there's
now
ever
since
early
november,
a
big
warning
on
upgrading
pacific.
A
That
you
basically
shouldn't
upgrade
to
pacific
from
an
old
version
because
of
the
an
omap
format,
conversion,
corruption
issue.
I
was
hoping
nobody
had
hit
that,
but
has
anybody
done
that.
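[Note: the warning referenced here concerned the automatic quick-fix OMAP conversion that runs when upgraded OSDs start. As a sketch — check the release notes for your exact versions before relying on it — the suggested stopgap at the time was to keep that conversion from running until a fixed point release:

    # prevent the OMAP quick-fix conversion at OSD startup
    ceph config set osd bluestore_fsck_quick_fix_on_mount false
]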
A
I
don't
know
in
my
experience
I
went.
I
took
three
clusters
from
nautilus
to
octopus
in
like
the
last
two
months.
It
was
seamless
and
wonderful.
B
In
the
pad,
it's
it's
your
own
right,
the
so
I
don't
know
if
maybe
I'm
I
know
sorry,
it's
a
diagonal
one
that
has
the
so
I
wonder
like
how
people
is
managing
the
monitor
the
monitor
nodes.
So
if,
if
it's
like,
people
are
mostly
collocating,
rados
gateway,
monitors
or
they're
using
dedicated
nodes
or
vms,
and
I
wonder
how
that
affects
the
upgrade
process,
speaking
of
upgrades
so
like,
since
you
have
to
actually
upgrade
them
at
a
different
time,
usually
if,
as
that
hasn't
changed,
meanwhile,.
A
Yeah
for
me,
I
co-locate
the
monitors
managers,
mds
and
the
gateway
on
onenote
or
three
nodes
actually
because
there's
three
three
of
each
and
I
just
upgrade
the
packages
and
then
restart
the
processes,
one
at
a
time.
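[Note: a minimal sketch of that package-based rolling restart, assuming Debian-style packages and the default systemd unit names:

    # on each mon/mgr host, one host at a time
    apt update && apt install --only-upgrade 'ceph*'
    systemctl restart ceph-mon@$(hostname -s)
    ceph quorum_status --format json | jq .quorum_names   # wait until the mon rejoins
    systemctl restart ceph-mgr@$(hostname -s)
    ceph -s    # confirm health before moving to the next host
]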
A
What
I've
noticed,
I
think,
is
that
when
you
restart
stuff
mon
it
often
the
manager
will
restart
as
well
and
doing
the
upgrade
process.
So
like
he's
supposed
to
do
them
separately,
but
I
know
those
two
tend
to
go
together
for
me.
A
I
don't
know
if
that's
a
bug
and
just
a
restart
of
how
it
goes
from
one
version
to
the
next
or
if
it's
not
to
do
it
that
way.
But
it's
never
caused
me
problems.
A: Yeah, as long as you don't reboot the whole machine, you can usually do them one at a time, no problem.
A: I should say, for myself too: on my small test cluster I also co-locate everything with the OSDs as well, and as long as you're doing one at a time, upgrading has never really been a problem.
F: We've got our virtual images on the Ceph file system, so virtualizing the services would cause a circular software dependency. We co-host all our services on actual physical hardware.
A
Yeah
similar
thing
for
me,
my
over
vms,
run
off
of
my
stuff
cluster.
So
I
really
don't
want
to
virtualize.
You
know,
since
it's
a
dependency
on
each
other,
then.
C
Regarding
the
coal
equation
question,
the
only
thing
that
I
actually
don't
have
collocated
is
the
mds.
C
I
do
had
and
still
have
some
issues
with
cfs
with
snapshots,
and
I
had
hoped
that
putting
a
dedicated
mds
up
with
160
gigabytes
of
ram
would
be
enough,
but
it
seems
to
me
that
the
issue
I'm
facing
with
the
performance
when
snapshots
are
available
is
actually
not
a
problem
of
the
mds
itself,
but
rather
some
sort
of
bug
in
this
have
kernel
client
code.
C: But only — so I have one specific folder with about three terabytes of user data from one user group, and only that folder seems to trigger the issue. And the issue then persists: once triggered, it persists on the client that triggered it until I unmount and then remount CephFS. And it's only triggered if I have snapshots.
C
So
if
I
have
no
snapshots,
everything
is
running
fine.
If
I
have
snapshots
and
I'm
using
diffuse
client
everything
is
running
fine,
albeit
with
the
typical
fuse
client
reduced
metadata
performance.
That's
I
mean
the
fuse
client
in
general.
Has
a
reduced
metadata
performance
depending
on
hardware
by
affected
two
to
ten,
but
at
least
it's
not
subjected
to
these
issues.
C
For
me,
since
that's
I
mean
it's
the
backup
server,
it
needs
to
do
message
operations
quite
fast
as
well,
if
you
have
a
few
hundred
million
files
in
your
system.
So
for
me
the
only
solution
currently
is
to
use
the
kernel,
client
and
just
disable
snapshots.
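[Note: a sketch of what "disable snapshots" can look like on CephFS, assuming the file system is named cephfs and the mount point and snapshot name are placeholders:

    # stop new snapshots from being created
    ceph fs set cephfs allow_new_snaps false
    # existing snapshots live under .snap and are removed with rmdir
    rmdir /mnt/cephfs/some/dir/.snap/SNAPNAME
]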
B
Could
could
you
link
the
the
the
open
bag
you
were
referring
to
earlier.
C
Yeah,
I
will,
I
will
post
it
in
the
pattern,
as
well
as
the
the
chat
here.
So
it's
issue
four
four
one
hundred
it's
actually.
It
has
originally
been
reported
with
nautilus,
but
I'm
also
hitting
it
with
octopus
as
well
and
yeah.
So
if
you
take
a
look
at
what
the
kernel
is
doing
at
the
moment,
it
seems
to
be
running
in
some
sort
of
infinite
loop
with
snap
handling,
so
some
rebuild.
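[Note: one way (a sketch; requires root and a mounted debugfs) to see what the kernel client is stuck on is to dump its in-flight requests and the kernel stacks of busy worker threads:

    cat /sys/kernel/debug/ceph/*/mdsc    # pending MDS (metadata) ops
    cat /sys/kernel/debug/ceph/*/osdc    # pending OSD (data) ops
    # kernel stacks of worker threads, to spot a loop in snap handling
    for p in $(pgrep kworker); do echo "== $p"; cat /proc/$p/stack; done
]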
F
So
that's
that's
interesting
because
we
we
use
ffs
and
snapshots
and
nightly
our
things,
and
we
don't
have
any
problems
and
on
nautilus
and
using
kernel
clients
as
well,
and
we
also
yeah
the
only
issue
we
had
with
nautilus.
We
got
bitten
by
the
directory
fragmentation
when
somebody
deleted
millions
of
files
in
one
go
and
that
brought
ourselves
cluster
to
a
standstill.
F
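[Note: for a fragmentation incident like that, these are the MDS knobs one might inspect — a sketch only, since defaults and behavior changed across releases:

    ceph config get mds mds_bal_fragment_size_max   # max entries per dirfrag
    ceph config get mds mds_bal_split_size          # entries before a frag splits
    ceph config get mds mds_bal_merge_size          # entries below which frags merge
]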
C
Or
something
like
that-
and
I
have
this
one
group
directory
about
three
terabyte
and
that
is
regularly
causing
these
issues.
If
I
backup
anything
else,
it's
it's
working,
fine,
it's
just
this
one
directory
and
I
I've
looked
into
it.
I
don't
see
any
anomaly
in
there,
so
no
huge
number
of
files
in
one
specific
subdirectory
or
anything
like
that.
Nothing,
so
I
don't
know
what
exactly
is
causing
it.
I
cannot
reduce
it,
but
only
with
my
specific
data
set.
C
If
I
try
to
reproduce
it
on
any
other
data
set,
I
fail,
which
of
course
makes
it
rather
difficult
for
the
developers
to
tackle
this
issue
as
well,
because
I
mean
I
can't
just
give
them
here.
These
are
three
terabyte
of
my
university
of
one
of
our
working
groups.
Users,
all
their
home
directories
here,
have
a
go
at
it.
I
mean
it's,
it's
not
possible.
D
Yeah,
just
a
quick
note:
the
yeah,
the
difference
things
is
fixed
in
pacific
and
that's
why
we
want
to
have
a
beta
of
a
service
cluster
in
pacific
with
the
snapshots
because
yeah
we
don't
have
any
snapshot,
enabled
on
the
other
question.
C
I
mean
I
could
try,
of
course,
upgrading
to
pacific
with
the
recent
number
of
bugs
that
have
been
hit
in
pacific
and
have
been
reported
on
the
users
list.
I'm
actually
not
feeling
confident
right
now
to
upgrade
to
pacific,
and
I
plan
to
wait
until
octopuses,
eol,
so
yeah,
I
maybe
I
will.
I
will
set
up
my
test
cluster,
upgrade
that
over
to
pacific
and
put
that
specific
directory
in
there.
C
That's
probably
something
I
can
do
just
to
try,
if,
if
I
can
reproduce
it
on
the
theft
cluster
first
on
a
test
cluster
first
on
octopus
and
then
upgrade
that
over
to
pacific.
D
Yeah,
but
I
don't
think
that
this
would
solve
your
particular
issues,
but
the
yeah
the
thing
about
limit
of
the
frags.
D
If
it's
that,
if
I
remember
correctly
and
in
pacific
they
remove
this
limit
so
yeah
but
yeah,
that's
not
your
issue
and
did
you
try
with
a
newer
general
version
by
any
chance?
Maybe.
C
Okay,
I'm
I'm
so
I'm
running
centos
eight
and
I
have
these
issues
both
with
centos
eight
stock
client,
as
well
as
the
newest,
lreco
client
air
column.
So
that
would
be
I'll
have
to
look
up
5.8
or
something
like
that.
C
It's
5.12,
so
I'm
still
hitting
it
with
5.12
as
well.
Yeah.
C
Yeah
I
mean
for
me:
it's
it,
I'm
pretty
certain
that
it's
a
client
issue
since
any
load,
that's
abnormal
on
the
server
on
the
cluster.
So
I
don't
think
upgrading
anything
on
on
the
server
side
will
actually.
C: I have not tried moving it over to a test cluster; that's one of the next steps I'm going to take. But okay, I see — it's only triggered by this specific folder on the current cluster. Yeah.
C
Yeah,
I
mean
bit
of
an
issue,
of
course
for
me,
but
that's
something
that
everyone
probably
knows
is
too
much
stuff
to
do,
and
currently
it's
working
as
long
as
I
disable
snapshots.
So
I
have
five
other
ish
urgent
issues
that
have
to
be
tackled
first
and
then
at
some
points.
In
the
meantime,
I
can
tingle
a
bit
with
with
the
stuff
issues.
A
All
right
moving
on
anybody
want
have
a
outages
or
unexpected
land
outages
or
hit
a
bug
that
cause
a
terrible
outage
that
they
want
to
fess
up
to.
F
Well,
they
get
one
note
die
and
it's
it
all
did
what
it
was
supposed
to
do.
So
we
all
we
are
quite
happy
with
it.
Now
we
we
have
to
hurry
and
buy
a
new
note
to
make
sure
we've
got
enough
space,
but
that's
a
different
problem.
B
On
a
on
a
side,
note
does
anyone?
Is
anyone
actually
trying
to
procure
some
hardware?
They
were
having
actually
a
lot
of
difficulties
to
come
to
come
by
and
vietnamese
and
and
such
lately,
so
even
like
the
the
the
cluster
I
was
mentioning
earlier
is
actually
been
delayed
by
the
vendor
because
they
don't
have
enough
parts.
I
wonder
if
it's
like.
I
guess
it's
like
a
global,
but
I
wonder
if
someone
has
been
beaten
by
this
issue
as
well.
C
Yeah
in
all
my
quotations
that
I'm
currently
getting
from
our
vendors,
they
always
write
that
prices
are
subject
to
change
until
the
order
more
or
less
come
in.
They
will
try
to
keep
all
quotation
prices,
but
they
can't
guarantee
anything.
But
I
guess
that's
just
the
global
situation
nowadays.
B: Related to flash drives: is anyone actually running with at least a portion of the OSD data on flash, or are you also running with only HDDs — everything collocated on HDDs?
E
And
you
have
to
remember
when
you,
when
your
cluster
are
trying
to
reach
some
radius
objects,
the
osd
needs
to
read
from
the
blue,
fs
and
rugsdb
the
all
maps
to
reach
the
object
on
the
blue
road
device.
So
each
operation
is
faster
when
you
have
a
better
drives
than
hdd.
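[Note: the usual way to get that metadata path onto flash is to place the BlueStore DB (and implicitly the WAL) on an NVMe partition while the data stays on the HDD — a sketch with placeholder device names:

    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db /dev/nvme0n1p1
]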
A: Every day I think about migrating to BlueStore, but I'm now the weird one who's stuck on FileStore. We use RADOS directly — we don't use the gateway or CephFS; we're using librados — and BlueStore introduced this arbitrary limit that an object can't be larger than four gigabytes. One of the data sets that we store has files that are four gigs on the low side; sometimes they go up to eight gigs. We just haven't tackled that issue.
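[Note: one possible workaround for the 4 GiB BlueStore object cap — not something the speaker said they use — is libradosstriper, which transparently stripes one logical object across many RADOS objects; the pool, object and file names below are placeholders:

    rados --pool mypool --striper put bigobject ./bigfile.dat
    rados --pool mypool --striper get bigobject ./restored.dat
]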
A: It's just one of those things — not a terrible problem — and I don't look forward to converting, you know, 11 petabytes of this to BlueStore anyway, so it keeps getting kicked down the road a bit.
A: I wish I had joined that Ceph Month session, because they said they discussed this and there were no objections from anybody on the call. I missed that one — I totally should have been on that call so I could have objected as the only person. But that's just me kicking the can down the road, and I know this is something I've got to address in my cluster.
A
Some
new
features
that
I
kind
of
just
threw
into
a
list
here
was
from
the
latest
releases
that,
if
you're
using
debian
it's
now
been
built
for
bullseye,
which
is
nice
and
stuff
mds
is,
I
know,
a
rolling
upgrade
there's
no
no
longer
this
is
on
pacific.
There's
no
longer
have
to
you
know
just
turn
off
all
your
standbys
and
then
you
know
set
max
mds
to
one
and
whatnot
and
do
one
at
a
time
and
then
turn
everything
back
on.
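[Note: for reference, the pre-Pacific procedure being retired here looked roughly like this; the file system name and rank count are placeholders:

    ceph fs set myfs max_mds 1    # shrink to a single active MDS
    # wait for ranks > 0 to stop, upgrade and restart the active MDS,
    # then upgrade the standbys, and finally restore the original count
    ceph fs set myfs max_mds 2
]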
A
Yeah,
I
didn't
look
to
see
if
it
was
an
octopus,
but
it
wasn't.
F
Actually,
I'd
just
occurred
to
me
when
you
mentioned
debian,
so
do
they
use
the
containerized
version
of
seth
on
debian.
A
I
thought
I
read,
it
was
said:
packages
are
now
being
built
for
w
bullseye,
so
I'm
assuming
that
means
the
native
packages
and
I'm
sure
they
also
are
doing
a
container
as
well,
but
no,
they
wouldn't
do
a
separate
container
for
bulb
debian.
They
would
just
what's
the
base
of
the
stuff
container.
That
was
like
some
nada
space
or
there's
some.
Are
they
that
being
ubuntu
based.
F
I
also
noticed
that
ubuntu
also
carries
specific
now
and
I'm
we're
kind
of
wondering
what
what
we
upgrade
to
for
our
cluster
and,
if
you're,
going
to
go
for
containers
or
not
and
in
a
way
yet
I'm
probably
tending
towards
using
it
out
of
the
box
from
from
ubuntu
or
debian,
rather
than
a
third
party
repo
just
to
have
that
longer.
A: I am thinking about it, but that's another thing I'm just going to kick down the road further. Rocky definitely seems to have some big-name support behind it now, with the releases coming, so I'm getting more comfortable with the idea of switching from CentOS Stream to Rocky.
A: Yeah, interesting. It'll be nice to hear how that goes for you at the next meeting in January, yeah.
A
All
right,
so
I
think
it
looks
like
based
on
the
colors
that
is
fun.
I
pronounced
that
correctly.
I'm
sorry,
if
I
did
not,
I
had
a
big
question
here,
throwing
billions
of
objects
you're
getting
a
lot
of
outages.
Quite
often,
once
you
got
to
around
a
billion
of
objects,
I
think
I
see
you
on
the
call
here.
Do
you
want
to
up
this
issue
or.
A
Yeah
three
day,
three
data
center
multi-site
active,
active
octopus.
H
Oh
sorry,
okay,
I
wasn't
sorry
yes,
so
we
would
really
like
to
know.
What's
the
the
proper
way
to
store
this
huge
amount
of
objects,
we
are.
We
have
a
cluster
and
we
just
have
only
sas
ssds
or
these
in
this
one,
and
we
have
the
index
pull
on
nvme
device,
but
it's
almost
impossible
to
bring
back
the
cluster
to
a
kind
of
working
state.
So
I
don't
want
I'm
trying
to
migrate
out
all
the
users
to
temporary
cluster
because
it's
just
completely
crashing
falling
apart.
H
Like
every
hour,
I
would
say:
I'm
not
sure
where
is
the:
where
is
the
battery
like
what
we
miss
that
can
cause
this
very
big
instability.
H
What
you
can
see
see
on
the
cluster,
a
slope
started
to
come
most
of
the
time.
It's
even
reported
in
the
self-lock
like
this
specific
osd
report
is
faired
and
after
the
slopes,
we
are
getting
the
lucky
pages
and
this
time
when
we
have
the
luggy
pg
and
the
slopes,
it
works
already
the
io
in
the
cluster.
H
So
you
can
see
that
the
right
operation,
the
read
operation,
is
just
falling
down
from
50
000.
I
hopes
to
like
couple
of
hundred
or
even
it
just
go
down
completely
zero.
So
we
don't
have
any
client
operation
anymore,
because
the
radars
get
a
crashed,
but
why
the
radar's
gateway
is
crashing
too
badass.
I
don't
really
have
idea,
but
what
we
can
see
on
the
osd
locks
like.
H
I
think
that
the
compaction
is
kick
of
that
time
when
the
radar's
gateway
can
not
write,
but
I'm
not
sure
this
could
block
like
the
I
o.
Also,
for
example,
the
the
client
try
to
write
to
the
cluster
and
rather
get
they
maybe
cannot
write
to
the
osd
because
it's
doing
the
compulsion
and
it
would
block
everything,
I'm
not
sure
that
it
makes
sense.
H
So
yeah
I'm
just
curious
to
guys
what
you
think,
because
what
I
am
trying
to
test
it
out
now.
I
would
like
to
put
like
four
osd
on
each
15,
terabyte,
ssds
and
yeah.
I
will
I
will
run
some
cost
bench.
Try
to
push
to
my
temporary
cluster,
maybe
two
billions
or
two
billions
of
objects
and
let's
see
the
result,
but
I'm
not
sure
is
the
right
direction.
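[Note: a sketch of carving several OSDs out of one large SSD with ceph-volume; the device name is a placeholder:

    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1
]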
E: Yes, yes, exactly that one. And one of the most important things is network stability, because when we get some alerts from this cluster, the first thing we check is our network core, because most of the problems are related to some instability in the switches, and we simply check the ECMP paths to see what the latencies are for them.
H: And when you mentioned compaction — can you set something to fine-tune the compaction, maybe?
H: Can you share your RocksDB options, maybe after the meeting, if that's okay? I'm just curious.
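[Note: independent of whatever RocksDB options get shared, a sketch of the compaction controls that do exist — compaction can be triggered per OSD online, or offline while the OSD is stopped; the OSD id is a placeholder:

    ceph tell osd.0 compact    # online RocksDB compaction for one OSD
    # offline, with the OSD stopped:
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact
]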
B
I
was
actually
trying
to
look
for
for
an
issue
that
I
remember
some
colleagues
from
a
different
organization
had
about
having
too
many
objects
or
because
of
the
other
comp
all
the
index
configurations
of
the
the
third,
the
the
bucket
that
was
actually
containing
a
metadata
was
actually
was
not
was
not
sharded
enough,
so
they
had
basically
gigantic.
E: It could probably be a problem, because when you have too many shards, there is also a problem with the parallelism itself.
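[Note: a sketch of how one would check for and fix under-sharded bucket indexes; the bucket name and shard count are placeholders, and resharding had extra restrictions in multisite setups around this era, so check the docs for your release:

    radosgw-admin bucket limit check              # flags buckets over the per-shard object limit
    radosgw-admin bucket stats --bucket=mybucket  # num_objects vs. index shards
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=1024
]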
B: Okay, thank you — that's a good idea. I have a side curiosity: when you say that you have a three-site active-active cluster, do you have OSDs distributed geographically — like equal numbers of OSDs distributed across three different locations? How does that work?
B: Yeah, so sorry — what I was wondering is actually, globally, regardless of the number of nodes, whether you basically balance the number of OSDs across the three sites. It has nothing to do with your problem; I was just wondering how your setup, your cluster, is designed.
H: Yeah — all the sites have seven nodes with the same number of OSDs.
H
Yeah
and
in
each
site
I
have
three
gateway
and
they
are
communicating
with
each
other
and
trying
to
use
this
bucket
replication
feature
which
is
kind
of
working
kind
of
not
so
the
directional.
If
I
define
I
just
want
to
replicate
one
bucket
to,
for
example,
from
hong
kong
to
singapore
is
working
but
the
the
synchron.
H
How
was
the
name
synchronous?
It
doesn't
work
when,
when
it's
supposed
to
my
replicate
the
bucket
contents
from
everywhere,
I
mean
from
one
side
from
our
side
to
the
other
side,
also
that
it
doesn't
work.
I
have
take
it
with
the
with
the
gateway
step,
guys
also
so
yeah
try
this.
This
replication
feature
I'm
trying
at
the
moment,
but
in
the
one
direction
is
working.
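[Note: a sketch of the bucket-granularity sync policy being described, with made-up group, flow and zone names; the directional flow below is the mode reported working, the symmetrical one the mode reported failing:

    radosgw-admin sync group create --group-id=grp1 --status=allowed
    radosgw-admin sync group flow create --group-id=grp1 --flow-id=hk-to-sg \
        --flow-type=directional --source-zone=hongkong --dest-zone=singapore
    radosgw-admin sync group flow create --group-id=grp1 --flow-id=everywhere \
        --flow-type=symmetrical --zones=hongkong,singapore,site3
    radosgw-admin sync group pipe create --group-id=grp1 --pipe-id=all \
        --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
]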
A: All right, I guess the last topic on the list here: Cephalocon next year is actually still planned to happen — I believe it was in Portland.
A
I
don't
know
if
it's
co-located
with
any
other
sort
of
event,
offer
it's
just
it's
so
stuff.
I
kind
of
don't
know
if
there's
like
a
cube
con
or
anything
happening
the
next
day
or
whatever,
but
the
coffer
proposals
is
open
for
like
another
couple
weeks
here.
A
Should
we
submit
if
they
do
birds
of
a
feather?
I
would
like
to
do
one
of
these
in
person.
That's
always
great
I'll
see
if
I
can
submit
that
and
see.
If
I
haven't
even
talked
about,
if
they'll,
let
me
travel,
but
I
think
they
will
now.
A
Anybody
have
any
the
thoughts
on
the
decepticon
or.
B
I
guess
this
for
me
would
be
a
little
bit
difficult
and
unless
yeah,
unless
I
can
actually
cluster
it
up
or
with
some
other
events
that
can
justify
the
trip.
A
Yeah
it's
hard
to
justify
overseas
trip
for
just
like
a
two
day
cephalocon.
I
know
when
the
the
last
one
in
barcelona.
I
think
I
only
got
to
go
because
there's
the
europe
cubecon
was
following
it.
A
But
I'm
sure
this
still
definitely
have
decent
virtual
opt-in
options
for
everything
this
year.
A
lot
of
conferences
have
focused
on
that
learned
how
to
do
it?
Well,.
A: All right, that's it — thanks, everybody, for joining. The next one will be in January; you'll see the usual emails and notes, and we'll talk to you then. Enjoy your holidays! — Thanks a lot for hosting.