From YouTube: OpenStack Austin April 11
Description
Preso: https://docs.google.com/file/d/0BwS8wh44iyEwamR1TWFsLXJITWM/edit
By Ian Colle on Ceph
A: The pizza orders, which are, by the way, you know, kudos to Inktank today.
B: And we have Ian here to talk about it. He flew in from Denver for the talk, and as you guys...
A: Tweet about this, how awesome we are or how awesome the talk he's giving is, and you know.
A: And I just tweeted out a link to the PDF of the slides if you want. I'm gonna fly pretty fast through some of them just due to time constraints, but go back and think through them. So, you know, he has like 30 to 45 minutes, something like that, to talk, and if you have questions after, he'll obviously happily take questions.
B: Also, if there is any...
A: First of all, why am I standing in front of you guys? Hold on. I'm the Ceph program manager at Inktank. We'll get a little bit later in the presentation into the relationship between Inktank and Ceph, but my job is to make sure that really smart guys keep cranking out awesome Ceph code, and there's all my contact information on it.
A: So we're going to start out with how do you select the best cloud storage system. Before I do that, how many people here have already heard of Ceph? Okay. How many people here have played with Ceph? How many people here have contributed to Ceph? Come on, guys. All right, that's pretty good though, I'm impressed. Okay, then I don't need to sell you a lot on this, but a cloud storage system...
A: They made me go through all these monkey hoops to show that it was not due to our data center's humidity, it's not due to our data center's temperature. They said it might be cosmic radiation. Well, finally, when we took down all the serial numbers and they did a trace back to an unnamed vendor, they found out that, oh yeah, we've got a manufacturing problem. So I had a 73 gig SCSI drive, and this is not necessarily it, fail on me pretty much hourly, so flashing lights were a common scene in the data center.
A: That, if you're talking about a million drives in your system, you're gonna lose 55 of these a day. Now, the one cool thing was, because we had to destroy them all by hand, because of the environment I was in, we had to send back all the faceplates, I got a lot of cool magnets to take home to my kids. But other than that benefit, I don't think you guys want 55 of these a day to be having to destroy. So how do you get around that?
A: These are a couple of features which some people think are very crucial, and these are areas that Swift has that we don't. So for quotas and object expiration, we do not currently have those concepts. They are currently implemented in Swift; they're on our roadmap. We're going to have a design summit, which I'll talk about later, where we're going to talk about the architectures of those, but those are looking to be implemented in a release that'll come out this fall.
A: So let's just start out at the bottom. How does RADOS work? You've got a series of monitors that are the brains. You have to have an odd number; you don't want a split brain. So if they're trying to decide how to make decisions, the monitors talk to each other and reach a quorum to make a decision. The OSDs, that's the Object Storage Daemon, okay, not to be confused with object storage device. For some reason we decided we just wanted to confuse everybody by changing the meaning of that well-known acronym in storage.
A: Now here's a typical layout. You've got the Object Storage Daemon on top of your lower-level file system. You can use Btrfs, ext4, XFS. Currently we're recommending XFS; we'd like to see Btrfs get to the place where we could recommend that. We're also playing with ZFS on Linux, but we haven't fully implemented that, so that's an area that we're just looking at. And then you've got your underlying disk.
A: Okay, this is one system comprised of multiple OSDs, and you've got your odd number of monitors managing it. Now, the interactions with that: it looks like a single device.
Yeah, on your file system slide you guys are recommending XFS, and Btrfs is supported. Can you talk about the tradeoffs between them? Because it seems like that's a pretty big decision I have to make up front, and so choosing the right one is pretty important.
A: You said, like, we played with Btrfs on it and there are some issues with it, but I don't have a lot of good documentation explaining why you'd choose one file system or the other, why XFS is recommended.
Back when we started, Btrfs was recommended, and there are certain considerations. I was not going to go into the underlying details; if you want to come talk to me afterwards, I'd be happy to follow up with you later. But yeah, right now we're currently recommending XFS.
A: So on top of RADOS is the RADOS gateway, or RGW as we call it; you may see it both ways, abbreviated RGW or RADOS gateway. And that is both S3 and Swift compatible, so you can send either version of commands and it will handle it. Some people have played with writing with S3 and reading back with Swift.
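As a small illustration of that S3 compatibility (not from the talk; the endpoint, keys, and bucket name below are placeholders), here is a sketch using the classic boto library against a RADOS gateway:

```python
# Minimal sketch: writing an object through the RADOS gateway's S3-compatible
# API with boto. Endpoint and credentials are placeholders, not real values.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',             # placeholder key
    aws_secret_access_key='SECRET_KEY',         # placeholder secret
    host='rgw.example.com',                     # your RGW endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('demo-bucket')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('written via the S3 API')

# The object lands in RADOS either way, so a Swift-style client pointed at the
# same gateway can read it back.
```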
A: One important thing I didn't mention is that this is a RESTful interface, so if you're familiar with that, it's a common API interface, and that's what allows you to interact with the gateway. With librados, then again, because you're using that library, you're talking natively to RADOS.
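A minimal sketch of what talking natively through librados looks like with the Python bindings; it assumes a reachable cluster, a ceph.conf in the default location, and an existing pool called 'data', all of which are placeholders rather than anything from the talk:

```python
# Sketch: native RADOS access via the python-rados bindings.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # placeholder path
cluster.connect()
try:
    ioctx = cluster.open_ioctx('data')           # open an I/O context on a pool
    ioctx.write_full('greeting', b'hello from librados')
    print(ioctx.read('greeting'))                # read the object back
    ioctx.close()
finally:
    cluster.shutdown()
```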
A: Now, what's the last component? So again, we're working up: we've got the underlying object store, gateway, library; we've got block; now, file system. This is probably the least robust of the three main areas of object, block, and file, simply because the focus of the team has been on building the object and block. Going forward, the focus will be on the file system and all three, really hardening them all. But so the kind of caveat we give is that you'll see some people say this is awesome.
A: This is awesome, this is awesome, this is almost awesome. So we would say, hey, put this in production, put any of this in production, and I don't know if I want to put this in production. Okay, the file system itself is just not quite there yet. There are certain things, like it doesn't do an fsck, things that we consider to be a necessary part of calling it an enterprise or distributed parallel file system. But let's just churn through, all right.
A: Another way is where it knows the layout, so you tell it, okay, when you do your hashing here, everything that falls in this range, throw it in those racks; in this range, throw it in those racks; maybe in this range, this different data center. And it always knows that's where it goes. It's kind of like an old-fashioned phone book.
A: That's one of the ways I've heard it described, but when's the last time anybody looked up something in the phone book? Yeah, exactly. I can't wait till my kids...
C: Yeah, you specify this per cluster for the RADOS gateway?
A: You specify that in your CRUSH map. And let's see, so you've got the number of PGs, you split the data, then you run it through the CRUSH algorithm, and so it takes this block and it says, okay, I'm going to put one copy here, one copy there. This is a pseudorandom algorithm, so it's always repeatable. No matter how many times you run it through there, it's always going to wind up knowing where to put it, based on how you split up your data and the configuration of your system.
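To make the "pseudorandom but repeatable" point concrete, here is a toy sketch (an illustration only, not the actual CRUSH code): hash an object name into a placement group, then hash the placement group onto OSDs, so any client with the same map computes the same answer.

```python
# Toy illustration of deterministic placement (NOT the real CRUSH algorithm):
# the same object name and cluster layout always yield the same placement,
# so clients can compute locations instead of asking a central lookup table.
import hashlib

def place(obj_name, pg_count, osds, replicas=2):
    # Hash the object name into a placement group.
    pg = int(hashlib.md5(obj_name.encode()).hexdigest(), 16) % pg_count
    # Deterministically pick 'replicas' distinct OSDs for that PG.
    chosen = []
    attempt = 0
    while len(chosen) < replicas and attempt < len(osds) * 2:
        h = int(hashlib.md5(f"{pg}:{attempt}".encode()).hexdigest(), 16)
        candidate = osds[h % len(osds)]
        if candidate not in chosen:
            chosen.append(candidate)
        attempt += 1
    return pg, chosen

print(place("vm-disk-0001", pg_count=128, osds=["osd.0", "osd.1", "osd.2", "osd.3"]))
```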
A: So I want to make sure that if this power feed goes down, I don't lose everything, and if this power feed goes down, I'm covered over here. So what you'll see is, when it pushes all of your data through the CRUSH map based upon those rules that you have created, it decides, okay, I need two copies.
A: Okay, I want them on different power buses. So one right here, one right here; I'm not going to throw another blue-violet one in here, because I've got one here. So you'll never see the exact identical clumps of data in the same OSDs.
So does the client get the write commit after it writes the copies, or after one copy? Is there one...
A: That's how we're going to recommend that. And then we'll get to one other quick question here: does it take available capacity of those nodes, or CPU utilization on those nodes as well, into account when it places? No.
A: So just to summarize, here's what this box is doing: it's spreading data around based on the infrastructure topology. It's not looking into things like you said; it doesn't have that smarts in it. So you set up, I've got data centers, I've got racks, here are my rules. It doesn't have that kind of smart monitoring built into it.
A: Oh, something went down, what do we do? So you notice all these guys are talking to each other all the time, which is one of the reasons that, when you created those PGs, which were the different colors there, that's why I said we typically say you want to do about a hundred, maybe 200 if you're really crazy.
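A rough back-of-the-envelope version of that sizing guidance (the commonly cited rule of thumb, not a number worked through in the talk): aim for on the order of 100 placement groups per OSD, divide by the replica count, and round up to a power of two.

```python
# Rule-of-thumb placement group sizing: ~100 PGs per OSD, divided by the
# replica count, rounded up to a power of two. Example numbers only.
def suggested_pg_count(num_osds, replicas=2, pgs_per_osd=100):
    target = num_osds * pgs_per_osd / replicas
    power = 1
    while power < target:
        power *= 2
    return power

print(suggested_pg_count(num_osds=12, replicas=2))   # -> 1024
```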
A: But if you get up and you just want to play with it, you say, I'm gonna make it a thousand. What happens is you're creating extra overhead for all the OSDs to communicate, because you're spreading your data now over all these different OSDs that are constantly saying, hey, you still there? Hey, you still there? And what happens in this case?
A: Well, we know, based on your rules, that you require two copies, so immediately CRUSH is going to rerun the algorithm on your system to see, based upon this new topology. Now, it's not just going to use the old one, okay, and then take those away. It's gonna say, okay, based upon the rules that I've been given, and knowing that this guy's not here anymore, where do I put a replica? Okay, I'm gonna put a red one here to compensate for him being gone.
A: Do you have an arbitration system going on where it will quiet them, because it's not going to be able to satisfy the requirement?
Yeah, if it can't satisfy the CRUSH rules that you've established, then yeah, it'll go ahead and try to rebalance the best it can. It'll just say, hey, I'm out, I can't satisfy what you told me to do, but I'm gonna do my best, and probably spew lots of error messages.
A
Then
mark
that
up
and
then
it
read-
you
know
you'd
mark,
that's
where
you
would
actually
have
to
have
somebody
in
the
middle
trying
to
show
us
the
kind
of
self-healing
aspect
of
it
to
whereas
before
you
didn't
have
to
do
anything
and
your
data
was
just
taken
care
of.
You
know
it's
replicated
in
the
rules
that
you
set
now.
If
you
want
to
come
in
afterwards,
swiss
most
of
us
want
to
say:
okay,
pull
that
guy
out
slap
in
a
new
one
and
then
run
it
again
and
replicate
all
over.
A: It didn't cost me anything, see; this is just totally in logical space, I didn't take up any additional space on my system. Now I get a client that's coming in here and he wants to write four different objects. Okay, so now, rather than rewriting this whole thing, I'm only writing the new objects.
A: You can't, until you've also blown away all of his clones. Now, when we get to the future, another feature that's coming this fall, when you get to live migration, then maybe you won't care. You could have a thousand copies of him, because if you want to move whatever underlying storage he's on, you'll just migrate them to something else, trash that, bring in some new storage.
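For the snapshot-and-clone behavior being described, here is a sketch with the python-rbd bindings; the pool and image names are placeholders, it assumes a modern default image format with layering enabled, and protecting the snapshot before cloning is what backs the "can't delete the parent while clones exist" rule mentioned above.

```python
# Sketch: copy-on-write cloning with python-rbd. Pool and image names are
# placeholders; clones share the parent's data until blocks are rewritten.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')

rbd_inst = rbd.RBD()
rbd_inst.create(ioctx, 'base-image', 10 * 1024 ** 3)   # 10 GiB parent image

with rbd.Image(ioctx, 'base-image') as img:
    img.create_snap('golden')           # point-in-time snapshot, no data copied
    img.protect_snap('golden')          # required before cloning from it

# The clone takes no extra space until a client writes new objects into it.
rbd_inst.clone(ioctx, 'base-image', 'golden', ioctx, 'vm-0001')

ioctx.close()
cluster.shutdown()
```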
A: For every read, is it... say that again? When you try to read a block, it first has to figure out where it's at, and then second, read the block. So if this mapping is not stored in memory, you're doubling your I/O.
No, I know what is coming from the clone and what is coming from the original, so I don't have to do a lookup. But is it stored in memory? Yeah, right in...
A: Yes. Now, okay, how does this work with OpenStack? You're asking about API support. I've got a nice little picture that describes these words, but we were initially part of OpenStack way back in Cactus, I say way back, and we've increased the features each time. You can use Swift, Keystone, Cinder, Nova, a bunch of different ways, depending on how you want to talk to it, how you want to use Ceph.
A
That
would
depend
on
which
api
you
use
now.
The
big
thing
which
that's
kind
of
frustrating
we
didn't
get
into
grizzly,
but
it
is
going
to
be
in
havana,
is
that
you'll
be
able
to
create
an
rbd
volume
from
an
image
in
the
horizon
ui
so
that
will
that
will
be
in
the
havana
release.
A
So
just
from
your
ui,
you
click
it'll,
set
it
all
up
for
you,
it's
not
there
yet
so
here's
kind
of
a
prettier
picture
that
shows
exactly
which
of
the
apis
you
use
to
to
work
with
each
of
the
various
areas
of
set.
So
if
you
want
to
do
identity
management,
your
typical
swift
or
your
swift
stuff
to
talk
to
the
gateway
keystone
switch,
then
if
you
want
to
talk
to
rbd
directly,
you
can
come
right
in
with
cinder
and
talk
directly
to
the
rbd
device.
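For that Cinder path, a minimal sketch of the sort of cinder.conf settings involved; the pool, user, and secret UUID are placeholders, and exact option names can shift between OpenStack releases, so treat this as an outline rather than anything shown in the talk.

```ini
[DEFAULT]
# Point Cinder's volume service at RBD (placeholder values throughout).
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes                  ; RADOS pool backing the volumes
rbd_user = cinder                   ; Ceph client identity Cinder uses
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000   ; libvirt secret so QEMU/KVM can attach
```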
A: Is your system able to keep all these recoveries and redundancies when it's hypervisor-based? But a hypervisor, to me, sounds like something that's closer to the bare metal, you know. So what... how do you get your parameter...
A: You wouldn't talk... if you're using this API, let's say you've coded something to this that wants to just crank out new VMs, that's going to be talking then to QEMU or KVM. That will be the instantiation, and those will then talk to the block device. So you're not talking directly with Nova. If you're talking about Cinder, then you're talking directly to the block device; there's no intermediary between you.
A: ...the majority of Ceph contributors. Sage Weil, who is the CTO and the founder of Ceph, started Inktank with some seed money from his former company DreamHost, as well as from Mark Shuttleworth and some other people, to create Inktank to ensure the viability of this ecosystem.
A: If there are any areas that people see that, you know, are lacking, that are keeping them from developing on it, then we want to know about that. One of the big things that standing up Inktank separate from Ceph allowed us to do is to kind of formalize the development. There was a recent article about why, why does OpenStack do releases?
A: It does that transaction over there; it doesn't do the rsync that causes so many issues, where there's a potential for issues within Swift. So what are we going to do then in November? I don't know, there's going to be some really cool cephalopod name that starts with an E. In case you're wondering, the tie is that Sage went to UC Santa Cruz and really likes cephalopods, squids, things like that, so each of these names is a various type of cephalopod.
A: That's where the names come from. One of the other important things that we do is we ensure quality of the product with releases, so we've got a pretty powerful automated test suite called Teuthology, another cephalopod reference, and that is a suite that allows you to do all sorts of automated testing, where you can say, I want this many OSDs, this many monitors, and then run.
A: It's open to the community to play with as well; you can submit test jobs to it. And we develop reference and custom architectures for implementation, so we want to be able to go out to people and say, here's what we think would be good based upon your system needs, and then allow you to implement it or have us help you implement it. And speaking of that, I'd be remiss if I didn't mention our friends at Dell. Inktank is a strategic partner of Dell, and you can read all the marquees there from...
A: So if you have any questions, please go to this link. It'll give you both how to sign up for the mailing list and the IRC channels that we sit in; #ceph is the obvious one, but it'll give you all the information there, where to find it. There it is if you want to pull it down from GitHub, github.com.
A: And participate in the Ceph design summit, which will be in early May. We don't have the information yet; it should be coming out shortly, either this week or early next week, and it will be a virtual design summit, so everybody can come on, look at blueprints, come up with good ideas. Now, a final request. You saw the roadmap: we've got May, August, November, when we're going to have those quarterly releases.
A
Is
there
something
that
you
in
your
system
or
something
that
you
saw
that
you
went
yeah?
I
know
you
said
that
you
guys
had
this
and
swift
didn't
have
that,
but
I
think
you've
got
a
hole
here
that
that
you
need
to
fill
or
here's
something
where
I've
had
a
use
case
that
I
don't
think
you'd
satisfy
and
a
couple
that
I
throw
out
there
just
to
kind
of
prime
the
pump
are
iscsi.
A: Almost every time we get the occasional user asking, what about Windows users, poor Windows users? And I take a deep breath and say, okay, if you're supporting... I know some applications have to run on Windows, I'm sorry for them, but then you can use... we've got just a real kind of prototype, and prototype would be generous, a thing that we've gotten into tgt that will allow you to play with having an iSCSI front end into our RBD on the back end.
A: Any other requests on the REST API? Are you basically Keystone only, or can you plug in with other identity providers, that...
A: What's the use case? That of exposing objects as URLs, kind of like S3, and then they would actually have a real identity provider and not necessarily have to have somebody...
A: What happens in the situation where you've got a file system that's been heavily used for a long period of time, and suddenly you end up with portions of it that are very empty? Does it automatically do a rebalance, or are you going to go back in with a tool to do the rebalance? No, you can force it to do a rebalance, but it won't automatically.
A: That is a great thing. Actually, I kind of just said admin API, and that doesn't tell you enough of what it is. We're kind of fully opening the kimono, so to speak, and we're going to allow you, via that admin API, to get, I hope, just about any type of data you would want out of the system. Do you expose stats today?
A: Are there any particular workloads that are well suited or poorly suited for the block interface, through Cinder or Nova, on the virtual machine side? I mean databases, mail servers, high-transactional stuff, lots of locking, all that? Yep, performs great.
A: Can you break out the network? So you need a network for the underlying management. That was one thing I didn't cover; I kind of glossed over the monitor traffic at the beginning. That's kind of telling you what's here; that traffic is totally separate from the data traffic, so that's nowhere in the data path.
A: All right, again, that's just my information; pick one, it all gets to me. And thank you for your time, I really appreciate it, and if you have any questions or specific situations, please come talk to me afterwards, or send me an email or a tweet.
B: So we'll see each other there. Can we plug...
C: Great, so yeah, if you're doing a session or a part of a session, please stand up and talk about your discussion session.
B: So I'm one of the co-authors of this book here, the OpenStack Operations Guide. It's a book we wrote in five days.
B: So we actually gave a talk about this book, and we're gonna do a panel on it on Tuesday at 5:20 p.m. So come out and you can see all the authors; the first 60 people there get a free printed copy.
C: I'm the chair of the operations track, so part of my job was actually helping get speakers, you know, put together some panels. And so I actually have a talk about reference architectures and using OpenStack Heat, along with Monty Taylor, and I'll be sharing this presentation about using Heat as a way to describe reference architectures, to make it easier for people to talk about how they operate OpenStack. So we're actually using OpenStack to describe OpenStack, which is sort of cool. And then we also put together, at the last minute...
C: On Tuesday at 5:20 we're doing a session on interoperability, so we pulled together a last-minute panel, because this is a very constant topic. This is at the same time, so check out the recording. But so, interoperability: there's some stuff going around in the press about OpenStack making a priority of helping OpenStack clouds work together, be interoperable. It's a really real objective for the foundation, and so we're doing a panel on that, which promises to be really interesting, and we pulled together some interesting people for it. And then there's something... my schedule's crazy for OpenStack.
C: But, I mean, the sessions are incredible; it's going to be impossible to choose where to go at any one time. There is a reference architecture hardware panel, and yeah, we've got that. We've also actually got a DevOps panel going. We actually had twice as many speakers as we could take, and so I pulled a couple of people who were doing DevOps and continuous deployment into a panel.
C: So we were organizing that yesterday, but that's this afternoon. Those are really exciting panels.
A: And a number of others, so much, and that's Wednesday at 3:40.
A: So anyway, yeah, thank you for coming, everybody. This was awesome. See you next month, people.