From YouTube: Teuthology Internals: Overview and Scheduling
Description
* Ceph Developer Guide: https://docs.ceph.com/en/latest/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-intro/
* Ceph Teuthology Documentation: https://docs.ceph.com/projects/teuthology/en/latest/
* Ceph Teuthology project wiki page: https://tracker.ceph.com/projects/ceph/wiki/Teuthology
All right, looks like we've got a good group already, so let's kick this off. Today we're going to be talking about the teuthology code in more depth. We can start by going over the general structure and then dive a bit more deeply into how running a suite works internally, from the scheduling of it to how it actually runs.
Under docs.ceph.com, you can see there's a whole bunch of different commands here. The main one, the teuthology command itself, is what actually runs tests and tasks, and the rest help with a number of different ancillary procedures; but all of these can generally be found underneath the scripts directory.
The command-line tool itself is in run.py, and we use a nice Python module called docopt to implement argument parsing. What this module does is let you write out the usage text itself and derive the parser from that, instead of writing in code what the options are.
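As a rough illustration of how docopt works (this usage text is a made-up example, not teuthology's actual one), the parser is generated directly from the module docstring:

```python
"""
Usage:
  example-run [--verbose] [--archive DIR] <config>...

Options:
  -v, --verbose  Be more verbose.
  --archive DIR  Path to store test results in.
"""
from docopt import docopt

if __name__ == '__main__':
    # docopt parses sys.argv against the usage text above and returns a
    # dict like {'--verbose': True, '--archive': '/tmp/run', '<config>': [...]}
    args = docopt(__doc__)
    print(args)
```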
The teuthology subdirectory is where most of the main code is, and you'll see a number of files in this directory itself which are often related to those CLI commands. So run.py, for example, which we were just looking at, corresponds to the main teuthology command, and if we look for the main function here we can see the entry point to it.
So docopt gives us an args dictionary, which contains all the arguments that were passed in, as well as the default values if none were passed. We're just accessing those here, and then getting everything set up to actually start running a job.
That's just a brief look at where to find the code for any of these scripts. In general, you can look at the scripts directory to see a particular command, like lock, for example, and see its usage there. This one's actually using an older style of argument parsing, using argparse, but it ends up having a similar shape, where it calls into a different module's main function to actually run the bulk of the execution.
Another commonly used element in teuthology is the concept of accessing remote machines and interacting with them. This is all done through orchestra (a subdirectory rather than a submodule). To represent the connection to a given node, we have this concept of a Remote.
So an orchestra Remote is again a kind of wrapper around an SSH client, in this case using the underlying SSH library from paramiko; essentially it's letting you connect and reconnect to the same host, run commands on that host, and get some information about it.
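For a sense of what's underneath, here's a minimal paramiko sketch of the connect-and-run pattern a Remote wraps (the hostname and command are placeholders; the real Remote adds reconnection, role bookkeeping, and much richer run() handling):

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('smithi001.example.com', username='ubuntu')

# Run one command on the host and collect its output and exit status.
stdin, stdout, stderr = client.exec_command('uname -r')
print(stdout.read().decode(), stdout.channel.recv_exit_status())

client.close()
```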
Then there's the Cluster class, which can be used to execute the same command across multiple machines at once; you'll often see it used in teuthology via the ctx.cluster attribute within tasks.
If you remember from last time, there's the context variable (ctx) in teuthology, and it serves as kind of a global object that aggregates state that's accessible throughout the entire program. The ctx.cluster object that's part of that is one of these Cluster instances; it represents all the nodes that teuthology is talking to, and you can do things like filter a cluster down to a particular subset of roles.
And conversely, you can exclude a particular role to create a new Cluster object that's able to run commands on everything other than a particular daemon's node.
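A quick sketch of that filtering as you'd see it inside a task (the role names are just examples, and ctx is the usual task context passed in by teuthology):

```python
def example_filtering(ctx):
    """Illustrative only: filter the cluster before running commands."""
    # Keep only the remote(s) running osd.0:
    osd0_only = ctx.cluster.only('osd.0')
    # Everything except the node(s) hosting mon.a:
    without_mon = ctx.cluster.exclude('mon.a')
    # Run the same command on every remote in the filtered cluster.
    without_mon.run(args=['uptime'])
    return osd0_only
```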
Next, in the daemon subdirectory, we'll see we have this concept of a DaemonGroup.
That's a collection of all the daemons that are running, and this kind of object is created by the ceph or cephadm tasks when they're setting up a cluster. So there will generally be one DaemonGroup for each Ceph cluster that's set up, and it'll have a daemon object for each of those running or non-running daemons: for each osd, for each monitor, for each mds, for each rgw, et cetera.
These are essentially added into a dictionary internally, based on the generic role, like osd, mon, or mds, and the particular ID, like 0 for osd.0 or a for mon.a.
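Conceptually, the bookkeeping looks something like this (a hypothetical sketch of the shape, not the real class):

```python
# role type -> daemon id -> daemon state object
daemons = {
    'osd': {'0': '<DaemonState for osd.0>', '1': '<DaemonState for osd.1>'},
    'mon': {'a': '<DaemonState for mon.a>'},
    'mds': {'a': '<DaemonState for mds.a>'},
}
# Looking up osd.0 is then just:
osd0 = daemons['osd']['0']
```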
Internally, the daemon objects have an understanding of how to start, restart, and stop the daemon, and they have different kinds of modes.
You can see how this is implemented more directly in the DaemonState object, which represents an individual daemon.
It has its individual ID, its Python logger, and the ID of the cluster, which is generated when the cluster boots up. Finally, we get the process object, which starts out empty until we start the process later, plus extra keyword arguments that are used when running the particular daemon.
Those have methods to restart the daemon, restart it with new arguments, figure out whether the process is running or not, and start and stop the daemon.
start() is essentially a wrapper around restart() that checks whether we're running or not, and issues a warning if we're trying to start an already-running daemon. A lot of these have timeouts, so that we don't end up waiting forever if the daemon fails to come up or fails to stop for some reason.
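Here's a loose, hypothetical sketch of that DaemonState idea: a record of how to launch one daemon, plus helpers to (re)start and stop it. The names and details are illustrative, not the real implementation:

```python
import logging
import subprocess

class DaemonSketch:
    """Tracks one daemon: how to run it, and its current process."""

    def __init__(self, role, id_, args):
        self.role, self.id_ = role, id_
        self.args = list(args)            # command used to run the daemon
        self.log = logging.getLogger(f'{role}.{id_}')
        self.proc = None                  # empty until the daemon is started

    def running(self):
        return self.proc is not None and self.proc.poll() is None

    def restart(self, *extra_args):
        self.stop()
        self.proc = subprocess.Popen(self.args + list(extra_args))

    def start(self):
        # Like the real code: warn instead of failing on a double start.
        if self.running():
            self.log.warning('%s.%s already running', self.role, self.id_)
            return
        self.restart()

    def stop(self, timeout=300):
        # Bounded wait so we never block forever on a wedged daemon.
        if self.running():
            self.proc.terminate()
            self.proc.wait(timeout=timeout)
        self.proc = None
```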
This works a little bit differently in a cephadm setup, where we're actually running the daemons via systemd, just like a regular Ceph user would; we're not doing anything special in teuthology. So for these commands, rather than running the daemons directly, we're using systemd commands to control them.
And then we use things like systemctl to kill them, and journalctl to get the actual logs from the services.
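A sketch of what that looks like in terms of commands run on the remote (the unit naming here is an assumption for illustration; cephadm derives it from the cluster fsid and the daemon's role and ID):

```python
def systemd_control(remote, fsid, role_id, action='restart'):
    """Drive a cephadm-managed daemon via systemd on a Remote.

    fsid and role_id are placeholders, e.g. role_id='osd.0'.
    """
    unit = f'ceph-{fsid}@{role_id}'
    remote.run(args=['sudo', 'systemctl', action, unit])

def fetch_journal(remote, fsid, role_id):
    # Pull the daemon's logs from the journal rather than a log file.
    remote.run(args=['sudo', 'journalctl', '-u', f'ceph-{fsid}@{role_id}',
                     '--no-pager'])
```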
So this is a subclass of DaemonState; it's overriding pretty much all the methods there, but it has the same basic structure.
There's also adjust-ulimits, which makes sure that we're setting the correct ulimits, for example setting the limit for core files to unlimited so that we can get core dumps.
If you look at, for example, the ceph task, going back to it, you can see the way those are used.
You'll see this is pretty typical of any command run in teuthology: we usually have these adjust-ulimits and ceph-coverage helpers. The coverage helper is needed for code coverage for C++ programs. This isn't enabled currently, but the building blocks are still there, so if code coverage were something we wanted to resurrect, it could be done without too much trouble. And for the non-cephadm deployments today, we're still using this daemon-helper command to stay connected to the cluster and terminate the daemons automatically.
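The layering ends up looking roughly like this when a daemon is launched (a sketch only; the exact flags and paths vary):

```python
def launch_osd(remote, archive_dir, osd_id):
    """Hypothetical sketch of the wrapper nesting around one daemon."""
    remote.run(
        args=[
            'adjust-ulimits',                # unlimited core files, etc.
            'ceph-coverage',                 # no-op unless coverage builds
            f'{archive_dir}/coverage',
            'daemon-helper', 'kill',         # lets us terminate the daemon
            'ceph-osd', '-f', '-i', str(osd_id),  # the daemon, foregrounded
        ],
        wait=False,  # daemon keeps running; we keep the connection open
    )
```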
That makes it a little bit simpler to interact with the daemons directly, but it's not very helpful in the cephadm world, where we're doing everything through systemd anyway.
This was improved recently, over the summer, by a Google Summer of Code student, who landed a new dispatcher.
Generally, when you're running a test suite, you will have pushed your branch to a ci repository to kick off package builds in Shaman; then you'll run the teuthology-suite command to go ahead and schedule jobs using those packages.
Individual job JSON blobs are put into a beanstalkd work queue, which is a priority queue that runs strictly in order.
Then eventually, on the other side, on a central machine, the teuthology-dispatcher command is running, listening to the queue; there's one for each machine type. It takes jobs out of the queue, runs the next-highest one, and reports its status to the Paddles API, which is a wrapper around a Postgres database and is also the backend for Pulpito, the web interface for viewing the results of teuthology runs.
Once the dispatcher chooses to run a job, it locks the machines and then spawns a subprocess, which is the dispatcher in supervisor mode; that goes ahead and does the time-consuming work of re-imaging the newly locked machines and actually running the tests.
That happens in the background, and this is a bit of a departure from the earlier operation: prior to this summer, we had a teuthology-worker process instead of a dispatcher, which was pulling things off the queue. But there were a whole bunch of them pulling things off the queue in parallel, and they were all competing to acquire locks on machines and run jobs at the same time.
A
So
that
meant
that
the
priorities
in
the
queue
weren't
strictly
followed,
and
if
you
had
a
job
that
was
trying
to
block
more
machines
than
others
like
say
five
or
ten
machines,
it
would
often
have
a
hard
time
finding
enough
three
machines
and
there
would
be
jobs
competing
with
it.
That
would
only
be
waiting
for
two
or
three
machines,
for
example.
Those jobs would get their machines locked much sooner. With this dispatcher, since we're only trying to lock machines for one job at a time, that's no longer an issue: we can now easily run jobs that require more machines, and we're strictly following the priority order in the queue, so there are no more priority inversions happening.
This is a flowchart describing the general process here. The dispatcher is running continuously, waiting for jobs to come in from the queue, then locking machines, adding those machines into the configuration of the job under the targets section, and then invoking the teuthology-dispatcher in supervisor mode, which then runs the teuthology command to actually run the test.
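Condensed into code, the loop looks something like this hypothetical sketch (the queue and lock objects are stand-ins, and the exact supervisor flags may differ from teuthology's real internals):

```python
import json
import subprocess
import tempfile

def dispatch_loop(queue, lock_server):
    """queue: a beanstalkc-style connection; lock_server: a paddles-style
    client. Illustrative only."""
    while True:
        job = queue.reserve(timeout=60)    # next job, strictly by priority
        if job is None:
            continue                       # queue was empty; check again
        config = json.loads(job.body)
        # Blocks until enough machines are locked for this one job:
        config['targets'] = lock_server.lock_many(config['roles'])
        # Write the augmented config out for the supervisor.
        with tempfile.NamedTemporaryFile('w', suffix='.yaml',
                                         delete=False) as f:
            f.write(json.dumps(config))
        # Asynchronous: the supervisor re-images machines and runs the
        # test while we go back to the queue for the next job.
        subprocess.Popen(['teuthology-dispatcher', '--supervisor',
                          '--job-config', f.name])
        job.delete()
```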
The supervisor starts out by re-imaging the machines, and note that it brings its own log file, called supervisor.<job_id>.log, in the job's results directory. So if you ever have some kind of error occur before the job actually gets run, you may be able to debug it by looking at the supervisor log to see if something went wrong, for example during the re-imaging process.
It ends up constructing the teuthology command based on the job's configuration, running the job, and waiting for it to finish. If the job ends up timing out, the supervisor process itself will connect to the machines that the job was using, copy the logs over and compress them, and then clean everything up.
Jobs timing out in this way are what make a job show up as dead in Pulpito.
So another big improvement with the dispatcher is that we actually get logs when jobs go dead. The old teuthology-worker mode didn't have a separate process watching and transferring logs when we hit this timeout, so we weren't getting any kind of logging beyond the teuthology log itself.
So let's take a quick look through how this works internally.
We start out by collecting information about what we're going to do at the end of the run, like whether we're going to email the results somewhere.
There's a bunch of handling for when we need to get information from a previous run: for example, if we're trying to rerun only the failed jobs from another run, we'd look at the previous run's job descriptions to generate some filters automatically to get those same exact jobs again. If we're doing a rerun, we'll also be pulling the randomized seed out of the archived version there.
Then we'll be constructing this Run object to represent this particular instance of a suite being scheduled, and all the jobs inside of it, and we'll go ahead and prepare and schedule all the jobs in that run.
As part of the scheduling process, the teuthology-suite command is also figuring out where it's going to get the tasks from the ceph repo; this is called the suite repo. The suite repo can be the same as the ceph repo, or it can be a different branch: if you're trying to test a change to one of these qa tasks, you might create a new branch and run that against the master version of Ceph, for example.
Another useful piece to look at here is what we end up with when we're creating a run and building the initial configuration for every single job.
Every single job will have a branch and SHA1 specified for Ceph and for teuthology. It's going to have where to upload things and the machine type this is running on. For any scheduled job, we're always going to clean up after ourselves: if we encounter an error, we're not going to leave the machines sitting around. And we're going to have a particular OS type and version for each job.
Then there are the overrides, which provide extra configuration, like turning on debug levels for a bunch of different subsystems in general, and these are applied to every single job you're scheduling.
That's why on every job you'll see all of these overrides in its configuration file, in addition to any that are explicitly added by the suites.
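For a flavor of what those fragments do (the values here are invented for illustration), an override might bump debug levels for every job in the run:

```python
# Merged into each job's config under the 'overrides' key.
overrides = {
    'ceph': {
        'conf': {
            'mon': {'debug mon': 20, 'debug ms': 1},
            'osd': {'debug osd': 20, 'debug ms': 1},
        },
    },
}
```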
Just as the Ceph and teuthology branches are specified above, we're also going to specify the repository where we look for the ceph qa tasks directory, which is the suite repo, and then the corresponding branch and SHA1 that it's part of.
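Putting that together, each scheduled job's config carries roughly this set of fields (the names approximate the real job YAML; the values are made up):

```python
job_config = {
    'branch': 'wip-my-fix',            # ceph branch under test
    'sha1': 'abc123',                  # ceph sha1 the packages were built from
    'teuthology_branch': 'master',
    'machine_type': 'smithi',
    'os_type': 'ubuntu',
    'os_version': '20.04',
    'suite_repo': 'https://github.com/ceph/ceph-ci.git',
    'suite_branch': 'wip-my-fix',      # where qa/tasks comes from
    'suite_sha1': 'abc123',
}
```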
The actual interaction with beanstalk ends up happening in a second subcommand call.
There's a couple of housekeeping things here: we actually have a couple of extra jobs in the queue as part of each run. One represents that we're just starting off the run; this is the first_in_suite option, which lets the server side do some initialization, in terms of checking out the right version of the suite that we're running for this particular run, at the specified SHA1. Correspondingly, we have an extra job added at the end, called the last_in_suite job, which teuthology uses to know when a run is over. When this job is finished, it kicks off any post-processing that we do, which includes running scrape.py to analyze the results and see what the different kinds of failures were, as well as sending an email.
So this job is the YAML file describing the configuration of this particular test; we're just adding it to the queue at the job's given priority level, and then we're talking to Paddles to tell it that we've just queued a job.
If you do want to inspect these kinds of YAML files before running them, the teuthology-suite command has a --dry-run option, and if you use verbose mode it will spit out the full YAML blob that it would be scheduling, so you can see exactly what's going to be scheduled before you actually run it.
Over in the dispatcher, we're going to have a particular archive directory where we store all the results of all the jobs that are running, and we'll set up the logging levels and the log file for this particular dispatcher.
We'll connect to beanstalk and watch our particular tube, which means we're listening for new jobs coming out of a particular queue.
These calls are internally just cloning their repositories from GitHub, or a mirror of them within the lab, to make sure we've got the latest code up to date; and then we've just got an infinite loop where we try to get jobs.
You've got a couple of control options where, if you touch a certain file, you can make the dispatchers restart themselves or stop themselves. This is helpful if you want to make sure that they're getting an updated version of the teuthology code: you can touch a file, and since they're not in the middle of doing anything at that point, once they've finished processing whatever they're doing at the moment, they'll go ahead and restart themselves, check out the new version of teuthology, and continue running.
Then this is reading from the beanstalkd queue, trying to get the next job; beanstalk will just give it to us directly in priority order, so this would be the next job in priority order from beanstalk.
There's a 60-second timeout, so if the queue is empty, we wait for 60 seconds and continue, going through our usual checks to see if we need to restart or anything like that before checking the queue again. But usually the queue is not empty, so we have a job to run.
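The consumer side of the queue, sketched with beanstalkc again (reusing the `conn` from the producer sketch above; the tube name is made up):

```python
import json

conn.watch('smithi')              # listen on our machine type's tube
job = conn.reserve(timeout=60)    # next job by priority; None on timeout
if job is None:
    pass    # queue empty: run the restart/stop checks, then loop again
else:
    config = json.loads(job.body)  # the job's configuration blob
```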
We load the job configuration from the YAML parser and start setting it up; this is all handled in the prep_job function, which is still shared with the teuthology-worker code.
This is again cloning the teuthology repository from git if we need to, and then we'll do the same thing for the Ceph branch and the suite repository.
We won't actually be fetching them quite yet; I think we do that later down here. We're just verifying that we have valid SHA1s. And here we're actually going ahead and fetching the qa suite, so this will include all the qa tasks.
If that fails, we'll report the job as dead to Paddles; similarly, if for some reason we couldn't finish cloning or bootstrapping teuthology in time, we again mark the job as dead and raise SkipJob to the higher-level process of the dispatcher.
Going back to the caller here: if for any reason this job configuration was invalid, like the branch wasn't there, we just raised the SkipJob exception, and we continue on and go try the next job.
This ends up talking to Paddles, which, in addition to storing the results, also stores the database of machines and which ones are locked and unlocked; this call will block until we have enough machines and have locked them.
Then we finally have enough machines to run on, so we go ahead and launch a child process, our teuthology-dispatcher in supervisor mode, to serve as a watchdog for this individual test job, running it with some extra logging and using the specified version of teuthology.
We dump the YAML file for the configuration there, and then finally run this supervisor as a subprocess. Note that this is asynchronous: we're kicking off the job as a subprocess, but we won't actually wait for it to finish; we go ahead and continue with the next job, and the only blocking call here is when we're actually going and trying to lock machines.
Okay, let's take a look at the supervisor process, then. If you recall, this is basically taking the job configuration and setting up; we already have the machines we need to run it now, so we need to do the time-consuming steps of actually re-imaging those machines. These machines are in the targets section of the job configuration, so we'll go ahead and re-image them. This is a kind of abstraction over different kinds of machines: if these machines were virtual machines, they would just be started with a new image.
In that case there wouldn't be any actual re-imaging being done; but if they're bare metal, then we'd be talking to a FOG server in the lab, asking it to image the machine with a particular OS and version, and that process could take, say, 10 to 15 minutes sometimes at most.
If this job is one of the special ones, where it's a marker only representing the suite beginning or the suite ending, it's marked as first_in_suite or last_in_suite.
In that case, we use the teuthology-results command to do some processing. If it's the first one, we write down the random seed that was used to schedule this run, and the particular subset that was used in scheduling it, so that we can recreate it later if we want to. If it was the last in the suite, representing the end of the jobs, we use the results command to wait for all the tests prior to it to finish, and then send out an email of the results and run the scrape process to analyze them.
So that's all just the handling for the beginning and end markers of the suite, of the overall run.
This is a little bit of a compatibility hack; it's not really used now, just some backwards-compatibility code we could probably remove at this point.
This is again an asynchronous call with Popen, and then we'll watch that subprocess to make sure that it's continuing to run; if it does time out, we can kill it, clean up after it, and save the logs.
Any more questions at this point? I think we've gone over how the jobs start out, from being scheduled to going through the queue to actually being run.
Maybe it's worth noting some other general things about teuthology development itself. For example, we have tests for teuthology as well: you'll see we have several subdirectories in different areas called tests, which contain a bunch of unit tests. For the most part, these can all be run with tox.
You'll see we have a few different environments in tox: py3 is the one for running the unit tests, and there's also flake8 for checking style and doing some basic static analysis.
These are very helpful if you're doing any kind of development on teuthology: you can run tox -e py3 to run just the unit tests, or plain tox to run all the environments without choosing particular ones. I think we may have a few more sessions like this in the future; we'll figure out exactly what else we want to cover, but I think this concludes what we've got scheduled for now, and at least it'll all be up on YouTube in the future as well.