From YouTube: Ceph Code Walkthroughs: Paddles
A: For folks joining in on this recording, this is going to be a code walkthrough for paddles, which is a key component of teuthology, our integration-testing framework. So, actually, do you want to take it away?
B: So, for those of you who don't know what paddles is: paddles is a database wrapper that teuthology uses to store teuthology run and job information and all the test node information, and it's also what teuthology uses to lock and unlock test nodes. Postgres is the database we currently use.
B: I won't go through all the teuthology code today; I'll just walk through what happens on the paddles side when a teuthology job is scheduled. The teuthology scheduler adds the job to the beanstalk queue. Beanstalk works in a way where it returns a unique id for every item added to the queue, so we treat this id as the job id.
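As a rough sketch of that id-assignment behaviour (a toy stand-in for illustration only, not teuthology's actual beanstalk client):

```python
import itertools
import json

class FakeBeanstalkQueue:
    """Toy stand-in for beanstalkd: every payload put() into the
    queue receives a unique, monotonically increasing id, which is
    what teuthology then reuses as the job id."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.jobs = {}

    def put(self, payload):
        # beanstalkd hands back a unique id per enqueued item
        job_id = next(self._ids)
        self.jobs[job_id] = payload
        return job_id

queue = FakeBeanstalkQueue()
job_id = queue.put(json.dumps({"suite": "rados", "priority": 100}))
```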
B: What teuthology does is take the job id, along with some parameters of the job configuration, and store them in Postgres using paddles each time there's a change in the job status: say we take the job out of the queue and it's running, or it's in a waiting state for nodes to become available.
B
Another
important
part
is
that
before
teethology
runs
a
job,
it
checks
with
paddles
to
see
if
the
required
number
of
nodes
to
run
the
jobs
are
available
and
it
locks
them
once
the
job
is
finished
running
or
it
fails.
The
nodes
are
unlocked.
B: So, just briefly going through paddles' module structure: paddles uses a lightweight web framework called Pecan. Pecan applications follow the MVC pattern, where we have directories for models, templates, and controllers. For us, the two we use most are the models and the controllers; we'll go into these in depth.
B: What this means from a routing perspective is that when you have your paddles server and you want to make a request to nodes, just /nodes will work. But say you want a request specific to jobs: you need to go first through the runs controller, so it'll be /runs, then the name of the run, and then /jobs.
A: Yes, now we can see it. Can you increase the font size a little bit? Yep.
B: So the config file is what we use to initially start the paddles server. This has all the server-specific configuration, such as the paddles server host, the port, and the address that will be publicly available.
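In Pecan, that server configuration is plain Python; a fragment might look something like this (key names follow Pecan's convention, but the values here are illustrative, not paddles' exact file):

```python
# config.py -- illustrative Pecan server configuration
server = {
    'host': '0.0.0.0',   # publicly reachable address
    'port': '8080',      # port the paddles server listens on
}
```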
B: What Pecan does is provide you with something called a transaction hook, and it does pretty much exactly what the name suggests. So what happens is that when there's a GET request to paddles (or, actually, I think a better example is a POST request): when you make a POST request to paddles, it lands in Pecan, and Pecan has these five functions, which we have defined, and it uses them for actions before and after the database transaction. Say there's models.start.
B: What models.start does is just make a connection to the database and open a session. And then that's when the actual transaction to your database happens: say it's a POST request, so you probably create a new job. After that, one of two things can occur. Either we have an error, in which case Pecan itself does a rollback, which is the function that we've defined here, and it clears the session up: it removes the session, so the next time there's a request, a new session will be formed. Or, in the other case, where the write goes through fine, it does a commit. The reason this is pretty good is that we don't have to do explicit commits or rollbacks, and we don't have to keep checking for errors every time we're writing to the database.
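The commit-on-success, rollback-on-error behaviour can be sketched with a small stand-in (a toy session and context manager, not Pecan's actual hook machinery):

```python
from contextlib import contextmanager

class FakeSession:
    """Toy session object (illustrative; paddles really uses SQLAlchemy)."""
    def __init__(self):
        self.state = "open"
    def commit(self):
        self.state = "committed"
    def rollback(self):
        self.state = "rolled_back"

@contextmanager
def request_transaction(session):
    # Commit when the handler succeeds, roll back when it raises:
    # the same behaviour Pecan's transaction hook wraps around a request.
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
```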
B: Yeah, and in the config file, this is where you specify the URL of your database: paddles' sqlalchemy URL over here. It's SQLite by default, but in production we do use Postgres, and you can set this up locally as well.
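In config terms, that might look something like this (illustrative values; the production URL below is just an example, not the lab's actual connection string):

```python
# config.py -- illustrative database configuration
sqlalchemy = {
    # SQLite by default, for local development...
    'url': 'sqlite:///dev.db',
    # ...while production would point at Postgres instead, e.g.:
    # 'url': 'postgresql://paddles@localhost/paddles',
}
```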
B: So this has everything about the node, like the SSH public key, which the tests will need to SSH into the machine and run the tests. It keeps track of whether the machine is locked, who it was locked by, and how long it's been locked.
B: Right, okay. So every time there's an update to the nodes table, there are a bunch of checks that happen, especially because most updates happen for the locking and unlocking of nodes. What these checks basically do is check whether we're trying to lock a node that's already locked, or unlock a node with the wrong owner: if somebody else locked a node, you can't just go and unlock that node. It checks for four such cases.
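Those checks might look roughly like this (a minimal sketch; the function name, field names, and error messages are hypothetical, not paddles' exact code):

```python
def check_lock_update(node, verb, owner):
    """Sanity checks before changing a node's lock state, in the
    spirit of the checks described above (illustrative only)."""
    if verb == "lock" and node["locked"]:
        raise ValueError("cannot lock an already-locked node")
    if verb == "unlock" and not node["locked"]:
        raise ValueError("cannot unlock an already-unlocked node")
    if verb == "unlock" and node["locked_by"] != owner:
        raise ValueError("cannot unlock a node locked by someone else")
```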
B: Right, so over here we have the function lock_many. We'll see that this is used by the HTTP request when you want to lock a bunch of nodes together.
B: In this one as well, the major thing is that we have to take care of the commit and the rollback, because this is not an HTTP request that's happening; this is a method that we invoke on our own. And we check for a race-condition error here, in case there are multiple nodes being updated at the same time.
B
Okay,
so
this
is
the
job
schema.
I
won't
go
through
like
each
field,
because
most
of
it
is
the
present
in
the
job
configuration
but
few
important
things.
We
create
a
job
notes
table
which
is
a
secondary
table
of
sorts.
B: Similar to the nodes model, there are a bunch of things we check for over here when there's an update to a job. The first one is the status: is it even a valid status? And next, if we do see that the job is a success, we update the status as well.
B
Yeah,
if
we
also
check
the
what
the
previous
status
was
and
what
the
current
status
is.
So
if
the
old
status
was
that
it,
the
job,
was
queued
and
now
the
new
status
we
have
is
that
it's
running
we
update
the
started
time,
which
is
the
basically
the
time
stamp
which
tells
us
when
the
job
started,
running.
B: This part over here is where we look at the targets configuration: we see what target nodes have been mentioned, and we check whether we have that many nodes available in Postgres.
B: Okay, so href: this is where we've actually defined what the URL for all the jobs or methods will look like, so it'll be /runs/&lt;run name&gt;/jobs/&lt;job id&gt;.
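A minimal sketch of building such an href (the function name is hypothetical; the URL shape is the one described above):

```python
def job_href(run_name, job_id):
    # mirrors the URL shape: /runs/<run name>/jobs/<job id>
    return "/runs/{run}/jobs/{job}".format(run=run_name, job=job_id)
```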
B
Right
I'll
go
through
the
controller
part
first,
I
think
that's
easier.
B
Okay,
so
yeah,
like
I
said,
there's
a
root
controller,
which
is
basically
kind
of
the
landing
page
of
sorts
from
here
we
have
the
runs
controller
and
the
nodes
controller,
there's
also
an
errors
controller.
This
is
this
basically
defines
all
the
frequent
errors
that
we
might
see:
400,
403,
404
and
503.
B
So
if
the
url
ends
in
a
slash,
picon
works
in
a
way
where
it
will
check
for
an
index
method
or
the
last
control
object.
So
over
here
we
have
the
index
method.
This
is
for
a
get
request,
so
it
will
return
all
the
nodes
that
are
currently
there
in
the
database.
B
We
do
have
the
provision
to
filter
based
on,
if
you
just
want
the
nodes
which
are
locked
or
unlocked
or
just
from
mira
just
from
smithy,
depending
on
the
machine
type,
and
so
over
here
you'll
see
that
we
still
do
use
the
index
method.
So
even
for
this,
you
just
need
a
node
slash,
but
the
difference
is
that
we
specify
the
method
as
post,
so
a
get
request
can't
land
here
it
will
only
a
post
request
scan
over
here
like
we're
just
creating
a
new
node.
B
Right
so
this
function
called
lock.
Many.
This
function
has
been
defined
as
a
in
case
like
it's
they're,
trying
to
make
a
get
request
for
lock
money,
so
it
just
alerts
the
user
that
this
is
only
for
post
requests.
B: Okay, so the nodes controller also has a few functions defined to get job statistics. Over here you can see the URL you'll have to go to: you'll have to request job_stats, and the default is that it'll give you job statistics for the past few days; you can change that if you want it for a larger or smaller number of days. So it filters the jobs based on the timestamp and sends those back to you.
B
Yeah
machine
type
is
just
it
just
returns
the
or
different
machine
types
for
the
nodes
that
are
in
the
database,
okay,
so
coming
up
coming
to
the
underscore
lookup.
So
what
this
does
is
there's
this
concept
of
like
right.
Now,
we've
been
talking
about
controllers,
there's
a
concept
of
routing
to
sub
controllers.
So
what
that
does
is
that
it
provides
you
with
a
way
to
process
a
portion
of
the
url
and
then
return
a
new
controller
object
to
route
for
the
remainder,
so
I'll.
B
Yeah
the
notes
controller,
but
what
we
want
is
in
our
case,
we
want
to
segregate
the
calls
that
go
to
multiple
nodes
like
when
it's
lock
many
unlock
many
with
lock
one
or
unlock
a
single
node
or
get
details
about
a
single
node.
So
what
we
do
is
we
define
a
sub
controller
called
node
controller.
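A Pecan-style _lookup can be sketched without the framework itself (a toy version; the real controllers carry more state and decorators):

```python
class NodeController:
    """Sub-controller that handles requests about one node (sketch)."""
    def __init__(self, name):
        self.name = name

class NodesController:
    """Sketch of Pecan-style sub-controller routing: _lookup consumes
    the node-name segment of the URL and returns a NodeController to
    route the remainder of the path."""
    def _lookup(self, name, *remainder):
        return NodeController(name), remainder
```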
B
This
will
have
all
the
requests
for
just
that.
Basically
return
data,
just
about
a
single
node,
so
the
get
request
just
returns
or
data
about
the
node
whose
name
has
been
mentioned.
B: Yeah, and job_stats: the difference here is that it gives you the statistics of the jobs that were running just on this node. And similarly, for jobs, it's going to return the bunch of jobs that ran on this node: as you can see, job.target_nodes contains this particular node, so those are the jobs that were running on this node.
B: This also has something similar to the nodes controller, where we have a sub-controller here as well, called the run controller, and again the GET request just returns details about one particular run. And if we want to delete a run, we can only delete one run at a time; we can't delete a bunch of runs together.
B: I've shown in the slides before that the controllers are in a tree sort of structure, where the jobs controller comes under the run controller. So this is where that happens: jobs are linked to one particular run, which is why the jobs controller is under the run controller and not the runs controller.
B: All right, so another thing that I wanted to go through was Alembic. Alembic is basically a database migration framework.
B
What
that
means
is
or
when
we
use
it
is
that
when
we
want
to
modify
our
database
schema,
we
want
to
do
it
in
a
way
that
we
don't
have
to
stop
running
things
and
then
do
it.
So
alembic
helps
with
that
or
the
way
alembic
works
is
that
it.
It
creates
a
revision
number
which
you
can
see
over
here.
B
That
is
the
revision
of
like
the
current
status
of
your
database,
and
if
you
are
making
modifications
to
the
database,
it's
going
to
come
up
with
a
new
revision
number
and
you
can
either
upgrade
to
that
or
we
have
previous
revisions
which
you
can
downgrade
to.
So
it
makes
it
really
easy
to
go
between
different
versions
of
the
database
schema.
You
can
either
just
do
an
upgrade
or
a
downgrade
like
in
this
particular
alembic
file
or
seem
to
be
adding
the
notes
table.
B
So
if
you
want
to
upgrade
to
where
the
current
state
of
the
database,
where
we
do
have
the
nodes
table,
you
can
just
run
this
upgrade
function
and
it'll
create
the
notes
table
as
as
well
as
all
the
constraints.
The
foreign
key
constraint,
the
primary
key
constraint
and
I'll
also
create
the
job
notes
table,
which
was
the
relationship
between
the
jobs
and
the
notes
table.
B
And
if
you
want
to
go
back
like
I
mean,
if
you
want
to
go
back
to
the
previous
version,
you
can
simply
do
a
downgrade
and
you
can
see
that
it
does
it
in
the
reverse
order
that
first
it'll
drop
the
job
notes
table.
Then
it
drops
the
constraint
and
then
I'll
drop
the
notes
table.
So
it
makes
things
pretty
clean.
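The create-forward, drop-in-reverse ordering can be sketched with plain SQLite standing in for Alembic's op helpers (the schema here is heavily simplified, not paddles' actual migration):

```python
import sqlite3

def upgrade(conn):
    # Forward migration: create nodes first, then the job_nodes
    # link table that references it.
    conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute(
        "CREATE TABLE job_nodes ("
        " job_id INTEGER,"
        " node_id INTEGER REFERENCES nodes(id))"
    )

def downgrade(conn):
    # Reverse migration: drop in the opposite order, so the dependent
    # table goes away before the table it references.
    conn.execute("DROP TABLE job_nodes")
    conn.execute("DROP TABLE nodes")
```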
B: Okay, so for deployment: when we deploy the actual paddles server, not when we have it locally, we use something called Green Unicorn (gunicorn), which is what's on top over here; this is the config file for that. What this does is start multiple workers, which are multiple processes, so you have multiple processes listening for requests to paddles.
B
Currently
we
decide
the
workers
based
on
the
cpu
count
and
to
start
this,
I
think
I
have
the
amount
here
yeah.
This
is
the
command
to
start
the
paddle
server,
this
script,
the
container
start
shell
script
is
actually
meant
for
the
docker
file,
so
this
dockerfile
can
be
used
for
testing.
It
brings
up
a
docker
with
paddles
and
it
runs
the
container
start
script,
which
basically
starts
the
paddle
server.
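A gunicorn config along those lines might look like this (the setting names bind and workers are real gunicorn settings, but the values and file layout here are illustrative, not paddles' own config):

```python
# gunicorn_config.py -- illustrative gunicorn configuration
import multiprocessing

bind = "0.0.0.0:8080"                  # address the server listens on
workers = multiprocessing.cpu_count()  # one worker process per CPU
```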
B
I'll
just
go
through
the
test
module
once
there
are
quite
a
few
tests
that
are
written
for
controllers
and
modules,
so
I'll
go
through
one
of
the
model
tests
so
yeah.
This
is
for
the
job
model,
so
it
it's
testing
like
the
run
creation,
the
job
job
deletion
in
this
case,
and
it's
even
testing
if
the
database
relationship
that
we've
defined
works.
So
there's
quite
a
lot
of
testing
here.
B
B
A: Maybe for the testing piece, it would be interesting to look at one of the race-condition tests; those are a little bit more elaborate. Yeah, yeah.
B
The
race
condition
test
the
test.
We're
basically
trying
to
create
here
is
a
situation
where
we're
forcing
a
concurrent
update
to
occur,
so
we
kind
of
want
the
requests
to
fail,
and
because
we
do
have
a
few
retries
that
I
had
showed
on
the
panel
side
and
we're
trying
to
see
if
those
retries
actually
work
like
over.
Here,
we
have
these
attempts
we're
doing
and
how
we
do.
That
is
that
we
use
the
threading
module
and
update
multiple
jobs
at
the
same
time.
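One way to sketch that forced-concurrency idea (a toy optimistic-concurrency record with a retry helper; this is illustrative, not paddles' actual storage layer or test code):

```python
import threading

class StaleUpdate(Exception):
    pass

class VersionedJob:
    """Toy record where a write fails with StaleUpdate if another
    writer committed in between, which is the race the test forces."""
    def __init__(self):
        self._lock = threading.Lock()
        self.version = 0
        self.status = "queued"

    def read_version(self):
        with self._lock:
            return self.version

    def write(self, expected_version, status):
        with self._lock:
            if self.version != expected_version:
                raise StaleUpdate("concurrent update detected")
            self.version += 1
            self.status = status

def update_with_retries(job, status, attempts=5):
    # The retry loop under test: re-read and try again whenever
    # a concurrent writer got in first.
    for _ in range(attempts):
        try:
            job.write(job.read_version(), status)
            return True
        except StaleUpdate:
            continue
    return False
```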
B
We
also
do
it
for
over
here
for
a
read,
write
dependency,
or
we
saw
this
while
unlocking
nodes
that
sometimes
it
happens.
It's
trying
to
what
basically
happens
is
as
shown
that
we
check
for
multiple
conditions
when
we
unlock
a
node.
So
it's
trying
to
read
saying:
okay:
is
this
node
already
unlocked,
but
also
somebody
is
trying
to
write
to
the
node
at
the
same
time.
B
A
I
guess
another
area
that
might
be
interesting
would
be
maybe
the
changes
that
you
are
making
around.
B
Okay,
so
the
ongoing
book
in
paddles,
like
josh
manson,
is
the
addition
of
a
cueing
mechanism.
So
the
reason
we're
doing
this
is
because
we
want
to
eliminate
the
usage
of
beanstalk
from
tutology,
so
we
want
to
use
paddles
as
the
queue
as
well
so
having
the
q
in
paddles
actually
gives
us
or
the
ability
to
add
a
lot
of
features,
because,
as
you've
seen,
we
stored
a
lot
of
the
job
configuration
in
parallels
which
we
can
update
at
any
point.
B
So,
similarly,
with
a
new
queue
coming
in,
we
can
also
update
quite
a
few
things
which
we
couldn't
in
beanstalk.
One
example
of
this
is
when
we
add
a
job
to
beanstalk
or
with
a
priority
which
happens
in
all
technology
jobs.
B
We
have
a
priority
for
the
job
once
the
job
is
in
the
queue
and
it's
been
assigned
a
priority,
you
can't
change
the
priority
of
the
job,
so
what
happens
sometimes
is
that
if
it's
a
low
priority
job,
sometimes
it
ends
up
staying
in
the
queue
for
a
really
long
time
without
being
run
with
paddles.
B
A
I
agree
thanks.
That's
right,
you
covered
everything
very
well,
that's
the
same
for
the
questions
feel
free
to
leave
them
as
like
comments
on
this
recording
or
well.
I
said
that
that
stuff
that
I
have
directly.