Description
Join us monthly for Ceph Tech Talks: https://ceph.io/en/community/tech-talks/
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A: Hi everyone, thank you for tuning in. My name is Junior. A little bit about me: I've been with Red Hat for a little over a year, I'm on the RADOS team, and I've also worked a bit on Teuthology. Today I'll be presenting how we can run Teuthology locally, using Sepia lab machines as test nodes.
A: The problem we faced when trying to make changes to Teuthology, to make it better, is that the installation process is very difficult. Teuthology involves about five different services, Postgres, Paddles, Pulpito, Beanstalkd, and so on, and getting them all working together is challenging. On top of that, the different developers who want to contribute to Teuthology all have different environments: different operating systems, different packages. So our solution is to automate the setup process using Docker containers, and to write clear documentation for the parts that cannot be automated, such as adding machines as test nodes. The use cases for this are, as I said before, Teuthology developers who want to contribute, as well as external contributors such as interns from Outreachy and GSoC.
A: We've recently posted a project on Outreachy and GSoC on making Teuthology better at detecting unit test cases, so this Teuthology setup script will enable those interns to get started contributing to the project. The last use case is Ceph developers who can't wait for their scheduled job to move up the queue: they can spin up their own Teuthology locally, lock some test nodes, and use it for themselves.
A: I don't really recommend using it for that last case, but I've got to cover all the cases. I'll go right to the demo side of things and how you can set it up. First, you want to have a Teuthology config file. Assuming you have access to the production Teuthology host, it's in this directory, and basically all you have to do is copy the whole thing into your own clone of the Teuthology repo, at the same directory level as the README file, and then change a bunch of things, like the lock server.
A: These are its configuration files; you copy them into the same directory as the README here in the Teuthology docker-compose docs. I already did that, because I don't want to show the process on screen: there's sensitive information, like IP addresses, in the Teuthology config file. So I've already done it on my end; it should be here. And yeah, you have to change a bunch of things like the lock server, the results server, and the queue host.
A: Basically, if you follow these steps, you should be good. We want to keep reserve machines at zero, because I think the default number of reserved machines is two. If you don't have two machines available in your Teuthology, it won't start running jobs, so you set the minimum to zero; then, even if you have only one test node, it will run the job.
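As a rough sketch of what that config might look like, here is a minimal local config written out with a heredoc. The key names follow the teuthology documentation, but the hostnames, ports, and paths are placeholders, not the values from the demo; verify the keys against the docs for your version:

```shell
# Sketch of a minimal local teuthology config file.
# All values below are placeholders pointing at the docker-compose services.
cfg="$(mktemp -d)/.teuthology.yaml"
cat > "$cfg" <<'EOF'
lock_server: http://localhost:8080       # paddles
results_server: http://localhost:8080    # paddles
queue_host: localhost                    # beanstalkd
queue_port: 11300
lab_domain: front.sepia.ceph.com
archive_base: /archive_dir
reserve_machines: 0  # default is 2; with one test node, jobs would never start
EOF
echo "wrote $cfg"
```

The `reserve_machines: 0` line is the important one from the talk: with the default of two, a single-node local setup never dequeues a job.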
A: Before the next part, I also want to mention that you should have Docker installed, and in this case you need VPN access to the Sepia lab. You need the VPN in order to grab the test nodes, lock them from the production Teuthology, and add them to your own local Teuthology. And make sure the VPN is running before you start the Docker processes, because starting it afterwards can sometimes mess up the networking; that's something I learned the hard way.
A: Next, what the start script does is clone the Teuthology repo, build a Docker image of Teuthology, and run docker-compose. I'll keep going while it's running. The process you see here will be faster for me, because I already have the images built; it's only the Teuthology image that isn't, oh, it's there too. But you might find the process slower when you do it on your own.
A
So
in
the
meantime,
I'm
going
to
explain
to
you
how,
like
docker,
composed
side
of
thing
works
here.
So
basically
we
have
five
services
for
tautology.
We
have
postgres,
which
is,
as
you
know,
is
the
database
of
tautology,
and
basically
we
have
that
we
get
the
image
from.
I
think
the
docker
I
think
docker
hub
here.
So
it
has
it's
pretty
much
a
standard
way
of
of
using
postgres
here
in
docker
compose
and
then
the
next
service
we
have
is
paddles.
A
You
can
grab
the
image
from
the
io,
which
is
where
the
latest
image
of
paddles
is,
and
these
are
all
just
environments
that
we
set
for
it
to
connect
with
postgres
and
how
how
it
works
is
that
paddles
will
wait
until
postgres
is
finished.
A
You
know
doing
its
thing
installing
and
getting
up
and
running
before
it's
actually,
I
will
start
installing
stuff
building
its
image.
You
can
see
here
it
says,
depends
on
postgres,
so
the
condition
of
progress
has
to
be
healthy
and
how
we
check
that
is
using
this
command
line.
Postgres
is
ready,
and
you
can
see
here.
A
The
paddle's
health
check
is
basically
a
curl
to
the
to
to
paddles
url
itself,
seeing
if
it's
it's
up
and
running,
we
expose
it
to
port
8080
and
so
such
that
other
services
can,
you
know,
interact
with
it.
Popido
here
also
gets
the
image
from
e.io,
so
you
don't
have
to
build.
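The dependency and health-check wiring described here can be sketched as a compose fragment. This is a hedged reconstruction, not the project's actual file; the image names and registry paths are placeholders I made up for illustration:

```shell
# Illustrative docker-compose fragment, written to a temp file so the
# structure is visible. Service wiring mirrors the talk: paddles waits
# for postgres to be healthy; postgres health is probed with pg_isready.
frag="$(mktemp -d)/docker-compose.yml"
cat > "$frag" <<'EOF'
services:
  postgres:
    image: postgres:14
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U paddles"]
      interval: 5s
      retries: 10
  paddles:
    image: quay.io/example/paddles   # placeholder registry path
    ports:
      - "8080:8080"
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8080/ || exit 1"]
EOF
echo "wrote $frag"
```

The `condition: service_healthy` form is what makes Paddles block on Postgres's health check rather than merely on container start.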
A
And
we
have
the
beanstalk
service,
where
you
have,
you
actually
have
to
build
it.
The
file
is
if,
if
anyone
is
interested,
it
actually
is
like
you
go
to
before
and
then
go
in
beanstalk
alpine,
and
you
can
see
the
darker
file
for
that
and
you
you
build
that
and
totally
and
lastly,
we
have
tutorial.
A
We
also
have
a
docker
file
for
tautology.
It's
right
here.
It's
super,
it's
very
similar
to
if
you
you
know,
go
on
topology
documentation
and
try
to
set
it
up
locally
without
using
my
like
script,
our
script.
You
know
you
have
to
run
like
bootstrap
and
everything
like
that,
but
in
certain
operating
systems
like
ubuntu,
the
bootstrap
will
prompt
you
that
okay,
some
packages
needs
to
be
installed
in
order
to
do
it.
A
So
we
kind
of
automated
that
process
as
well
as
we
create
a
directory
for
archiving
all
the
runs
as
well
as
archiving
the
locks
for
the
technology.
A
And
we
actually
run
a
totality
suite
after
we
finish
with
the
bootstrap
to
schedule
a
dummy
job
which,
which
is
for
the
sake
of
sanity,
where
we
will
make
sure
that
this
is
how
we
make
sure
that,
okay,
after
in
the
steps
you
see
in
the
future
here
that
we
are
running
to
taji,
like
correctly
so
yeah.
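The sanity-check scheduling step might look roughly like this. The suite name and flags are my assumptions from teuthology-suite's usual CLI, not copied from the actual script; check `teuthology-suite --help` on your version:

```shell
# Hypothetical sketch of the Dockerfile's sanity-check step: schedule a
# trivial "dummy" job so we can confirm the whole stack is wired correctly.
suite_cmd="teuthology-suite --machine-type smithi --suite dummy"
echo "$suite_cmd"   # in the container, this command would actually run
```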
D: [inaudible]

A: I'm just waiting for it to finish; I can answer any questions in the meantime, if you have any.
A: Yeah! Okay, so as you can see here in the logs, we schedule a dummy job. Let me try to share my screen correctly.
D: [inaudible]

A: Okay, so you go to where Pulpito is, localhost:8081, and you can see there are two jobs right now. This first one, I forgot: I did a test demo before this and forgot to stop the containers, but it's fine. This is the new job that we now have in the queue, resulting from that part of the Dockerfile where we schedule a dummy job. Okay, so moving on: this is the part where you add test nodes to Teuthology. What you need to do is have a private key in order to connect with the machines in Sepia.
A
So
what
I'm
going
to
do
is
I'm
going
to
go
into
the
tutorial
container
and-
and
I
want
to
basically
like
this-
is
a
manual
process
that
I
need
to
create
like
a
directory.
A
And
basically
copy
paste,
the
id
rsa
so
for
folks
who
have
access
to
the
cpu
lab,
you
should
have
this
in
your
like
machines
that
you
use
to
access,
so
it
would
be
it'd,
be
exactly
like
idr
say
I
just
have
that
in
handy
and
file
I'll
just
do
that
quickly.
So,
hopefully
you
can't
really
see
what
my
key
is
yeah
and
give
that
a
permission
of
600
so
that
so
that
way
it
it
works
and
also
it
is
good
to
have
a
config
file
here.
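This manual step can be sketched as follows. The key content and the host pattern are placeholders (inside the container, the target would be `~/.ssh`); the config entries are my assumptions about a reasonable Sepia setup, not copied from the demo:

```shell
# Manual step sketch: put your Sepia private key where teuthology can use it.
# Done here in a temp dir so it is runnable anywhere; in the container the
# destination would be ~/.ssh.
sshdir="$(mktemp -d)/.ssh"
mkdir -p "$sshdir"
printf '%s\n' "PASTE-YOUR-PRIVATE-KEY-HERE" > "$sshdir/id_rsa"  # placeholder
chmod 600 "$sshdir/id_rsa"   # ssh refuses keys with looser permissions
cat > "$sshdir/config" <<'EOF'
Host *.front.sepia.ceph.com
    User ubuntu
    IdentityFile ~/.ssh/id_rsa
EOF
ls -l "$sshdir"
```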
A: And yeah, moving on to how you actually reserve a machine. For the demo, I know it really depends on how busy the production Teuthology is. So what I'm going to do is try to lock a smithi machine from the Sepia lab and then add it to my local Teuthology as a test node, but I kind of cheated and already locked one beforehand, just so it works.
A
So
my
my
machine
is:
is
smithy
198
so
but
yeah.
So
if
you
have
to
lock
it,
you
use
this
command,
so
this
command
will
be
run
on
the
you
have
to
ssh
into
the
actual
tutorial
in
production
and
lock
it
lock
the
machine
you
give
it
this
command.
It
will
lock
one
machine
for
you.
You
also
want
to
update
the
the
description
of
the
machine.
You
can
see
it
in
in
popito
in
the
production
popito.
You
say
like.
A: I don't think I have to show you that; I've already done it for smithi198. So yeah, moving on to the actual process of adding test nodes to Paddles: you activate a virtual environment and go to this file. This Python file is a script you run to add the test nodes to the inventory of your local Teuthology. As you can see here, I'm still in my local Teuthology.
A: The machine type should be smithi if you are adding a smithi machine to your inventory, the lab domain should be front.sepia.ceph.com, you know how it goes, and the user is ubuntu, without a password.
A
If
you
have
again,
if
you
have
like
the
idr
say
and
you're
on
the
cpu
vpn,
you
can
just
you
know,
access
it
without
the
need
of
having
a
password,
and
this
is-
and
this
is
also
a
part
where
you
specify
the
number
of
machines-
are
that
you
are
adding-
I
kind
of
want
to
change
this
since
it's
it's
in
range.
That
means
it's
like
a
list
of
machines.
Assuming
that
you
have
like
this
machine,
but
in
my
case
I
have
only
one,
so
it
would
be
198
to
199.
A: Okay, so it did work. How do you know if it works? Basically, you want some output that is similar to what you see here.
A: If it fails, also check your private key, that you have access to the Sepia lab, and your VPN, in case you're not running on the VPN; basically, network-related things. And oops, actually mine might fail, or else I already added it in my last demo run and forgot to restart the Docker containers. Anyway, I'm going to go to Pulpito.
A: Okay, can you guys see Pulpito?
A: Yeah, so you can see here we have one node, 198, that has been added to our inventory. It's locked at the moment. What you want to do is unlock it, because it can't run any jobs while it's locked. So let me just change over to VS Code.
A: Okay, what you do is use this command to unlock it. We're still in the local Teuthology, right, because we already have the smithi198 in the inventory, so we can do that. You unlock it, and you give the owner name. It's called initial_setup just because that's the default name in the script I use to add the test nodes to the inventory, but you can change it; you just have to change it in the script. So yeah, I run that.
D: [inaudible]

A: Yeah, so I had some trouble here. Basically, I think I forgot to actually remove the Paddles containers correctly, so it still has 198 in there, and for some reason there are some errors. But I'll just move on; take my word for it.
D: [inaudible]

A: Next, creating the Ansible inventory. Okay, I've got to be honest on this part: I don't know exactly why, the inner workings of Ansible and ceph-cm-ansible and so on, but in order for us to successfully run a Ceph job that has Ansible tasks, and the majority of Ceph tests use Ansible, right, our local Teuthology needs an Ansible inventory.
A
We
have
to
create
an
ansible
inventory
and
we
can
do
this
by
following
what
is
already
in
production
of
tautology
and
tutology.
Has
this
file
called
smitty.mo
it's
in
this
directory?
So
what
what
you
do?
Is
you
you
copy
paste,
the
you
make,
the
you
create
the
directory
and
basically
copy
paste.
This
part
just
just
this
part.
It
has
other
parts
in
it,
but
you
only
need
this
part
to
get
it
working.
A
This
is
very
like
a
mental
thing
to
do
again.
Improvements
is
basically
try
to
limit
as
manual
process
as
possible.
A
Okay,
so
we
have
that
copy
and
paste
it
next.
We
need
to
create
like
host
machine
configurations
in
the
ansbo
host
cpr
directory.
A
Basically,
if
you
go
to
this
directory
in
the
production
technology,
we
have
like
a
lot
of
things,
but
what
we
want
to
concentrate
is
like
the
if
you,
if
you
are
using
smithy
just
smithy-
and
you
know
you
have
to
add
this
test-
note
children
and
smitty
shooting
as
well,
and
you
need
to
add,
like
the
actual,
like
information
of
the
each
machines.
So
for
me
for
this
demo,
we
only
have
198
smithy
198.
A
Go
to
this
directory
and
copy
paste,
the
the
actual
198
information
so
I'll,
just
I'll
just
demonstrate
it,
but
I
won't
be
actually
copying
the
actual
mac
address
because
that's
again
sensitive
information
and
I
don't
think
we
should
be
exposing
those.
So,
for
me,
the
jobs
add
on
cue
was
is
a
dummy
job,
so
it
doesn't
really
require
like
running
like
ansible
tasks,
so
it
would,
if
this
wouldn't
matter
but
but
for
for
running,
like
real
self-test,
using
like
the
local
tautology
that
you
just
set
it
up.
A
And
you
get
to
the
last
thing
on
the
list
is
running
the
dispatcher.
For
those
of
you
who
don't
know,
the
dispatcher
is
basically
a
process
that
gets
stuff
out
of
the
queue
to
run
jobs,
and
basically
we
need
to
run
this
for
it
to
happen.
We
need
to
start
it
this.
For,
for
the
drops
install,
you
should
happen
so
yeah
we
we
can.
A
I
can
just
describe
a
bit
archive
directory,
is
basically
where
you
archive
the
the
jobs
information
like
the
job
folder,
for
you
know,
dumping
all
the
logs
for
your
jobs
and
then
a
lot
of
directories
like
the
logs
from
to
yourself
and
tube
is,
is
basically
like
a
the
name
of
the
your
machines,
your
test
machines,
and
so
so,
if
you
have
like
other
other
machines,
then
smithy
you
have
like
different,
like
tubes.
A
So,
let's
see,
and
I
think
yeah
while
that
is
running,
I
can
answer
any
questions.
If
you
have
any.
B: Maybe this is obvious, but remind me how you would find the IP and all that information if you don't have access to the Sepia lab, like if you don't have a VPN enabled.
A: Yeah, okay, so my presentation, my demo, basically assumes that you have access to the Sepia lab. But the good news, and I'll talk about this in the future-work slides, is that right now we can run Teuthology using containers as test nodes, so you don't need access to the Sepia lab in order to run jobs, at least jobs that don't interact with or test the kernel, because with containers the kernel wouldn't be changed. Anything that doesn't touch the kernel, you can run using containers as test nodes. Zack is the one who has been working hard on making this happen, and I'll be talking a little bit more about it in the later slides.
D: [inaudible]

A: This node is locked. I tried unlocking it, as you saw in the previous steps, but it didn't work. To fix that I'd need to restart everything again, which would take some more time; sorry about that. You'll just have to take my word for it that if you do it correctly, it works: this job would be running, and it would complete.
A: So you'd have a green on it, and yeah, that is it for the demo. There will be more slides about future work and things like that, but if you have any questions about the demo itself, or anywhere you want me to go through again, just let me know; this is a good time. Since I have the README open, I'm just going to change that.
A: Yep, okay, so future work and improvements I wanted to make: automate as many of the manual steps of adding test nodes to the inventory as possible. Zack and I actually had a discussion today about how he can do this, and I believe he'll be able to; we'll work on making it more automated. And the good news that I talked about before is that we're able to use containers as test nodes; it's almost done, like 98%, I think, right, Zack?
E: I filed a draft PR on this, which I can link for you guys in a second. In the state it's in right now it's not ready to be merged yet. The first thing I started working on was seeing if we could get containerized test nodes working. This took quite a bit of effort and required, I don't know, like a dozen PRs to Teuthology and friends, because we have to skip a lot of tasks and certain operations inside containers; Junior was talking about this a little bit earlier. So running Ceph tests may or may not work really well, I'm not sure, using containerized test nodes, but it does help us a lot with testing Teuthology itself.
E: I'm also now intimately familiar with how our provisioning and such works. Junior was talking about ceph-cm-ansible a little bit before; it will be possible to automate the inventory-related steps around this for using bare-metal test nodes. So I think, ideally, what we're going to end up with for the use case of bare-metal test nodes is: of course, you'll still need VPN access, or access to some lab, and in theory this could work outside of Sepia. You will, of course, need to have locked the nodes inside the real lab, but then ideally you'll be able to take just, say, a list of the hostnames we're going to use as input, plus the URL of the secrets repo, which is where the inventory and such that we use in production is actually stored.
E
So
then,
assuming
you
have
some
kind
of
access
to
that
repo
you'll
be
able
to
have
it
cloned,
and
then
you
know,
rather
than
having
to
ssh
into
the
toothology
machine,
to
copy
some
fragment
of
its
inventory.
You'll
just
have
the
real
one
and
then
so
all
the
all.
The
data
that
we
need
to
run
ansible
successfully
is
is
keyed
off
of
the
host
names,
so
so
it'll
all
kind
of
just
work
slightly
more
smoothly.
E
I
think
personally,
the
part
of
it
that
I'm
the
most
excited
about
is
like
being
able
to
to
run
the
entire
stack
inside
of
say,
github
actions
on
every
pull
request.
I
think
it
should
make
toothology
development
a
little
bit
safer
and
faster.
So
that's
kind
of
the
summary
of
what
I've
been
working
on.
I
will
paste
that
pr
in
a
second.
If
anyone
cares
to
look
at
it,
yeah,
that's
about
it,
happy
to
answer
questions
or
elaborate
on
things,
but
that's
that's
kind
of
the
gist
of
it.
D: [inaudible]

A: Yeah, as Zack was talking about before, we'll also allow devs to choose between using containers or bare-metal machines as test nodes, so that'll be a really cool feature we have, and again, it really helps Teuthology devs when they're working on features and testing them out.
E: If I could add one thing really quick here that I should have mentioned: I think the key part of the summary for the containerized test nodes is that you don't need a scrap of access to Sepia at all. The entire thing can work on, like, your laptop, no VPN; that's the trade-off you get for not being able to run real Ceph tests very well.
D: [inaudible]

A: And ultimately, the ultimate goal, I think, and this is a very ambitious goal, but I talked to Josh about it, is that if we can somehow use VMs or bare-metal machines as test nodes, we can give developers an option to skip the process of pushing to CI and building on Shaman. Basically, you build Ceph once locally, and you somehow have Teuthology, or the VMs, use those binaries to run any Teuthology test.
A
Basically,
and
then
you
already
have
totology
set
up
locally,
so
I
I
still
haven't
got
the
details
of
it
right
yet,
but
as
far
as
I
like
talked
to
josh
about
it,
this
is
kind
of
like
the
ultimate
goal,
and
I
think
this
would
really
like
help
increase
the
productivity
of
cell
engineers.
So
you
don't
have
to
wait
long
for
your
stuff
to
build.
If
you
can
build
it
locally,
you
can
also,
but,
but
I
can
see
the
drawbacks
of
like
using
vms
locally,
because
stuff
is
like
it.
A
You
know
it's,
it's
pretty
heavy,
so
you,
your
local
machine,
might
not
be
able
to
handle
it.
Something
like
that.
You
know
but
yeah,
that's
our
ultimate
goal,
which
we'll
be
excited
once
if
we
reach
it
someday
and
yeah,
that's
about
it
for
the
future
works
and
improvements.
A: I want to acknowledge the people who also helped make this happen, making the Docker script for creating a developer environment: David, Josh, and Zack, you guys have been really helpful. It's been a long process, but I couldn't have done it without you. I think that's it, and I'll leave the rest for Q&A. Thank you.
B: Yeah, just to add on to the idea of this being resource-intensive on a local laptop: going along with this project, I think it'd be good to emphasize that users can specify the different facets they want to test, or filter down to only the jobs they need. Of course, that's already possible, but I'm just putting an emphasis on that for people doing this on their local laptops: they don't necessarily need to run their tests through the whole rados suite every time to still be effective.
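That suggestion might look roughly like the following when scheduling. The flags are from teuthology-suite's usual CLI and the filter string is an example; verify against `teuthology-suite --help` on your version:

```shell
# Sketch of filtering/limiting a suite so a laptop isn't scheduling the
# entire rados suite; flag names and values are illustrative.
filter_cmd="teuthology-suite --suite rados --filter thrash --limit 5"
echo "$filter_cmd"
```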
C: If there are no more questions: thanks, Junior and Zack, for doing this presentation. I'm sure a lot of people did not know about this and are excited about it. I think we should do another one once we get the containers PR merged, and we can get a checkpoint as to what we're going to do next. But this is really useful; thank you so much.