From YouTube: Containers for HPC: Shifter and Podman
Description
Part of Data Day 2022, October 26-27, 2022.
Please see https://www.nersc.gov/users/training/data-day/data-day-2022/ for the training agenda and presentation slides.
Okay, so today I want to talk about containers for HPC. You heard a little bit about containers for microservices and web services yesterday in the Spin talk, so this will be more focused on HPC workloads.
So the outline of this talk: first I'm going to give a very brief introduction to containers. This isn't really going to be sufficient as a getting-started-with-containers 101, but the goal is that if you're not familiar with containerized workloads, this should give you enough context to follow the rest of the talk. We'll also have quite a few other materials.
We have a lot of good training material for folks to get started with containers, and we will share that, but this talk goes a little beyond its scope. Then I'll talk about Shifter, which is the current container solution that we use at NERSC, and finally I'll talk about some of the work we've been doing to transition to Podman and what the future has in store.
Okay, so just to set the foundation here: what is a container? I think most people here today have heard of containers, but there are a lot of varying descriptions of what they are and what they're used for, so a common comparison is with virtual machines.
When you think of how a virtual machine works, it's really simulating the whole computer: it's got virtualized hardware, the whole operating system, and then all the software runtime inside that. Containers, by contrast, share a Linux kernel with the host, so they don't need to do all that additional hardware virtualization; comparatively, they're much lighter weight than a virtual machine.
It's also worth noting that even though the concept of containerization could in principle be implemented in different systems, in 2022, when we talk about containers, we're really talking about Linux containers: the implementation relies on features of the Linux kernel, so this is an inherently Linux-based technology.
You can see a diagram here; you could obviously stretch these boxes to make them different sizes, but the point is that on the right, the virtual machine has to simulate the whole guest operating system on top of a hypervisor, while for the container the size of the encapsulated software is smaller, and so it's a lighter-weight object.
If you've been following the trend, containers have become very popular in the last ten years or so. Why is that? Why are they so ubiquitous these days?
The idea of encapsulating your software with its runtime environment has a lot of benefits. The words that get thrown around a lot are portability, scalability, and reproducibility, and if we look at those, it just makes sense: if you bundle your whole runtime environment with your software, it becomes a lot easier to move it around between machines, and it becomes a lot easier to make duplicates of it so you can scale up.
As demand on your application grows, you can easily change the underlying resources given to it. It improves reproducibility because you have a static file which describes your workload, your application, so if you need to redeploy it, you can redeploy it from the same image. There's also a general switch from an imperative to a declarative deployment paradigm that comes along with containers, and that is an additional improvement to reproducibility.
If you haven't heard those terms: an imperative paradigm is giving a list of instructions, whereas a declarative paradigm is saying "this is the result I want at the end." When you look at how things are deployed via Kubernetes and other technologies in the container ecosystem, they work that way, and so the system has a chance to say, "we didn't get to the goal that you declared you wanted."
So it can try to correct for that; whereas if you give a list of instructions and something goes wrong in the middle, you don't necessarily know what went wrong or how to back that out and fix it.
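As a small illustration of the declarative style just described, a Kubernetes manifest states only the desired end state, here three running copies of a service, and lets the system reconcile toward it; the names and image below are hypothetical, not from the talk:

```yaml
# Declarative: describe the desired state; Kubernetes works to reach it
# and keeps correcting if reality drifts from what was declared.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                   # hypothetical name
spec:
  replicas: 3                    # "I want three copies running"
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-image:latest   # hypothetical image
```

The imperative equivalent would be a script of individual launch commands, which leaves the system with no record of the intended end state to check against.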
All of this together becomes the building blocks of a modern, scalable web architecture. This is how you build applications if you want to have millions of users on a web service; that's why containers are popular and why they're valuable.
So what about HPC? We're not building web services for millions of users, but we still care about portability, reproducibility, and scalability. The use cases in HPC differ slightly, but this concept of software encapsulation is still very valuable and very powerful. Here are some of the use cases that users benefit from today using containers on HPC.
One case: if you have a complicated piece of software that is hard for users to build, you can build it once and then share a container that can be used by many collaborators. Another is isolation from system changes: NERSC staff try not to change packages on the HPC systems, but sometimes packages do change, and you can isolate your software from those changes by packaging your runtime environment in a container.
You could also potentially make your research more reproducible by saving your runtime environment. About ten years ago it was becoming quite common to publish your source code along with the results of a simulation; publishing your container, or making a container available when you publish scientific results, is really just a step beyond that. You're not just saying "this is the source code I used," but also "here's the compiled application I used, and here are the libraries it called."
A really common use case for users today is avoiding metadata contention. For example, if you're calling Python from a data-analytics workload and you have several nodes all trying to read the same Python libraries at once on a shared file system, that can create a lot of slowdown. Instead, you can put that Python environment in a container, easily distribute it out to all your workers, and all of those problems go away.
Finally, this potentially gives you some portability to move between supercomputers. And scientists also like web applications, and have use for things like data portals, workflow management, and so on.
I'm going to take a brief pause here, because there's a lot of vocabulary, some of which I've already been throwing around; this is a reference you can come back to if you look at these slides. I've said the word "container" many times already, and the building blocks are these: we have what we call an image, which is the actual file that saves your software application and its runtime environment.
That's a static object; it doesn't get changed or updated at any time. The container itself is a running instance of that image, and typically it will have an ephemeral file system on top. That means I can run my container, go inside, and make changes, and then when I shut it down, all those changes go away; they're not saved into the image file. The image is totally static.
You need something called a container runtime, which is the software responsible for creating containers from images. Usually you don't interface with a container runtime directly, though; you use a container engine, sometimes called a container framework, which bundles together a runtime and typically some other useful tools, like an image builder and potentially other things for manipulating running containers and images.
Speaking of images: where do you get an image from? If you want to build an image, you have to have a specification which describes what goes into it. That specification is called a Dockerfile, or a Containerfile; it's really just a human-readable list of instructions for how to compose that image.
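As a minimal sketch of such a specification (the base image, package, and file names here are invented for illustration, not from the slides), a Dockerfile is just a short list of build instructions; writing one out:

```shell
# Write a small example Dockerfile: each instruction composes one piece
# of the image (base layer, installed packages, copied files, default command).
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends python3
COPY app.py /opt/app.py
CMD ["python3", "/opt/app.py"]
EOF
```

Each instruction (FROM, RUN, COPY, CMD) contributes a layer to the resulting image when it is built.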
Once you've built an image, you don't want to just leave your images scattered all over, so you would typically save them to some cloud- or network-connected image registry. That gives you a source of truth from which you can retrieve your images in the many different places you might be using them.
Finally, a little bit about mounting. I said that the container is ephemeral, but it's also very useful to have a way to get stateful information into a container: how do you get a data set into a container? How do you get a configuration file into a container? You can volume mount or bind mount persistent files or directories into a container when you launch it.
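With Docker, for example, this is the `-v host_path:container_path` flag at launch time; the paths and image name below are placeholders, and the commands assume a working Docker setup:

```shell
# Bind-mount a host dataset (read-only) and a results directory
# (read-write) into the container. Files written under /results
# persist on the host even after the ephemeral container exits.
docker run --rm \
  -v /home/user/dataset:/data:ro \
  -v /home/user/results:/results \
  my_image python3 /opt/app.py --input /data --output /results
```

This is how a container with an ephemeral file system can still consume and produce persistent data.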
Okay, now there are also a bunch of technology names. I'm not going to go through this whole list, but you've probably heard of some of these things. If you've heard of containers, you've heard of Docker: Docker is a very popular container engine. Podman, which I'll be talking about more today, is also a popular container engine, and Shifter and Singularity are HPC-specific container engines.
Skipping ahead a little bit, another point that a lot of people get confused about is Docker Desktop versus Docker (Rancher Desktop is another equivalent). I said containers are a Linux-specific technology.
So if you're running a macOS or Windows laptop, you actually need a Linux system to build and use containers. What these desktop tools really are is a way to run a Linux VM and manage it for you. If you're just trying to get started using containers on your laptop, a really good place to start is to look up Docker Desktop or Rancher Desktop. And finally, the other big elephant in the room is Kubernetes.
Once you start working with containers and you want to launch a lot of them, or launch different containers that work together, Kubernetes becomes really important for managing how you're deploying, scaling up, scaling down, and tracking all the containers you have running. It's a standard for the orchestration of containers, and there are many, many implementations; you'll see a list of Kubernetes distributions at the bottom.
Every infrastructure-as-a-service provider, basically any company that can conceive of a way to sell you a Kubernetes distribution, has a Kubernetes distribution. There are Kubernetes distributions to run on your laptop and to run on the Internet of Things; for any device you can think of running Kubernetes on, there's probably a distribution out there for it.
So here are some steps: this is a very minimal workflow of what you need to get started running an application in a container. You can think of it as three steps: you build your image file, you ship it (save it somewhere, publish that image), and then you can run it somewhere else.
In this case, I'm not going to go into the details of what's in the Dockerfile here, but if you have a valid Dockerfile that you've edited, you can just pass it to a docker build command. What this is saying is: build an image tagged with the name my_image, and look in the current directory for the Dockerfile to use as my build context. After I build that, I can push it up somewhere; in this case, with Docker, the syntax means I'm pushing it to Docker Hub. Maybe I want to run this on my laptop, but in this case I'm saying I want to run it somewhere else.
I want to run it on my workstation, so I can go over to my workstation, retrieve that image from Docker Hub with docker pull, and then just run it with docker run. So it's very simple at the end of the day. There's obviously a lot wrapped up in what goes into a Dockerfile, but there are many, many examples and tutorials on that, so I'm going to gloss over it for the moment. This is the simple workflow.
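The build/ship/run steps just described look roughly like this (the image name and registry account are placeholders, not from the slides, and the commands assume Docker is installed):

```shell
# Build: create an image tagged "my_image" from the Dockerfile
# in the current directory (the build context).
docker build -t my_image .

# Ship: tag it under a registry account and push it to Docker Hub.
docker tag my_image myaccount/my_image:latest
docker push myaccount/my_image:latest

# Run (on another machine): pull the image down and start a container.
docker pull myaccount/my_image:latest
docker run --rm myaccount/my_image:latest
```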
So what does this look like if we go to HPC? Naively, I would say: okay, I can build my image in the same way on my laptop, and I can still push it up to Docker Hub, and that's fine. Then maybe I want to run this on Perlmutter (and I see your question, Alfred; I'm about to answer it).
Naively, I can just pull it, and then, since I've got a batch system here, maybe I need to allocate myself a compute node and just put my docker run behind an srun so I can launch it on many tasks. This seems like it should work, right? But it won't work. Don't do this. It won't work because Docker has some security concerns on a multi-user system.
So we don't allow users to use it; but even if we did, it would still be a terrible idea. Docker doesn't know anything about how you want your HPC tasks to communicate, and it doesn't do anything to optimize performance for HPC. So just don't do this; hopefully this slide is clear.
These are some of the considerations when we want to run a containerized application. We know that HPC applications might be sensitive to file system performance, which matters when we have this virtualized, layered image that we're attaching. We know that they can be sensitive to communication time, since they can be very communication-intensive, and typically containers use a virtualized networking layer, so we have to be concerned about that. And we're on a multi-user HPC system.
And then, how can we access optimized libraries? Does it mean, now that I have a container and I'm bringing my own software runtime environment, that I need to be building all of the optimized HPC libraries myself, all the time? That sounds difficult; is there a way I can get around that? And then finally, there's the batch scheduler.
How do I make sure that when the batch scheduler is allocating resources and my container is deciding to spin up processes, those things are synced up, interacting well together, and not getting in each other's way? Those are the broad strokes of what I would say are the issues for using containers on HPC. So that brings us to why we want to use a customized container engine.
Shifter has been the container engine of choice at NERSC since it was introduced in 2015, and it's increasingly popular even through 2022, with over 700 unique users in the first half of 2022. The super-short version of what Shifter does is that it addresses the problems I just raised, so that you can have a performant container runtime on HPC.
These are the considerations I said we should think about, here on the left, and I'm going to talk a little bit about what Shifter does to address them. I said we're worried about sensitivity to file system performance, so we do some special squashing: we do some management of the image ahead of time to make it a single-layer, read-only image.
It still takes a Docker image, but it converts it into a format that can be accessed efficiently. This happens invisibly to the user: when a user pulls an image on Cori or Perlmutter, it's automatically squashed into this efficient image format. In terms of communication-intensive applications, we can just have the container opt out of any virtualized networking and pass through the host networking, so we get all the advantages of the high-performance HPC network when we're using Shifter.
As far as security in a multi-user environment goes, we're out of luck with Docker, but Shifter requires containers to run as non-root: you can't have the main user inside your container be root, and your container doesn't get any special group capabilities. That solves the issue; all the containers are just running with user permissions.
As far as including optimized HPC libraries, we have some tricks in Shifter. You can add flags like --module=gpu, and in fact some of these are turned on by default. What they do is hook system libraries, like MPICH or the CUDA libraries, into your container, so your application can see them without you having to explicitly put them in. And finally, batch scheduler interaction.
I can mention the Shifter image in my batch script before I even invoke Shifter, and Slurm will do some work to pre-load my image and pass it out to all the nodes it allocates to me. Then I can run shifter without even specifying an image, and it gets that information directly from Slurm. These are just some tricks in Shifter's design to make this process more streamlined.
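Put together, the Slurm integration looks roughly like the batch script below; the image name and task counts are hypothetical, and the exact directives should be checked against NERSC's current Shifter documentation:

```shell
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --image=docker:myaccount/my_image:latest   # Slurm pre-stages this image

# Because Slurm already knows the image, "shifter" needs no --image flag
# here; each of the 64 tasks runs inside its own container instance.
srun -n 64 shifter python3 /opt/app.py
```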
But where does this leave us in terms of our workflow? If we come back here: we can still build an image with Docker on our laptop, and in fact we need to, because Shifter isn't really considering the build step; it's really just addressing the runtime problem down here.
So we still build an image with Docker, or another container solution, on our laptop, and we still push it up to a central registry like Docker Hub, but now we can pull it onto our HPC system using Shifter.
Here we have shifterimg, the pulling binary, which automatically does that step of compressing my image. I can then allocate a compute node and run the container, so this actually looks very similar to what I had before: I pull the image and then do a container run, just with the step of allocating some HPC resources in between.
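Interactively, that pull-allocate-run sequence looks something like this (image name and allocation options are placeholders; the commands assume a system with Shifter installed):

```shell
# Pull from Docker Hub; shifterimg squashes the image into Shifter's
# single-layer, read-only format as part of the pull.
shifterimg pull docker:myaccount/my_image:latest

# Allocate a compute node, then run the containerized application on it.
salloc --nodes=1 --time=00:30:00
srun -n 1 shifter --image=docker:myaccount/my_image:latest python3 /opt/app.py
```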
Okay, I'm going to take a pause here. I know that was quite fast, since I'm not spending a lot of time on Shifter today; I started a little late, so I think I have a couple of extra minutes, but I just want to point out these resources. The slides are on GitHub and there are a lot of good links in them.
If you're really just starting with containers today, I really recommend you start on your laptop: look into Docker or Podman, and there are a lot of really good tutorials out on the internet for doing that. If you're interested particularly in Shifter, there was a good talk given about a month ago by Laurie Stephey specifically on how to get started with Shifter, and we also have a beginner tutorial and lots of good documentation on our documentation website.
And if you get stuck, we have a lot of container enthusiasts behind the help desk, so please file a ticket, and I'm sure you'll be met with an enthusiastic response.
That goes for any questions about containers. Okay, so I went through this really quickly; so why not just stick with Shifter? I said we have a way to run containers performantly on Perlmutter and on Cori, so what is motivating us to look into something like Podman? Well, if we go back to our picture, you'll notice that Shifter really just addresses the runtime challenges, so it's not really an end-to-end container engine solution.
If you look at the requirements of the security solution, which is to not allow containers to run as root: a lot of off-the-shelf containers that you can get pre-packaged from companies and the like are often run as root, so that requirement disallows using a lot of freely available containers. And finally, we maintain Shifter: it's a tool that was developed at NERSC and is maintained in-house at NERSC.
So it doesn't have a lot of users and it doesn't have a big development team, and trying to address these problems with a lot of engineering is challenging in terms of manpower. We'd really like to move to a model that has a larger community and more support in that sense. From a user's perspective, Shifter is also another tool that they need to learn, which is obviously a burden for the user.
So we'd like to address those issues; that's what motivates looking into Podman. I'll talk a bit about Podman and what it is: it's an Open Container Initiative (OCI) compliant container framework, and it's under active development by Red Hat.
It's quite popular: it's free and open source and widely used by an active community; you can go to the GitHub repository and see how many people have pulled it, and I think it's tens of thousands now. Out of the box, it also provides a full-featured rootless container environment.
What that means is that it can launch a container rootlessly, with just user permissions, while that container can still be root inside: it's mapping the root user ID in the container to my unprivileged user ID on the host. That's a lot more sophisticated than what Shifter is doing, but it means we can now run containers that are root inside without needing special permissions, which addresses a lot of the security concerns immediately, out of the box.
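A rough way to see this user-ID mapping at work on a Linux machine with unprivileged user namespaces enabled (this uses plain `unshare`, not Podman itself, purely as an illustration):

```shell
# Outside any new namespace we are an ordinary unprivileged user;
# this prints our normal numeric UID.
id -u

# unshare -r creates a new user namespace and maps our UID to root
# inside it, so "id -u" run in there reports 0, even though we hold
# no real root privileges on the host. (Falls back gracefully if user
# namespaces are disabled on this machine.)
unshare -r id -u 2>/dev/null || echo "user namespaces unavailable"
```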
So it's very powerful, and a very big point in favor of Podman. I only have a few more slides. Podman also provides an image builder, which gives it an end-to-end solution.
It shares its command-line syntax with Docker, which is good because people can come into it with a lot of existing experience. So the question is: can we address the performance issue here? I'll go through these, but basically the answer is yes. We did a lot of work to do that, and we were able to replicate basically all of those features, all of that experience of what Shifter does.
We were able to do the same with configuration of Podman. I'm not going to go into the details, because they're very similar to what Shifter does, but basically we were able to do this via some special tooling built around the outside.
We see that Podman can perform comparably to, or even better than, Shifter when it's configured appropriately. I'll reference an upcoming paper, "Scaling Podman on Perlmutter: Embracing a community-supported container ecosystem," by Laurie Stephey, coming at the CANOPIE-HPC workshop at Supercomputing. If you're interested in this work, please reference that paper.
So what does this look like for Podman? Our workflow is again very similar to what we had before, but now we're inside the HPC ecosystem the whole time, and we can use Podman the whole time. We no longer really need the ship step, because we started and ended on Perlmutter.
Our build step looks very similar, but now we can use Podman. I think I skipped over this on the last slide, but because of all this configuration, we also made a wrapper that automatically applies that configuration for users, to avoid missteps.
The build here is not really too remarkable; it's the same process. As I said, the migrate step is important to be able to create that efficient file system mount. What migrate does, when I say "migrate my_image:latest", is take the normal Docker-style image, create a squashed, read-only version of it that can be accessed efficiently, and store it elsewhere on the system. Shipping is really the same: you can log into any registry, tag your images, and push, so this is all very standard; there are some references here to common registries you might use. And then running: this is the interesting part.
If I wanted to run an image just on a login node, this looks very much like a normal container solution: I can do "podman-hpc run my_image:latest". But if I want to do this in a batch context, or on a compute node, then as I mentioned, you might use something new.
You would probably want to use this new subcommand called run-shared. What it does is launch one container per node, and then many processes, one process per task, inside that container. We've determined this is a more efficient way to scale containerized workloads.
That's all packaged up in the subcommand. Finally, going back to the issue of including efficient HPC libraries in your containers, we've also provided some hooks, --gpu and --mpi, which hook those libraries into your container when you launch it at runtime. You just add those after your run-shared command and you get that benefit. Okay, that was a whirlwind to the end there.
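End to end, the Podman workflow on the HPC system looks roughly like this; the subcommand and flag spellings follow the talk as spoken, the image name is a placeholder, and the released podman-hpc wrapper's syntax should be checked against NERSC's documentation:

```shell
# Build on the login node with the Docker-compatible CLI.
podman-hpc build -t my_image:latest .

# Migrate: convert the image into a squashed, read-only copy that
# compute nodes can read efficiently, stored elsewhere on the system.
podman-hpc migrate my_image:latest

# Run at scale: one container per node, one process per task, with the
# host GPU and MPI libraries hooked in at launch.
srun -N 2 -n 8 podman-hpc run-shared --gpu --mpi my_image:latest python3 /opt/app.py
```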
To summarize: Shifter is the current solution that's available, and it provides good container performance on Cori and Perlmutter. However, we have demonstrated that Podman has very comparable performance and will provide many additional benefits, so if you're just getting started, we recommend you look into Podman; we will have a working podman-hpc wrapper coming very soon. Okay, so thank you very much.