Description
In this briefing, Red Hat's Adel Zaalouk introduces OpenShift sandboxed containers, giving an overview of the upcoming tech preview of the product and the technology behind it, along with its features.
A: Well, hello and welcome again to yet another OpenShift Commons briefing. As we like to do on Mondays, we like to hear about new things and new ideas, and we've been going through a number of things around the latest release of OpenShift 4.8. We have with us today Adel Zaalouk, who's one of our many OpenShift product managers, and he's going to talk about the dawn of OpenShift sandboxed containers. And I know that sounds a little mysterious.
B: Thank you, Dan. Hi, everyone. My name is Adel, and I am the product manager for the newly tech-preview product called OpenShift sandboxed containers. So let's start with an introduction and use cases. What I'm going to talk about in this section is mostly around how we do sandboxing, what the trade-offs of sandboxing are, a bit about what is called — or what I call — Vegas mode, and when and where you would use sandboxed containers versus other products in OpenShift.
B: So what is sandboxing? There are a lot of references here — you can see the numbers; these are linked at the back of the slides. One definition, for example, is that a sandbox is a tightly controlled environment where programs run. Is it any program?
B: No. These programs could actually be programs that could leak resources, so we would want to isolate them, or they could be programs where you run untrusted code. In general, sandboxing is a means of isolation for any workloads that you run, using some of the forms or technologies available to you in the kernel, and there are a lot of use cases for sandboxing. There are a lot of types, and I'm going to walk you through them.
B: So what are the different types of sandboxing? I'd like to categorize those as software and hardware. The software ones we already know about: Linux namespaces are a form of sandboxing — we know about the mount, UTS, PID and network namespaces.
B: A prominent way of namespacing is what's called user namespaces, which allows you to map root inside the container to a non-root user outside, reducing the blast radius as well. There is MAC — mandatory access control — and Linux security modules, of which SELinux is one of the most prominent. It protects from a lot of things. Basically, it uses labels to label processes inside the container and the files they access, and then prevents misbehavior.
B: If the processes got out, they can't really access files they're not meant to. There's also seccomp, but it requires a bit more knowledge about the internals of your application and which system calls you're using; still, you can use it — you can decide which system calls you want to block or allow through. Then there's the other type of sandboxing, which is more on the hardware side of things. The first one is pretty easy: you're sandboxing using your machine itself — you run nothing else.
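As an aside, on Kubernetes the seccomp filtering described above can be requested per pod without hand-writing a profile. The field names below are standard Kubernetes API; the pod name and image are purely illustrative. A minimal sketch:

```yaml
# Hypothetical pod opting in to the runtime's default seccomp profile,
# which blocks the syscalls the container runtime considers dangerous.
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo        # illustrative name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault  # use the runtime's built-in syscall allowlist
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
```

A custom profile (`type: Localhost` plus a profile path) would give the finer-grained per-syscall control the talk mentions, at the cost of knowing your application's syscall footprint.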
B: You just run one application and that's it. So you have bare metal, one application, nothing else shares the host — you're good to go. The other form, however, is more about bin-packing workloads on the host, and that is using virtualization.
B: So you have an OS already, and then what you do is enable Linux kernel modules — KVM, for example. The KVM core module gives you the core functionality; then, based on the architecture, you enable different modules, whether it's Intel or AMD and so on. That gives you the ability to create virtual machines and make use of the hardware virtualization features and instruction set to divide up your host into different virtual machines.
B: So what are we interested in in this talk? We're going to be talking more about that second type — KVM — as the base technology for the product, OpenShift sandboxed containers, which we build on top of in different layers. But again: there are software and hardware forms of sandboxing, and the mixture of these is what we're looking for, not one or the other. All right, so let's talk a bit about the trade-offs between the software side and the hardware, or virtualization, side. One trade-off to think about: when I choose to run my workloads as VMs rather than containers, I'm looking at efficiency. With containers you can usually run more workloads, because you have more fine-grained control over how your resources are managed.
B: That's with containers versus VMs — VMs are a bit heavier in weight, but still, we're targeting lightweight VMs with Kata Containers, which I'll talk about later. In terms of performance, there is this additional virtualization overhead; that layer does not come for free.
B: You should expect some performance overhead when you're picking the virtualization path. And then there's isolation. With normal, vanilla containers — I'm going to talk about how it's done in OpenShift — you get the host, and then on the host kernel you're creating Linux namespaces and running your workloads there. With virtualization there's an additional circle, if you look at the diagram, and that is kernel isolation.
B: So each workload would then get a separate kernel to run on, and that gives it an edge on the isolation front. So it's more of a trade-off and, depending on your use case, you should make sure you're choosing the right option.
B: There's a very interesting blog post on SELinux whose author mentioned that what happens in Vegas stays in Vegas after you enable SELinux. Now we're going to walk through what kinds of configuration can be enabled that get you really into Vegas mode. The first is the lazy mode, where you don't want to do anything — you just want to rely on vanilla defaults. Don't worry, we don't let you do that.
B: Most of the time we either have very good documentation, or we enable things and automate all the necessary bits to get you into Vegas mode with operators, as you know. With namespaces, you get minimal configuration, but only a little isolation — remember that triangle diagram I showed. With VMs, you get that kernel isolation, so by default you have a separate kernel, but that does not mean that you are immediately safe, right?
B: There are other steps that you want to take to reach good security or isolation levels, and that is by enabling seccomp and SELinux, and then, optionally, running workloads in VMs alongside containers. So it's not one or the other.
B: It's the collective of these things, and what we're trying to do with OpenShift sandboxed containers is give our users the choice between these two options when they need it — for example, for compliance reasons, or when you want to be on the safe side and say: I'm just going to run these certain workloads in VMs, because I have no absolute control over them. I don't control these workloads or where they come from; they are not picked from OpenShift registries.
B: So the question that you might have now is: you already have something that uses VMs in the OpenShift landscape, right? Why would I have another technology that also uses VMs in OpenShift? This diagram walks through the decision tree. You start frustrated, not sure where to go, and then the question becomes: have you already gone through your cloud-native journey? Did you containerize your workloads and applications? Did you build images?
B: If you answered yes, then for 95% of the cases you probably want to go with the normal containers use case, because this is how we test and harden our code, and we have already published repositories that allow people to securely pull from.
B: On the other hand, if you are in the five percent of cases and you say: listen, I have no control over what I'm running in my container, and I'm pulling images I don't trust, but I still want to ensure some level of isolation — then that could take you to OpenShift sandboxed containers. This is where you have an OCI-compliant runtime. OpenShift sandboxed containers is based on Kata Containers — I'm going to deep-dive into Kata soon — and it's an OCI-compliant runtime, like runC, that allows you to run containers.
B: The main use case is kernel isolation, as I said: you get a separate kernel for each workload you run. You can run, for example, third-party code, or maybe untrusted code that you don't have control over — or where your decision matrix doesn't involve you deciding whether to run that workload or not. So you pick sandboxed containers to run it in a lightweight VM.
B: Then you come to the third approach, where you're not yet containerized, for example, but you want to use VMs for general purposes — think about just starting a VM on Kubernetes. This is where you can lift and shift your existing VM images, running on different hypervisors, to the OpenShift and Kubernetes world. Those could be traditional VMs with no existing container image — there's no image built for them — and therefore we have a solution, OpenShift Virtualization, which is general-purpose virtualization on top of Kubernetes.
B: That allows you to do much more than running containerized images. So each has its own use case: for 95 percent of the use cases you would use normal, lightweight containers; if you are security-aware, or you want to ensure compliance for certain cases, you would go with OpenShift sandboxed containers; and for general purposes, or for migrating existing VMware clouds, you would use OpenShift Virtualization. All right — now I'm going to talk about the bits in OpenShift that our product uses.
B: The first one is OLM, the Operator Lifecycle Manager — we make use of OLM; we are an operator like any other. Think of OLM as your Red Hat package manager: you define source RPMs, you write them, you define how you want to install your software, you push them to a repo, and then they are available for you to rpm-install. So this is a lifecycle process.
B: It goes through many iterations, whether for security or other things, and all of that is included in the process. We want to do the same with Kubernetes — that is what OLM is about. It allows us to package our Kubernetes manifests and artifacts.
B: It's the same way we package our Linux components. There are a lot of resources that OLM defines, but I'm concerned here with three important ones — these are the prerequisites for understanding how to enable and use the product. One is the OperatorGroup. This is concerned with multi-tenancy: we want to make sure the cluster admin has control over which namespaces and which operators get certain RBAC permissions. So here we define our OperatorGroup.
B: We define the namespace that the operator should exist in, and then, based on that, we define the mapping. For this mapping to happen, we need to create a ClusterServiceVersion, or CSV, and that CSV needs to be in the same namespace as the OperatorGroup; that's how the mapping happens, and then your operator gets the permissions to do what it needs to do. I'll pause on the CSV for now — I want to cover the Subscription.
B: In that case, we're using an install plan approval — automatic, for example — and therefore any update on that channel will get pulled and applied to your cluster. Now back to the CSV: the CSV is your RPM spec. This is where you define what the components of your package — the components of the operator — are. This is where you say: I want to have a deployment, these are my resources, and these are the CRDs that I want to enable in the cluster.
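For illustration, an OperatorGroup and a Subscription of the kind described above might look like the following sketch. The namespace, package and catalog names are assumptions for the example, not the product's exact values.

```yaml
# Hypothetical OperatorGroup: scopes the operator's RBAC to one namespace.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sandboxed-containers-group            # illustrative name
  namespace: openshift-sandboxed-containers-operator
spec:
  targetNamespaces:
  - openshift-sandboxed-containers-operator
---
# Hypothetical Subscription: installPlanApproval: Automatic means any update
# published on the channel is pulled and applied without manual review.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sandboxed-containers-subscription     # illustrative name
  namespace: openshift-sandboxed-containers-operator
spec:
  channel: preview-1.0
  installPlanApproval: Automatic
  name: sandboxed-containers-operator         # assumed package name
  source: custom-catalog                      # assumed pre-release catalog
  sourceNamespace: openshift-marketplace
```

Setting `installPlanApproval: Manual` instead would hold updates until an admin approves the install plan.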
B: Another important piece of this equation is the Machine Config Operator. If you remember the history of OpenShift: with OpenShift 3, an admin would usually have to create and install all the packages in advance before bringing a node into the cluster, and before getting into an OS lifecycle.
B: So with OpenShift 4, the MCO was introduced to lifecycle the operating system along with the OpenShift process. It takes care of packaging and installation of all the components, to the point where you can actually also join a node to the cluster.
B: There are multiple pieces to that. The Machine Config Operator is divided into three main components: the machine config controller, the machine config server and the machine config daemon. The controller itself consists of four sub-controllers, and out of these four there's one very important one concerning us today, which is the render controller.
B: It takes the base configuration plus the additional configuration that you specified with a MachineConfig, merges them together, hashes them, and creates a new rendered config for you. This rendered config is what ends up being installed on the nodes, and that's where the machine config daemon comes in: it watches the node, looks at the status, finds out — oh, there is a new config that I should apply — and starts applying it to the nodes defined in the MachineConfigPool.
B
Config
pool
the
machine,
config
pool
defines
a
set
of
nodes
that
the
update
should
be
applied
to
and
the
machine
config
daemon
basically
takes
that
rendered
machine
config
that
has
been
rendered
by
the
renderer
controller
in
the
machine
config
controller
and
applies
that
to
the
notes.
This
would
be
important
in
how
the
operator
works.
B: Finally, we rely on RHCOS extensions. This is a concept where, as I said, you define a MachineConfig; this is how you say: I want to install additional things. OpenShift sandboxed containers installs an additional runtime, which is Kata Containers. It's not the main runtime of your cluster — you already have a main runtime, and that runs by default.
B: The runtime that we want to install, Kata Containers, is a day-2 runtime. You install it after the fact: you have a cluster, everything is already running, and then you want to run sandboxed workloads in VMs with Kata Containers. So you create the MachineConfig, and then you specify extensions. Extensions are Red Hat CoreOS's way of saying: listen, there are a few things that I will also take care of and lifecycle myself; you have to specify one of these, and sandboxed-containers is one of them.
B: It is one of those components that the MCO decided to lifecycle itself as well. This is basically a list of required packages — for example, kata-containers, QEMU and others. So the MCO watches that, starts the process, actuates it, creates a desired config and follows the path, and the node controller picks that up and starts installing it on the nodes in the cluster.
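A MachineConfig carrying such an extension is tiny. The sketch below uses the `sandboxed-containers` extension name the talk refers to; the metadata name and role label are assumptions for the example.

```yaml
# Hypothetical MachineConfig: asks the MCO to install and lifecycle the
# sandboxed-containers RHCOS extension (kata-containers, QEMU, ...) on
# every node in the worker MachineConfigPool.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-enable-sandboxed-containers-extension  # illustrative name
  labels:
    machineconfiguration.openshift.io/role: worker  # target pool
spec:
  extensions:
  - sandboxed-containers
```

The render controller merges this with the base config, and the machine config daemon rolls the result out node by node, as described above.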
B: So, with this in mind — we've covered OLM, the MCO and extensions — let's now get to the operator. What is the operator? This is actually the thing that you, as a user, consume. In the end, when 4.8 is released, the operator will be available in the Red Hat operator catalog, and it's installed like any other OLM operator. It exposes a CRD called KataConfig.
B: This is the CRD that you would use to configure your Kata installation, and it installs Kata Containers on your cluster. Kata Containers, as I said, is an OCI-compliant runtime that allows you to run lightweight virtual machines with the same Kube-native experience that you would have with normal containers.
B: It also configures CRI-O, because CRI-O is the high-level runtime: it adds the runtime handlers and creates the runtime classes for you. The runtime class is a way of describing that you want to add an extra runtime in addition to the default runtime.
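The runtime class the operator creates can be pictured roughly like this sketch; the `handler` value ties back to the CRI-O runtime handler of the same name. Field names are standard Kubernetes API, but treat the exact values as illustrative.

```yaml
# Hypothetical RuntimeClass: pods that set runtimeClassName: kata are
# dispatched by CRI-O to the "kata" handler instead of the default runC.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata   # must match the handler name registered in CRI-O's config
```

A cluster can carry several RuntimeClass objects side by side — which is exactly how a secondary, day-2 runtime coexists with the default one.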
B: So, as I said, we're not the default runtime on the cluster — it's a secondary runtime that gets enabled day 2 — so it creates the runtime class for you. Then, finally, it also installs QEMU, which is our virtual machine monitor — how we create VMs — as an OS extension. As I mentioned, Kata Containers and QEMU are OS extensions, which rely on the Machine Config Operator to lifecycle and install.
B: Now, how would you use that? The operator gets all these things onto your cluster. There's one simple resource that we expose at the moment; it's called KataConfig. KataConfig exposes one optional parameter in the first release, which is a kataConfigPoolSelector. This allows you to choose which nodes in your cluster you want to install the Kata Containers runtime on.
B: If you don't specify any, it gets installed on all your nodes; you have the option to specify which nodes in your cluster you would install the runtime on. When you create that KataConfig resource — this is how you trigger the installation — it creates the runtime class, and then you can finally create workloads: deployments, stateful sets and pods. All you need to do is specify the runtime class name in your spec.
B: That's basically how you would use it as a cluster admin or a developer. There will be a lot of integrations in the future with workflows that already exist for developers, but that's where we are now from the operator perspective.
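Putting the pieces together, the trigger a cluster admin creates might look like this sketch. The resource kind and the `kataConfigPoolSelector` field come from the talk; the API group version and the node label are assumptions for the example.

```yaml
# Hypothetical KataConfig: creating it triggers the installation. Omitting
# the selector installs the Kata runtime on every node; with it, only on
# nodes matching the label.
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  kataConfigPoolSelector:
    matchLabels:
      kata-enabled: "true"   # illustrative node label
```

After the operator finishes, workloads opt in per pod via `runtimeClassName: kata` — nothing else in the pod spec changes.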
A: Yeah, a little bit bigger would be great. And if anybody has questions, type them in the chat and we'll get to them. That works — the upper two squares... windows, rather.
B: Yeah, I'm not going to use those now — I'm just going to use the one at the bottom for the moment, and then there's another demo later in the process that I'll use them for. All right, okay, cool. So far we've covered only the OpenShift bits, so in this demo I'm just going to be showing the OpenShift bits — basically OLM, the MCO and the operator. I'm going to go into Kata Containers deeply in the other demo.
B: This is the marketplace. This is where all our operator catalogs reside, and you're going to see here the normal ones that you're used to seeing — the Red Hat operators and certified operators in your cluster. But since we're not released yet, we added an additional catalog that we pull from, and there are the three resources that I told you about. So let's look at the OLM resources — there are many.
B: So there are many resources exposed by OLM; we're interested in Subscriptions, OperatorGroups and ClusterServiceVersions. You can see that Subscriptions are namespaced, OperatorGroups are namespaced and ClusterServiceVersions are namespaced. That means we need to go to — oh, but before I go to the namespace of the operator, we can also check the package manifests.
B: These are all the operators available, and if we search for sandboxed containers, we're going to find that there is indeed an operator available for installation — I've done so already. This is more of a read-only view; I'm showing the CLI, but I'm also going to show you the console. All right, so we can then go to the OpenShift sandboxed containers operator.
B: Yeah, the namespace here is the same, and, as I said, this is where you specify your entire package — even the icon of the product — and you specify the permissions, what you're deploying, what requests you get, and so on.
B
The
olm,
where
to
get
the
operator
from
right,
so
we
need
a
catalog
that
can
serve
us.
The
csv
and
yeah
again
suspect
that
matters.
So
we
have
a
channel
called
preview
1.0
and
the
install
the
plan.
Approval
is
automatic,
meaning
that
any
update
that
gets
into
the
operator
will
be
applied
automatically
and
the
source
is
that
catalog
that
I've
shown
you
and
the
reason
we're
using.
B
That
catalogue
is
because
we're
not
yet
released
when
it
is,
this
will
be
released,
then
it
will
be
available
with
the
red
hat
operators-
catalog
cool,
so
that's
over
them,
okay,
the
second
part.
So
now
you
have
the
operator
turning
on
the
cluster.
The
second
part
that
you
need
to
look
at
is
the
machine
config
operator,
but
before
I
do
that,
let's,
let's
have
a
look
at
tata
config,
so
we
saw
the
operator
running
already
and
for
us
to
have
a
cata
installation.
B
First,
let's
look
at
the
spec,
so
here
I
haven't
specified
any
node,
so
that
means
I
will
install
kata
on
all
my
nodes
in
the
cluster.
So
what
I
should
expect
to
see
like
the
solution
has
been
done.
It
says
that
the
node
that
is
stalled
by
qatar
run
time
on
is
these
three
nodes,
which
are
the
three
nodes
in
my
cluster.
B: But what I should also see is this. After all, there are additional things that we bring to a cluster, and that's the trade-off I mentioned before: the resources these things require, in terms of memory and CPU, have to be specified, as you see in the runtime class. Compared to runC, you don't need to do that, because runC runs on the host; we run QEMU, and we run a component called the kata-agent, which I'll talk about — these require their resources to be pre-specified, and they are mostly fixed, except for some that change based on the workload, which we can configure later. All right.
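The pre-specified resources mentioned here are expressed through the RuntimeClass `overhead` field, which the Kubernetes scheduler adds on top of the container requests. The field names are standard Kubernetes API; the numbers below are placeholders, not the product's actual values.

```yaml
# Hypothetical RuntimeClass with pod overhead: podFixed accounts for the
# QEMU process and the kata-agent that accompany every sandboxed pod.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
overhead:
  podFixed:
    memory: "350Mi"   # placeholder value
    cpu: "250m"       # placeholder value
```

This is also why sandboxed pods report higher memory usage in the console metrics shown later in the demo.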
B: What the operator does is create a third one — the one prefixed 50- is the one created by the operator. This basically carries the extension that we enable, which allows the MCO to take control and lifecycle the stuff for us. So if we get the MachineConfig now —
B: — and we look at the spec, all we have is the extension: sandboxed-containers. The MCO looks at that, understands what it needs to do, lifecycles the RPMs and all the packages, and gets to work. And we can see on our node that the desired config is the same as the current config.
B: Check the operators and search for sandbox — you're going to find one operator. Since I've already installed it, it's there in the cluster. The usual path: you go, you install it, and once it's installed you're going to see that it's there and has succeeded, and you're going to find that one KataConfig resource has been created.
B: And this is the status and the spec, as I showed you in the CLI. The spec says install on all nodes, because I didn't specify the kataConfigPoolSelector; had I specified it, I would have gotten a selection of nodes that alone get Kata Containers. So that is the OpenShift bits, and we're going to come back — but first, let's go and understand more about the stack of Kata Containers and what on that stack we use. All right, so back to business.
B: What we're going to do is have a look at the high-level stack of Kata Containers, what the end-to-end flow looks like, and which components of that flow we're interested in. From a high-level perspective, as I said, with runC containers — with normal containers —
B: — you have a host, you have a shared kernel, and containers are isolated, or sandboxed, via Linux namespaces, seccomp and all the goodies in the stack. The high-level runtime you're using is CRI-O; the low-level runtime is runC, and you're using, as I said, Linux namespaces and cgroups. Now, if we move to the isolated-kernel bits, we're using the same high-level runtime again, but we're changing the low-level runtime — that would be Kata Containers — and here's what that does.
B: It allows us to run VMs in a lightweight way, the same way we do containers. So if you run two workloads, each workload gets its own separate, isolated kernel. Now, one important bit to notice is that we don't share the kernel with the host — but we do use the same type of kernel as the host. So you get the same benefits as with Red Hat CoreOS — patched, with no CVEs and whatnot — and we use that in the VM as well.
B: This is the entire stack of Kata Containers. You start here as a user, and this is upstream — the upstream Kata Containers project. You have multiple options for which high-level runtime you use, and then there's the shim, and then there is your host, your kernel, your VM kernel — which is a separate kernel — and your Linux namespaces. We're not interested in that entire picture; we're only interested in the highlighted pieces: we are interested in CRI-O and we're interested in QEMU.
B: This is what we're using now, downstream, in our stack, and where you configure things. As a user, you start by creating a normal pod; you specify the runtime class name as kata. The kubelet watches and calls out to the runtime — in our case that's CRI-O. CRI-O implements shim v2, so it calls out to the shim v2, and a shim is simply an intermediary process that knows how to drive the lower-level runtime, whether it's runC or something else.
B: runC also has its own shim, conmon, but in our case it's the containerd-shim-kata-v2, and that basically calls out to the runtime and the agent, and creates the VMs and the low-level bits. There are certain places where you're interested in configuring things; what we configure is the CRI-O bit, the QEMU bit and the Kata Containers bit.
B: So basically, the stack that we're interested in is QEMU, CRI-O and Kata Containers. There is another version of this figure that you can look at later, which has annotated descriptions of all these components. If you're interested in the details, it describes each component in the stack, including the networking stack and so on.
B: So that's basically the overview from deeper down, and, as I said, if you're interested in the nitty-gritty, you can have a look at that figure, and I will also show a demo. All right, so now let's look at the demo to understand what's happening underneath. We stopped at the OpenShift bits last time — we discussed the MCO, OLM and the operator. Now we want to switch to the workloads.
B: Now I have installed the operator via the console, and I want to look at workloads. I'm going to show them from the console and also from the CLI. Okay, I want to move to the default namespace — by the way, something I forgot to say: k is an alias for kubectl, and kns is a kubectl plugin that I use that helps me move between namespaces.
B: So now I'm in the default namespace, and k pods is just kubectl get pods. Here I have two pods. How did I create these pods? Let's have a look at the normal pod: I created a deployment, which created the pod; I used the ubi8 image, and I did not specify a runtime class name on this container.
B: Now, if I look at the Kata pod, I have specified the runtime class name. The operator did all the work for me — all the nitty-gritty: it installed Kata, the RPMs, QEMU, the agent; everything is on the nodes. Now I can just, as a user, go and create that pod, specify the runtime class name on it, and off I go. So that's the configuration; let's see what's actually there.
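The only user-visible difference between the two pods is a single line. A sketch of the Kata one — the pod name and image are illustrative, not the demo's exact manifest:

```yaml
# Hypothetical sandboxed pod: the one runtimeClassName line is all that
# distinguishes it from the normal runC pod running alongside it.
apiVersion: v1
kind: Pod
metadata:
  name: example-kata-pod   # illustrative name
spec:
  runtimeClassName: kata   # routes the pod through CRI-O's kata handler
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
```

Delete that one line and the same pod runs as a normal container on the shared host kernel.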
B: Let's see — it's not full screen.
B: I took the container ID, used it to filter, and here I can see that it's running inside a QEMU process — so it's virtualized. Now, if I do the same with the other container —
B: — and just grep for it, you're going to find it's running conmon. As I said, that's the shim for normal containers: conmon is the shim that talks to runC and starts processes in the back end. So this is the normal container.
B: So you can see the difference here: I'm running a QEMU process to run my container, versus a conmon process as a shim for CRI-O to call out to runC and start my normal container. That's mainly the difference from the outside.
B: You're going to see a difference: this is a normal container, that's a Kata container pod, and that's the host. You'll find that the kernel parameters are different — we're using the same kernel version, but a separate kernel for each. When I say kernel version, I mean —
B: — I'm using 4.18 here, the same kernel version here, and the same version here. So I'm using the same kernel version, but it's a different, separate kernel — it doesn't share the kernel with the host. That's one more way you can identify or differentiate between a Kata container and a normal runC container.
B: All right, let's see what else we can look at. Yeah, as I mentioned earlier — I don't need these upper windows now — we can also have a look at the RPMs that got installed in the cluster as a result of the operator specifying the extensions for the KataConfig resource.
B: You can see it installed all the binaries I need to run. The configuration for CRI-O — let's have a look at that; it's important. Here I'm just configuring CRI-O; I'm telling CRI-O: listen, you have a handler with the name kata. If you remember, when we looked at the runtime class and at the handler named kata — that simply maps back to this handler in the runtime. So it tells it: yeah —
B: — you have a new handler; if any pod specifies that runtime class name, this is where you look and this is what you call. It calls out to the shim, and the shim takes care of bootstrapping and starting the agent, the VM, and all the things underneath. Another interesting thing that gets installed is, of course, the agent.
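A CRI-O runtime-handler drop-in of the kind being shown generally looks like the following TOML sketch; the binary path is an assumption for the example.

```toml
# Hypothetical CRI-O drop-in: registers a "kata" handler, so pods whose
# RuntimeClass names this handler are started via the Kata shim v2
# instead of the default runC path.
[crio.runtime.runtimes.kata]
  runtime_path = "/usr/bin/containerd-shim-kata-v2"  # assumed path
  runtime_type = "vm"
```

The handler key (`kata`) is the string that the RuntimeClass `handler` field maps back to — which is the mapping being described here.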
B: As I said earlier, we use the same OS image as the host, but we don't share the kernel — we just use the same OS image. For that to happen, we have a builder script that actually does it; that script is here — a script run by systemd that generates the root filesystem and maps to the kernel of the host.
B: Let me go back to the list. What else is interesting? So basically this is the RPM, but the RPM gets installed for us — you don't need to do any of these things; I'm just deep-diving so you understand the pieces. But I think that's it from a high level on the CLI. Now we can go back to the console to see how the workloads look. I'm going back to home, going to workloads.
B: I want to change the namespace — so these are the two workloads. This is the normal experience you get, and when I click here I don't see any runtime class; there's nothing here. That basically means, as I told you, we don't need to specify a runtime class for a normal pod. If I go back to deployments and pick the Kata one, I will find a runtime class of kata.
B: That's also one indication that you're referring to a runtime class that has a CRI-O handler — as it showed, kata — as the runtime.
B
Now you also have metrics, which is interesting: like with normal containers, you can look at memory usage and CPU. The reason you see, you know, somewhat high memory here is that pod overhead I mentioned, and that's the trade-off you get by running the container in a VM. If I look back at the normal deployment, or the normal pod running the same image, I would find that to be much less. But there's also an interesting fact: this is only visible...
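The pod overhead he mentions is declared on the RuntimeClass itself, so the scheduler can account for the VM's extra footprint. A sketch, with illustrative numbers (the actual values shipped by the operator may differ):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
# Extra resources charged to every pod using this class, on top of the
# containers' own requests, to cover QEMU, the guest kernel, and the agent.
overhead:
  podFixed:
    memory: "350Mi"  # illustrative value
    cpu: "250m"      # illustrative value
```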
B
Okay, I think we need to pick that one here. Yeah, you're going to see, you know, the same: the memory usage is the same, because this is measured from inside. And so, but we're...
B
You're seeing, you know, the overhead included as well, just so you understand the trade-offs and can better account for your workloads. All right, so that's it for the second demo. Going back to the slides now: the pipeline.
B
How are we planning to progress from here? As I said, we have not released yet, so we're very much looking for feedback, for you to try things out and let us know what you want. Things coming up your way: viewable metrics (this was actually, you know, an effort to get right), console awareness, so you see the runtime class. The first version of the operator will only run on bare metal.
B
That's something important to note: it's not going to run on, you know, virtualized infrastructure in the first releases of the product. Going through the releases, we're moving towards making the product more ready. We're enabling Kata-specific metrics: you saw Kata Containers consist of QEMU, the agent, and all these things you saw in the deep dive; we want to enable metrics for those.
B
And possibly expose more dashboards and more configuration options in the future, but that's, again, based on your feedback. So what we're looking for is for you to try out the product, let us know what you think, and please reach out; we're happy to help with that. Yeah, I'm finished. Thank you for listening, and the references are here.
A
Lots of references. I'm wondering if you could also link that interactive diagram; if you could just cut and paste and throw that link into the chat, people would be in for that. And there were a couple of questions, and they were good ones, but I didn't want to stop you in your tracks, to interrupt the flow, because you definitely had a flow there. And the deep dive, I think people are going to have to watch this a couple of times and read some of these references.
A
Okay, cool. And Preston's asking: are there any plans to make the Kata operator available during the initial cluster install, or do you expect this to stay a day-two op?
B
A good question. For now it's expected to stay a day-two op, and the reason, to me, is that it's not the main runtime. There are limitations when using VMs: for example, runc allows you to share host network namespaces; Kata is a VM, by design it's isolated, so you can't do that. So certain workloads require privileges which Kata Containers, by design, does not allow, which hopefully it helps with. So it could stay at day two, and I think it will stay as day two. It could also, like...
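As a day-two step, enabling the runtime boils down to creating a KataConfig custom resource once the operator is installed. A minimal sketch, assuming the API group used by the tech-preview operator (verify the field names against the shipped CRD):

```yaml
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
# Optionally restrict which nodes get the Kata runtime installed;
# without a selector, it targets all worker nodes, e.g.:
# spec:
#   kataConfigPoolSelector:
#     matchLabels:
#       kata: "true"
```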
A
Great talk, by the way. Now I'm going to have to watch it again, because quite a bit of it went right over my head, and it really makes me understand the Machine Config Operator: how much effort that engineering team and the folks working on it have been putting in over time, and why they're so busy.
B
Yeah, those folks did a great job, and I think, so, in the early designs of this thing we were not relying on the Machine Config Operator, but seeing how great it works, and that we can delegate that lifecycling to it, made our code base very simple. And I think the team spent a lot of effort to make sure we're in compliance with that operator, so kudos still to all the teams involved.
A
Yeah. So, Preston, if you want to turn your camera on and ask that question; I'll read out what his clarification was. I think he must work in the federal sector or something, because it sounds like a security issue. The idea was for secure environments where a portion of the default containers would need to be isolated as part of day one; SPAWAR, for example, would make use of something like this. I don't know what SPAWAR is, but it sounds like Star Wars. So, there you go.
B
In that case, you don't need to trigger that manually, and then the runtime will be available in the cluster. But still, after all, you have to, as a developer, choose and opt in to Kata as a runtime. Even if Kata is available on your cluster via OpenShift sandboxed containers, there's still a manual effort of you declaring the runtime, at least for now; that might change when we integrate more with developer workflows in the future. But for now there's this manual step that you need to take to specify the runtime for your workload.
E
Yep, okay, awesome. Yeah, that actually answered the question. When I was referring to SPAWAR as an example, I was identifying certain use cases where they would still need OpenShift as the base operating system in their disconnected cluster, but there are certain highly sensitive, classified workloads that they would run that would need to be sandboxed and isolated as part of a day-one knob.
B
Yeah, as I said, this is more opt-in. So for now we're not enabling automatic installation of the operator, but if you have automation, outside of this, that does that, it also works. But the idea is that Kata is not the main runtime, and this is the one thing that I want people to take home: it does not replace runc. So you enable it as an operator.
B
You could do that at day one if you automate it, but it's still a secondary runtime in the cluster. So you will not replace the existing one; it will be, like, a secondary runtime. And then at day one, if you automate it, you could specify kata on your workloads, and then all that, you know, sensitive data will run in containers that are isolated in VMs, or Kata Containers.
A
And Preston, you asked one other question, about CodeReady Containers: is this available there? Will Kata containers be accessible in CodeReady Containers for development work?
E
Yeah, ultimately it's the same market space; we're looking at clients that are looking to do development work within their Kata containers.
E
But some of these clients will be using CodeReady as their primary run base for their development teams, and I just wanted to ensure that the Kata containers are interoperable with CodeReady Containers, or whether there was some kind of special sauce that we need to look at underneath the covers.
B
At the moment, no; like, with 4.8, no. And we're looking into how to integrate, like, what's the best developer experience, because you also get tooling when you go to the developer console; you get links to, with devfiles and so on, that could integrate also things like CodeReady. But at a first stance we're not doing that, for simplicity reasons.
B
There are efforts to run OpenShift in different flavors, one of which is single-node, similar to CodeReady Containers, which we will be looking at maybe in the 4.11 timeframe, but not as a first goal. What we're looking for now is for people to give us feedback on whether the whole thing is feasible and likable, or usable. But definitely, yeah, it's something we are considering; not in the near term, though.
E
Gotcha, yeah. I don't want to take up all your time here, and I'll shut up after this, I promise, but I just wanted to state that I could definitely see a lot of good use cases for this, both in the security sector and in the financial sector. Both places would definitely have a high use case for this.
E
I'm actually primarily commercial, but I do on occasion cover some naps clients, and all of the clients that I've been working with to date, actually for the last two to three years, have been asking for something similar to what you just described. So this is awesome; I'm really looking forward to getting my hands dirty with this.
A
Awesome. Well, thanks, Preston. And Chris Short, if you unmute yourself; I might have to, you had a baby crying in the background before. A comment about runc, no?
D
Yeah, runc is still the underlying container runtime. But Preston, if you want to dive a little deeper into some of the product parts for this, let me know; just ping me, short at redhat.com, and, you know, I'd be curious to hear your use cases, and I'll get that forwarded along to our product management team.
A
Never better than actually having a customer that wants something; that would be cool. All right, folks. And, as Adel said at the beginning of this, he thought he could fill a full hour, and he did indeed, but it was incredible.
A
So, to be quite honest: I think this is a very important use case, and I can see a lot of people taking advantage of this new technology when 4.8 comes out. So we will put together a short blog post; I'll give you, Adel, all of his resource links there shortly, and we will definitely have Adel back again to talk about this some more. And Preston, if you, or anybody else who's listening, has an end user who's...
A
Looking for something like this, definitely reach out to any of us at Red Hat, but especially Adel, whose contact information, and how to reach him, we will post in the chat and in the YouTube description as well, so we can get some more feedback on this. But again, yet another wonderful new set of features coming with 4.8. Adel, we thank you for your time, and Chris for backing us up with great production services, so awesome sauce, and we'll all be back soon.