From YouTube: OpenShift Coffee Break: OpenShift Sandboxed Containers
Description
Get your espresso ready for the EMEA OpenShift Coffee Break as we welcome our special guest Christophe de Dinechin, Software Engineer at Red Hat, to talk about OpenShift Sandboxed Containers!
This functionality is based on the Kata Containers open source project and provides an Open Container Initiative (OCI)-compliant container runtime using lightweight virtual machines running workloads.
Twitch: https://red.ht/twitch
A: And welcome to today's OpenShift Coffee Break with my dear colleague, who is joining from Japan. Hello, everyone.
C: So I like my coffee cold and with cream, so I'm really bizarre. I'm Christophe de Dinechin, and I'm working on a product called OpenShift sandboxed containers, which is what we are going to discuss today. It's the OpenShift implementation of an upstream project called Kata Containers.
C: I hope that today we'll also have a chance to discuss some further applications down the road, like confidential containers. Otherwise, I've been working in the past on virtualization, 3D graphics, compilers and operating systems. So that's essentially my background.
C: Yeah, so actually you have to be careful about what you're talking about. I saw someone on Twitter saying that he had read about "café containers"; Kafka containers is a very different thing. That's when, if your containers are too fast, you have the option of sending a form in triplicate by fax to your cluster operator, and then he will start your container within six months. So that's a different product. Kata Containers is the idea of running containers inside virtual machines to provide a bit of additional isolation.
C: It lets you, for instance, configure your kernel to your taste and run things as root without risk to your host, and this kind of thing. So this is really a way to exploit lightweight virtual machines. It's a bit counter-intuitive, because if you remember, initially containers were seen as a way to avoid virtual machines, because virtual machines are heavyweight.
C: Yeah, so the problem with containers is that they run in the host operating system and exploit its resources and whatever isolation features it gives.
C: But then people start thinking: oh yeah, but I can bypass that, for instance intentionally in the case of privileged containers, which are often used, for instance, to install software on the host and other things like that. And sometimes you have no other choice: for example, if you want to access a physical device, like an SR-IOV network interface card or all these kinds of things, sometimes you need at least part of the configuration to be done using privileged software. And so with Kata Containers...
C: The idea is that you can run this kind of software in its own virtual machine, and that's now easily feasible thanks to Moore's law, because something that made little sense 10 years ago works today, when we have more resources. So now you can, for instance, attach a physical device to your VM and let the software inside control that physical device directly, while still having isolation enforced by the IOMMU and so on.
B: To clarify how this is executed: you said it allows you to run pods in VMs. But do you mean it's going to run the pod as a VM? Is there a one-to-one mapping for each pod?
C: Correct, exactly. Essentially the unit corresponding to the VM is the pod, and the rest of the deployment is relatively similar.
C: We use features like virtio-fs to expose the host file system to that virtual machine, so that, for instance, you can download your container image on the host, have the overlays, etc., and then that's transferred to the guest, and the guest sees that part of the host's file system in its own file system.
C: You can also attach things like block devices; you have other options for how you connect to your storage. Networking is the same thing: you have several options, but you can go through tun/tap devices, etc. So there are various ways to connect to the enclosing networking.
A: On the topic of containers versus virtual machines: now we are bringing the concept of the virtual machine onto OpenShift. Can you explain a bit better the main difference between having a virtual machine and an OpenShift sandboxed container, and why I should use one of these?
B: Yes, and maybe to expand on that: for example, what is the difference between Kata Containers and using something like KubeVirt, which also runs a VM, I mean a pod as a VM? What's the difference, and why is there this new initiative?
C: So that's a very good question.
C: The idea of Kata Containers is really to have virtual machines in the context of the container ecosystem; in other words, you're really running containers. Whereas in the case of KubeVirt, you're really running virtual machines, and you can run, for instance, Windows virtual machines, but you're orchestrating them with the OpenShift/Kubernetes execution environment. So the main difference is that in one case you're really running containers; that's the case for Kata Containers.
C: You just happen to run them in virtual machines, but that's sort of an implementation detail, whereas in the case of KubeVirt you're really interested in running a virtual machine as a virtual machine and using its benefits.
C: So if it's okay for me to share a screen, I have a slide deck.
C: I'm not sure if it's public, in the sense that I'm not sure I can share the link to it. But let me just try.
C: So this slide deck is a bit outdated, because it dates back to a presentation I gave at DevConf.CZ in 2021. But it's really an overview of what we are doing here. Essentially, what I just told you is what is shown on this slide, where we are trying to really benefit from the container ecosystem, and that means a new architecture for applications that is scalable, small, modern, etc.
C: You can provision components very easily, you can do software composition, and you have the other benefits of dynamic and automated orchestration. In a recent coffee break there was this idea that you can automate practically anything with OpenShift, and I think that's very true. But at the same time we want to get some of the benefits of virtual machines, and that includes kernel tuning and configuration, hardware-enforced isolation with things like the IOMMU, et cetera, and isolated access to devices. And in the future, that's the confidential containers approach.
C: We can benefit from things like memory encryption, where you can completely hide your guest memory from the host. So that's essentially the kind of benefit that we can get from doing things this way.
C: One part of the question, sorry, was who benefits from this. Some of the markets where we see traction include markets where you want to benefit from the isolation for hardware containment, and that includes, for instance, the telco ecosystem and these kinds of environments; and markets where you want isolation for legal or other reasons, and that includes banking, finance and these kinds of environments. These are the two areas where we see traction first.
C: So there are many things that you may want to put in VMs further down the road. That's a trend you see, for instance, in OpenShift: now we have these requirements about enforcing SELinux and enforcing slightly more restrictive policies. There is this trend that we are trying to secure things more over time, and so I suspect that in the future, Kata containers will be used more often because of that.
A: The tools that we already have in OpenShift, like the GitOps ones and also the image ones, and so on and so forth. So from a developer's perspective, you don't lose a lot of features that you would probably miss when you use...
C: ...just plain virtualization? Exactly. Okay, so if you know a little bit about how Kubernetes and OpenShift run inside, there is this thing called the runtime that you just mentioned, and that's essentially the part in charge of managing the containers while they run. The historical one is called runc; there is one called crun that is more efficient: it's rewritten in C and is much smaller.
C: It's much faster and uses fewer resources, so it's likely to become more commonplace in the future. And Kata has its own runtime.
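For readers less familiar with this mechanism: in Kubernetes, each such low-level runtime is exposed to workloads as a RuntimeClass object. A minimal sketch of what the kata class might look like; note that on OpenShift this object is created for you by the sandboxed containers operator, and the handler name must match what CRI-O on the node is configured with, so this is illustrative only:

```yaml
# Illustrative RuntimeClass; on OpenShift sandboxed containers this object
# is created by the operator rather than written by hand.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
# Must match a runtime handler configured in CRI-O on the node.
handler: kata
```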
B: This seems very nice in terms of, I would say, architecture and the type of features it can add. But do you maybe have a demo that you could show us, so that we have a better grasp of how things work?
C: Yes, so this is a cluster here that has Kata Containers running. Oh sorry, OpenShift sandboxed containers running. Maybe I should start with the console; it's probably easier to look at from a UI perspective. The way we actually deploy things is with something called the OpenShift sandboxed containers operator, which is going to manage a new runtime class called kata. And so, in order to be able to run a workload, you're going to simply add the runtime class kata to the workload description.
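As a sketch, adding the runtime class to a workload is a one-line change in the pod spec (the pod name here is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example     # hypothetical name
spec:
  runtimeClassName: kata      # the one line that selects the sandboxed runtime
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi   # any image works unchanged
```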
C: Can I make the text bigger? Yes? Well, if it's bigger, then we don't see much. Can you still see it here?
C: So if you look at the spec, everything looks pretty much like it does for anything else, with the difference of this runtime class name, kata, and that's really the thing that is going to make the difference. So let me just shut it down.
B: And so is this something that is added automatically by OpenShift, or do you have to tag it yourself, for example in your deployment YAML, or something like that?
C: Yes, you have to add that to the YAML file. You can do that either manually, or you can have tools that do it for you; you can have an admission controller, you can have various things that add it under specific conditions.
C: No, no, yeah, the one that is running is the one running. Okay, that's right, sorry, I got confused for two minutes in my file. And that's precisely what I wanted to show: if you do a diff between the nginx deployment and the nginx-kata deployment...
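The two manifests from the demo are not shown on screen here, but based on the description the only differences are the app name and the runtime class; a hypothetical excerpt of the kata variant (all names assumed):

```yaml
# Hypothetical excerpt of the "nginx-kata" Deployment from the demo;
# apart from the names, only runtimeClassName differs from the plain one.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-kata            # differs: the plain variant would be "nginx"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-kata
  template:
    metadata:
      labels:
        app: nginx-kata       # differs: label follows the app name
    spec:
      runtimeClassName: kata  # differs: absent from the plain Deployment
      containers:
      - name: nginx
        image: nginx          # identical in both variants
```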
C: Essentially, the difference is the name that I gave to the app, so that I can run the two at the same time, and this runtime class name. And so, in order to start it, I can start it from the console or I can start it from here. In that case I want to get one replica, and that's basically it. Okay, so there is this new warning that appeared in OpenShift 4.11, due to the new security context, but that doesn't prevent it from starting.
C: And it starts really quickly, yeah. It takes a couple of seconds more to start in some conditions, because we need to run a VM, but that's essentially all you need. And then, of course, on top of that you can add the services or whatever. The service itself doesn't need to change.
C: So if I look at the nodes that are running here, essentially you see on the roles that this one can run Kata, because the operator was deployed on it. Okay, so let me start with the installation itself, how you actually deploy that thing. So what you're going to do, besides the usual subscription, etc., is...
C: And so, well, this one is one that I take directly from some internal builds, so it has this additional element in the spec, but essentially you're deploying this, and that's going to give you... it's probably better if we watch it, yeah.
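The object being created here is the operator's KataConfig custom resource; a minimal sketch, with the internal-build extra field mentioned above omitted and the API version taken from the operator's published examples:

```yaml
# Minimal KataConfig; creating it makes the sandboxed containers operator
# install the kata runtime on worker nodes and register the runtime class.
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
```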
B: Yeah, but you have to enable the "show default projects" switch.
B: Sorry, sorry, does it require anything specific on the node, any specific configuration, or can it run on any? So it's running on CoreOS.
C: With the Node Feature Discovery operator you are able to do more fine-tuning of the kind of machines that you want to deploy this on.
C: This one is a bit tricky. We do most of our development with nested virtualization, and actually the cluster I just showed you uses nested virtualization, but it's not supported, and it's notably not supported by the various...
C: ...cloud providers. And so that means that at the moment you need to have your nodes on bare metal. Okay, we are working on something called peer pods. If you search for "peer pods", you're going to find some efforts in this area. The idea of peer pods is to use the runtime to start a VM not directly, by launching, for instance, a QEMU instance, as is the case today, but instead by requesting the VM from the cloud provider API.
C: Then you connect to it over a tunnel or something like that. This has been actively worked on for clouds, from, you know, VMware and others, but it's still work in progress.
C: It's also necessary for confidential containers and similar technologies, where nesting is simply not possible today.
C: That's a very good question, though. At the moment that's one of the weaknesses of the product, and we are working actively to fix it.
C: The kind of markets that we are thinking about would not want to run nested anyway, but yeah.
B: I think it makes sense, because if you look at OpenShift Virtualization, or things like KubeVirt, that's also something where we would recommend bare metal to run your VMs instead of nested VMs, because you will probably also have performance differences if you are using nested virtualization. So for me it doesn't seem like an issue, especially with everything that we've done to automate installation and upgrades of OpenShift instances.
C: So there are a number of important differences in how this runs, and most of the performance impact is actually not due to the overhead of virtualization, but to the fact that we run with a different configuration. To give a simple example: when you run a container today on a machine that has, for instance, 64 CPUs, then unless you specifically limit resources, it potentially has access to all 64 CPUs, even if only indirectly; for instance, while it's doing I/O, some of the I/O completions might happen anywhere on the 64 CPUs.
C: Now, when you run in Kata Containers, you're going to start a VM, and the VM by default starts with two CPUs and very little memory, and then, based on the spec that you send, we are going to hot-plug additional CPUs, additional memory, and so on. So initially you start with, for instance, one CPU and two gigs of memory, and then maybe you have something that requires 10 CPUs and 40 gigs of memory.
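Sketching that in pod terms: the resource limits in the spec are what drives the hot-plugging, so a workload like the one just described might be declared as follows (the pod and image names are hypothetical; the figures are the ones from the example above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: big-workload          # hypothetical name
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
    resources:
      limits:
        cpu: "10"        # CPUs hot-plugged into the VM on top of the default
        memory: 40Gi     # memory hot-plugged likewise
```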
C: So, in terms of risk, you have better control of resources, because now the workload is no longer potentially running wild, even if you don't put limits. But at the same time this means that if you just have transient bursts that could actually benefit from extra CPUs, well, they will be clamped by this number of CPUs, and the same with memory.
C: Another important aspect is that the virtual machine monitor itself, or the virtual machine, requires some resources of its own. That's why we start the pod with one CPU and two gigs of memory, just to get started: you simply can't boot Linux if you don't have a minimum set of resources.
C: The problem in accounting for that is that these 300 megs include, for instance, things like the file buffer cache and other things that are really useful for your workload, and they are now not shared: when you have a file buffer cache on the host, you share it with everyone else; in that case, it's dedicated to you. So in a sense it can give you back some of the performance you are losing, for instance, from the cost of this extra virtio-fs layer.
C: So what this means is that when we do performance studies, we have performance results all over the place; it's really a completely different configuration. The performance results you get can be better in a few cases and are worse in most cases, but it really varies. For instance, if you send very short network packets, but tons of them, that's typically not good in virtualization, and that's true for any kind of virtual machine, not just Kata Containers.
C: So this is really something that you probably want to measure. If you're considering using Kata Containers, you should measure to be able to evaluate how big your VM should be, how many resources you need to allocate to the workload, etc. So you need some tuning for that, and it may be completely different from running on a normal cluster.
B: But yeah, again, I think it's just a question of when you need the confidentiality, the necessity for the isolation. If it's mandatory, then there's no question: you just have to, as you say, correctly define the requirements to get good performance.
B: But if it's not necessary, then you probably just go with traditional containers.
C: One example to illustrate what you just said: one of the things we are working on, which is not completely working yet because of some issues with how Kata Containers deals with huge pages at the moment.
C: The use case we are working on is supporting DPDK and SR-IOV in these workloads, with virtual functions assigned from the host.
C: If you have the slightest container escape, or some part that runs a privileged container or whatever, you could actually configure the host the way you want and essentially steal the cards from your neighbor. So that's a bit of a risk. That's okay if all the workloads that run on your system are yours, but if you start wanting to run some customer workloads alongside yours, let's say you are in a telco environment near the edge or whatever...
C
There
are
cases
where
you
want
to
run
workloads
that
do
not
belong
to
you
and
you
want
to
make
sure
that
they
they
will
not
risk
damaging.
Your
main
feature,
your
main
function
of
transmitting
data,
so,
and
so
so,
in
that
case,
you
can
essentially
give
a
card
or
a
virtual
function
to
your
virtual
machine,
run.
Dpdk
inside
and
so
dpdk
is
user
space
networking
and
it
so
essentially
you
have
no
other
head
because
your
dpdk
drivers
are
going
to
touch
the
car
practically
directly
and
so
in
the
experiments
we
did.
C: We had something like less than two percent of overhead on some tests, doing bandwidth tests, etc.
C: Using that virtual function, you are saturating your 10-gig link, but it's yours, really only yours. So that's the good news about where we are going on the kind of features that we can bring behind isolation.
C: Now again, we had initially scheduled SR-IOV for 4.11. It's going to be late; we'll probably have to postpone it to 4.12 because of some unforeseen bugs we discovered in the way huge pages are set up in Kata Containers today, which is not the right way: you only get an insufficient number of huge pages in practice, so that doesn't work.
A: Christophe, you mentioned several times the words security and workload isolation. In order to have an effective but basic, secure way to run multi-tenant workloads across tenants on OpenShift, is there anything new that OpenShift sandboxed containers provides us from a security perspective?
C: In a more multi-tenant way, one of the things that is difficult to do today, for instance, is if you have tenants that have specific requirements in terms of kernel tuning: for example, they want to pass specific parameters to the kernel, they want to resize whatever. If they want to do that today, they need to dedicate nodes to it...
C: ...in a classical OpenShift environment, and that tuning may actually be detrimental to other applications, so maybe that's uncomfortable: they essentially have to dedicate nodes to this particular tenant and can't run any other workload on them.
C: This is something that has explicit support in Kata Containers: for instance, you can specify annotations that are dedicated to Kata Containers, and they will change the boot parameters, or, I don't know, the options that you pass to the virtiofsd daemon, all these kinds of things. So you have multiple tweaks you can use along the way like this to really...
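As a sketch of what such per-pod tuning can look like: the upstream Kata runtime recognizes annotations under the io.katacontainers.config prefix. The specific keys and values below are illustrative, and which of them a given cluster honors depends on its runtime configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tuned-workload        # hypothetical name
  annotations:
    # Upstream Kata annotation keys; availability depends on what the
    # cluster's runtime configuration chooses to enable.
    io.katacontainers.config.hypervisor.default_vcpus: "2"
    io.katacontainers.config.hypervisor.kernel_params: "hugepagesz=2M hugepages=512"
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
```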
C: And that's only this VM; it doesn't impact anybody else's workload, including other VMs, so you're really free. It's the same thing, for instance, with eBPF: eBPF is something that is typically restricted on normal nodes, because there is too much chance of getting it wrong or degrading the performance of others.
C: But you can safely send any kind of eBPF program in your VM. It doesn't impact anyone else, so you're free to get that capability and use it.
C: There were some issues with user space versus kernel in our case, and the end result was that the capabilities were there but could not be named, so you could not actually specify them; there was a bug there. I've not checked recently, but upstream that works.
C: So there is this project derived from Kata Containers called confidential containers, and, as you can see, it's really active.
C: It became a CNCF sandbox project very recently. A sandbox project for the CNCF means the beginning of a project; it has nothing to do with the sandboxing in OpenShift sandboxed containers. Confidential containers will take advantage of the features that are arriving in various CPUs, like AMD SEV in EPYC CPUs, or Intel TDX, or similar features on s390x, which allow you to essentially encrypt the memory of your VM, so that someone on the host dumping the whole VM memory, for instance, cannot see passwords or other secrets that may be in the VM memory at the moment.
C: So what this allows is that in a multi-tenant configuration you can have two competing tenants on the same system. They are not going to be able to read each other's data, which is important; but most important, they don't even need to trust the cloud provider they are running on, so they can essentially run on an untrusted host.
C: So in that case you would have a runtime that is confidential. You may have to add some secrets as well, and then essentially your workload deploys safely and cannot be tampered with, even by the host. I gave a talk about this; let me check. That's another talk at DevConf, so you can look it up on the web.
C: If you look for this talk, there is a YouTube video that talks about it. And essentially the problem, as I said, is that the sandboxing that exists today really goes one way: it protects the host from containers, but it doesn't protect the containers from the host. And so, if you want to consider your host as hostile, then you need to find a way to protect against that, and that's essentially what confidential computing does. And it's more than just encryption of memory.
C: It's also things like making sure you cannot corrupt the state by injecting data into it, and it's also this attestation service that I was telling you about, so that you cannot replace your valid nginx image with some alternate version or whatever, yeah.
C: We can extend that to also include the container image, but for the container image we typically use something else, namely the cryptographic signatures that already exist, so we don't need more than that for the container itself. It's really about checking that the kernel you run on is trusted and that the hypervisor you run on has not been tampered with.
C: So we are trying to make that transparent. It's not very easy, and you can see the complexity of the process here: when you start your VM, you have to, for instance, send a quote. The attestation agent is going to send a quote based on measurements it did; that quote itself has to be encrypted.
C: So, as you can see, it's a relatively complicated process, and we are moving step by step on this. What you are seeing here is already completely outdated; that was really the first step in the process, but now we use something else.
A: I don't know if you have anything to add related to confidential computing, which is great news. I just read something about confidential containers, but I was not aware that at Red Hat we are going to work on this.
C: So what I showed you here is only upstream; as I said, it's just a sandbox project inside the CNCF, so that's the very beginning of the process. But I'm talking about it here because I want to invite anyone who is interested in these kinds of topics to join us and contribute. There's quite a bit of work to do, and, as you see, there's already a large team working on it upstream, with companies including IBM, Ant Group, Alibaba, etc.
C: Many of the contributors are actively using Kata Containers in production today and want to benefit from these new features. And I see that as a really important use case in the future of Kata Containers, and therefore I suspect that, probably in two years or something like that, we're going to start talking about it in the context of sandboxed containers.
A: Yeah, sure, and there are a lot of domains where confidential computing, or confidential containers, would probably be really useful. I'm involved in some projects dealing with cloud sovereignty, which is specific to EMEA, and there are a lot of security levels we need to deal with: data at rest, data in motion, and data in use. Data in use is one of the areas where there is still a lot to do, so it's definitely a great topic.
A: I am finally seeing a question coming from one of our attendees; the name seems to be "sh eris", I don't know how to pronounce it, sorry.
B: It's just a guess, but it's something like "cherries".
C: Yes, so I quickly alluded to that earlier: today, the option that we give to deploy the operator itself is that you set a label on your nodes, and you have to set that manually, using whatever script you want. Now, the standard way of doing that is with the Node Feature Discovery operator, and that lets you automate it, but the integration with OpenShift sandboxed containers is not complete yet. And so, yeah, for GPU workloads...
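To sketch the labeling approach: the KataConfig resource can restrict installation to labeled nodes through a pool selector (the field name is taken from the operator's API; the label key here is illustrative and would be whatever you applied to the nodes):

```yaml
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  kataConfigPoolSelector:
    matchLabels:
      custom-kata-pool: "true"   # illustrative label, set on the nodes by hand
```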
C: So this can be done, but it's a bit tricky. Part of the problems described in the upstream documentation are related to which kernel to pick to have GPU support, and that's something that doesn't apply in the case of sandboxed containers.
B: This is maybe unrelated, but in the case of GPUs for traditional containers, as you mentioned, there's the Node Feature Discovery operator that automatically adds labels to the GPU-enabled hosts, and there are operators that take advantage of that, like the NVIDIA operator, things that allow you to automatically configure the nodes for GPU support and such things.
B: So I would say, I don't know how this relates to the Kata containers world.
C: That's not completely sufficient, because you really need to pass specific boot options to the kernel to make sure that you can actually do what you want. And part of the tricks are also that there are some steps that I'm not sure we automate today; for instance, it's possible that the way you present the device inside the guest doesn't use the same name, so you essentially need to rename it.
C: All right, so as far as I know, that's not automated for GPUs, so you probably have to do a bit of hand-holding of the workload there.
C: Yes, so let's assume that's the machine config question, about when we deploy the operator. So that's a piece that is also on the web, let me just... One of the key components, or contributions, is called the sandboxed containers operator. Let me share my screen again to show you.
C: Yeah, so that is the sandboxed containers operator source code, so you can look at how this works inside. I don't know where this is written exactly, but essentially you activate this operator with a KataConfig custom resource, as I explained at the beginning.
C: So when you do this operator deployment, it will actually install software on the various worker nodes.
C: We use the extension mechanism for that, and essentially we're installing our own copy of QEMU plus some other things, so some specific RPMs get installed on your nodes. And there are reboots involved, so the machine config operator is dealing with some of these aspects for us. Okay, so there is a relationship with the MCO, but that's mostly for this aspect of things.
C: After that, I don't think it gets involved anymore. Okay.
A: So I think we are up to the hour. Thanks a lot, Christophe.
B: Yeah, but other than that it was very informative. I wasn't really well versed in Kata Containers yet, but I think I have a better grasp now; I hope it's the same for our viewers as well. So thanks a lot.
A: Just a very small one will be provided to you, and you can start to deploy your application. A reminder for the next session: next week we are going to have a guest from MongoDB, and we are going to speak about Red Hat OpenShift Database Access, which is a new service.
A: A new feature that would allow you to connect your application to a DB like MongoDB in a simpler way. And again, thanks a lot for your time. Thanks, Christophe, the session was great. I learned a lot of things, mainly when you spoke about confidential containers. I will take a look at something on Google just to learn more. So thanks a lot again.
B: Yeah, and please do not forget to check the OpenShift.tv schedule; there is a lot happening there as well.