From YouTube: CNCF SIG Runtime 2020-04-16
A: So we've got Eric, so thank you, Eric, for agreeing to be our scribe. We are open to anybody else who wants to be a scribe, so feel free to reach out to any of the chairs if you're interested. We're also open to having somebody else facilitate this meeting; I've been facilitating it for the last few months, but if anybody is interested, feel free to reach out. I think with that we can do a quick stand-up.
A: Based on any feedback, we said that we would go back and start working on some of these items. I think we've already started working on some of them; I've actually been reaching out to some of the different related communities. I've reached out to some of the container runtime and Firecracker communities and to the WebAssembly communities; there are a lot of different projects there, so there are multiple communities, but I've reached out to a few of them, and we've done some of that work.
A: But there is still more; we want to expand to an even broader scope of different projects and try to identify gaps that the CNCF projects that exist right now are not actually filling. I think Quentin's not around yet, so maybe Alena and I can talk about it later and see what some of the items are that we want to tackle in the future.

But if you have anything that you want to add to that roadmap, please add it, and then we can prioritize; basically, these are some of the things we want to do first. I think another one that was very interesting is talking about some of the MLOps type of workloads and tools. Some of those are not necessarily in the foundation yet, so we're looking out for projects or technologies that can fill that gap.
A: I'm just reading the chat right now. Yeah, the stand-up is just to check in and basically talk about whatever you want to talk about, now or later in the meeting. If you are an attendee, please add your name to the list, and also a reminder that we have a repo; if you are a participant and want to contribute in any way, please add yourself to the repo.
A: So typically what happens is that the meeting gets recorded, and then later there's a review by the SIG, and a document gets posted to or checked into the GitHub repo. Later the TOC members have a chance to see the presentation, and based on that they decide to either sponsor or not sponsor the project. Then, for entry into sandbox, the requirement is three sponsors from the TOC. So yeah, go ahead. Okay.
I: Cool, yeah. There's a GitHub pull request against the TOC repo with a document that covers a lot of details about the project and the proposal, so if anyone's interested you can check that out afterwards and find out more; of course, we're going to cover a lot of what's in there today. But give me a second here to share the screen.
I: Great. So the project is called Metal3 ("metal cubed"), and the high-level overview: what is this? It is a project to provide Kubernetes-native bare metal host management. Getting into it: managing bare metal hosts, provisioning and deprovisioning bare metal hosts, is not a new problem space; obviously people have been provisioning bare metal hosts for a while. So why would we want to build another one, or another approach to this? Starting there, a big one is really the API. We wanted to explore this problem space, the problem space of managing bare metal hosts, with a declarative API.

So what would that look like? We would have a declarative API for managing bare metal hosts, and we did that by creating custom resources with Kubernetes. We also wanted to do something that was designed to run within a Kubernetes cluster, so self-hosted, and one reason for that is, again, managing this software the way you would manage your other applications. But really a major one was also the footprint required for a bare metal Kubernetes cluster.

We didn't want to have to require something off to the side to run the bare metal host provisioning stuff. Part of that is that in certain environments, some bare metal Kubernetes cluster use cases would be a large cluster in a data center, while other use cases would be very small clusters, like edge computing use cases, where requiring another host is really unacceptable.
I: So we needed to address this problem space with something that really is self-hosted in the cluster, and then we also wanted to be able to have a cluster manage its own infrastructure. That was another aspect of this, and that gets into the fact that what we've built is not just something that manages hosts.

I'll get into a bit more detail in a moment, but it's not just something that provisions hosts; we're also looking at provisioning Kubernetes clusters, building on some tooling out of one of the Kubernetes SIGs, the Cluster API project, and integrating with that to allow a cluster to manage its own hosts so they become additional nodes in the cluster. So that's why we did it.

Those are some of the problems we're trying to solve. The first major component of this is the bare metal operator. The slide shows some detail, but I'll talk through it. There's a component on the bare metal slide right here: this bare metal operator is something that runs in a cluster, and it manages a custom resource called the BareMetalHost. The BareMetalHost is the declarative interface for a bare metal host's state; it has details about the hardware and what state you want it to be in.
I: So you update this to describe how you'd like a host to be provisioned, and there are some secrets that contain some key details. One of them is the config drive secret. If you've ever used any cloud compute API, there's always a user data section where you pass data that will be given to the host when it boots up the first time, so that a tool like cloud-init or Ignition can initialize it on first boot. We support that same interface.

The way that we do that with bare metal is that we write that information to a dedicated partition, so the first time the new operating system boots up, cloud-init or Ignition or whatever tool you'd like reads that data from that partition and initializes the host.

So what does this do? We have this resource, and maybe you update it to say a host should be provisioned; under the hood we're making use of Ironic. This is hidden as sort of an implementation detail, but we're reusing existing bare metal host provisioning technology here, Ironic, and it knows how to contact the management controller on a host and boot up a special ramdisk. This ramdisk knows how to download the operating system image that you've decided you'd like provisioned to a host and write it to the correct disk. It will also write your user data onto it, the config drive partition like I said. So that's kind of a high-level view of what we've built, and a quick overview of the API.
I: These samples are sort of cut down from the slides, but they give you an idea of what the API looks like. This is a BareMetalHost custom resource. In the spec of this there are some key things. One of them says bmc: that's information about the management controller on a server. The management controller is what we talk to, sort of the out-of-band management for a server.

We can use that to do power management, to turn the server on and off or control boot settings, and so this is key for doing management or automatically triggering provisioning of the host. We need to know the MAC address: if we're doing PXE-based provisioning, we need to be able to recognize the host when it shows up on the provisioning network. Then the consumerRef: this interface and this API can be used for any reason that you want to provision a host.
I: You might just need to do generic bare metal host provisioning for whatever you want, but it's also designed to be integrated with layers above it. For example, if you're using the Cluster API integration, which we'll talk about in a few minutes, then we'll have a reference to the resource that claimed this BareMetalHost when it's provisioned; the sample here is showing a Machine from, say, the Cluster API group.

So we would have a reference to the one claiming this bare metal host. The image section in this API is the operating system image that you said you want provisioned to that host, and then userData is the data that something like cloud-init or Ignition would consume, so a reference to where that is stored. And then there is the status section of a BareMetalHost.
I: Another thing that we do when we bring one of these hosts under management is that the ramdisk we boot up, which knows how to download an image and write it to disk, also knows how to inspect the hardware. It gathers as much detail about the hardware as it can and sends it back out, so we can store it on this resource. This is heavily condensed; there's quite a bit of hardware detail we collect, about CPUs, memory, network interfaces, storage, anything we can find, but this is sort of a sample.

It shows here a bit of info about the CPU, a little bit of info about a network interface, and how much RAM there is, and then in the status section we also have a provisioning state. In this case this host has been provisioned, which shows you which image has been provisioned to that host.
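To make that walkthrough concrete, here is a rough, illustrative sketch of what such a BareMetalHost resource can look like. The field names follow the metal3 API as described above (bmc, bootMACAddress, consumerRef, image, userData, plus the inspected hardware and provisioning state in status), but the values, API version, and exact layout here are hypothetical and cut down; they are not copied from the slides.

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-0
spec:
  online: true
  bmc:
    address: ipmi://192.168.111.1:6230       # management controller endpoint
    credentialsName: worker-0-bmc-secret     # Secret holding the BMC username/password
  bootMACAddress: "52:54:00:aa:bb:cc"        # recognized during PXE-based provisioning
  consumerRef:                               # set when a higher layer (e.g. Cluster API) claims the host
    apiVersion: cluster.x-k8s.io/v1alpha3
    kind: Machine
    name: my-cluster-worker-0
  image:
    url: http://images.example.com/centos8.qcow2      # OS image to write to disk
    checksum: http://images.example.com/centos8.qcow2.md5sum
  userData:                                  # cloud-init / Ignition data, written to the config drive
    name: worker-0-user-data
    namespace: metal3
status:
  hardware:                                  # filled in by ramdisk inspection
    cpu:
      count: 40
    ramMebibytes: 131072
    nics:
      - name: eth0
        mac: "52:54:00:aa:bb:cc"
  provisioning:
    state: provisioned                       # which image ended up on the host
    image:
      url: http://images.example.com/centos8.qcow2
```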
I: So that's our declarative API for managing bare metal hosts, and on top of that we integrate with the Cluster API project. I will turn it over to Maël to discuss this in a bit more detail.
E: Sure, thank you. Yes, so basically Cluster API is a project from SIG Cluster Lifecycle, and the idea is that you would be able to manage your clusters, your Kubernetes clusters, declaratively using the Kubernetes API. This slide is just a breakdown of the Cluster API project, the main idea behind it, if you're not yet familiar. You would have basically a bootstrap cluster, or a management cluster, where the Cluster API components run, and then the user would be interacting with it, creating CRs.

Those CRs would actually represent the target clusters that this user wants to deploy. Under the hood, Cluster API actually interacts with different cloud providers, so AWS, Azure, Google Cloud, any kind of cloud provider. When we came to look at this, we thought: well, this is really cool, to be able to manage clusters like this, and we wanted to extend it so that we would be able to actually deploy the clusters not on a cloud provider but on actual bare metal nodes, on physical hardware.
E: So for this, sorry, can you change the slide, please? Yeah, great, thanks. For this, basically Cluster API defines a set of resources, for example Cluster and Machine, that represent the cluster, and it allows providers to bring their own objects, an equivalent of each: an AWSCluster for a Cluster and an AWSMachine for a Machine. That is the idea behind it. So what we did to integrate with Cluster API is that we created those machine and infrastructure-specific CRDs, Metal3Machine and Metal3Cluster, with the Cluster API provider for Metal3, which is actually a set of controllers that reconcile those objects. The core difference between what we've been doing and what you can find in the AWS or Azure providers is that we do not have, of course, a cloud provider API that we could use to start the machines.
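As a rough sketch of the pairing just described: a generic Cluster API Machine delegates its infrastructure to a provider-specific object, here a Metal3Machine, which the provider's controllers reconcile by claiming a matching BareMetalHost. The manifests below are only illustrative; the names, API group and version strings, and bootstrap details are approximations for that era of the project, not taken from the slides.

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: my-cluster-worker-0
spec:
  clusterName: my-cluster
  bootstrap:
    configRef:
      kind: KubeadmConfig                          # how the node joins the cluster
      name: my-cluster-worker-0
  infrastructureRef:                               # provider-specific object, here from metal3
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
    kind: Metal3Machine
    name: my-cluster-worker-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: Metal3Machine
metadata:
  name: my-cluster-worker-0
spec:
  image:                                           # OS image the claimed BareMetalHost will be provisioned with
    url: http://images.example.com/centos8.qcow2
    checksum: http://images.example.com/centos8.qcow2.md5sum
```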
E: So this is the logic under this Cluster API provider for Metal3. If you can go to the next slide, please; yeah. I just put here a simple example of what we can achieve. Basically, you can consider that each of the squares is actually physical hardware. We would start a first node with the bootstrap cluster; this can be achieved by using a specific image, for example. Then we deploy our Cluster API and Metal3 components, and then:

We would be able from there to deploy a target cluster with, for example here, a master node and a worker node, and once this is done, we can use a Cluster API feature to move all the CRs to the target cluster and then remove this bootstrap cluster, so that the target cluster is self-managed. But this is currently work in progress in our project. So we can go to the next slide.
I: Yes, so some of the future work. As he was just saying, the cluster pivoting is on this list; this is where you have this bootstrap process, you start with this bootstrap cluster, but then you move the components into the resulting cluster. That's particularly relevant for those smaller-footprint use cases.

Machine remediation is another one: detecting that there are problems on hosts and being able to automatically try to do things to repair the cluster. That would commonly be just trying to reboot hosts to get them to recover, or taking them out of service if necessary. There's also more detailed management of bare metal in different ways, so automatically creating RAID volumes during the deployment process is another thing, or different firmware aspects, so managing BIOS settings during deployment is another example.
I: A lot of those things are just some samples of stuff that's future work, or work in progress rather. There's a website, metal3.io, with lots of info. There's a development environment: if you have a single host, we can set up a development, test, and demo environment using virtual machines, and we set up the different bits of software so that we manage those virtual machines just as if they were bare metal hosts. It's great for giving it a shot.

Also, there was a KubeCon talk that included some demos; it's linked from the blog on the website, so that's another resource to find out more if you're interested. I wanted to close with a few community and project highlights now that I've given an overview of the project. We started it at the beginning of last year, so it's a bit over a year old now, and we do have production deployments happening this year. The code is Apache 2 licensed, and it's all on GitHub. As for contributors:
I: There are several repos, but these are the two biggest ones, the primary ones where new code is being developed, and you can see how many individual people have contributed to each. There's a list of the companies that the contributors represent so far, and of course the two of us here are from two of the companies on that list. As for the way we communicate as a project community, we have a mailing list that we use.

We have a channel on the Kubernetes Slack that we use, and we hold biweekly sync meetings to catch up with each other on where we are. Elsewhere on the internet we've got our website, a Twitter account, and a YouTube channel; the YouTube channel has videos of our meetings, and things like demos go there as well. So those are just some highlights, and with that, thank you very much for your attention, and I wanted to open it up for any questions or discussion you may have about what we've done.
C: A couple of questions. The first question is about the maintenance and the upgrade of the provisioned hosts: how is it different from the initial provisioning? What are the stages, if, for example, I want to vertically scale it?
I: I'm sorry, do you want to scale a cluster, or are you asking about managing a specific host? Is your question cluster-specific? Yeah, I mean, you can reprovision a host at any time. Just talking about the bare metal operator part, not the Cluster API part of it, the way the interface works is that you can have a host provisioned or deprovisioned at any time. When you deprovision it, it can also do cleaning, so it'll go in and wipe the disks for you before it puts the host back into the inventory of available hosts, and when you reprovision it, it's the same process. The provisioning approach is doing whole-disk images, so an image looks like a cloud image would for a cloud.
C: And is there an interface for installing additional software on the provisioned hosts?
I: I'll give a two-part answer for that and then see if Maël has some additional comments. The BareMetalHost API has a user data interface, so that would be something to be processed by cloud-init or Ignition. You could include in there: run these commands the first time the host boots to install this additional software.

For example, when I'm doing testing I commonly use the generic CentOS cloud image, just the generic one that's distributed by CentOS, and then pass in additional data to say: install an SSH key for this user, or any software I want installed right away. It supports that interface, so the data you pass is written to a special partition, and when that generic cloud image boots up the first time, it reads that and will install whatever you asked for.
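A minimal sketch of the kind of user data just described, assuming cloud-init and a generic cloud image; the user name, key, packages, and command below are purely hypothetical. In Metal3 this content would be stored in a Secret that the BareMetalHost's userData field references.

```yaml
#cloud-config
# Hypothetical first-boot configuration: add an SSH key and install extra software.
users:
  - name: centos
    ssh_authorized_keys:
      - ssh-rsa AAAA...example user@example.com
packages:
  - tmux
  - jq
runcmd:
  - echo "first boot complete" > /var/log/first-boot.log
```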
I: You could also have a secret that just says: when the host boots up, tell the host to go pull its configuration from somewhere else. It could be a stub that says, go hit this web service to pull down what you're really going to do. That's actually what we do; that's how we have this integrated with OpenShift, and that's how a lot of our configuration works.
A: Yeah, another question: the management cluster, that's a requirement, right? If you want the cluster you manage to be the same cluster that is the management cluster, would you be able to do that, if you want to have bare metal nodes for just that one cluster? What is the recommended architecture? Do I always have to have the management cluster, or can you just have bare metal providers for multiple clusters?
E: So I can maybe answer this one. You can basically have both approaches. You could have a management cluster, which is a bit more chunky, that would then be used to deploy multiple target clusters if you wish, but our main goal is to have self-managed target clusters.

It depends. If you have access to the proper networks, you could do it with minikube, meaning that at the moment there is a requirement that you need to have layer 2 connectivity between the bootstrap cluster and the target cluster. We are working on trying to lift this by using a specific feature of the BMCs so that we could do it over a layer 3 network, but that's not the case yet, so you need to have this network requirement fulfilled.

So if you have your laptop or whatever connected on the layer 2 network with your target cluster, then you can use that as an ephemeral node, but usually that's not the case. That's why, in most of the projects that are using Metal3, for example Airship, or what we're doing at Ericsson, we actually create an ISO image that we boot on the bootstrap node with everything included in there.

So it's a self-contained image that just starts a whole standalone Kubernetes cluster on one of the hardware nodes, and then we use that to provision the target cluster, and once we have pivoted the resources so that the target cluster becomes self-managed, we scale up the cluster to take the node that we previously used as bootstrap and reuse it in the target cluster.
I: Yeah, there are different management controller protocols, IPMI being sort of the least common denominator one, but things are moving more towards Redfish, which is a newer standard, and then there are also a lot of vendor-specific interfaces that we can support. All of them have authentication, and we make use of it: you store the authentication details in a secret and reference that secret from the BareMetalHost object.
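A minimal sketch of that credential wiring, assuming the metal3 BareMetalHost API described earlier; the address format, names, and values here are hypothetical examples, not taken from the talk.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: worker-0-bmc-secret
type: Opaque
stringData:
  username: admin                # BMC username
  password: not-a-real-password  # BMC password
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-0
spec:
  bmc:
    address: redfish://10.0.0.5/redfish/v1/Systems/1   # could also be ipmi:// or a vendor-specific scheme
    credentialsName: worker-0-bmc-secret                # points at the Secret above
```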
A: Yeah, I was just thinking more about some of the newer things like Nitro from AWS, where they have this enclave on the machines so that only someone with a specific fingerprint can access that machine, preventing people from accessing the lowest level of your infrastructure, basically your firmware.
I: That's interesting. I mean, the extent of the authentication we can support right now is a username and password for the management controller; anything more sophisticated you can't do yet, and I don't know of anyone who is exploring any of the more sophisticated access control. That's what we do right now, and then you typically put that on an isolated network; ideally you would put it on a network that's not reachable by any of your workloads on the cluster, for example.
A: And then on the components: you have two components, the Cluster API component and also the operator, right? Are those two components part of the project, or are they separate? Because I think the Cluster API component is more related to the Kubernetes Cluster API. Or is it a bundle?
I: Okay, so yeah, those are the two. If you were to look at our GitHub you'll see that there's a repository for each of those components, and then there's another one that sets up a development and test environment, and then there are others that are things like container images and that sort of thing. But those two components are standalone Kubernetes controllers that you'd run in your cluster. The first one, the bare metal operator, is very focused on provisioning bare metal hosts, but it's sort of a more generic bare metal host provisioning interface: you can provision hosts with whatever operating system you choose, for any purpose, using that interface.

The Cluster API integration is a layer on top of that. Cluster API provides the generic controllers, and then you have to integrate them with a particular type of infrastructure platform, and that's what we provide here. And since that's so tightly related to and integrated with our bare metal operator controller, we have them under the same project, but those are the two key things, and you don't have to use both. You can use just the bare metal operator, for example.
A: But of course the Cluster API integration is the popular thing that you'd use on top of it, right?
I: Exactly, and that's why they're architecturally separate, because generic host provisioning is a problem to solve on its own, without specifying what you're provisioning or for what purpose. And then if you happen to be provisioning Kubernetes clusters, that's of interest to us and we have an additional component you can use for that.
I: On Ironic: we built the API and the behavior we want as the declarative interface, and to do the lower-level provisioning aspects we integrated Ironic. Within the code there's sort of a plug-in layer where you can plug in different provisioning systems that can fulfill a set of needed operations.

Ironic is where we started. We had a lot of experience with it, it has a ton of features, it knows how to talk to a bunch of different interfaces, including a number of vendor-specific ones, and it's got a pretty good community with participation from a lot of hardware vendors, so it got us going pretty quickly. Architecturally, we kept that plug-in layer maybe to leave it open to exploring different options, or to performing certain operations with something else, that sort of thing. Yeah, we use Ironic, and we typically have it deployed in the cluster as well, or even within the same pod. I could talk more about how we use it, and we use it in kind of a pretty unique way, but that's right, that's a key component we depend on.
F: So this is Sarah. I had a couple of questions, and apologies if they might have already been answered, but I just wanted to understand: the clusters you are provisioning are full-on bare metal hosts, right? It's nothing like VMs? I just wanted to clarify that. And did you integrate with other configuration management tools like Chef and Puppet, or is it just, strictly speaking, bare metal?
I: Yeah, I mean, our use case is full bare metal hosts, things that you put in a rack and have no physical access to. That's our primary use case. We have some development environment stuff that's based on virtual machines, but the whole point of the project is bare metal hosts for on-premise use cases.
F: And then you mentioned also using ISO images, so I was just wondering about the OS: is it big or is it thin, and is there a desire to make that pluggable? It seems like with Kubernetes especially you have OSes that are very much trimmed down, so I'm wondering if there's one preferable bare metal OS. I imagine sometimes with vendors pushing on the hardware side you've run into a lot of challenges, so I'm curious to see if there's an area there with drivers and so on.
I: In terms of operating systems, at the bare metal host provisioning layer it's completely operating system agnostic. You just need to be able to point it to a disk image, and that could have any operating system in it. It needs to be compatible with the hardware you're trying to deploy it to, but it doesn't really care. Once we reboot, the host tries to boot off the image you gave it, and if you give it something bogus we can't fix that, but in any case, it's operating system agnostic.
E: In the development environment that we have, we are able to provision clusters with both CentOS and Ubuntu images, and internally in Ericsson we've also done it with SLES. It basically just depends on how you install the different components that are needed, like kubeadm, kubelet, and Docker, or whatever CRI you're using, so it can basically run on pretty much any OS.

Then, of course, some will be better adapted to some hardware than others, but that's the choice of the user. It's just about providing a disk image of that specific OS and then changing a bit the installation parts on the Cluster API side, like what executables you install and how you deploy them.
I: So I'll speak to this and then Maël can cover part of it too. At Red Hat we've adopted this to be part of our bare metal solution for OpenShift, so you can optionally automate provisioning bare metal Kubernetes clusters with OpenShift, and we're working with customers right now to use this for their deployments. So that's our take on it. I can't call out specific customer names; I can just say that we are working with them.
E: I can probably also add that we at Ericsson are of course using this internally, but on top of that there is a project called Airship, led by AT&T, that is making full use of Metal3 to deploy on bare metal. The goal of that Airship project, which is under the OpenStack umbrella, is to deploy OpenStack clusters for 5G networks, and they are using Metal3 under the hood.
B: Quick note: I'll drive the slides and try to change them as we go. So, I'm Renaud, I work at Nvidia, I'm a software engineer there. I wanted to present a bit of the Container Device Interface: I'll give you some background on device support, I'll give you some use cases, and I'll talk briefly about what the Container Device Interface is.

Where does it come from? It's not just Nvidia; I mean, it's how we thought about it, but we're definitely open. Very quickly about me: I'm part of the group that originally built the device plugin interface in Kubernetes, I maintain the Nvidia device plugin for Kubernetes, I maintain a stack to support devices in different runtimes, whether it's Docker, Podman, or Singularity, I've interacted with many runtimes, and more recently I've been working on OCI hooks to help device support.
B: I think the first point that most people want to address is: why is it that docker run or podman run with the option --device /dev/mydevice is not sufficient?

You might need to expose IPCs, for example X.org or vendor-specific IPCs, you might need to expose files from the runtime namespace, or even change some proc entries. More generally, you might want to perform capability checks: is this container going to run with this device? You might want to perform some runtime-specific edits; what you do in a Linux container might be very different from what you do with a VM-based runtime. And you might want to apply some device-specific knowledge.
B: If you look a bit at some of the third-party device support that is out there, it is a very fragmented space. Kubernetes supports device plugins, Docker has its own plugin mechanism, Podman has a concept of hooks that allows you to run OCI hooks on a container, Nomad has a concept of device plugins but it's different from the Kubernetes one, and again Singularity has its own concept of plugins.

Singularity is an HPC runtime, and you could go on with many different runtimes; CRI-O also has its own mechanism. So why is there a need for a specification? Generally, the user experience is not very consistent. You won't even get the same user experience if you're using Docker directly versus Kubernetes on top of Docker, so you don't get the same user experience between the runtime and the orchestrator, even though they're using the same runtime.

Plugins can't easily be moved from one runtime to another, even when it should be very straightforward; for example, having a plugin for Docker and having a plugin for Podman is not something that can be done easily. As vendors, we end up either with a maintainability problem, spending time re-implementing a feature for different runtimes, or resorting to hacks. I'll present some of the use cases from different vendors on the next slide, and where some of them are at today. For example, for Nvidia:
B: We ended up writing a wrapper around runc, so that we basically hijack the OCI spec. Well, hijack is not really the word, but when Docker passes the OCI spec to runc, we inject our hooks into that spec. For other vendors, what this fragmentation means is that you just exclude some runtimes from your platform. So, going into some of the use cases.
B: One use case is FPGAs. Basically, they need to mount additional device nodes and they need to reconfigure the FPGA with the correct function. One of the requirements is that they don't want the container itself to be reconfiguring the FPGA, because that would be a security risk for them, and from what I've gathered they currently mostly use CRI-O and Kubernetes to inject OCI hooks; I don't think they have a runtime-specific mechanism to do that for Docker, other than just passing the right arguments on the command line.
B: Another one is Mellanox; their devices are used, for example, in deep learning, and they have a specific, interesting use case where they definitely need to mount device nodes, but they also need to mount user-space libraries. Their specific case is that when you install the Mellanox driver, the driver is also going to install user-space components, libraries, and because they don't provide a backwards compatibility guarantee, when they give you driver 1.0 you can only use the 1.0 libraries to talk to driver 1.0; there's no stable versioning across releases.

An example would be that an updated driver they provide is 2.0. What that means is that you can't bake the 1.1 libraries into a container, because if you were to move that container to another machine that is at 2.0, the container would run, but the calls that you make into the libraries would fail against the driver.
B: Another use case, which I'm definitely more familiar with, is the Nvidia one, where we provide a stack to help with GPU integration, and there are a couple of things we need to do. For example, we mount device nodes and mount user-space libraries; we have the same problem, we don't provide compatibility guarantees across versions, so for a container to use a GPU we need to mount libraries that were installed with the driver on the host.

We need to mount some UNIX sockets for specific components, such as persistenced, the daemon that keeps the driver loaded at all times, and MPS. We need to update the proc entries; for example, say a user wanted to only expose one GPU out of the eight GPUs he or she has on the machine, we would need to hide the GPUs that are visible in proc, that is, the other GPUs. And we might want to perform compatibility checks between the container and the host.
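The proposal itself is on the later slides, which the talk stops short of here, but to make the use cases above concrete, this is a rough sketch of what a vendor-written device description could look like under an interface of this kind: it declares the device nodes, host library and socket mounts, and hooks that a runtime should apply. The format is hypothetical, loosely modeled on what CDI specifications later looked like; none of the field names or paths below are taken from the talk.

```yaml
cdiVersion: "0.3.0"                 # hypothetical spec version
kind: "nvidia.com/gpu"              # vendor/device class
devices:
  - name: "gpu0"
    containerEdits:
      deviceNodes:                  # device nodes to expose in the container
        - path: /dev/nvidia0
        - path: /dev/nvidiactl
      mounts:                       # user-space libraries and sockets matching the host driver
        - hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
          containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
        - hostPath: /var/run/nvidia-persistenced/socket
          containerPath: /var/run/nvidia-persistenced/socket
      hooks:                        # e.g. compatibility checks at container creation
        - hookName: createContainer
          path: /usr/local/bin/vendor-device-hook
```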
B: We can take the discussion to the next meeting, or continue if people want to stay around. I think I've given the gist of the use cases; the next few slides are mainly about presenting the solution that we came up with, so depending on what people want to do, I'm happy to either continue or wait until the next meeting.
A
I
think
yes,
it's
your
call,
mostly
I,
think
maybe
I
recommend
you
know
talking
about
briefly
and
talking
about
it
briefly
in
the
next
meeting
and
and
then
then
we
can
jump
into
maybe
a
discussion
or
something
like
for
a
few
minutes
right.
So
because
you
know
you
will
get
more
eyeballs
during
the
meeting.
I
am
so.
A: I think it's relevant here. If it's specific and more around workloads and how you can use it to facilitate workloads, then it is more of a fit for this group; but if it is something that has to be bought into and implemented at the Kubernetes level, it would be more fitting elsewhere. But yeah, we've talked about it before, so you know.