From YouTube: Commons Briefing #60: Virtual Multitenancy and Running Non-Container Workloads on OpenShift
Description
Enterprise complexity yields a new set of challenges as companies mature in their use of Kubernetes and OpenShift. Providing virtual multitenancy on OpenShift allows organizations to share Kubernetes clusters between multiple production applications and/or between development, test, and production. Sharing clusters with non-containerized applications creates a different set of issues altogether. Univa’s CTO, Fritz Ferstl, will demonstrate how Navops Command provides the advanced scheduling and policy framework allowing sharing of OpenShift clusters across teams and for the deployment and execution of non-containerized workloads in a Kubernetes context.
Guest Speaker: Fritz Ferstl, CTO – Univa
A
This works in generic Kubernetes as well, but we're really pleased to have this approach explained to us, and we're going to learn all about how their offering, Navops, helps make it a reality. So without further ado, I'm going to let Fritz start us off, get us introduced to what Univa has been doing, and let him take it away.
B
Thank you, and I hope you can hear me; thanks for having us. So, jumping right into the first slide here, just by way of introduction, very briefly, who we are. I'm Fritz; first of all, I'm the CTO at Univa. I've been around the block for quite some time, pretty much always focused on workload and resource management in distributed environments, and more recently I have been focusing my attention on container orchestration, so mainly Kubernetes.
B
We are actually giving this talk together, and Stefan has been an engineer with us with a pretty long tenure as well. He has been working for some time on our core technology, which I will briefly introduce a little later, and he has now switched over to working on our container-facing product that is called Navops. He has been instrumental in integrating Navops with OpenShift and also in building the mixed workload support that we will be seeing in the demo a little later today.
B
Also, very briefly, by way of introduction, who Univa is. We are really focused on allowing customers to use large shared infrastructure for any type of workload, be that containerized or not containerized. We have offices based in Chicago, in Canada, and in Germany, and we really focus on enterprise customers; Fortune 500 companies are really mainly our customer base, of which I have a bunch of logos here on the next slide.
B
So let me first jump into what we're doing in container land, specifically in Kubernetes land. (I keep getting two slides ahead, sorry for that.) We have been creating a product suite which we call Navops, and Navops Command, one of the products in that suite, is our cornerstone product in that space. It really, again, as we usually do, focuses on workload and resource management.
B
Then
we
also
to
drive
that
further
provide
the
ability
to
run
mixed
workloads
by
that
we
mean
containerized
in
non
containerized
workload
on
exactly
that
same
infrastructure
again
to
drive
utilization
and
make
it
easier
to
integrate
into
existing
environment
or
to
migrate
from
existing
environment
into
container
facing
architectures,
and
we
also
manage
scarcity
with
it.
By
that
again
we
mean,
if
you
have
different
applications,
different
workflows,
different
projects,
teams
that
compete
for
resources,
and
there
is
not
enough
to
do
all
of
them.
B
We have been talking to customers who are in the process of adopting Kubernetes big time, or OpenShift in most cases, actually, when you talk to commercial customers. Some of those customers, for instance, were planning to create many, many dozens of OpenShift clusters for different projects and for the different stages of those projects: dev, test, production stages and so on.
B
I
mean
you
always
have
idle
resources
in
those
classes,
and
the
consequence
is
that
the
overall
environment
will
probably
have
a
pretty
poor
utilization
and
what
we,
the
type
of
functionality,
that
we
provide,
allows
to
consolidate
those
clusters
and
drive
up
utilization.
Actually,
the
larger
50%
utilization
that
I
have
here
on
the
slide
is
very,
very
conservative
in
the
environments
that
I
was
talking
about
before,
where
our
core
technology
is
being
used.
B
Miller
said,
you
know,
hundreds
of
course
we
actually
see
regularly
utilization
rates
of
eighty
percent
in
some
cases
way
above
ninety
percent
and
that's
necessary
I
mean
if
you
have
environments
that
big
they
may
cost
100
million
dollars
total
cost
of
ownership.
So
a
few
percent
of
utilization
make
a
big
difference
there,
some
of
the
unique
capabilities
that
our
solution
provides.
So,
first
of
all,
something
that
isn't
really
currently
in
any
product
as
far
as
I
can
see
in
the
community
space
is
that
we
prioritize
workloads
and
that's
pretty
much
automatic.
B
So
by
way
of
getting
the
work
load
properly
submitted
to
your
system
and
advertising
certain
things
to
our
scheduler,
we
prioritize
that
automatically
and
dynamically.
We
have
a
sophisticated
policy
system
as
we
shall
see
in
a
slide
that
follows
shortly
and
we
provide
mixed
workloads,
support
so
for
containerized,
non
containerized
workloads,
and
then
we
have
a
whole
set
of
functionality
that
makes
it
easier
to
use
the
system.
B
So,
for
instance,
of
web
UI
to
drive
the
policy
configuration
and,
of
course,
the
CLI
and
the
REST
API
as
well,
our
workloads
are
affiliation
or
our
decision
making
is
affiliation
based.
So,
as
I
mentioned
before,
when
you
submit
your
work
load
properly,
then
we
will
know
where
it
comes
from,
for
instance,
the
owner
who's,
the
project.
Is
there
a
certain
workload
template
that
you
want
to
use
and
that
will
be
used
in
the
automatic
policy
decision
making?
We
do
support
any
Google
latex
distribution.
Our
solution
is
totally
plausible.
B
I'll
talk
about
that
also
in
a
minute,
and
you
can
actually
reconfigure
the
policy
system
on
the
fly.
There
is
no
need
to
stop
any
components.
We
start
them.
If
you
have
made
a
change,
it's
just
really
changing
some
policy
configuration
in
the
web
UI
for
instance,
and
immediately
the
changes
will
take
effect
so
very,
very
simply
stating
what
now
ops
command
is.
It
is
a
kind
of
replacement
for
the
cognitive
scheduler.
B
That's
not
a
hundred
percent
correct.
Really.
What
happens
is
that
we
do
install
metal
to
command
side
by
side
with
the
governator
scheduler.
You
can
use
the
stock
scheduler
and
you
can
use
in
parallel,
cuneta
scheduler,
so
for
us
or
is
the
command
schedule
and
I
was
commenced,
scheduler
for
different
types
of
workloads,
but
you
could
also,
if
you
wanted
to
completely
replace
the
religious
scheduler
with
notes
command.
That
is
a
configuration
option.
In
both
cases,
we
probably
would
recommend
to
them
side
by
side.
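Running two schedulers side by side is standard Kubernetes behavior: each pod names the scheduler that should place it through the `spec.schedulerName` field, and pods that omit the field go to the default scheduler. A minimal sketch, with `navops-command` as an assumed scheduler name (not Univa's documented one):

```python
def pod_for_scheduler(name, image, scheduler="default-scheduler"):
    """Build a plain Kubernetes Pod manifest as a Python dict.

    Pods carrying spec.schedulerName are ignored by the stock scheduler
    and picked up by whichever scheduler watches for that name.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "schedulerName": scheduler,
            "containers": [{"name": name, "image": image}],
        },
    }

# A batch pod routed to the alternate scheduler.
batch_pod = pod_for_scheduler("report-job", "busybox", scheduler="navops-command")
web_pod = pod_for_scheduler("frontend", "nginx")  # stays with the stock scheduler
```

Serialized to YAML, such a manifest is submitted with kubectl like any other pod, which is exactly why the two schedulers can coexist.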
B
A few more words about the solution as such, from a technical point of view. First of all, Navops Command is itself a service: it's a multi-container application. It ships basically as a pod that you can just start, or actually two pods that you can just start, by curling a YAML file and getting it created in the Kubernetes cluster. It has a couple of components (I have an architecture slide on that later on), and it interacts basically entirely with the Kube API server.
B
So
there
isn't
really
anything
that
it
does,
that
you
know
doesn't
fit
a
regular
coordinator
system.
Hence
it
is
plausible
and
as
an
end
user,
you
continue
to
interact
with
the
cube
API
server,
so
you
submit
your
job,
the
queue
control
you
make.
A
change
cube
control,
the
only
thing
that
you
do
specifically,
if
you
want
to
make
policy
changes
to
nevels
command
than
what
you
would
do
is
you
would
use
our
web
UI,
CLI
or
rest
api
to
do
so
inside
of
animals
command.
B
Here's an architecture diagram of how it works. As you can see, on the left side there is the Kube API, and the end user interacts directly with it. Navops Command itself also interacts with the Kube API: it basically registers for events, it gets, you know, any changes that happen to objects in the Kube API and integrates them with its policy system, and if there is a scheduling decision to be made, then it makes it in the same way as the stock scheduler does.
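The loop described here (watch the API for unscheduled pods addressed to you, then record a binding) can be sketched over plain dict-shaped objects. A real scheduler would use the API server's watch stream and binding subresource; the pod shapes and scheduler name below are simplified assumptions:

```python
def schedule_pending(pods, nodes, my_name, pick_node):
    """Bind every pending pod addressed to scheduler `my_name`.

    pods: list of dicts shaped like Kubernetes Pod objects.
    pick_node: policy callback choosing a node (or None) for a pod.
    Returns {pod_name: node_name} for the bindings made.
    """
    bindings = {}
    for pod in pods:
        spec = pod["spec"]
        unscheduled = "nodeName" not in spec           # not yet placed
        mine = spec.get("schedulerName") == my_name
        if unscheduled and mine:
            node = pick_node(pod, nodes)
            if node is not None:
                bindings[pod["metadata"]["name"]] = node
    return bindings

pods = [
    {"metadata": {"name": "job-75"}, "spec": {"schedulerName": "navops-command"}},
    {"metadata": {"name": "web-1"}, "spec": {"schedulerName": "default-scheduler"}},
]
# Trivial placement policy: always take the first node.
decisions = schedule_pending(pods, ["node-a", "node-b"], "navops-command",
                             lambda pod, nodes: nodes[0])
# decisions == {"job-75": "node-a"}; web-1 is left for the stock scheduler
```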
B
As an admin, you would be interacting with the Navops Command system through its web UI, CLI, and REST API. The CLI is modeled pretty much after kubectl, so there's quite a familiarity there, and then, of course, the web UI does its own thing. We will see the web UI in action later, when Stefan runs the demo. Now, a little bit of an overview of the policy system (we will also see it later in the web UI), but let me first start at the upper right: the workload affiliation is a core part.
B
I've mentioned it before: we have extended, through the labeling and annotation properties of a Kubernetes manifest, a YAML manifest, the ability to express these things. You can express who the owner of a workload is, which project it belongs to, and which application profile (which is a kind of workload template) it belongs to.
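Such affiliation metadata rides on ordinary Kubernetes labels; the key names `owner`, `project`, and `application-profile` below are illustrative stand-ins, not Navops' actual keys:

```python
# A pod manifest carrying hypothetical affiliation labels that a policy
# engine could key its decisions on.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "backend-batch-1",
        "labels": {
            "owner": "stefan",
            "project": "backend",
            "application-profile": "batch",
        },
    },
    "spec": {"containers": [{"name": "main", "image": "busybox"}]},
}

def affiliation(pod):
    """Read the (owner, project, profile) triple a scheduler would use."""
    labels = pod["metadata"].get("labels", {})
    return (labels.get("owner"), labels.get("project"),
            labels.get("application-profile"))
```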
And once you have done that, you have given our Navops Command scheduler information that it can use in the context of all these other policies that you see on the screen. We also do have the default policies that the stock Kubernetes scheduler has, meaning pack and spread when it comes to node selection.
But we do have additional policies, for instance one to maximize utilization, which allows you to not only just, you know, distribute workloads, but to actually look at what the current utilization of resources on a node is and what type of requirements the workload has, and then look for the best fit for the workload, so as to really create balanced workload placement and good performance for those workloads. But that's just node selection.
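A best-fit node choice of this kind can be sketched as classic bin packing: pick the node whose free capacity most tightly fits the request. The capacity numbers below are made-up millicores and MiB:

```python
def best_fit(request, nodes):
    """Return the node name whose free capacity most tightly fits `request`.

    request: {"cpu": millicores, "mem": MiB}
    nodes: {name: {"cpu_free": millicores, "mem_free": MiB}}
    """
    best, best_slack = None, None
    for name, free in nodes.items():
        if free["cpu_free"] < request["cpu"] or free["mem_free"] < request["mem"]:
            continue  # node cannot hold the workload at all
        # Leftover capacity after placement; smaller is a tighter fit.
        slack = (free["cpu_free"] - request["cpu"]) + (free["mem_free"] - request["mem"])
        if best_slack is None or slack < best_slack:
            best, best_slack = name, slack
    return best

nodes = {
    "node-a": {"cpu_free": 4000, "mem_free": 8192},  # mostly idle
    "node-b": {"cpu_free": 600, "mem_free": 1024},   # nearly full
}
# The small job lands on the busy node, keeping node-a free for large work.
choice = best_fit({"cpu": 500, "mem": 512}, nodes)   # "node-b"
```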
We have, of course, additional policies beyond that. On the one hand there are policies about workload priority, and there is a bunch of sub-policies there, for instance the proportional share policy, which allows you to subdivide your cluster into multi-tenant partitions.
B
There's an interleaving policy, which allows you to maintain certain ratios of replicas following policy guidance. There is ranking, which simply allows you to, you know, rank applications by application profile or by their source. And then there is workload isolation and quotas, for instance runtime quotas and access restrictions. We will see some of these in action in the demo later, so, just by way of overview, here is a screenshot of one of them.
B
That's
the
proportional
sharing,
and
that
is
a
screenshot
of
the
web
UI,
and
it
shows
how
you
can
more
or
less
graphically
subdivide
your
environment
into
in
a
different
partitions
and
do
that
even
in
a
hierarchical
fact
fashion.
So
you
can
subdivide
a
partition
again
and
again,
depending
on
how
you
want
it
to
be
done.
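Hierarchical proportional sharing can be sketched as a weighted tree in which each level splits its parent's share, so the leaves always sum to 100%. The tree shape and weights below are invented for illustration:

```python
def effective_shares(tree, parent_share=1.0, prefix=""):
    """tree: nested {name: weight | subtree with optional "_weight"}.

    Returns {path: fraction of the whole cluster} for every leaf.
    """
    out = {}
    weights = {k: (v if isinstance(v, (int, float)) else v.get("_weight", 1))
               for k, v in tree.items()}
    total = sum(weights.values())
    for name, node in tree.items():
        share = parent_share * weights[name] / total
        path = f"{prefix}/{name}"
        if isinstance(node, dict):
            # Recurse into the sub-partition, dropping the weight marker.
            sub = {k: v for k, v in node.items() if k != "_weight"}
            out.update(effective_shares(sub, share, path))
        else:
            out[path] = share
    return out

# Invented partitioning: 25% to CI/CD, 75% to apps, and apps split 3:1
# between batch and services.
shares = effective_shares({
    "ci-cd": 25,
    "apps": {"_weight": 75, "batch": 3, "services": 1},
})
# shares == {"/ci-cd": 0.25, "/apps/batch": 0.5625, "/apps/services": 0.1875}
```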
B
Maybe one word before we dive more into the demo use case itself: is this all really necessary, all these policies and so on? And our point is: yes, it absolutely is. We do live in a world where resources are finite. I mean, sometimes clouds give you the illusion they are infinite, that you could always add another resource and another resource, but the fact of the matter is: if nothing else is limited, then at least budgets are, and if you are using on-prem resources, then there are hard limitations.
B
Anyhow, we see it ourselves in our own development. Some of the grassroots things that we have done first with clouds started out totally nimble, but once you let that go for a couple of months and you look at the bill that you're getting from the cloud providers, you go: what, are we really paying that much? And you immediately have to think about, you know, utilizing your resources better, because otherwise your spending gets out of hand.
B
So
if
you
recognize
that
there
is
a
restriction
of
resources
which,
basically,
at
the
end
of
the
day,
you
know
comes
down,
you
have
just
a
certain
amount
of
servers.
Then
sharing
resources
is
actually
a
key
thing
and
then
you
need
to
automate
the
type
of
sharing
this
otherwise
you're
constantly
in
the
business
of
we
reconfiguring
your
environment.
So
that's
why
we
have
been
creating
policies
that
can
automatically
maintained.
Sla
is
resource,
partitions
and
similar.
B
And
a
prioritization
that
you
have
there
in
at
the
heart
of
those
policies
is
really
very,
very
important.
You
always
have
dynamic
changes,
I
mean
you
could
have
something
that
is
crucially
important.
Now
could
be
less
important
when
some
other
work
flow
comes
up.
You
know
that
has
a
higher
priority
at
that
moment,
so
you
always
have
to
stay
on
top
of
those
things
and
there's
really
no
way
to
do
that
other
than
automating
information.
B
Also,
you
want
to
give
as
much
information
as
you
can
to
a
scary,
really
don't
want
to
withhold
things
so,
for
instance,
coordinators
provides
this
notion
of
submission
quotas
or
access
quotas.
These
are
good.
These
are
sometimes
useful,
but
they
basically
hide
information
from
the
scheduler
that
additional
work
really
would
want
to
run,
and
you
know
our
belief
is
that
you
really
have
to
get
the
schedule
as
much
information
as
possible.
That's
why,
for
instance,
we
have
run
time
quotas,
so
you
don't
have
to
hide
something.
Scheduler
will
make
sure
that
we
run
this.
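The difference between a submission quota (reject work at the door) and a runtime quota (admit it, but queue the excess where the scheduler can still see it) can be sketched like this:

```python
from collections import deque

class RuntimeQuota:
    """Cap concurrent work without hiding the backlog from the scheduler."""

    def __init__(self, limit):
        self.limit = limit      # max concurrently running jobs
        self.running = set()
        self.queued = deque()   # visible backlog, not a rejection

    def submit(self, job):
        if len(self.running) < self.limit:
            self.running.add(job)
        else:
            self.queued.append(job)  # a submission quota would reject here

    def finish(self, job):
        self.running.discard(job)
        if self.queued:                              # promote queued work the
            self.running.add(self.queued.popleft())  # moment capacity frees up

quota = RuntimeQuota(limit=10)
for i in range(14):
    quota.submit(f"job-{i}")
# 10 jobs run, 4 wait in a queue the scheduler can reason about.
quota.finish("job-0")  # job-10 starts automatically
```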
B
Also, if you give the scheduler all of this information and you look at what the scheduler's decision-making process is, then you can analyze why certain decisions were made and why, maybe, you are running up against a wall. If, for instance, a certain type of service cannot get enough replicas running, then looking at why the scheduler had to make those decisions may reveal that you're lacking some critical resources; maybe you need to buy additional resources or allocate them in a cloud.
B
So
really
you
can
do
some
capacity
planning
if
you
have
a
sophisticated
schedule
and
you
can
inspect
what
type
of
decisions
it
is
making
are
now
coming
to
the
demo
use
case
and
mixed
workload.
So,
first
of
all,
the
environment
that
we
will
be
looking
at
is
an
open
shift
cluster
and
we
have
command
installed
on
top
of
it.
That
manages
some
containerized
services
and
cleaner
containerized
applications,
and
you
know
that
gives
you
all
that
nice
policy
control
that
I
was
talking
about.
But
what
if
you
want
to
run
non
containerized
applications?
B
You
could
of
course
run
them
at
the
side
of
this
environment
and
maybe
split
your
cluster
and
have
some
part
running
container
right
services
and
another
part
running
non
container
s
work.
But
the
problem
is
of
course,
first
of
all
again,
you
would
be
creating
silos
and
inefficiencies
and
then
also
if
those
non
containerized
workloads
need
to
interact
with
the
containerized
workloads
then
will
benefit
from
sharing
the
same
networking
setup,
same
storage,
solutions,
etc.
B
So
what
we
have
created
is
a
version
of
our
universe
with
engine
technology
that,
as
I
mentioned,
is
integrated
with
many
many
thousands
applications,
hundreds
of
workflows
and
it
actually
can
also
handle
container
rice
workloads,
but
but
really
more
like
patchwork
loads.
In
that
context,
not
services.
But
the
idea
is
you,
you
run.
B
B
B
Then, in terms of the workloads that are being run, there will be a number of development jobs and tasks for dev and test purposes, and then there will be service tasks and also batch applications; those batch applications will be non-containerized. The demo environment is going to be hosted on AWS and, as mentioned before, it runs OpenShift, and then Navops Command is deployed as a scheduler into it. With that, I hand over to Stefan, and let me just stop sharing.
C
Can you see my screen? Yes? Perfect. Thank you, Fritz, and welcome to the demo of the mixed workload support of Navops Command on top of OpenShift. As we heard, Navops Command provides virtual multi-tenancy on top of OpenShift: it allows you to share one cluster among multiple applications or even services. We are also able to run containerized and non-containerized workloads in the same environment by running our Univa Grid Engine as a specific service for these non-containerized applications on top of OpenShift. This is what I'm going to demonstrate to you right now.
C
So,
let's
start
directly
into
directly
above
level
command
list.
Local
support
allows
you
to
have
running
stimulus,
Tennessee,
and
here
you
can
see
the
containerized
application
and
services
that
are
running
on
the
classic.
Currently,
so
we
have,
some
productions
of
services
is
replicated
two
times.
C
Second, we have a backend batch application, currently scaled to something like eight instances, and we have the Univa Grid Engine report-processing service, currently scaled to one instance. There is also the Univa Grid Engine UI, through which you can view the Grid Engine workload on the cluster: here on the right side you see the non-containerized jobs running inside the Univa Grid Engine service that is currently scaled to one pod in the OpenShift cluster.
C
As already said, Univa Grid Engine is a leading workload management solution and is integrated with thousands of applications and hundreds of workflows. In our demo we are running non-containerized applications inside it, as we have updated Grid Engine itself to run as a containerized application. As you can see here, we have these jobs, 75 and 76, running inside the Univa Grid Engine service.
C
So now let's have a look at how this environment is managed through Navops Command and how we can modify the virtual multi-tenancy policy through it. Let me switch to the Navops UI. What you see here is an organizational breakdown of how the entire cluster's resources are divided. It is reflected in the so-called proportional share policy of Navops Command.
C
The full circle of the diagram represents 100 percent of the cluster's resources. At the highest level we split between CI/CD, the Kubernetes-integrated continuous integration and continuous delivery development, and the apps side, which is responsible for the production workloads. For CI/CD we have the dev and test work, which has roughly a third of the overall cluster resources. For simplicity reasons, we are not running any dev or test jobs as part of this demo.
C
So let's focus on the apps side, which owns the larger amount of the cluster. It is further split between batch and services work, with the bigger share going to batch, and the batch resources we have configured mostly to be consumed by jobs in the backend project, while only roughly a quarter is dedicated to running the non-containerized workloads that get managed by Univa Grid Engine.
C
While we are at it, I can show you a couple of other policies you can change in the Navops web UI. You might have noticed that there are only ten pods running at a time across the cluster: the Grid Engine pods belong to the Grid Engine project and the backend jobs belong to the backend project, both of them are part of the mixed-workload namespace, and for each namespace you can set a so-called quota, and I have set one.
C
I created a limit which restricts the Grid Engine and the backend projects combined to run only 10 pods at a time. When we now get back to our view, we will see that even though many more executions have been submitted, the quota is the control that caps them: within that limit, the resources are still divided according to the proportional share, and if you have a look at the UGE queue, the additional work is held back until capacity frees up.
C
We also have the so-called maximize-utilization policy, which tries to automatically balance the workload placements, and with it you can specify entities which should get packed and entities which should get spread in your OpenShift cluster. In Navops Command you also have the possibility to create predefined profiles, the so-called application profiles. In this example I created a profile for a web server as well as one for, for example, a database, and the database profile is configured with properties so that pods that belong to that profile only get dispatched to nodes that have at least five gigs of memory.
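A profile rule like the database example can be sketched as a simple node filter; the profile names and the five-GiB threshold follow the demo, while the data shapes are assumptions:

```python
# Hypothetical application profiles; only the database profile demands
# a minimum of 5 GiB of memory on the target node, as in the demo.
PROFILES = {
    "web-server": {"min_mem_gib": 0},
    "database": {"min_mem_gib": 5},
}

def eligible_nodes(profile, nodes):
    """nodes: {name: free memory in GiB}; return names meeting the profile."""
    need = PROFILES[profile]["min_mem_gib"]
    return sorted(name for name, mem in nodes.items() if mem >= need)

nodes = {"node-a": 2, "node-b": 8, "node-c": 6}
db_nodes = eligible_nodes("database", nodes)     # ['node-b', 'node-c']
web_nodes = eligible_nodes("web-server", nodes)  # all three nodes
```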
C
So with this I conclude my brief demo of the mixed workload support of Navops Command on top of OpenShift. Again, what we installed was a single OpenShift cluster that ran standard containerized workloads along with Univa Grid Engine as a report-processing service for non-containerized legacy workloads. On top of that, I also demonstrated how Navops Command can be used to manage the cluster's resources.
A
Let me take a second to share a quick reaction: this was really good for me. I was thinking of Univa as something more of an on-premise offering, and the thing that I got out of today was that that's not all it is; it makes a great hybrid offering as well. It's been pretty interesting to see it.
A
The scheduler that comes with Kubernetes is pretty basic, and the things that you guys have done with Grid Engine over the years give you a lot of background and experience. Integrating this, and bringing in, you know, the actual things that we really need to make an enterprise offering on top of Kubernetes (the workload priority is pretty awesome), is great. I'm so pleased that we've gotten this integration done, and I'm looking forward to seeing more.
A
And Stefan, thank you for the demo. Especially in the beginning, one of the things that I really liked is that you're doing the myth-busting about the unlimited capacity of the clouds, and on premise as well: there's always a limit, and someone always has to buy more resources or upgrade things. This is quite an interesting solution, and on your customer list I saw quite a few OpenShift customers too, which was quite an eye-opener.
A
So thank you again for coming today, and if you have any questions, please reach out to them directly or hit them up on the Slack channel. Also, all these guys, plus a few more from Univa, will be at the OpenShift Commons Gathering in Berlin at the end of the month, on March twenty-eighth, and if you're interested in coming, reach out to me and I'll see if I can get you into it. It's co-located with KubeCon.