From YouTube: OpenShift Administrator’s Office Hour (Ep 1)
Description
Join Andrew Sullivan, Chris Short, and the occasional special guest for an hour designed specifically to help the OpenShift admins out there. Come with your questions, leave with solutions. https://openshift.tv
D
Yeah, so Prometheus is essentially a metrics collector that we use in OpenShift to gather metrics from pods and nodes, so we can keep an eye on, you know, CPU usage and memory usage and all that. Along with that, we have another piece of software that we deploy with OpenShift called Alertmanager. Prometheus has alerts set up in it, so basically, when certain thresholds are hit, alerts will be fired, like if you're getting low on memory or storage or something like that, or a node goes down, and then Alertmanager gives you a way to set up notifications, so, like...
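For readers following along, the threshold-based alerts described above are defined as Prometheus alerting rules. Here is a minimal sketch of one (a hypothetical rule for illustration, not one of the alerts OpenShift ships; the metric names come from node_exporter):

```yaml
groups:
  - name: example-node-alerts
    rules:
      - alert: NodeMemoryLow
        # Fires when a node's available memory stays below 10% for 5 minutes.
        expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes < 0.10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Node {{ $labels.instance }} is low on memory"
```

Prometheus evaluates the `expr` on each rule-evaluation cycle and, once it has been true for the `for` duration, hands the firing alert to Alertmanager for notification routing.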
C
No; other than that, we really probably won't have a preference on whether it's email or text. It's really completely dependent on the end user and what you're going to be using the Prometheus and Alertmanager stack for. For instance, if you're using it on your production environment, you probably will want it to text you on your weekend and interrupt you, versus if it's your dev or QA environment.
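That split between "interrupt me" and "email me later" is exactly what Alertmanager's routing tree is for. A minimal sketch, assuming alerts carry an `environment` label and with hypothetical receiver names and endpoints:

```yaml
route:
  receiver: email-team              # default: low-urgency email (dev/QA)
  routes:
    - match:
        environment: production     # production alerts go to the pager
      receiver: pager-oncall
receivers:
  - name: email-team
    email_configs:
      - to: team@example.com
  - name: pager-oncall
    webhook_configs:
      - url: https://pager.example.com/hook   # e.g. a paging service's webhook
```

The first child route whose matchers fit the alert's labels wins; anything unmatched falls back to the default receiver.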
C
I think, generally, we have set the alerts that we ship by default to fairly reasonable values. With that said, if you're customizing things: to give an example, if you have a giant storage datastore, let's hypothetically say the upper bound the alert is set for is 80 percent. If you have a petabyte datastore, you might still have a plethora of data left when you have 20 percent remaining, so you might want to customize that alert to 10 or 5 percent remaining, something that you deem to be critical at that point.
C
There are a lot of parameters that go into that, like: how fast can you react to that alert being hit? How fast can you increase the datastore when that alert has hit?
B
Yeah, so my background: before I was a vendor, I was a customer, yeah.
B
I was a virtualization admin, a virtualization architect, a storage admin, a storage architect, and the last point that you hit there was really important to me, because it doesn't matter what the threshold is, so long as you give yourself enough time to react. So, one of the classic ones with storage: how quickly am I filling up that datastore, and how long does it take me to react to whatever that scenario is? So what if the answer is, "well, I'm out of storage"?
B
Now I have to go out and buy more, and my purchasing cycle is months; it takes me three months to get new disks in. Well, if my growth rate is x gigabytes or terabytes a day, then I need to work backwards from that, and that's how much free space I need to keep in order to continue, to not disrupt the ability to provide that service.
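Working backwards from growth rate and procurement lead time, as described above, can be written down directly. A small sketch (the safety factor is an assumption added here for growth spikes, not something from the conversation):

```python
def reserve_needed_tb(growth_tb_per_day: float, lead_time_days: float,
                      safety_factor: float = 1.5) -> float:
    """Free space (TB) to keep on hand so the datastore survives the
    procurement lead time, padded by a safety factor for growth spikes."""
    return growth_tb_per_day * lead_time_days * safety_factor


def alert_threshold_pct(capacity_tb: float, growth_tb_per_day: float,
                        lead_time_days: float,
                        safety_factor: float = 1.5) -> float:
    """Used-capacity percentage at which a 'low storage' alert should fire."""
    reserve = reserve_needed_tb(growth_tb_per_day, lead_time_days, safety_factor)
    if reserve >= capacity_tb:
        raise ValueError("growth outruns capacity within the lead time")
    return 100.0 * (capacity_tb - reserve) / capacity_tb


# A 1 PB datastore growing 0.5 TB/day with a 90-day purchasing cycle:
# reserve = 0.5 * 90 * 1.5 = 67.5 TB, so the alert fires at ~93% used.
threshold = alert_threshold_pct(capacity_tb=1000.0, growth_tb_per_day=0.5,
                                lead_time_days=90)
```

The point of the exercise is that the "right" threshold falls out of your reaction time, not out of a universal percentage.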
B
You know, maybe it's a fluke. There's always the difference between regular growth and "something has broken and something is actively consuming storage." I had that happen once: we had a developer that created a Perl script that went haywire. This was back in, like, 2007, okay, so...
A
So yeah, my story is from like 2010. A friend of mine, a co-worker, who actually works at Red Hat now in support, I believe, was a Perl developer, and he wrote something that he knew needed to run for a long period of time, so he left it running during lunch. Pages started going off, and we were the on-call team. He and I were both at lunch at the same time, so we were messaging back and forth: just kill this program.
B
Yeah, so I think, very much to your point, Evan, you definitely have to take into account how quickly you can react, and take that into account when you're creating those alerts, those thresholds. A long time ago, I did an interview with Gene Kim, and we were asking Gene, you know, what is DevOps?
B
Why is DevOps important? And I'll never forget the answer that he gave, which was: it brings humanity back to IT, because everybody is a part of the process. It means that, oftentimes now, everybody has a vested interest, which means that, hopefully, the ops team, the folks way down at the bottom, stop getting calls at 2 a.m. on Saturdays because something has gone haywire; rather, everybody has taken care of it.
A
No, but this is a good tangent to go off on, because I'm sure the support folks here understand that when you have an outage, you need to learn from it, right? All of our customers are trying to learn from their outages every time they happen, and bake new knowledge into their institutions to help keep that outage from happening again. But we, Red Hat, have to think about everybody's outage, right? Not just root causes, but how do we help prevent them from ever happening? And I'm sure there's this long list of things.
C
Yeah, you can definitely see the most-linked KCS articles; that actually might be more what you're looking for, rather than getting them right when they're created.
B
So that also brings up another question that I have for you CEE folks: Insights.
C
Yes, it's not quite mature yet; we are still creating the process where we retroactively look at cases that would be prime candidates for Insights rules, and someone within CEE drives creating that Insights rule. It triggers when a must-gather or sosreport is uploaded to the case: the Insights rules run against that data set, whatever it is, whether it's a plain text file or whatever, and look for a string or some condition that is coded into the Insights rule, and then it says: hey.
C
Here's a knowledge article, a solution.
A
What I was talking about: knowledge-centered solutions, yeah, interesting. So, speaking of Insights: I worked as part of the management BU for my first year here at Red Hat, and Insights takes, like, a sliver of the sosreport data, is my understanding. What amount of information do you get from Insights versus sosreport that's helpful to y'all? I mean, is there a differentiation there? Do you always go to the sosreport, or are you looking more towards Insights, potentially, for things?
A
That functionality in RHEL is sosreport, I believe, and it will generate a report of your system status that we can then get at Red Hat and look at and say: hey, here we see things that might be an issue, like this running process is failing because of x. Or, in the case of Insights...
A
One of the scenarios John Spanx once told me about is that a customer had been dealing with an Oracle database bug for, like, a year and a half, and they turned on Insights on a handful of instances, and it detected an incompatible database-with-kernel issue that neither Oracle nor the customer had discovered. But we knew, because we have so many customers that run Oracle. So it's interesting to always flip those services on and get the actual insights of Red Hat into your environment.
C
A great point: no matter how knowledgeable you are, looking at a sosreport, if you're not looking for the right things, you'll never find the root cause. Whereas Insights has those rules already ingrained in it; there is no prior knowledge that you have to have, when using the Insights tool, to actually find a root cause or a possible solution.
A
Yeah, it's shocking that I haven't had him on, with his wealth of verbal and audio talents. Yeah, you've known John for a long time? Yeah, John and I are friends from way, way back. We used to work at College Foundation way back in 2012, '11, '10, something like that, and then he worked at NetApp for a while, and we were still lunch buddies, because I was part of this group that always went to lunch on Fridays together.
B
Anyway, enough reminiscing. Yes, so alerts: alerts are good, with alert thresholds set based off of, basically, your ability to react.
B
Right, so: actionable alerts, alert fatigue. Luke, I know you've been kind of quiet, and I want to make sure that you have the opportunity to speak up; Evan too, of course. Any thoughts, any opinions on alert fatigue?
D
I haven't seen it from the perspective of potentially receiving a ton of alerts, though we definitely have had situations where customers are like: hey, I'm getting inundated; what do I do?
D
I've got this alert that's triggering that doesn't seem to be correlating to anything serious in the cluster. I'm trying to think of an example, but basically, sometimes it's tricky to figure out what's actually causing them, and so you may just be getting inundated with emails while your cluster looks like it's running fine. So, sometimes...
A
That's a good question. I mean, Alertmanager gives you a lot of flexibility. It takes a little work, but you can basically filter things down to: I only want to receive notifications on specific alerts that we provide through Prometheus. So if you only want critical alerts, you can do that. You can send some alerts to email, some alerts to a webhook. You can subdivide into different emails, and I know we've had some customers do this.
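The "only critical alerts" filtering described here is done with matchers in the Alertmanager routing tree. A minimal sketch with hypothetical receiver names (the "blackhole" receiver is a common convention: a receiver with no notifier configs silently drops whatever matches it):

```yaml
route:
  receiver: blackhole              # drop everything by default
  routes:
    - match:
        severity: critical
      receiver: critical-email     # only critical alerts get delivered
    - match:
        alertname: Watchdog
      receiver: blackhole          # explicitly swallow the always-firing Watchdog
receivers:
  - name: blackhole                # no notifier configs: matched alerts go nowhere
  - name: critical-email
    email_configs:
      - to: oncall@example.com
```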
A
...you know, whatever you want. Yeah, per-namespace alerting sounds brilliant, right? Like, this group of admins or devs gets these alerts; that group of devs gets those alerts. There's that line where it's: is it OpenShift, versus is it the application? I can make that determination in advance, right?
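Per-namespace routing like this is just another set of label matchers, since alerts on workloads typically carry a `namespace` label. A sketch, with hypothetical team names and addresses:

```yaml
route:
  receiver: cluster-admins          # platform-level alerts stay with the admins
  routes:
    - match:
        namespace: team-a           # alerts from team-a's workloads
      receiver: team-a-devs
    - match:
        namespace: team-b
      receiver: team-b-devs
receivers:
  - name: cluster-admins
    email_configs:
      - to: admins@example.com
  - name: team-a-devs
    email_configs:
      - to: team-a@example.com
  - name: team-b-devs
    email_configs:
      - to: team-b@example.com
```

This is the "OpenShift versus the application" line made explicit: anything without a matching namespace route falls through to the cluster admins.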
B
Yeah, that's what I was just getting ready to bring up. We get asked about application monitoring using the default Prometheus and Alertmanager stack, which I think is on the roadmap. That, to me, sounds like a perfect opportunity, because there's always that one admin, right? I was that admin: I want all the alerts; let me filter them out using my email or something like that.
B
Oh god, and I can imagine adding application alerts on top of that would just create a nightmare scenario, so having something like that, that can apply that filtering, yeah, would be helpful.
A
Yeah, so Abdul asks: is it possible to send alerts from Alertmanager to Kafka?
A
If it has a webhook, you can do it, for sure. But I'm curious more about the actual use case: why are you sending alerts to Kafka? Is there something else that's going to come along and do something with it? You've got an operator for managing alerts, maybe? I don't know; that's interesting.
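To make the webhook path concrete: Alertmanager POSTs a JSON body containing an `alerts` array, each entry carrying `status` and a `labels` map. A small sketch of a receiver-side function that turns that payload into one-line messages (the kind of thing a bridge might then publish onto Kafka; the sample alert below is invented for illustration):

```python
import json


def summarize_alerts(payload: str) -> list:
    """Turn an Alertmanager webhook POST body (JSON) into one-line
    summaries, e.g. messages to publish onto a bus such as Kafka."""
    body = json.loads(payload)
    lines = []
    for alert in body.get("alerts", []):
        labels = alert.get("labels", {})
        lines.append("{}: {} on {}".format(
            alert.get("status", "unknown"),
            labels.get("alertname", "unnamed"),
            labels.get("instance", "n/a"),
        ))
    return lines


# A trimmed-down example of the payload shape Alertmanager POSTs:
sample = json.dumps({
    "status": "firing",
    "alerts": [
        {"status": "firing",
         "labels": {"alertname": "KubeNodeNotReady", "instance": "worker-0"}},
    ],
})
```

Calling `summarize_alerts(sample)` yields `["firing: KubeNodeNotReady on worker-0"]`; the actual Kafka publish would be whatever client library the bridge uses.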
B
I think you could also, if you really want to get fancy with it, do some proactive, preemptive types of operations.
A
Right, in which case I think you would just instantiate the operator in your application; but that could be a very legacy application that you don't want to touch, potentially. So that's why I'm curious about the potential use case here; Abdul, if you don't mind, please chime in. But what other fun things can you do with Alertmanager, other than Kafka, if it has a webhook?
B
Yeah, I will say, it's funny: on the tech marketing side, we don't have a lot of long-lived clusters. No, we don't; we go through clusters like children go through candy on Halloween. I have one that I run, but it's pretty disposable, so I ignore most of the alerts and stuff like that, and it'll run for a few weeks at a time, and then it invariably gets destroyed, because I need to test the next version, or this other feature, or something else entirely.
A
...you know, that I'm trying to run. And the drive might actually be at my front door right now, the sixth drive, because I could only order five at a time from Amazon, for the server that I'm trying to build over on the other side of the house, across the office. That is going to be my de facto OpenShift cluster, and the intent...
A
...is for it to be long-lived, which will give me the experience that Luke, and everybody else here on this call, has had to deal with: yes, you're building applications on top of OpenShift, and you're upgrading OpenShift, and you're moving the applications along with it, kind of thing. I want to have that experience. And, you know, I do have business-class internet; we were talking about that warming up to the call.
A
You know, the stream where Comcast decided to cut my internet to upgrade me. The idea here would be that I would actually start using it to run some real services just from the house, things that don't need low latency or high availability, necessarily, but it would give me the experience of having these problems. Because I used OpenShift as a DevOps consultant for about a year, and then I became a vendor; I became part of the problem, essentially.
A
So having that hands-on experience, I think, is going to be helpful for anyone. How do either of y'all recommend getting that hands-on experience in a, you know, potentially limited situation, as some new people here on the channel would?
D
So something beyond, like, you know, `oc cluster up` kind of thing? Well...
B
It is a supported use case, and we are continuing to grow the edge types of use cases, or the edge capabilities of OpenShift. There's a huge roadmap; as Chris just said, there's a whole team of people that are dedicated to OpenShift at the edge and OpenShift for telco, yeah. And I'm sure that Chris can get somebody from that team onto a dedicated live stream for that at some point in the future.
A
Yeah, I mean, if you're interested (I can't say your username on YouTube, but it's "gncs"), we can definitely get the telco team on, and we have somebody on our team that could be helpful for that.
B
So, from an NFV perspective, I'll focus on the virtualization part, so OpenShift Virtualization: I don't believe DPDK is there yet for OpenShift Virtualization. SR-IOV, yes; DPDK, no, although I'm sure it's on the roadmap, but I don't know where. So if you are doing CNV, which is container...
B
...CNV is the old name for OpenShift Virtualization, then sure, all of that would work as expected with NFV and OpenShift Virtualization; DPDK would be the barrier there. So, a recommendation for running a telco workload on OpenShift: if we're talking real-time kernel, that type of stuff, there's a whole separate set of material there. Like, I know what vRAN stands for, but I don't know what it does, or any of those things.
B
Yeah, definitely. In general, OpenShift is OpenShift, so deploying to, like, a three-node cluster, a quote-unquote "compact" cluster, should work exactly the same; you're creating schedulable masters. You just want to make sure that you have the capacity for all of the different workloads, right? It's no longer "4 CPUs and 16 gigabytes of RAM is the minimum"; it's...
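The "schedulable masters" piece of a compact cluster mentioned above is controlled through the cluster-scoped scheduler resource in OpenShift 4.x; a sketch of the toggle (field names per the `config.openshift.io` API):

```yaml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: true   # allow regular workloads onto control-plane nodes
```

With this set, control-plane nodes also carry the worker role, which is why the capacity caveat above matters: the same three machines now run etcd, the API servers, and your workloads.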
B
...be aware of those types of things. Other than that, workloads are workloads, for the...
A
...most part. How do I log into OpenShift as a cluster admin? A frequently asked question of our own, I feel like, with all the various ways to get that information if you stood up the cluster yourself.
C
Yeah, absolutely, I actually can answer that. So, when you stand up a cluster, you will have a system:admin kubeconfig created, and you can use that; it has the cluster-admin role associated with that kubeconfig. But I'm assuming you have your own user, so you will want someone with cluster-admin privileges to give your OpenShift user the cluster-admin role, and assign that to your user.
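As a sketch of what that looks like on the command line (the install-directory path and username are placeholders):

```shell
# Act as an existing cluster-admin, e.g. via the installer-generated kubeconfig:
export KUBECONFIG=<install-dir>/auth/kubeconfig

# Grant the cluster-admin cluster role to a named user:
oc adm policy add-cluster-role-to-user cluster-admin <username>
```

After that, the named user can log in with their own identity-provider credentials and hold cluster-admin rights.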
B
And there is, so, funny enough, here, I'm going to share my screen. Oh.
A
We have talked about using Jitsi versus, you know, Zoom for this channel, but right now, running our own Jitsi server is just not in the cards. But go ahead, please, Andrew.
B
Yeah, so from a documentation perspective, we cover very quickly how to create a cluster-admin user. So, just here on the RBAC page (sorry, I jumped too far), "Using RBAC to define and apply permissions," you can see Authentication.
B
It's a one-liner to define those cluster-admin users. Now, that being said: is that a good idea?
B
It depends on the person, on the organization, and all of that. Luke, Evan, I'm sure you have some interesting stories of people who were given too many permissions.
D
I haven't seen too many cases related to it, but there is one in particular: sometimes you'll see, like, a project just disappear all of a sudden, one running critical applications, and nobody knows what happened; nobody knows who did it. They didn't have auditing on, things like that, maybe, so they're just trying to figure out how to prevent this from happening in the future. And in that situation, I think everybody had cluster-admin.
C
Yep, exactly. A perfect example is: do you want your developers to run privileged containers? If you're giving them all cluster-admin and their applications want to be privileged, then that might be fine, but you don't really want your developers to be in that position of power. You want them to use the least workable permissions; you want them to not be using privileged containers, so you keep that namespace separation between the containers and the host.
B
Yeah, an extension of that is `oc debug` on nodes, right: giving them the ability to access nodes, and the file systems and units and services and everything else that's running inside of there. That's, yeah, no bueno.
B
My favorite story of too many permissions: I was working with the US government, working with DoD, and we had an Air Force E-3 who accidentally did a `chmod -x`, recursively, on root.
B
Streaming together; it's hilarious. There you go. Yeah, let's see, a couple more questions I've seen come across here: I have a Prometheus set up on bare metal, and I have the OpenShift Prometheus and its own node exporter.
B
It's basically allowing an external service access to the node exporter data, and I think the answer to that is: if you know the nodes' IPs, you should be able to access it. It's the authentication piece, if there's authentication; and I don't know, Luke or Evan, if you know off the top of your head whether there's authentication required to access that particular service or not.
B
Not that; that's my Tech Ready presentation on OpenShift Virtualization. Good for you. There we go. So if we look here, which pod is this? The init container; we don't care.
B
So that way they can talk to each other and access each other, because you can access the Prometheus instance externally. So let the one OpenShift Prometheus, the internal, default metrics-collection Prometheus, collect all that data, and then query that Prometheus for the data, instead of scraping the nodes directly.
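Querying the cluster's Prometheus instead of re-scraping every node is what Prometheus federation is for: the external Prometheus scrapes the in-cluster one's `/federate` endpoint. A sketch of the external side's scrape config (the route hostname and token path are placeholders; the in-cluster Prometheus requires an authenticated request):

```yaml
scrape_configs:
  - job_name: openshift-federate
    honor_labels: true               # keep the labels the cluster already applied
    metrics_path: /federate
    params:
      'match[]':
        - '{job="node-exporter"}'    # pull only the node-exporter series
    scheme: https
    bearer_token_file: /etc/prometheus/openshift-token   # token with query access
    static_configs:
      - targets:
          - prometheus-k8s-openshift-monitoring.apps.example.com
```

This sidesteps both problems from the discussion: you authenticate once against a stable endpoint, and you never have to track node IPs as machines are added or removed.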
A
No, no, that doesn't work. No, sheesh, that doesn't work either.
B
Yeah, so, if you could figure out... so this is a node: this is the IP address for worker-0 in my cluster, and this is the port that we were just looking at. So, Luke, I think it was you who asked about what that proxy is, and I realized that the proxy isn't specifying an IP address, unlike the exporter up here, which is specifying an IP address, so I think that means it's listening automatically.
B
The credentials are, if I had to guess, in a ConfigMap or a Secret somewhere inside of here. I don't even know where to look through all of these.
B
You know, metrics collection has its own overhead; the more times you scrape it, the more times it's going to incur that overhead, right: CPU cycles, etcetera. So that may or may not be an issue; I mean, we are talking about reading a web page from the Prometheus node exporter. But yeah, I would think, certainly, you would have to take into account, if it's IPI, every time a node is added or removed, having to account for that, etc.
B
So it does seem possible, if you can figure out that username and password.
B
Interesting. Let's see; I know we've only got about a minute and a half or two minutes left, so I'm quickly looking through: setting up a BSS (I'm not sure what BSS is) platform for telecom on OpenShift over Red Hat Virtualization; any recommendations, things to watch out for? Nothing off the top of my head. Again, OpenShift is OpenShift, so look at the OpenShift-on-RHV roadmap; look in the next version for things like the CSI provisioner for storage domains.
B
So you'll be able to add that in and be able to dynamically provision storage there. I think they're also doing away with, right now, if you do IPI on RHV, it asks for a DNS VIP; I think that'll also go away in the next version, if I remember correctly. So it will change a little bit, but for the most part, it's RHV and it's OpenShift, and they work great together.
B
Yeah, so I think we have one last question that I can address real quick: can I create a master-and-worker architecture with OpenShift on a local machine using VirtualBox?
B
All the nodes using the bare metal / non-integrated method, yeah; it won't be supported, granted, but you can certainly get up and running and test and experiment if you want to. You can also do it with KVM; you can also do it with Hyper-V; you can also do it with any number of other things on your local box.
B
Sorry. All right, I think, since we all have noon meetings, or 11 a.m., Luke, I don't know if you're still in our time zone: thank you, everybody, for tuning in for the inaugural OpenShift Administrator's Office Hours. Thank you to both Luke and Evan; really appreciate you joining us, along with our audience.
A
At 1500 UTC.
B
There we go, and we have shows planned out through, I think, February or March of next year, with various topics, so look forward to all of these in the future. Please feel free to continue to submit questions; I'll keep an eye on the chat throughout my next meeting and stuff like that, so we can answer those. But thank you, everybody, for attending, for listening, for watching, and look forward to the next one.