From YouTube: OpenShift Administrator’s Office Hour (Ep 2)
Description
Join Andrew Sullivan, Chris Short, and the occasional special guest for an hour designed specifically to help the OpenShift admins out there. Come with your questions, leave with solutions.
A
No, we do appreciate you having the show. This is the show for OpenShift admins, right? Like, we have a developer experience office hours, and now we have an admin office hours. So the idea is that we're going to answer some of those running-OpenShift-at-scale things, and today we are talking about... what, Andrew?
B
So, in the lack of audience questions — which, I do want to remind everybody that this is kind of an AMA, right? It's driven by you all and your participation. Notice I said "you all," not "y'all" — I'm trying to eliminate some of my southernisms, at least while I'm on stream. Wow, I know, yeah. I don't know if "you all" is any better, but anyway. So, in the lack of any questions coming from the audience — which we definitely want, regardless of topic — you know, if we don't know...
B
Then we will certainly take a note and follow up. You can always reach out to me on Twitter for follow-ups, as well as emailing me at Red Hat, if you so choose. Yeah, super easy: first name, dot, last name. Yeah, but in the absence...
B
Thank you. So, in the absence of participation, I kind of wanted to talk about CoreOS — or, more specifically, I should say Red Hat Enterprise Linux CoreOS, the operating system used by OpenShift.
A
Yes, and it has a lineage behind it, right? Like, CoreOS was a company that we acquired. CoreOS Tectonic was actually my first Kubernetes distribution, like, ever — to put a data point on it.

A
And yeah, the Red Hat acquisition of CoreOS gave us a lot of their technologies and ideas and tooling. Operators are something that we've embraced wholeheartedly, and that totally came out of CoreOS. And then the ideal of this, you know, completely immutable operating system is a CoreOS-ism, for lack of a better term, right?
B
Yes, but there are also some other related things from the Red Hat portfolio, absolutely. So yeah, to your point, I have a little bit of history there as well. I went to the very first of the Tectonic conferences.

B
The very first meeting of, you know, the CNCF — I was in the room. That was fun, but yeah, that was...
B
It was fun. Anyway, so yeah: CoreOS, the company, which was acquired by Red Hat a little over two years ago — right at two years ago, something like that. We took their technology, the two big ones being Tectonic, which kind of turned into the OpenShift administrator interface, and CoreOS itself — CoreOS the operating system. And then we took all of those concepts, ideas, technology, implementation, and merged that with what Red Hat called Atomic, right? Both of them have that same principle of, you know...
B
Immutability — of, you know, the whole A/B rpm-ostree switching between versions, right? Everything is done in one kind of holistic method. And the outcome of that was — oh, and Operators. So Operators originated at CoreOS as well, and you'll notice that those three components play pretty major roles in OpenShift 4. You could previously argue, or state, that those — especially Operators — were the driving force behind it.

B
Yeah, so from a high level, right, the Operators especially were a huge change from OpenShift 3 to OpenShift 4. I remember back in the OpenShift 4 RC days — which, remember, 4.0 was never actually released, right? We did a hackathon, and it was one of those "wow, there's a lot of change here" moments. Like, I had to relearn a lot of things from scratch, basically.
B
Yeah, so I see Waleed is asking about Bottlerocket. I am not super familiar with Bottlerocket at this point. Like, I know the principle behind it, I kind of know what they're doing, but from an implementation, low-level perspective, right — can I explain the differences? I don't know. I have not had the time nor the opportunity to go through and, well, dig into...

B
...that. So things like: there is no SSH, right? You use the control container. Very much like we do the `oc debug` command against nodes — that's the one and only entry point in there. So, getting down into details — again, I don't know the details, so I'd have to spend some time digging in there. I don't know if our — and I should know this, but I don't — I don't know if our competitive folks have had any opportunity to dig into it or anything like that. So...
A
No
that's
a
good
point,
but
like
maybe
I
should
ping
them
and
just
bring
them
on
and
say
hey
what
is
the
difference
between
like
aws,
eks
and
openshift
kind
of
thing?
Maybe
there
should
be
a
competitive
or
not
competitive
show,
but
maybe
you
know
like
an
xks
versus
open
shift,
show
kind
of
thing
right
like
what
are
the
differences?
Where
are
the
sharp
edges
in
comparison
kind
of
thing.
B
Yeah, on one hand, maybe. On the other hand, you know, the difference is — from Andrew's perspective — when we look at all of those other Kuberneteses (I'm going to keep using that until it becomes a thing), they are, quote-unquote, "just Kubernetes." I say quote-unquote because "just Kubernetes" is encompassing a lot, but OpenShift adds on a lot of value-add, right? There's a lot of things that we do on top of there.
A
So we've got some other questions, and Waleed has a continuation, but we'll get back to that. Well, let's do that first. He says: no worries. If I would like to change the install-config.yaml to a single node, what do I need to make the router run on the master?
B
I'm actually going to jump up to the question from Mudasar, because he asked first: Red Hat OpenShift has vendor validation and certification. If I want to validate a CNF from a platform perspective, does vendor validation have some test suite to achieve that?
B
I would assume yes, but I do not know for sure. We would have to put you in contact with the certification team, and if you want to reach out either to me or Chris, we can put you in contact with the right people to get all of those answers. I can speak from an Operator certification perspective and from a container certification perspective.
B
I know — so the link that I just posted there is our partner certification guide for both containers and Operators. That kind of walks through the whole process: joining Partner Zone, getting the right entitlements — you know, for NFRs, et cetera — as well as what the requirements are to actually go through the container and Operator certification process. Awesome. And furthermore, if you're interested in doing a certified Operator, there's actually a certified Operator build guide.
B
It is linked from that first guide, but it's a little hard to find, in my opinion, because it's like one link on one page, so I also included the link to that. It goes into great detail, in depth, on what the requirements for a certified Operator are. So, for those of us who aren't developing, who aren't creating those — I find this interesting because it gives you a peek, and some background, into what it means, right?
B
What's the difference? When I look in OperatorHub in my OpenShift environment, what does the difference between a certified and a community Operator mean, right? Well, a community Operator is one where, basically, the third party submits a PR to a GitHub repo — which is github.com/operator-framework/community-operators, I think.
B
There is enough validation that it runs in OpenShift, but nothing more. Whereas Operator certification, or container certification, means a lot of additional stuff — all the stuff that's in those documents — as well as a lot of additional testing that happens on both sides, both our third-party partners' side as well as inside of Red Hat. So it is important for that kind of assurance, that background understanding of what's happening.
B
You know, if you want to create your own and publish them internally, right — what does that look like? So I find — particularly if you go to learn.openshift.com and do the Ansible Operator workshop that they've got published there — it's great for getting familiar with equating something that we already know as admins, Ansible, into this Operator paradigm inside of the cluster.
A
Exactly right. Like, if you're familiar with Ansible, the leap to Operators isn't as big, right? Because normally Operators are written in, you know, Go or some programming language, but you can write them in Ansible. You can write them in just about any language nowadays, it feels like — there are shell Operator frameworks and Python Operator frameworks and Java Operator frameworks out there.
A
But if you're not familiar with a language and you want to do something with an Operator, Ansible's a perfect way to get started. And we actually have, in our archives over on YouTube, on the playlist, the Ansible Operator workshop that we ran in July during the — well, you know, Summit-at-home kind of event that we did — and I will grab a link to that here in a second.
B
So, moving to Sarah — I see your thank-you, so hopefully that answered your question. Please feel free to reach out again: Twitter, or email, first name dot last name for me at redhat.com.

B
Yeah, if you want further clarification, or want us to connect you with the right folks inside Red Hat. So, the question that you were asking, which comes from Waleed...
B
So, a couple of things inside of there. One: install-config is the file that is fed into openshift-install to deploy the cluster. I think it's important to point out that we don't support single-node deployments — the minimum is three nodes. It might be possible to do a single-node deployment; I want to say that there's a blog post out there somewhere, I don't remember.
B
So, that being said — let's ignore the single-node thing: how do we make a router run on a master? Effectively, during deployment, in the install-config.yaml, if you set the number of worker nodes to zero, that will tell the installer that you want to create a three-node, quote-unquote "compact" cluster.
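As a sketch of what that looks like (the domain, cluster name, and counts here are illustrative placeholders, not from the stream), the compute section of an install-config.yaml for a compact cluster sets the worker replica count to zero:

```yaml
# Illustrative install-config.yaml fragment.
# compute replicas: 0 tells openshift-install to build a three-node
# "compact" cluster with a schedulable control plane.
apiVersion: v1
baseDomain: example.com        # placeholder domain
metadata:
  name: compact-cluster        # placeholder cluster name
compute:
- name: worker
  replicas: 0                  # zero workers -> compact cluster
controlPlane:
  name: master
  replicas: 3                  # minimum supported control plane
```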
B
So what that means, behind the scenes, is that the masters are marked as schedulable.

B
Literally change a false to a true and you will have a schedulable control plane. At that point you have...
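The "false to a true" being flipped lives on the cluster-scoped Scheduler config object; a hedged sketch of what it looks like (field names per the 4.x config API):

```yaml
# The cluster Scheduler config. Editing it with
#   oc edit schedulers.config.openshift.io cluster
# and flipping mastersSchedulable to true lets regular workloads,
# including the router pods, land on control-plane nodes.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: true
```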
A
So there's a message in the stream that it looks like Skynet is talking, but the audio is much better now, so I think I fixed it. Sorry about that, folks — something very weird happened there, all of a sudden. Unusual. The joys of live streaming; welcome to it. Let me know if it's fixed — it's fixed? Okay, cool, awesome. So please continue, Andrew. Sorry.
B
No, no worries. Yeah — who knows, could be...
A
Solar flares, FM interference, whatever you want to call it. Yeah, it did sound like there were two streams running, so I don't know what was going on there. It could entirely be — yeah. Chrome was closed; it's only Zoom and OBS on the streaming rig. So I don't know what's going on there. That's weird. Okay, cool.
B
Okay,
so
I
don't
see
any
other
questions
at
the
moment.
I
don't
think
I
missed
anything.
So
please
please
shout
if,
if
I
did.
A
Yeah,
your
questions
are
very,
very
welcome
right
now
we
are
all
about
answering
your
administrative
or
other
openshift
questions,
and
if
we
don't
have
the
answers,
we
will
find
them
for
you
and
get
you
in
contact
with
the
people
that
do
that's
the
most
wonderful
part
of
openshift
tv.
B
So
if
you
look
in
the
documentation,
you
can
see
that
I'm
just
at
the
landing
page
for
the
openshift
4.5
documentation
underneath
the
architecture
heading.
We
have
this
red
hat
enterprise
linux
core
os,
which
walks
through
a
number
of
different
aspects
about
core
os
and
the
differences.
The
changes
right
why
it's
important
to
openshift
as
a
whole.
B
However, in general — and I think most Red Hatters would agree — the benefits of CoreOS, of Red Hat Enterprise Linux CoreOS, far outweigh the little bit of additional flexibility you get with using RHEL 7 for those worker nodes. Now, there are always edge cases, there are always exceptions, right, all of these things — and you're not doing anything wrong; there's nothing inherently bad about using RHEL 7 worker nodes. You just lose some of that — again — flexibility, manageability. Or, excuse me, you gain...
B
So CoreOS is very much like its lineage. Modern CoreOS — Red Hat Enterprise Linux CoreOS — like its lineage of both CoreOS, from CoreOS the company, and Atomic Linux, is an immutable operating system. It is intended to be deployed and have, basically, a known state. And I want to be clear that immutable does not mean that it is unchangeable.
B
Yeah,
so
here
I'll
I'll
show
this
slide
here,
which
we
created
a
while
ago,
right
immutability
does
not
mean
that
it's
static.
It
doesn't
mean
that
there
is
no
configuration
change,
that's
happening
there.
It
means
that
it
happens
in
a
controlled
manner
and
it
means
that
we
can
predict
right.
We
should
know
exactly
what
the
configuration
of
that
node
looks
like
at
any
given
point
in
time,
so
this
is
hugely
beneficial
for
something
like
open
shifts.
Where
my
nodes,
much
like
my
containers,
can
be
treated
as
disposable
right.
I
need
to
change.
B
I
need
to
change
the
config.
Well,
it's
basically
going
to
reapply
the
entire
configuration
for
the
node
with
every
change.
When
I
go
and
do
an
os
update,
it's
going
to
reapply
the
os
using
our
pmos
tree
right
into
the
node,
and
then
we
simply
switch
over
to
the
new
operating
system
sanjeeth.
What
is
the
best
way
to
set
up
openshift45
in
a
closed
environment?
I've
got
a
video.
A
...that caught my attention, according to Mobile Curry. I hope that's a good...
B
...thing. Yeah, so to answer your question about disconnected installs: the best way to do it is — if you look in the documentation, underneath Installing, and then we go to Installation configuration — there is this "Creating a mirror registry for a restricted network" page.
B
So this documentation page details how to take all of the images, right — all the things that we need to instantiate, to deploy OpenShift — and bring them onto a private network, whether that is fully disconnected or partially disconnected. Partially disconnected meaning that there's a host that straddles, you know, the internet and your private network.
B
So
one
thing
to
note
about
this,
and
this
has
come
up
a
number
of
times.
I've
answered
a
bunch
of
questions
about
it
internally.
B
A
B
A
B
Yeah
apologies,
so
you
want
to
use
this
command
down
here,
which
you
notice
uses
the
dash
dash,
2-dir
or
tac-tac
or
hyphen-hyphen.
Whatever
word,
you
want
to
use
for
this
character,
whatever
you
want
to
call
it
fine
andrew.
So
this
will
take
all
of
that
data
and
metadata
and
put
it
into
a
directory.
You
know
on
a
usb
drive
on
a
portable
hard
drive,
whatever
you
want
to
use,
you
disconnect
that
from
the
internet
connected
machine,
do
whatever
your
internal
process
is
for
scanning
security.
B
There is a chunk — so, up here, it actually tells you, when you do the dry run... Sorry, it's hard to find, and I know it looks like a lot of scrolling, because we're on this one page.
B
Yeah, so when you do the dry run, it will spit out — and actually, when you do this command as well, it will spit out — a chunk of YAML that you want to put in your install-config, and that is what tells OpenShift, during the install, to look to this other registry, not the default Red Hat public quay.io registry, for all of its stuff. So, basically two steps — one, mirror all of the stuff — or, rather, three steps...
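The YAML chunk the mirror command prints looks roughly like this (the mirror registry hostname and repository path are placeholders):

```yaml
# Illustrative imageContentSources stanza for install-config.yaml.
# It redirects release-image pulls from the public quay.io repos
# to the internal mirror registry.
imageContentSources:
- mirrors:
  - mirror.example.com:5000/ocp4/openshift4   # placeholder mirror registry
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - mirror.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
```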
B
I will add — so let's jump up to the top; sorry for the quick scrolling there — that that only addresses the install portion, getting OpenShift up and running. If you want to use Operator Lifecycle Manager — basically the package, or the set, of Operators found in OperatorHub — there's a separate set of instructions for that, which I can never remember where they're at in this documentation.
B
There
we
go
using
olm
on
restricted
networks.
There
you
go
so
this
walks
through
basically
the
same
process
just
with
a
slightly
different
command
of
identifying
so
first
turning
off
the
default
operator,
so
it
stops
trying
to
connect
to
the
internet
and
pull
them
and
then
on
that
internet
connected
machine
on
that
public
machine
pulling
in
all
of
that
data
metadata
right,
storing
it
locally,
moving
it
over
and
then
importing
it
on
the
disconnected
side.
B
But
you
only
need
to
do
that
if
you
intend
to
use
any
of
the
red
hat
operators,
so
openshift
virtualization
service,
mesh
elasticsearch,
the
logging
rate
suite
et
cetera,
and
you
can
be
selective
with
those
yeah.
So
if
we
were
to
do
oc
get
catalog
source,
so
you
can
see
here's
my
catalog
sources
and
if
I
wanted
to
have
just
the
core
red
hat
operators.
Well
then
I
only
replicate
this
particular
catalog
source.
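Turning off the default catalog sources is done on the cluster-scoped OperatorHub object; a sketch (field names per the 4.x config API):

```yaml
# Disables all default, internet-backed catalog sources so OLM stops
# trying to reach the public registries; mirrored catalog sources are
# then added back selectively as CatalogSource objects.
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true
```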
A
...over to a disconnected network, right. Like, we've seen people do thumb drives, we've seen people do spinning hard disks, you know, USB drives, DVDs, you name it. We've even seen people maintain, like, a copy of the registry as it exists on the connected side and then just replicate that, as like a snapshot or something, into the new environment, which I've always found interesting, but yeah.
A
Yeah, yeah. So, let's see — you want to make sure... so I'll jump back up a little, just a scooch.
B
Yeah, so the concept is: you want an operating system that is not easily modified outside of the change process. Ideally, it cannot be modified outside of that at all — but especially if somebody has root, right — it's still Linux at the core; there are still things that you can do to abuse the system, if you will. So, inside of the OpenShift paradigm, CoreOS is managed by the Machine Config Operator. Effectively, what happens when I go to install CoreOS is: it installs the operating system, but it has no configuration, right?
B
It's
just
the
the
kernel,
the
drivers
right,
the
other
libraries
that
it
needs
and
the
first
thing
that
needs
to
happen
is
it
reaches
out
and
it
pulls
down
an
ignition
config
yeah.
So
you
can
think
of
ignition
as
being
like
cloud
init
if
you're
familiar
with
cloudinits
right
or
you
know
the
various
other
tools
that
are
out
there.
I
know
there's
one
for
windows
and
for
various
other
things.
I
don't
know
what
you're
holding
up
chris.
B
Yeah,
so,
with
the
goal
being
and
and
what's
really
interesting
about
about
ignition,
that's
different
from
cloud
init
and
most
of
those
others
is,
it
runs
before
the
system
is
fully
booted.
So
before
you
know,
pid
1
starts
ignition,
does
its
thing.
So
this
means
that
we
can
use
ignition
to
lay
out
things
like
unit
files
so
that
I
can
start
or
stop.
B
Exactly. And this cluster was deployed onto RHV using the bare-metal method, so these always exist, because this is how we configure CoreOS. We can see, in machine configs, I have a number of different objects that exist inside of here. Hopefully they tell me, by the name, what they pertain to, but we can also look at the contents inside of here.
B
What I'm going to see is a bunch of different things, right, that are telling — this is Ignition, which is YAML — telling it to lay out contents: files. So, for example, here it's saying: put a file at /etc/kubernetes/cloud.conf that is owned by root, with an access mode of 420 — which is the decimal form of octal 0644, so owner read/write and everyone else read-only.

B
So we can see here: hey, create this file, cloud.conf; hey, create this file, kubelet.conf; here are the contents. Ignition will go through, ensure that these files exist, lay them out, and make sure that they have the contents that they are supposed to have. And what we end up with is — oh, I wanted to look at one other thing while I was there.
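A minimal MachineConfig in the same shape as the cloud.conf and kubelet.conf entries being shown — the file path and contents here are made up for illustration:

```yaml
# Hypothetical MachineConfig that lays out a single file on every
# worker node via Ignition. mode 420 is decimal for octal 0644.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example
  labels:
    machineconfiguration.openshift.io/role: worker  # targets the worker pool
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - path: /etc/example.conf            # placeholder file path
        filesystem: root
        mode: 420                          # 0644: owner rw, everyone else read
        contents:
          source: data:,example%20setting  # URL-encoded inline contents
```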
B
I have this rendered config — all right, so down here I have this rendered worker config, and if I go look at this guy, it will be the sum — it will be all of the machine configs that are tagged for workers, added into one file. So we can go through, and you can find all of the things, all of the configuration: here's my registries.conf, here's the cloud.conf, here's a bunch of systemd units that get added in...
B
So, a machine config pool is a group of machines and a configuration that are mapped together. How does this actually apply during the install process? I deploy my CoreOS — let's say it's the bare-metal method: I boot to that ISO, I hit Tab so I can edit the kernel line, and then I append — I think it's ignition.cfg= — and point it to where this file is being served from, which, by default, if it's an existing cluster, is going to be...
B
So we get asked things like: well, how do my nodes get assigned their hostnames? Well, if we're talking IPI, that's managed by the machine set — the Machine API integration — and the machine config. So, effectively, Machine API says "create me a new virtual machine," and it has CoreOS on it, and then it boots up, pulls its machine config, is told "you are node worker-12," and configures itself with that hostname.
B
Yeah — how many microphone arms did you go through, like...

A
This one or something? I went through three in a two-week span. This is microphone arm number five. But keep in mind, one of those microphone arms is now Christian's microphone arm, and he is having problems with it — so it's not just me.
A
Yeah,
I
don't
like
the
fact
that
they,
like
plastered
their
logo
all
over
it,
and
I
can't-
and
it's
like
you,
know,
concave,
so
I
can't
like
get
a
sticker
to
fit
over
it
just
right
either.
Right
like
I
would
put
like
you
know
a
red
hat
or
okd
sticker
or
something
a
you
know,
kubernetes
sticker
on
top
of
it
for
all
I
care,
so
maybe
I'll
do
that
this
weekend
with
an
exacto
knife
and
try
not
to
cut
myself
open.
B
Aubry, my PMM counterpart — she has a monitor, the back of a monitor, which is where her husband works, on the other side of that monitor. She put a Red Hat logo sticker over the monitor manufacturer's logo.
A
Yeah, man, that makes total sense. I would totally do that, right? Like, I'm not here to advertise for, you know, LG or Samsung, or whoever. Yeah — or Heil.

B
Okay — and I see in chat — thank you, Christian, for correcting me on what the appropriate kernel parameter for providing the Ignition URL is. So thank you. Yes.
B
Yeah, so spot instance support, I think, was just added — if I remember correctly, I think 4.5 was when that was introduced. So, effectively, to my knowledge, spot instances are really no different than any other node type — or machine type, I should say — aside from, in the definition, it's saying, you know, "yes, use spot instances here," and, of course, they can be reclaimed by AWS at any point in time. So my assumption is — and to be clear...
B
I haven't tested this, so this is based off of an educated guess — you would have machine sets. If you're not familiar, machine sets are what define, to the Machine API Operator, how — or, rather, what — to request from the underlying infrastructure.
B
So I don't have a cluster deployed that has any integration with machine sets, so I can't show it, but essentially, in the case of AWS, it's saying: I have a machine set; it's for this AZ and this region; I want you to create a new machine that is of this type, that has these other properties associated with it. And when you request that OpenShift scale that machine set, essentially Machine API says...
B
So autoscaling is configured both at the machine set level as well as at the cluster level, and then there's some intelligence within it — at least I've been told there is; I haven't experimented with it — where it understands what type of node to provision based on the scenario. So if I'm deploying a workload that needs a GPU, it understands that it needs to deploy a machine from a machine set that has a GPU attached to it.
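For AWS, the spot request lives in the machine set's provider spec; a heavily trimmed sketch, untested as noted above — the name, instance type, and zone are placeholders:

```yaml
# Trimmed AWS MachineSet fragment. The empty spotMarketOptions block
# is what asks the Machine API to request spot instances instead of
# on-demand capacity.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-spot-us-east-1a     # placeholder name
  namespace: openshift-machine-api
spec:
  replicas: 0                         # scaled up only when spot capacity is wanted
  template:
    spec:
      providerSpec:
        value:
          instanceType: m5.xlarge     # placeholder instance type
          placement:
            availabilityZone: us-east-1a
            region: us-east-1
          spotMarketOptions: {}       # request spot instead of on-demand
```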
A
Yeah, I think the crux there is, like, the machine set in each availability zone already existing — and it doesn't necessarily have to be on, right? Like, that's kind of the beauty of spot.
B
Yeah, so in OpenShift nomenclature — and you can see on my screen here — we have nodes, we have machines, and we have machine sets. A machine set is a definition — you can see this is not a... I don't know why I clicked that; it's not a cluster that has one. A machine set, much like a DaemonSet, much like a StatefulSet, defines the template that it will use to create something. With a StatefulSet, that template is: here's the pod definition, here's the PVC definition. When I scale that StatefulSet...
B
When I go from zero to one, it will create the PVC, it will create the pod according to what's in that template; when I scale from one to two, it creates another one; and so on and so forth. Machine sets do the same thing. So what is the difference between a machine and a node? A machine is the representation of the infrastructure object — basically, it's the virtual machine, or, in the future, it'll be a physical machine, right.
B
So it is a virtual machine, with properties that it understands from that underlying infrastructure-as-a-service provider. The node is the traditional Kubernetes concept — I have nodes in my cluster. So: a machine is the virtual machine and how the cluster interacts with the infrastructure underneath — create a machine, destroy a machine. The node is what exists after CoreOS has been deployed, configured, joined the cluster, and become a productive member of the cluster.
A
Nice. So Waleed has a follow-up: a machine set per AZ, got it, but would it also be per machine type? Yes, yes. Okay, so you would have machine sets per machine type, under machines, and then that would all be scaled to zero, and then you would grab spot instances as you saw fit. Yes.
A
Yeah, Narendev is asking what an AZ is. An AZ is an availability zone — think of it as a data center. So, for example, GCP, AWS, Azure: they have multiple availability zones within each of their data center locations.
A
So what does that mean? Like, if us-east-1 is the DC region, or the eastern seaboard, they could potentially have a data center in, you know, DC, or any one of the multiple areas around DC or the east coast in general, and use that as an availability zone. The idea is that within those availability zones there's no — or there's lower — latency, and potentially higher throughput, than when you cross...
A
...you know, the regions themselves — like us-east-1 versus us-east-2, kind of thing. Yeah, I feel like I did a good job of that. I might have missed some points there, so, Andrew, please feel free to add on.
B
So, you know, when I was a customer, we based our quote-unquote "availability zones" off of the power distribution — each PDU served somewhere between 8 and 12 racks. So we wanted to be able to accommodate a PDU failing — which did happen; that does happen frequently, all right. So, you know, if we lost eight racks simultaneously, you know, what happens to my infrastructure — maybe it's, you know, RHV or VMware or something like that.
A
Yeah, so it's like — Ryan mentioned — Ryan Jarvinen is on the call, or on Twitch: us-west-1, us-west-2, us-east-1. But this is like the a, b, c, d behind that, right? Like, I know us-east-2 has, like, four availability zones in AWS, and GCP central, which is my local region for GCP — I think it has three availability zones, maybe five now. Yeah — PDU, sorry, we're using acronyms, and, you know, people might not understand them these days.
A
You know, I didn't even think about that. Like, what is a power distribution unit? AKA, it's a big-ass power strip. It's an enterprise power strip with, you know, potentially monitoring and all kinds of fun stuff in it, like IPMI, which is a fun little protocol in and of itself. Yeah.
B
Yeah, so they're rather complex, because there's the main power distribution for the data center, which — in the data center I worked in — would send a few hundred amps of three-phase 208 to, basically, the end of each rack row, and then that PDU would break it down into, you know, 40, 50, 60 amps of one phase that go to each of the two power strips in the rack.
B
So if I'm looking at the back of my rack, I have one power strip that is on phase one and one power strip that is on phase two, and then I can source from each one of those, which gives us some redundancy at that level. You know, if I drop a phase, if I drop a PDU — or, excuse me, a power strip — it doesn't affect it, right.
B
I should say, you know, there's a ton of stuff that we can talk about. Usually the biggest one that I try to communicate to people is: Red Hat Enterprise Linux CoreOS — that naming is not an accident. It is based on RHEL 8. It is the same kernel. It is the same drivers, all of those other things — and, importantly for us administrators...
B
It has the same underlying tools for doing configuration management. So what do I mean by that? Well, you saw — where'd my thing go — you saw in my machine configs here: when I'm looking at the contents of these machine configs, I'm just laying out standard files. Here's a good one: iptables.conf. I'm just laying out files the same way that I always am; how they're getting there is what's different, right, but at the end of the day it's still RHEL. It's still...
B
So I think it's also important to highlight that CoreOS nodes — machines, in the OpenShift paradigm — are meant to be treated as disposable. Yep. So, effectively, most configuration should be dynamic — it should not be static. I shouldn't be going in and, you know, modifying individual files on individual hosts. If I'm having to go in and create a specific machine config pool for worker-0, worker-1, worker-2, worker-3, because each one of them is different...
B
So I think it's particularly important to remember — and I think I've said "important" like four times now — that there are some things that should always result in the node being reloaded. So, for example, the first network interface, the one that sits on the machine CIDR. So, if we look at our install-config...
B
So it's not actually defined in this config. Let me see if I have one that does have it.
B
So the network interface — the NIC on the machine that sits on this network — is the one that will be used for the SDN. It will be the one that's expected to connect to the control plane; there are a lot of things that are dependent on that.
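The network in question is the machine network in install-config.yaml; a sketch with placeholder CIDRs:

```yaml
# Illustrative networking stanza from install-config.yaml. The NIC
# sitting on the machineNetwork CIDR is the one the SDN and the
# control-plane traffic are expected to use.
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16        # placeholder: the nodes' "real" network
  clusterNetwork:
  - cidr: 10.128.0.0/14      # pod network
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16            # service VIP network
```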
B
If
it's
wrong
most
likely,
your
cluster
will
still
deploy
fine,
but
some
things
might
not
work
as
expected
in
particular,
and
especially
if
you're
using
a
proxy
basically
by
default,
it
will
say:
I'm
not
proxying
anything
that's
on
this
network.
So
if
you
accidentally
have
this
wrong
and
then
you
suddenly
implement
a
proxy
you'll
end
up
with
a
bunch
of
nodes
disconnected
because
they
think
they
need
to
proxy
to
get
to
the
control,
plane
and
yeah.
B
So
if
you
need
to
change,
for
example,
the
ip
address,
so
I'm
using
static
ip
assignments,
I
need
to
change
the
ip
address
for
the
the
network,
adapter
that
sits
on
this
subnet.
That
should
be
a
reload
right
right,
basically
blow
away
the
nodes
right,
destroy
it
or
reload
the
operating
system
and
provide
that
static
ip
address
again
at
that
kernel
parameters
line
and
let
it
implement
it
that
way
and
remember
we're
using
machine
config
machine
config
pools.
Basically,
that's
the
only
configuration
that
you
need
to
do
right.
B
I
give
it
a
new
ip
address.
It
pulls
that
configuration
in
all
of
it
gets
reapplied
just
like
it
was
before
the
workload.
Excuse
me.
It
gets
rescheduled
over
to
that
particular
node
and
we're
good
to
go
with
a
couple
of
minutes
of
downtime.
Basically
yeah
yeah,
it's
one
of
those.
We
have
to
think
about
managing
those
nodes
a
little
bit
differently
in
that
we
want
to
be
very
careful
about
creating-
and
I
know
everybody
loves
the
term
snowflakes.
B
Yes,
we
want
to
think
about
managing
at
scale
managing
nodes,
homogeneously,
so
that's
and
they
don't
all
have
to
write.
Not
every
node
in
the
cluster
needs
to
be
the
same,
but
within
the
same
machine
config
pool.
A
Right,
like
you,
can
have
different
machine
instance,
types
in
the
same
node
pool
based
off
your
needs
right,
like
gpus
versus
you,
know,
regular.
You
know
whatever
instance
left
so
willie
has
a
question.
We
only
have
five
minutes,
maybe
next
section
next
session,
I
will
ask
about
the
proxy.
Thank
you
all
love.
This
edition
and
diversity
of
the
session.
Much
appreciated.
B
Yeah, happy to talk about the proxy. I could...
A
Yeah, at a high level, and I've...
B
Yeah, thank you. I really appreciate that, and please don't be afraid to ask clarifying questions, because that's what we're all about.
B
I'm the king of dumb questions, as you all know, in our private Slack. Yeah.
B
So, we had a number of questions come up about using proxies with OpenShift. Effectively, the proxy is pretty straightforward. I've got a config — I'm not going to show it now, because the VM is turned off, but I will show deploying it next time. Effectively, if you're using a proxy to access the internet, it's pretty straightforward, and I'll just pick any one of these, right.
B
I'll
do
one
up
I'll
use
your
proxy
or
your
search
which
didn't
work
for
me.
It
took
me
to
the
operator
lifecycle,
manager,
proxyconfig
clusterwide
proxy,
so
you
can
configure
the
cluster-wide
proxy
either
at
install
time
or
after
install,
depending
on
what
you're
doing
so.
You
can
see
here
if
it's
after
install
pretty
straightforward,
create
a
proxy
object
that
has
your
certificates
if
you're
using
you
know
man
in
the
middle
proxy
and
basically
the
cluster
configures
itself.
B
Every network that doesn't need to go through the proxy needs to be defined here, in noProxy. Theoretically, the network that matches your — again — machine CIDR, or machine network, defined below, doesn't need to be in there; I put it in there just to be safe. But if a network is not in this list, it will try to proxy to it. So if your workstation is in a different network, if your HTTP server for Ignition is in a different network — all of these things — it needs to be in this noProxy list. Otherwise, you will have issues. Big issues.
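The post-install version is the cluster-scoped Proxy object; a sketch with placeholder proxy and network values (the trustedCA config map is only needed for a TLS-intercepting proxy):

```yaml
# Cluster-wide proxy configuration, edited after install with
#   oc edit proxy/cluster
# Anything NOT matched by noProxy gets sent through the proxy.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.example.com:3128    # placeholder proxy endpoint
  httpsProxy: http://proxy.example.com:3128
  noProxy: .cluster.local,.svc,10.0.0.0/16,192.168.1.0/24  # placeholder networks
  trustedCA:
    name: user-ca-bundle    # config map holding the proxy's CA certificate
```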
A
All
right,
so
it's
top
of
the
hour
I
got
to
switch
over
to
open
shift
commas
briefing,
which
is
going
to
talk
about
instanta
we've
got
mathias
lubkin,
I'm
pretty
sure
I
messed
up
his
name.
I
apologize,
but
they
are
up
next
to
talk
about
the
astana
operator.
So
thank
you
very
much
andrew
appreciate
your
time
today.
Sorry
for
the
hard
cutover.