Description
Does the newest version of Red Hat OpenShift, OpenShift 4.7, bring anything new to the table for those of us that run it? In this episode we’ll take a look at some of the most important new features for administrators, including Kubernetes 1.20, the Pod descheduler, GitOps, and more!
A
Good morning, good afternoon, good evening, and welcome to another edition of the OpenShift Administrator Office Hours. We are going to be talking about what's new in OpenShift 4.7 today, and I am joined by the one and only Andrew Sullivan, the cuddliest curmudgeon I could find to talk about it.
B
Right, so hello, hello, hello, welcome everyone. So this is the OpenShift Administrator Hour, or... we are in the process of rebranding to the Ask an OpenShift Admin Office Hour, yeah. I mentioned last week that our marketing team, our branding team, has very kindly agreed to help us out and help promote the show and just generally improve things overall. So thank you to them. But: same great hosts, same great time, same great show, just a slightly different name, and probably some new colors and new images coming in the future.
B
Yeah, you know, we're getting official. Yeah, official-ish.
B
So this is one of the office hour shows, right? Really, the goal here is for you, our audience, all of the people listening and watching, to be able to ask us questions about whatever is on your mind, whatever you would like to know about anything OpenShift related. I love to say that we can field any of those questions regardless of the topic, but you know, Chris and I both have an administrator background.
B
So if it's dev related, we'll probably have to take a note and follow up with you, yeah.
B
Yeah, and don't let that stop you from asking those questions, seriously. We are dependent on you, on our audience, for those questions, for that interaction, so please don't hesitate at any time to ask any of those questions that you have.
B
However,
in
the
absence
of
that
in
in,
if
you
all
are
feeling
shy
or
quiet
or
you
know,
all
of
your
questions
have
been
answered
already,
I'm
sure
that's
the
case
right.
We
do
have
a
topic.
We
we
always
come
prepared
with
a
topic,
and
today
that
happens
to
be
drumroll,
please
openshift4.7,
and
why
today,
because
today
is
the
ga
day
today
is
ga
day
today
is
ga
day.
B
So
if
you
go
to
openshift.com,
you
can
see
there's
at
least
one,
if
not
like
five
or
six
blog
posts
that
were
all
put
up
around
various
aspects.
If
you
browse
to
cloud.redhat.com
and
go
to
the
openshift,
install
page
you'll
be
able
to
download
all
of
the
binaries
and
everything
else.
A
It's here, yeah. Someone asked yesterday when the 4.7 release is, and all I said was "soon," knowing the whole time that it's today.
B
You say that, but OpenShift was GA last night when I went to bed. Well, you know, we were doing this APAC virtual tour thing, so I went to bed about 1am and it was GA, and I woke up this morning to "we found a really late bug, so we're gonna have to push by a week." So, yeah.
B
Exactly. Well, most of us anyways. Oh yeah, good point. So today's topic is 4.7, and I'm going to focus on... I was reading to Chris the list of things that I've got for today. There's a bunch of things that I want to talk about, various aspects that I think are going to be important for us as administrators, so I'm going to kind of go through those. But again, don't let that stop you from asking whatever questions happen to come to mind. You know, if there's something that's bugging you, something on the top of your mind, don't hesitate. All right, I am going to share my screen.
B
Today, if I could talk now... so today's screen share is probably going to be documentation heavy, and there's a couple of different reasons for that. One, I like to think that the documentation is the source of truth, right? And if it's not, if there's something wrong, then our goal is to get that fixed and get that updated.
B
If you aren't familiar, there's this really great "open an issue" link up here in the top corner. At any point in time, if you find something that's wrong, go and click that button and it'll open a GitHub issue, and the docs team is phenomenal about responding to those. It doesn't take a Red Hat employee, it doesn't take, you know, reaching out and poking us; literally anybody can do that. And you don't have to provide a fix, it can just be "hey..."
A
A couple comments: very interested in the compliance operator updates. And Pipelines, are they GA? I forget if that was part of it.
B
All right, yeah. So, Christian: do you want to install the operator, or just want it managed by OLM? Pipelines are not GA. Thank you, Christian, thank you. I hope your workout is going well... I hope, I'm playing.
B
For
those
who
don't
know,
christian
usually
listens
to
the
show,
as
he's
doing
his
his
wednesday
morning,
workout
okay,
yeah,
please
all
right!
So,
as
I
said,
documentation
is
super
important,
and
I
know
that
the
docs
team
put
a
huge
amount
of
effort
into
this
release
into
you
know
they're
always
doing
lots
of
effort.
Sometimes
I
find
it
funny
to
appear
in
the
selector
if
you
go
all
the
way
back
to
like
4.1
and
look
at
what
the
docs
look
like
they're,
pretty
dramatically
different,
just
the
amount
of
content.
B
That's
here
that
they
maintain
that
they
add,
like
you
see
over
here,
there's
no
day
two
thing:
there's
no
here
the
post
installation
configuration
none
of
that
stuff
existed
until
later
versions,
so
the
body
of
work
here
is
absolutely
incredibly
massive
yeah
and
I
I
really
can't
sing
enough
kudos
about
the
docs
team.
I
know
that
it's
a
herculean
task
that
they
take
on
so
a
couple
of
things
to
look
at
here.
A
couple
things
to
be
aware
of.
B
As
always,
the
release
notes
are
going
to
have
a
pretty
substantial
amount
of
information
about
any
release.
You
probably
can't
see
it
on
your
screen,
but
my
little
scroll
bottle
over
here
is
quite
tiny,
because
this
is
a
massive
page.
Yeah,
definitely
recommend
that
you
read
through
that.
You
look
through
that.
You
check
out
all
the
things
here
in
particular
pay
attention
to
the
deprecated
and
removed
features
right.
There's
this
lovely
chart
here
of
things
that
have
been
changed
or
removed.
B
Additionally,
I'm
going
to
jump
back
up,
the
top
apologies.
I
would
recommend
checking
out
the
known
issues.
So
known
issues
are
always
an
important
thing
right.
If
there's
something
that
you
should
be
aware
of
like
this
one,
all
the
way
back
from
4.1
right,
making
a
recommendation
on
how
to
improve
the
security
so
always
always
always
be
sure
to
at
least
check
again.
A
So I saw the GitOps operator was released in 4.7. It is still tech preview.
B
Getting started with the OpenShift GitOps... so it is tech preview. I would definitely recommend, if you're interested in GitOps and the Argo CD stuff, check out Christian's live stream. It is every other Thursday at 4:00 p.m. Eastern... 3 p.m. Eastern.
B
Yeah
so
definitely
check
out
christian's
livestream,
it's
a
great
one.
I
always
learn
something
you
know
this
get
up
thing
is,
is
new
to
me
as
well,
and
I'm
learning
as
I
go
so
he's
really
good
at
explaining
those
concepts.
B
So
all
right.
The
first
thing
I
want
to
highlight
with
4.7
is:
we
have
updated
to
right.
The
underlying
kubernetes
is
now
1.20,
there's
a
great
blog
post
that
was
done
by
guav.
B
I'm
sure,
I'm
butchering
that
name
who's
on
the
product
management
team,
around
kind
of
the
what's
new,
or
what's
the
things
that
have
changed
in
kubernetes,
1.20
and
being
that
this
is
on
the
openshift
blog
right.
It
is
going
to
be
things
that
are
important
and
relevant
in
openshift,
especially
and
not
just
you
know,
kubernetes
generically.
B
So
a
couple
of
the
interesting
ones
here,
right,
storage,
we
can
see
things
like
snapshot.
Objects
are
now
ga
they're,
also
ga
inside
of
open
shifts
additional
network
things
right
all
kinds
of
stuff.
Inside
of
here,
I'm
not
going
to
go
through
and
read
this
blog
again
definitely
check
that
out,
because
kubernetes
being
the
core
of
openshift
remember,
openshift
is
built
on
top
of
kubernetes.
B
Reading
chat
by
the
way,
so
the
next
thing
I
wanted
to
talk
about,
if
you,
if
you
haven't
heard-
and
we've
done
a
couple
of
streams
on
this,
we
did.
I
know
we
did
one
in
the
early
days
of
this
show.
I
think
we've
also
done
another
dedicated
live
stream
for
the
assisted
installer.
B
So
if
you're
not
familiar
all
I've
done
here,
this
is
cloud.redhead.com
right.
If
we
from
our
clusters
here,
if
I
go
to
create
cluster,
it
takes
me
to
this
page
and
then
we
can
come
down
here
to
the
bottom
and
do
this
platform
agnostic,
and
I
have
this
assisted
bare
metal
installer.
B
So
the
page
here
still
says
developer
preview,
but
it's
actually,
I
think
it's
now
beta
or
whatever
the
next
step
in
the
the
path
to
ga
is
so.
It
has
been
promoted
right
to
beta.
Essentially,
I
can
select
here
and
this
walks
me
through
the
process
of
deploying
a
cluster
on-premises
or
in
the
cloud
I
think,
depending
no.
I
think
it's
on-premises
only
we'll
do.
B
Yeah, my brain is only partially functioning. We had a long conversation about DHCP already this morning; I'll talk about that more in just a moment as well. And somebody pointed out that, oh yeah, that only applies to IPI, and I'm like, wait, where does it say that? Oh no, it's right across the page: bare metal IPI only. Oh, okay, yeah, my brain is... anyways. So the assisted installer is a really great way of very simply deploying a cluster. Essentially, you plug in your information, right, you hit this "generate discovery ISO."
B
It gives you an ISO that you boot your servers to. So again, physical or virtual, bare metal installation, and then they boot to that ISO, and then they register themselves here in the interface, and...
B
Yes, the nodes show up here, right? You can assign their node... I don't have any nodes; all of my resources are currently in use for 4.7 clusters that are demo clusters and all that other stuff, so I don't have anything to show here. But the nodes show up, you can assign their role, you can assign all kinds of things. You don't need a dedicated bootstrap host, right?
B
It takes one of the nodes and uses it to bootstrap the others, and then reloads that node to join the cluster as a normal host. Lots of cool stuff that's happening inside of here, and they're making constant improvements to this as well. I was just talking with them about... they're wanting to integrate OCS deployment into this as well. So basically you tick the box, say "I want OCS," and it will automatically deploy it out of the box for you.
B
I'm trying to read the chat here: "does OLM even manage non-supported operators?" So, yes. OLM, Operator Lifecycle Manager, is used for all operators that are a part of a catalog inside of OpenShift.
B
Yeah, although I will say that the Operator Framework is fully supported. So if you're creating your operators and you have an issue with the Operator Framework, we can help with that. But again, whatever your custom logic is, we can't help you write your logic, for better or worse.
B
Let's
see
so
moving
on
so
one
of
the
things
I
wanted
to
talk
about
with
regard
to
the
docs
team.
This
is
probably
my
favorite
new
page
in
the
documentation,
so
you
can
see
I'm
on
the
4.7
documentation
here,
just
the
very
top
level
about,
and
we
have
this
learn
more
about
openshift
container
platform
and
what
the
docs
team
is
calling.
This
is
essentially
persona
based
documentation
so,
depending
on
your
role
right,
what
is
what
is
your
job
with
regard
to
or
in
relation
to
the
openshift
cluster?
B
There's
links
directly
to
the
things
that
are
relevant
for
you
in
the
various
stages
of
the
cluster
life
cycle.
So
I'm
an
administrator
right.
What
do
I
need
to
know
before
getting
started?
What
do
I
need
to
know
to
deploy
my
particular
cluster
so
on
and
so
forth?
So,
if
you're
new
to
this,
especially
if
you're
not
new,
even
you
may
find
a
lot
of
useful
information
here.
Every
time
I
look
at
the
docs,
I
swear.
I
find
new
pages
that
I
didn't
know
existed.
Yeah
yeah
tell
me
about
it.
B
Let's
see,
let's
move
over
here,
oh
another
one!
That's
I
just
learned
about
this
one
this
morning,
speaking
of
the
docs
team,
validating
and
installation.
B
So
for
a
long
time
effectively
we
document
how
to
deploy
the
cluster.
We
document
how
to
use
the
cluster,
but
we
never
had
anything
that
said:
here's
how
to
figure
out
whether
or
not
the
deployment
was
successful
right.
Here's
right
how
to
figure
out
whether
or
not
it
really
did
the
things
that
it
did.
B
You
know
our
our
generic
response
to
that
was
well.
If
the
installer
finished
successfully,
then
the
cluster
deploy
was
successful,
but
reality
was
often
you
know.
Sometimes
a
cluster
operator
was
still
in
the
process
of
doing
something,
or
maybe
it
was
successful,
but
there
was
still
a
lingering
error.
I
always
used
to
point
out
the
registry
right.
B
The registry shows that it's good, but unless you provide it some sort of storage, it's not actually there. So anyways, this documentation, this page here, walks through a number of different steps to kind of go through and just do a high-level validation that the cluster is deployed, the cluster is ready, right? You can begin going through and doing all of the day two configuration stuff. So, this section in the documentation here... let me move it up on the screen here.
B
So this one is near and dear to my heart, because I helped create this section of the documentation. But, you know, just going through and evaluating: what are the things that I need to do on day two? So, you know, do I need to go through and configure time synchronization? Probably a good idea, so there's documentation on how to do that, right? Do I need to add any kernel arguments? Do I need to change something with those nodes in order to make them...
B
...you know, conform to my particular usage, right? Real-time kernel, any of the vRAN stuff, or anything like that. Anyways, huge section of the documentation; you can see there's lots of different tasks associated with this. So I like to think of this as an administrator: if installing is sort of the Old Testament, if you will, the post-installation is the New Testament, right? It goes through and it covers everything that I need to know to basically hand off, right here.
B
Here's
your
accounts
developers,
a
capital,
k,
application
teams
go
forth
and,
and
do
great
things
be
awesome,
yeah
yeah,
and
they
they
should
be
good
at
that
right.
They
should
be
all
right
so
quickly
catching
up
on
chat
here
so
by
you,
yes,
assistant
installer
definitely
recommend
giving
that
a
a
a
poke
and
trying
it
out.
So
it
says
bare
metal.
Remember
that
bare
metal
is
an
installation
method,
not
an
infrastructure
type.
B
So
if
you're
doing
the
non-integrated
bare
metal
installation
with
virtual
machines,
it
will
work
there
as
well,
which
was
my
slight
confusion.
When
I
was
talking
about
it
initially,.
A
Yeah, mine will be in the basement forever, just because it's loud.
B
All right, why did I... oh, so one of the other links. We talked about this a couple of weeks ago. You know, normally I open the show and I talk about recent developments, or things that have come to the top of my inbox, which, really, the big one that we'll talk about... I'll circle back to that in a few minutes.
B
But
one
of
the
things
we
talked
about
was
when
did
nodes
get
rebooted
and
nodes
can
be
rebooted
for
a
number
of
different
reasons
and
sometimes
not
always
expected,
and
during
that
episode
I
highlighted
that,
and
I
think
we
were
talking
about
registries
yeah.
It
was
the
registry
show.
I
highlighted
that
if
you
change
the
insecure
registries
right,
if
you
add
an
insecure
registry
or
remove
an
insecure
registry,
it
will
result
in
the
nodes
being
rebooted
and
there's
a
number
of
those
those
reasons
so
internally.
B
I
know
we're
working
to
document
all
of
those
conditions
right
all
of
the
times
that
the
machine
config
operator
will
reboot
the
node
to
apply
some
sort
of
configuration.
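For context, the insecure-registry change mentioned here is an edit to the cluster-wide image configuration. A minimal sketch (the registry hostname is hypothetical) looks like this; rolling the resulting registries configuration out to each node is what causes the machine config operator to drain and reboot them:

```yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  # The cluster-wide image config is a singleton named "cluster"
  name: cluster
spec:
  registrySources:
    insecureRegistries:
      # Hypothetical registry; adding or removing entries here causes
      # the machine config operator to roll the change out node by node
      - registry.example.com:5000
```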
B
But
right
now
we
have
in
this
section
of
the
documentation,
and
I
need
to
get
better
about
posting
all
these
links
into
the
chat
here.
Yes,.
B
Yeah
so,
and
my
apologies
for
not
posting
them
into
the
chat
already,
so
we
do
follow
up
each
one
of
these
shows
with
a
blog
post
that
will
have
all
of
these
links,
as
well
as
links
to
the
specific
times
in
the
video
for
various
topics.
B
B
B
B
You
can
almost
reasonably
expect
something
to
be
going
through
and
and
changing
updating
modifying
at
any
point
in
time.
I
I
I
had
this
conversation
with
someone
recently
of
we
effectively
release
a
z
stream
right,
so
4.7.1
or
4.6.
whatever
the
next
one
is
19
or
20..
You
know
every
two
weeks
I
think,
is
the
official
release
cadence.
B
So
it's
one
of
those
like
you
can
reasonably
expect
to
come
in
every
other
monday
and
see
that
I
have
an
update
to
apply
and
that's
going
to
trigger
all
of
the
nodes
in
the
cluster
to
reboot
and
with
a
sufficiently
large
cluster,
especially
if
it's
a
physical,
you
know
physical
servers.
Take
you
know
three
five,
eight
minutes
sometimes
to
reboot
that's
a
long
time
and
that's
a
lot
of
capacity
to
potentially
be
unavailable.
B
So
you
can
pause
those
updates.
Basically,
it's
a
mcp,
a
machine,
config
pool
setting
that
says
you
know,
pause,
updates
or
pause
roll
out
and
let
them
pile
up
so
that
it's
fewer
reboots.
But
there's
still
reboots
that
have
to
be
involved
in
many
cases.
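A minimal sketch of that setting, on the standard worker pool:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
spec:
  # While paused, newly rendered machine configs accumulate but are not
  # applied, so nodes are not drained or rebooted until you unpause
  paused: true
```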
B
This
just
helps
to
identify
and
alleviate,
like
you
said,
would
you
have
expected
changing
the
authorized
keys
for
ssh
to
result
in
a
node
reboot,
probably
not
so
identifying
those
reducing
them
where
we
can
that
type
of
stuff
yeah?
So
here's
the
one
I
was
referring
to
the
updating
the
registry
settings
on
the
nodes,
some.
B
So from here we kind of get into a little more... so those were kind of high-level, relatively small changes. I've got a whole bunch of other ones that we can talk about here; I have a sticky note to help me out. So, for KVM-based deployments: if you're deploying to Red Hat Virtualization or OpenStack, the QEMU guest agent is now included in CoreOS.
B
So if you were ever concerned, or if you noticed in your RHV or OpenStack GUI that you didn't have any details, like, I don't know, the guest IP address, all of that should be there now.
B
So
I
think
we
have
officially
documented
that
it
is
supported
to
with
openstack
deploy
a
cluster
that
spans
both
physical
and
virtual.
So
this
is
a
question
that
comes
up
quite
a
bit.
Actually,
so
normally
you
cannot.
You
can
deploy
a
cluster
that
only
consumes
one
infrastructure
type.
So
what
do
I
mean
by
that?
If
I
do
a
upi
or
an
ipi
installation?
B
B
So
I
can't
deploy
some
nodes
that
are
upi
to
red,
hat,
virtualization
and
sub
nodes
that
are
upi
to
vsphere
right
because
they're,
two
separate
cloud
providers,
two
separate
infrastructures
and
it
just
doesn't
work
and
that's
a
kubernetes
limitation,
not
an
openshift
limitation
right.
So
the
way
to
get
around
that
is
a
non-integrated,
aka
bare
metal,
upi
installation,
so
essentially
there's
no
integration
with
the
underlying
infrastructure.
B
So with OpenStack, because of Ironic, we can effectively use the same cloud provider, the OpenStack cloud provider, to talk to both virtual machines and physical servers. So as a result, that means that you can mix infrastructure types there as much as you would like. So maybe a virtual control plane with physical worker nodes. Or, you know, create a compact cluster, right, day-two schedulable control plane nodes that are all physical, and then dynamically scale that up and down, all using that OpenStack cloud provider. Nice.
B
We have now added wheels to our car. Thank you, Christian, yeah. Let's see, what else have I got here? CSI snapshots are generally available. If you haven't seen that yet, I'm going to switch over to here; if we come down to storage... and so I have... wait, which cluster am I in? I'm in the wrong cluster.
A
Yeah, was it marker dot dom? I forget. I'll find it real quick here in our archives.
B
If you use this, right... these GUI elements existed in 4.6, but it had a big red banner across the top that said tech preview. So it is no longer tech preview; it is now generally available, right? You can go in, you can see all of the snapshots, you can manage the snapshots, you can revert to a snapshot, you can create new volumes off of a snapshot, right? Do all of the great...
B
...you know, things that you would like to do with a snapshot, as the case may be. Whatever floats your boat, right? It's a snapshot, yeah. So in that cluster, the one that I was just showing is one that I use for OpenShift Virtualization testing. OpenShift Virtualization will do CSI clones of volumes when creating virtual machine disks, which is really cool.
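Beyond the GUI, the now-GA snapshot objects can also be created declaratively. A minimal sketch (the names and the snapshot class are hypothetical):

```yaml
# snapshot.storage.k8s.io/v1 is the API group that went GA in Kubernetes 1.20
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-data-snapshot          # hypothetical
  namespace: my-project           # hypothetical
spec:
  volumeSnapshotClassName: csi-snapclass    # hypothetical class name
  source:
    persistentVolumeClaimName: my-data-pvc  # the PVC to snapshot
```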
B
So this is just the macOS desktop and screen that I'm sharing; you can't see the other virtual desktop.
B
So if we were to come over here to cluster settings, it would essentially say that it's unable to apply because of a degraded machine config pool. So while I'm here, you'll notice a couple of things. First and foremost, this is a GA 4.7 cluster deployed into Azure. Awesome. I deployed it literally two hours ago, something like that, yeah. So if you go to cloud.redhat.com, right, and you pull down the openshift-install and oc client binaries, you can go and deploy a new cluster using 4.7.
B
If
you
would
like
right
now,
nothing
wrong
with
that.
So
a
couple
of
things
to
note
here,
first
you'll,
see
that
this
update
status
version
not
found.
Well,
that's
only
sort
of
true.
So
yes,
4.7
is
available.
Yes,
it's
fully
supported.
Yes,
the
version
really
does
exist.
What
this
is
saying
is
that
the
update
or
the
cincinnati
update
channel
is
not
seeing
4.7.
B
So
we've
looked
at
this
before
we
looked
at
it
a
few
weeks
ago.
If
I
go
to
github.com
openshifts
and
I
find
the
cincinnati
graph
data
repository.
So
this
repository
and
specifically
the
channels
data
that's
inside
of
here-
is
where
it
looks:
you'll
notice
that
there
is
no
stable
4.7
and
that's
why
it's
complaining.
That's
why
it's
saying
the
version
is
not
found.
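The Cincinnati graph data is ultimately just JSON: version nodes plus directed edges for allowed updates. As a rough sketch of what the channel tooling computes (the toy graph below is made up, but it mirrors the shape of the real data), a breadth-first search yields the shortest supported upgrade path:

```python
from collections import deque

# Toy graph in the Cincinnati shape: "nodes" are versions,
# "edges" are [from_index, to_index] pairs of allowed updates.
graph = {
    "nodes": [{"version": v} for v in ["4.5.1", "4.5.24", "4.6.9", "4.6.17"]],
    "edges": [[0, 1], [1, 2], [1, 3], [2, 3]],
}

def upgrade_path(graph, start, target):
    """Return the shortest list of versions from start to target, or None."""
    versions = [n["version"] for n in graph["nodes"]]
    adjacency = {i: [] for i in range(len(versions))}
    for src, dst in graph["edges"]:
        adjacency[src].append(dst)
    queue = deque([[versions.index(start)]])
    seen = {versions.index(start)}
    while queue:
        path = queue.popleft()
        if versions[path[-1]] == target:
            return [versions[i] for i in path]
        for nxt in adjacency[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Matches the example in the show: 4.5.1 needs an intermediate hop
print(upgrade_path(graph, "4.5.1", "4.6.17"))  # ['4.5.1', '4.5.24', '4.6.17']
```

The real graph is served per channel by the update service, so the hops it returns are exactly the ones the labs tool displays.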
B
It's expected. It should be available in fast relatively quickly, but remember, fast indicates that, basically, we don't trust it in stable yet. And "don't trust" might be a bit of a strong word, but essentially it means we haven't gone through all of the normal testing, all the normal validation, for all of the potential upgrade paths. So keep an eye on... and now I'm not going to have the link. Where's the link?
B
See, this is what happens when I've gotta log in, hang on. Anyways, so Chris is digging for... we recently published, a week ago, maybe two weeks ago, one of the labs teams published to access.redhat.com a tool that very helpfully will show you the exact upgrade path from your current version to whatever version you want to go to. Just drop...
B
So
if
we
look
at
this
and
again,
this
probably
won't
reflect
4.7,
because
4.7
is
not
in
cincinnati
yet
right,
but
I
can
select
yeah
so
right
now,
I'm
on
stable
4.6
and
I
want
to
go
to
where
I'm
currently
on,
say
I'm
on
four
dots,
9
4.6.9
and
I
want
to
go
to
4.6.17..
B
It
tells
me
exactly
how
to
get
there.
4.99
6.9
to
17
is
kind
of
boring.
So
let's
pick
something
earlier
right,
so
this
is
telling
me
if
I
were
today
on
4.5.1
to
get
to
4.6.17,
I
would
have
to
go
to
an
intermediate
4.5.24
right.
So
when
4.7
is
in
cincinnati,
all
of
this
will
automatically
update
they.
All
all
of
these
things
rely
on
that
repo
for
their
data
and
you'll
be
able
to
see
those
things.
Just
remember.
Stable,
often
takes
two
to
four
weeks
after
the
release
to
reflect
the
new
version.
B
So
we
go
through
this
with
every
4.x
release.
So
don't
be
alarmed,
don't
be
surprised
if
you're
using
the
stable
channel-
and
you
don't
see
the
4.7
update
for
a
little
while
also
keep
in
mind
that
it
is
going
to
be
going
to
be
dependent
on
which
source
or
which
current
versions
are
eligible
for
that
upgrade.
B
Yeah, this is one of those... you know, Kubernetes and OpenShift, and the applications that are deployed. It's super important for us as administrators to work with those app teams, because, for example, if they don't define a pod disruption budget that is appropriate, then yeah, us just going about doing an update or an upgrade can potentially break something, right? Those pod disruption budgets, and the other things that are in place to protect the workload, are important. And you know, with a virtualization solution like RHV, we pretty much own that, right?
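As an illustration of what the app teams would define (names and numbers are hypothetical; from Kubernetes 1.21 onward the API group is `policy/v1` rather than `policy/v1beta1`):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb            # hypothetical
spec:
  # Keep at least two replicas running during voluntary disruptions,
  # such as the node drains an upgrade performs
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app             # hypothetical label
```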
B
So yeah, it's a little bit of a... and I feel like I'm probably preaching to the crowd, or preaching to the choir, with this audience. But it's important to understand that we do have to communicate.
A
Yes, no, communication is key. Talking to your co-workers is, like, most of DevOps, right? Like, no offense, but it kind of is. It's also the hardest part. Exactly, the culture is the hardest part.
B
All
right
circling
back
around,
so
I
think,
as
you
pointed
out
chris,
you
did
a
live
stream
with
either
mark
and
don
or
mark
ordon,
or
there
was
a
dedicated
live
stream
about
ipsec
encryption
with
ovn
kubernetes
in
4.7.
B
B
So,
for
example,
when
ipsec
is
enabled
following
network
traffic
flows
between
pods
are
encrypted
so
traffic
between
the
paw
between
two
pods
on
the
cluster
network.
So
without
using
you
know
an
application
level
or
a
pod
level,
you
know
tls
certificate
or
something
like
that,
essentially,
that
inc
that
communication
is
encrypted
by
default.
B
You
don't
have
to
do
anything.
You
turn
it
on
at
the
cluster
level
and
it's
automatically
happening
there,
but
note
that
things
like
so
traffic
between
pods
on
the
host
network
is
not
encrypted.
So
in
other
words,
if
it
never
leaves
the
host,
it
doesn't
get
encrypted
right.
If,
if
both
pods
are
on
the
same
host,
that
traffic
would
not
be
encrypted
so
read
through
this.
Please
make
sure
you
pay
attention
to
when
it
does
and
doesn't
apply
and
how
that
affects.
Whatever
your
security
stance
may
be,
you
know
I
understand.
B
Security
teams
can
be,
shall
we
say
curmudgeony.
Sometimes
I'm
familiar
with.
B
Yeah, and in my experience, you know, when I used to have to certify, or work with, the security teams, oftentimes I had to explain the technology to them and help them. I think that's one of the reasons why I feel like I do okay at being a TMM. You know, TMM is basically explaining engineering-level technology to us regular people, and asking a lot of stupid questions of engineering. It's translating. And security folks sometimes need that help, so, anyways.
B
You know, I use the install config... or not the install config, the kernel parameters or the live ISO, and configure my first network interface for the machine CIDR, right, the one that's gonna run the SDN and all that other stuff. But maybe I've got, you know, two or four or however many other interfaces, and I need to create a bond, and I need to put some VLANs on that bond so that it can connect to, you know...
B
So NMState makes that super, super easy. Essentially, if I go to "updating the node network configuration"... let me paste this into the... Christian Hernandez: "I am not a people person." So if we look inside of here, right, here's how to create a VLAN interface using the NMState operator. Essentially, I'm creating a NodeNetworkConfigurationPolicy.
B
That
says
I
want
to
create
a
vlan
named
eth1.102
with
this
vlan
interface
information
and
we
apply
that
it
uses
the
node
selector
to
automatically
apply
it
to
the
nodes.
So,
for
example,
if
I
had
this
node
selector
as
a
role
right,
maybe
it's
node
selector
is
workers,
and
you
know
some
other
arbitrary
label
associated
with
my
machine
set
or
something
like
that
right,
anytime,
I
automatically
add
a
node
or
a
node
joins
the
cluster.
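That policy, matching the eth1.102 example described here, looks roughly like this (the policy name is hypothetical, and the worker selector and VLAN ID are illustrative):

```yaml
apiVersion: nmstate.io/v1beta1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan-eth1-102                  # hypothetical policy name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: "" # apply to every worker node
  desiredState:
    interfaces:
      - name: eth1.102
        description: VLAN 102 on eth1
        type: vlan
        state: up
        vlan:
          base-iface: eth1
          id: 102
```

Because the policy is selector-based, any node that later joins with a matching label gets the same interface configuration automatically.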
B
I
was
telling
chris
before
the
show
I
found
a
bug
in
the
way
that
it
I
was
trying
to
create
or
move
a
secondary
network
interface
to
a
linux
bridge
that
had
already
been
given
an
ipv
via
dhcp,
and
it
didn't
like
the
way
that
the
routes
worked
so
interesting.
It's
it's
a
bug
in
that
it's
unexpected
and
expected
behavior.
B
The
work
around
is
to
first
remove
the
default
route
that
it
creates
for
that
network
interface
and
then
move
it
to
the
bridge,
but
so
yeah
that
and
that
team
is
super.
Helpful
petter
is
the
engineer
that
I
was
working
with
super
super
super
helpful
again
much
like
the
docs
team.
Can't
say
enough
good
things
about
them,
exactly:
oh,
horizontal,
pod,
auto
scaler.
B
So
if
you're
not
familiar
with
hpa,
historically,
it
automatically
uses
right.
It
basically
uses
the
metrics
to
gauge
when
a
pod
exceeds
whatever
the
defined
cpu
threshold
is
so
maybe
it's
using
more
than
I
don't
know
two
cpus
worth
of
of
of
you
know:
2000
ml
cpus
worth
of
resources.
Hey
it
exceeded
this
threshold,
I'm
now
going
to
automatically
deploy.
You
know
two
three,
five,
ten
additional
instances
of
the
pod
to
help
spread
that
workload
out.
Historically,
it's
only
been
cpu
based.
B
So
as
of
this
release,
as
of
4.7,
we
now
add
memory
based
utilization,
metrics
as
well
nice,
so
yeah
very
helpful.
You
can
still
use
custom
metrics
as
well,
so
I
don't
know
if
that's,
I
think,
that's
ga,
but
effectively
defining
custom
metrics
to
trigger
auto
scaling
pod,
auto
scaling
is,
is
a
thing
if
you
so
choose.
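A minimal sketch of the new memory-based scaling (target names and thresholds are hypothetical; `autoscaling/v2beta2` is the metrics-aware API version in the Kubernetes 1.20 era):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa              # hypothetical
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # hypothetical deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: memory            # new in 4.7: memory as well as cpu
        target:
          type: AverageValue
          averageValue: 500Mi   # scale out above this average usage
```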
B
So
another
one
that
I'm
that
I'm
strangely
excited
about
the
descheduler,
so
the
descheduler,
which
sounds
kind
of
scary,
is
an
effort
to
balance
the
workload
in
the
cluster
according
to
your
policy
right.
So
when
we
think
about
the
kubernetes
scheduler,
essentially
it
works
off
of
a
bin
packing
algorithm
right.
How
do
I
you
know
this?
This
pod
needs
this
much
resources
right.
All
of
my
nodes.
Have
these
much
resources
available
right?
B
B
It
won't
start
removing
pods
from
a
node
to
free
up
resources
until
there
is
active
contention
right
whatever
that
threshold
happens
to
be,
I
think
it's
90
or
95
by
default,
node,
auto
scaling
doesn't
take
effect
until
pods
fail
to
schedule,
so
the
descheduler
essentially
is
looking
for
conditions
and
we
can
define
those
those
conditions
using
these
policies
or
profiles,
and
basically
have
it
say
this
pod
is.
I
want
to
reschedule
this
pod
so
effectively.
B
So
the
profiles
combine
a
bunch
of
different
strategies
to
then
try
and
make
it
behave
the
way
we
want
it
to
so.
What
do
I
mean
by
that
affinity?
Intains?
So
if
I
enable
the
descheduler
using
the
affinity
in
taints
profile,
it
is
going
to
look
for
pods
that,
as
we
can
see
here,
are
violating
the
pod
anti-affinity
rule
or
the
node
anti-affinity
rule.
B
So
the
reason
for
this
is
relatively
straightforward
right,
you
think.
Well,
I
set
an
anti-affinity
rule
right.
Why
why?
Why
would
it
be
in
violation,
and
it
can
happen
for
a
number
of
reasons?
Maybe
it's
a
soft
anti-affinity
and
you
want
to
go
back
and
reinforce
that
anti-affinity,
maybe
the
paw
or
the
particularly
with
the
taint.
B
So
there's
a
couple
of
different
profiles
here,
affinity
intents
topology
and
duplicates
as
well
as
life
cycle
and
utilization.
You
can
apply
more
than
one
of
these.
If
you
so
choose,
you
can
have
all
three
if
you
like,
so
life
cycle
and
utilization,
one,
I
think,
is
if
we're
familiar
with
how
resource
balancing
happens
in
like
red
hat
virtualization
vsphere
with
drs
et
cetera,
this
is
kind
of
a
similar
concept
right,
so
low
node
utilization
right
finds
nodes
that
are
underutilized
and
evicts
pods
so
effectively.
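Enabling it is a single custom resource; a sketch along the lines of the 4.7-era docs (the API version shown is from that era and has since been revised, and the interval is illustrative):

```yaml
apiVersion: operator.openshift.io/v1beta1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  deschedulingIntervalSeconds: 3600   # evaluate once an hour
  profiles:
    - AffinityAndTaints               # evict pods violating (anti-)affinity or taints
    - LifecycleAndUtilization         # rebalance under/over-utilized nodes
```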
B
Effectively, what this is doing is saying: I have maybe two nodes in my cluster. One of them is at 10% utilization, one of them is at 80% utilization, so I have a node that is below my low node utilization threshold. So even though the other node is happy and healthy as far as its resources are concerned, I can move some of those pods over and make the two nodes more balanced.
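As a sketch, enabling these profiles is done through the descheduler operator's custom resource. The field names below are from memory of the 4.7-era operator, so treat them as assumptions and check the current documentation:

```yaml
# Hypothetical KubeDescheduler resource enabling two of the profiles discussed;
# verify field names against your operator version's documentation.
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  deschedulingIntervalSeconds: 3600   # how often eviction candidates are evaluated
  profiles:
    - AffinityAndTaints
    - LifecycleAndUtilization
```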
B
So it will, you know, terminate pods on the highly utilized node, or nodes in this instance, in order to hopefully have the scheduler put them onto the lower-utilization nodes, again balancing that out. And then the pod lifetime: you can say, hey, I want to constantly have pods that are less than 24 hours old, and stuff like that. So, kind of in conjunction with that, in Tech Preview, is scheduler profiles.
B
I think the technical term is "hope and prayer", effectively. If I use the low node utilization profile, it's going to say, well, I've got these, in my example, two nodes; this one's highly utilized, this one's underutilized. I want the workload to move from high to low. So it terminates pods on the highly utilized one with the hope, and the assumption, that the scheduler will make the right decision and put them onto the lesser-utilized node, but that's not a guarantee.
B
A pod could get rescheduled right back onto the same node. So the scheduler profiles assist with making those types of decisions: hey, I want you to target nodes that have the lowest utilization, or maybe I want you to pack as many pods as possible into as few nodes as possible.
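For reference, the scheduler profile is set on the cluster-wide Scheduler config. The profile names here (LowNodeUtilization as the default, HighNodeUtilization, NoScoring) are from memory of the Tech Preview docs, so double-check them before use:

```yaml
# Hypothetical example: tell the scheduler to pack pods onto as few nodes as
# possible (HighNodeUtilization) rather than spreading them out.
# Tech Preview in 4.7; confirm field and profile names against current docs.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  profile: HighNodeUtilization
```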
B
Remember, they are Tech Preview, but this is something that I found, particularly when used in conjunction with the descheduler, to be a pretty powerful way of either balancing the workload across your cluster, if you want that even distribution, or compacting it into as small a space as possible, depending on your preference. Maybe you want to have as few nodes as possible; maybe you want to have extra capacity.
B
So, the last thing I've got to talk about. I skipped over that tab; that was the Getting Started with GitOps tab, which we talked about earlier. The last thing I wanted to talk about, and I kind of alluded to this earlier, is that we had a lengthy conversation in one of the internal chat rooms about DHCP and IPI.
B
We rely on, you know, the intrinsic infrastructure functionality to do things like give it an IP address, but there can be issues there, and this is what turned into a 200-plus message thread with a bunch of our field folks, as well as a bunch of US-based folks. And, you know, Chris pointed out that it's really great that Red Hat has this culture of very open feedback.
B
So one of the things that, I believe, is documented in a KCS (I'll have to dig up the KCS) is that we recommend setting static DHCP reservations for the control plane nodes after deployment. So when the cluster deploys, it comes up, it pulls those DHCP addresses, and then, day two, for the control plane, you set those as static. The rationale makes sense: maybe I have a small DHCP scope, or maybe I have a DHCP scope that is almost full, so there's a lot of churn.
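For example, if your DHCP service happens to be dnsmasq, a static reservation per control plane node might look like this (the MAC addresses, IPs, and hostnames below are made up for illustration):

```
# dnsmasq: pin each control plane node to a fixed address so its IP never changes
# dhcp-host=<MAC>,<IP>,<hostname>  -- all values below are hypothetical
dhcp-host=52:54:00:aa:bb:01,192.168.100.10,master-0
dhcp-host=52:54:00:aa:bb:02,192.168.100.11,master-1
dhcp-host=52:54:00:aa:bb:03,192.168.100.12,master-2
```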
B
The reason for that is, as you might expect, etcd. etcd is configured by the etcd cluster operator to use IP addresses for discovery, and the operator will recover: basically, if it says, hey, there's a new control plane node, there's a new etcd node, I need to point at all of that, it'll reconfigure everything. And that's great, so long as only one node at a time changes. If I have two nodes change, or all three nodes change, effectively etcd can't find its peers.
B
So we went through this debate literally at, I don't know, nine o'clock last night. Again, I was up late because of the APAC virtual tour thing, and it turns out, unbeknownst to me (I found out this morning), that our smart engineering folks are already starting to address this. Obviously this has been in the works for longer than we had been talking about it last night. But if we look in the documentation here, and I'm going to post this, this is the bare metal IPI install prerequisites documentation.
B
So this was really interesting to me, because if we think about it, DHCP leases are set so that they can eventually be reaped and reused, et cetera. But if the lease is infinite, then the host can basically assume, well, this is always going to be mine. It doesn't expire, it's not going to get reaped, it's not going to be
B
given to something else. So I went and did some digging and found the script that actually does it. Really cool, really interesting: basically, as we can see here, it's using nmcli to figure out the connection ("connection show") and convert an infinite lease to static. If it determines that the interface has an infinite DHCP lease, it modifies the interface to a static IP, and this works for IPv4.
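I haven't reproduced the actual script here, but as a toy sketch of just the decision it makes: DHCP uses the all-ones lease time (0xffffffff, 4294967295 seconds) to mean an infinite lease, so the check boils down to something like this (illustrative only, not the real script):

```shell
# Toy illustration (not the real OpenShift script): decide whether a DHCP
# lease is "infinite". RFC 2131 defines 0xffffffff (4294967295 seconds)
# as the infinite lease time.
is_infinite_lease() {
  # $1: dhcp_lease_time value, e.g. as read from
  # `nmcli -g DHCP4.OPTION device show <dev>` (hypothetical usage)
  [ "$1" = "4294967295" ]
}

# When this is true, the real script converts the connection to a static
# configuration, conceptually something like:
#   nmcli connection modify "$con" ipv4.method manual \
#     ipv4.addresses "$addr" ipv4.gateway "$gw"
#   nmcli connection up "$con"
```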
B
It also works with IPv6, just a different file for IPv6. But I thought this was a really amazing, interesting way to solve that problem, because the alternative is some kind of day-two thing to go through and set those reservations. Or I've seen some people use, you know, the Machine Config Operator to go through and effectively do the same thing, set a static IP, or use the NMState operator to do the same thing. My recommendation has always been about the primary interface.
B
Whichever interface sits on the machine CIDR, you don't want to modify that through mechanisms other than, effectively, the boot-time, or rather install-time, kernel parameters, because if you break that interface, you broke the node: the SDN can't stand up, it can't talk to the rest of the control plane, it can't do all of those other things. So this is an interesting way that they've worked around this.
B
Yeah, I know we have a hard stop today for the OpenShift Commons folks. Well, as I said, this is the last one, so, oh.
A
B
Yeah, that's all I've got.
B
So with that, please don't hesitate: if you have any questions, drop them into the chat and we'll try to address those in these next few minutes. I will summarize by saying 4.7 is a huge release.
B
It's funny, I had this debate with my product marketing counterpart. Product marketing says, oh, there's not a lot going on; you know, we'll do a press release, but there just hasn't been a lot of things. And I'm like, have you guys looked at what's going on? There's a huge amount that has gone on inside of here. Definitely check out the release notes, look through all of the things that have changed inside of there, and please don't hesitate to reach out to ask questions.
B
You don't have to wait for the admin hour every week. You can reach out to me directly on Twitter at @practicalAndrew, or via email at andrew.sullivan@redhat.com. Any questions with any of that, I know Chris will also volunteer his contact information.
A
Here in a moment: cshort@redhat.com, and @ChrisShort on Twitter. You might get more responsiveness out of Twitter, to be honest with you. Okay.
B
I got buried in email. Matt asks about the compliance operator. Yes, so the Compliance Operator is, as far as I know, GA, and I'll have to find the documentation page.
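To give a flavor of how it's used: the operator runs scans bound to compliance profiles. A hypothetical ScanSettingBinding might look like this (the available profile names vary by operator version, so check the docs for what ships with yours):

```yaml
# Hypothetical ScanSettingBinding: run the ocp4-moderate profile with the
# operator's default scan settings. Verify resource and profile names
# against your compliance-operator version's documentation.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: moderate-compliance
  namespace: openshift-compliance
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-moderate
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
```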
B
I know that they are working diligently to add additional profiles so that you can apply more of those security settings. So keep an eye out: I know we generically said, I think, Q1 of calendar year 2021, and we're expecting, I think, three additional compliance profiles to be released, including, I believe, the CIS benchmark.
B
So one of those is the CIS benchmark for OpenShift. But remember, release dates are flexible; we talked about this at the beginning of the show, literally, like, eight hours ago we had a feature get bumped by a week. But keep an eye out for us to release additional compliance profiles that get integrated inside of here. So I am working on a show focused on the Compliance Operator.
B
It will probably be in the mid-to-late March time frame, if not early April; I'm trying to remember the schedule. So we'll go in depth on the Compliance Operator in the not-too-distant future and talk about all kinds of things inside of there. So if you have questions about that, please feel free to reach out to us now; we'll do those kind of one-off, but we'll summarize all of that in a show as well. And with that,
A
B
We're at the top of the hour; my time is up. Thank you so much, really appreciate everybody who joined today, and all the questions. Please keep an eye out on the openshift.com blog for our follow-up that has all of the links and other information from today's stream.
A
Yeah, and stay tuned for the OpenShift Commons briefing. We have, I may be looking at the wrong week, I think it's Datadog today. I could be wrong, so don't hold me to it, but see you over there in just a few seconds, folks. Thanks.