A
The implementation is largely based on everything we already had in our repos; it was mostly a matter of wiring it into the pipeline and streamlining things. So I'll just go ahead with sharing my screen and showing where things are at. Like I said, it's a proof of concept, so it lives in my private repo.

A
So with that being said, I'll just walk through the entire setup from start to finish, and then we can pick apart what's out there. The entire setup consists of two—

B
Kind of sections? I'm mostly just seeing your Slack at this point.

A
Oh, it didn't do that at all? Let me do the other thing and stop the share.

A
So here's the documentation; it basically outlines what needs to be done. For the prerequisites—we're talking about creation in GCP—one has to create the service account and enable certain APIs inside the project into which the OpenShift deployment is going to go. All of that is done through a single script, just with different entry points into the script. So you just create the service account.

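A minimal sketch of what that script's entry points amount to, assuming an authenticated gcloud CLI; the project ID, account name, and API list are illustrative placeholders, not the repo's actual values:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Illustrative placeholders -- not the repo's real values.
PROJECT_ID="my-openshift-project"
SA_NAME="openshift-installer"

# Enable the APIs the OpenShift installer needs in the target project.
for api in compute.googleapis.com dns.googleapis.com iam.googleapis.com; do
  gcloud services enable "$api" --project "$PROJECT_ID"
done

# Create the service account the pipeline will act as.
gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"
```
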
A
Then, when you define GOOGLE_APPLICATION_CREDENTIALS—and I think I should have mentioned it here... oh yeah, I did—that's the file into which the credentials are going to be saved for Google Cloud.

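A small sketch of that piece; the key-file path is just a placeholder:

```bash
# Create a key for the service account and point the standard
# GOOGLE_APPLICATION_CREDENTIALS variable at it (path is illustrative).
gcloud iam service-accounts keys create "$HOME/.gcp/osServiceAccount.json" \
  --iam-account "${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/.gcp/osServiceAccount.json"
```
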
A
The other thing that needs to be done is to fetch the pull secret, either from Red Hat directly or from our 1Password, where we also have it. That's the second piece, and the third piece is the SSH public key. I'm going to use mine for the demo, but that's another prerequisite. Once this is done, that's when the pipeline can actually kick in. So, here's what I have done so far.

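For the 1Password route, a sketch along these lines would work, assuming a signed-in op CLI; the op:// vault/item path is hypothetical:

```bash
# Fetch the pull secret from 1Password (the op:// path is hypothetical).
op read "op://Team-Vault/openshift-pull-secret/credential" > pull-secret.json

# Use an existing SSH public key for the demo.
SSH_PUB_KEY="$(cat "$HOME/.ssh/id_ed25519.pub")"
```
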
A
And here—this is where I'm a little iffy—you have to submit things as files, but by default you cannot make them a file, so you have to flip it.

A
So this is all that I put in as the entry point, in case anything needs to be customized. The cluster version actually controls which tools we download, because apparently the OpenShift setup depends on the tool that you've downloaded. It's one-to-one: you can't download one openshift-install binary and try to install a different cluster version—at least not from what I can see, unless Dustin knows more on this account, but it didn't look that way.

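A sketch of that one-to-one coupling, with the version number purely illustrative:

```bash
# The installer binary is pinned to the cluster version it will deploy,
# so the pipeline fetches the matching release (version is illustrative).
OPENSHIFT_VERSION="4.9.17"
curl -LO "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${OPENSHIFT_VERSION}/openshift-install-linux.tar.gz"
tar -xzf openshift-install-linux.tar.gz openshift-install
```
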
A
So after all that is filled in, you just run the pipeline. And while this thing is running: this section, configure cluster, is a dummy section—I did not do anything in there, I was just sorting out all the rest of the things, so there is only one command, ls. Not much, but it stands in for what we want to do after we install the cluster—whatever we want to install into the cluster right after that.

A
It just spits out the credentials right there in the log, and I've seen nothing that actually disables that. So that's my first hurdle. I can probably just redirect the whole thing, pipe it through, and try to filter the output. I still need that data, though, so maybe I can pump it into something else—but we'll get to that later.

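One way that filtering could look—a sketch only, with the sed pattern guessed from the installer's usual log line:

```bash
# Keep the full installer output as a file for later use, but redact the
# kubeadmin password from what reaches the CI job log.
openshift-install create cluster --dir "$INSTALL_DIR" 2>&1 \
  | tee "$INSTALL_DIR/install.log" \
  | sed -E 's/(password: ).*/\1[REDACTED]/'
```
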
A
That was my second... actually, okay, since we're talking about this, let's just go there: I created issues for everything that I have encountered so far—it's just six of them. Where is the... there? Okay. Oh, and I didn't pump it in here anyway. So yeah, it was exactly what you were saying, Dustin. I was just thinking about that, and I actually dug up the documentation on that account that talks about changing the user password for the kubeadmin.

A
No,
it
talks
about
removing
the
cube
admit.
It
doesn't
talk
about
the
change
in
the
password,
but
from
the
steps
I've
gathered
is
just
you
create
the
credentials,
some
credentials
and
assign
it
the
cluster
admin
role,
and
then
you
can
remove
the
cube
admin,
but
even
with
that,
for
the
short
period
of
time,
there's
going
to
be
a
clear
text,
representation
of
the
credentials
before
we
provision
so
there's
a
small
window.
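A rough sketch of that sequence, assuming a replacement identity (the user name pipeline-admin here is hypothetical) has already been set up via an identity provider:

```bash
# Grant the replacement user cluster-admin...
oc adm policy add-cluster-role-to-user cluster-admin pipeline-admin

# ...then remove the default kubeadmin user by deleting its secret,
# which is the removal procedure the OpenShift docs describe.
oc delete secret kubeadmin -n kube-system
```
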
A
I
don't
know
so
now
we're
going
back
to
now.
You
guys
have
seen
more
or
less
in
rough
details
what
it
does.
So
a
couple
of
assumptions
that
I
have
made
it's
a
private
repo,
so
nobody,
but
our
group
has
access
to
it.
That's
why
I
took
the
liberty
at
storing
things
into
the
artifacts,
so
you
can
download
if
you
go
back
to
any
of
the
successful
pipelines,
come
on.
A
Into
deploy
cluster,
you
can
actually
get
the
whole
the
whole
thing.
Everything
that
you
would
normally
get
on
your
workstation,
including
the
binary
file
that
was
used
to
install
that
so
going
back,
is
easy
in
terms
of
environments.
Again,
we
I
try
to
utilize
the
environment
feature
that
gitlab
already
has
so
we
can
have
as
many
environments
as
we
want
and
stopping
it
is
just
you
hit
the
stop
and
the
environment's
done.
It'll
just
decommission
the
entire
thing,
and
that's
it
and
I
checked
afterwards.
A
I did find a few things—we'll get back to this in a second—that may not require us to store all the artifacts for the destroy phase. And there is another issue that I came across, and that is the ability to rerun the job—you know, if we want to update something about the cluster and we want to rerun the openshift-install against that cluster.

A
So, a couple of ideas I was tinkering with, in my mind at least. One is that we can make it a branch-per-environment kind of situation: every time we open a new environment, we create a brand-new branch and we run the pipeline against that branch.

A
That way, the artifact is going to be stored with that environment kind of hashed in, because you will have the CI ref name in the path, so you can download the artifacts specifically from the pipeline for that particular branch. That would be one way of doing it. The alternative that came to my mind was using object storage—GCP, AWS, doesn't matter—and just throwing our artifacts into a private bucket. That's what I've done previously; in my previous jobs, that's exactly what we were doing.

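A sketch of the branch-keyed retrieval via GitLab's job-artifacts API; the project ID and job name here are illustrative:

```bash
# Download the latest artifacts for this branch/environment from the job
# that deployed it (project ID 1234 and job name are illustrative).
curl --header "PRIVATE-TOKEN: ${GITLAB_API_TOKEN}" --output artifacts.zip \
  "https://gitlab.com/api/v4/projects/1234/jobs/artifacts/${CI_COMMIT_REF_NAME}/download?job=deploy_cluster"
```
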
A
We
would
be
chucking
the
artifacts
into
the
external
repo
and
then
those
can
survive
anything
and
you
can
attach
them
to
specific
locations,
and
then
you
can
fetch
them
properly
and
everything
the
things
to
consider.
It's
a
lifecycle
management,
so
you
have
to
when
the
environment
gets
deleted,
you
have
to
go
and
delete
stuff
from
the
bucket
shouldn't,
be
that
complicated
security
management.
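A sketch of that bucket lifecycle, assuming gsutil access to a private bucket (the bucket name is illustrative):

```bash
# Store the install artifacts under the environment's ref name
# (bucket name is illustrative).
gsutil cp -r "$INSTALL_DIR" "gs://ocp-env-artifacts/${CI_COMMIT_REF_NAME}/"

# On environment teardown, delete that environment's artifacts again.
gsutil -m rm -r "gs://ocp-env-artifacts/${CI_COMMIT_REF_NAME}/"
```
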
A
Security management is another aspect. That's where, if security settings on the project have changed, somebody has to go and adjust the security settings on the bucket. Or, for example, we get a new team member: we need to allow them access to the project and to the bucket. So that's a bit of a complication, but it does open up some other things.

A
Something that Jason has mentioned previously—and I was kind of curious about that—is support for the Terraform state files. OpenShift is not that open.

A
In
that
regard,
the
way
they
do
the
terraform
is
they're
using
the
terraform
libraries.
So
all
the
assembly
of
terraform
manifests
is
happening
behind
the
scenes.
We
don't
even
see
it,
we
don't
get
to
interfere
with
it,
and
anything
we
want
to
add
to
the
terraform
processing
is
not
really
available
to
us
at
the
moment
like
unless
we
try
to
go
back
to
the
upstream
project
and
try
to
contribute
something
that
will
make
it
possible
and
I've
seen
people
actually
asking
that
particular
question.
There was a question about saving the OpenShift state into S3 buckets—Terraform was involved there too—and the answer so far was "nope."

A
The other thing, which I actually did use in this pipeline, is our GitLab Operator build image, because it contained all the tools I needed: helm, kubectl, jq.

A
Something
else
was
there.
Let
me
take
a
look
wget
yeah
and
it
assumes
it's
an
alpine
based
image
so
because
we
add
jq
right
there,
so
there's
a
possibility
for
optimizing
that
and
not
fetching
the
big
gen
image
that
has
the
golang
in
it
and
things
like
that,
because
we
don't
need
them.
For
this
particular
case.
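The tool-adding step it alludes to is Alpine's package manager; a minimal sketch:

```bash
# On an Alpine-based image, missing tools are added at job start;
# a slimmer base with only these would avoid pulling the Go toolchain.
apk add --no-cache jq wget
```
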
A
That's where most of the information on this one went: can we decouple stop from deploy? Because right now, if we take a look at the pipeline, I had to make stop need deploy cluster. Why? Because we need all those artifacts from the deploy stage to be fetched, and that's how you do it. And that creates something Mitch will probably be the most intimately familiar with: if we fail the deployment—it's a half-assed deployment—you can't run stop, because the deployment has not completed.

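One possible way around that—a sketch, not what the PoC currently does—is to let the stop job tolerate a partial deploy by keying off metadata.json:

```bash
# Only attempt a destroy if the installer got far enough to write
# metadata.json; otherwise there is nothing openshift-install can tear down.
if [[ -f "$INSTALL_DIR/metadata.json" ]]; then
  openshift-install destroy cluster --dir "$INSTALL_DIR"
else
  echo "No metadata.json found; skipping cluster destroy." >&2
fi
```
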
A
So—oh, that's a bummer right there—but that's the way it's wired right now. Again: proof of concept, open for discussion on this front. And the last one is mostly artificial and cosmetic.

A
I was rendering the URI from the data that was given to me, and it rendered into something much shorter than whatever the openshift-install generates—which is actually quite redundant: it's console-openshift-console.apps.whatever. So I thought maybe there is a way of getting around that, and I dug a little bit more and found that the OpenShift documentation suggests we can add additional routes. So that's a subject for experimentation later down the road.

A
For
the
time
being,
I
just
software
that
console
openshift
console
apps
into
the
uri
just
making
it
at
least
workable.
So
if
we
go
now
to
deployments
environments-
and
you
want
to
take
a
look
at
the
this
one-
the
419
49
17-
you
just
click
here
and
you
get
oh,
oh,
that's
the
old
deployment!
That's
why
I'm
sorry
this
one
didn't
complete,
yet
that
one
was
before
the
fix.
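A sketch of that hard-coded construction, assuming the cluster name and base domain are already pipeline variables:

```bash
# OpenShift's default console route follows this naming pattern, so the
# environment URL can be assembled directly instead of parsed from logs.
CONSOLE_URL="https://console-openshift-console.apps.${CLUSTER_NAME}.${BASE_DOMAIN}"
echo "Environment URL: ${CONSOLE_URL}"
```
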
A
Yeah
and
actually
the
good
news-
I
guess
good
news-
is
it-
takes
about
26
minutes,
give
or
take
to
stand
up
a
cluster
start
to
finish.
So
it's
not
terrible
with.
That
being
said,
that's
pretty
much
all
the
intro
I
had,
and
now
I'm
really
open
to
any
input
and
what
people
think
about
how
like
especially
what
I'm
really
really
curious
about,
is
how
we're
going
to
be
managing
like
this
ability
to
rerun
the
jobs,
and
you
know
whether
we
want
to
go
the
route
of
doing
branch
per
environment.
B
Well, Dimitri, I'm curious: could you just touch on the scope of the proof of concept, in terms of how much code you had to write? You showed us those scripts at the beginning—is that all new as well, or how much of it?

A
Pretty much all of it is recycled from the existing repos. I just yanked it out of some of the GitLab Operator scripts—I think it's all from the GitLab Operator; correct me if I'm wrong, because you were the guiding star pointing me at the right scripts. I started off writing my own, and then Dustin mentioned that we already have some of that stuff done. So I just lifted those and moved them over, so it was not as much work—take, for example, create openshift cluster.

A
Yeah—like I said, I started off with vanilla OpenShift commands, just to make sure that I knew what I was doing and what I was getting. Then I went, okay, now I know what it's doing and I know it's working; we can go and recycle the scripts that we already have and get a little bit more functionality into it. That's what I did. So yeah, it's more or less recycling the scripts.

B
Yeah—and is that the same case with the service account and the roles added to the service account, or is that... yeah.

A
Okay, yeah. A couple of things that we have: there's actually no ticket or any other mention of the DNS, but there was a fun time we had with the DNS setup. Because, going back to this—I'm working in my private project, which I created with that staging toolbox that we were given some time ago.

A
Fresh
new,
there
was
nothing
there,
so
we
had
a
chance
to
test
all
of
those
scripts
to
create
the
api,
enable
the
apis,
create
the
service
account
and
everything
else
make
sure
that
it
actually
works.
So
now
we're
at
the
stage
where
this
repo
contains
only
the
stuff
that
you
really
need
to
start.
The
openshift
cluster.
D
I think, after working with this stuff and with the openshift-install tool, it would be best if we just treated these clusters as stateless—as in, we're not managing their lifecycle. With the caveat that we should be able to install, say, start manager and install updates to it at the Helm level; I'm not considering that part of the state. But the state that comes from the actual cluster configuration—we already know how to scale nodes up and down as needed.

D
We
don't
really
like
if,
if
we're
going
to
change
to
different
instance,
types,
for
example,
I
would
say
just
delete
the
cluster
and
start
a
new
cluster,
especially
since
we're
under
30
minutes
now
for
cluster
creation.
It
used
to
be
closer
to
an
hour
for
four
or
six,
but
we're
in
a
better
spot.
Now
I
just
feel
like
that
would
avoid
a
whole
lot
of
life
cycle
management
work
for
these
clusters,
so
that's
thought
one
as
far
as
artifacts,
like
there's,
really
only
three
things
in
that
artifacts
folder.
D
We
care
about
the
rest
is
just
junk.
I
guess
if
their
terraform
is
not
easy
to
hook
into,
then
all
that
terraform
stuff
is
junk.
It
is
that
yeah,
and
so
that
would
be.
You
know
the
metadata.json
file
which
allows
you
to
delete
clusters.
That's
only
all
you
need
in
your
credentials,
of
course,
and
then
the
other
two
files
are
your
cube
config
and
your
cube
admin
password
for
the
root
user,
and
so
you
know
another
step
in
that.
D
Can
another
thing
to
do
in
that
configure
step
since
one
password
has
a
cli
would
be
to
just
upload
those
directly
to
one
or
one
password
vault
as
part
of
the
pipeline,
and
then
maybe
the
decommissioned
staff
could
delete
them
from
one
password
when
we
delete
clusters
just
less
manual
steps.
You
know.
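A sketch of that idea, assuming a signed-in 1Password CLI (v2); the vault name and item titles are hypothetical:

```bash
# Upload the kubeconfig and kubeadmin password the installer wrote under
# auth/ (vault name and item titles are hypothetical).
op document create "$INSTALL_DIR/auth/kubeconfig" \
  --title "ocp-${CI_ENVIRONMENT_SLUG}-kubeconfig" --vault "Team-Vault"
op document create "$INSTALL_DIR/auth/kubeadmin-password" \
  --title "ocp-${CI_ENVIRONMENT_SLUG}-kubeadmin" --vault "Team-Vault"

# On decommission, remove them again.
op document delete "ocp-${CI_ENVIRONMENT_SLUG}-kubeconfig" --vault "Team-Vault"
op document delete "ocp-${CI_ENVIRONMENT_SLUG}-kubeadmin" --vault "Team-Vault"
```
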
A
Yeah. And then there's a good question: do we need—are we required—to have all those credentials in 1Password? Or are we okay without, given that those environments in theory are transient: they're alive for some period of time and then they're going to go away? Managing everything in one place does seem appealing, though.

D
Well, I mean, I think we're already putting the kubeconfigs and the passwords in 1Password, so it seemed to me that the leap of uploading the metadata file would just be another addition to the process we already have. I don't think we should get into this topic on this call, but having people able to SSH into the OpenShift clusters that produce CI artifacts is not such a great idea anyway, if we're worried about build reproducibility and minimizing our threat vectors.

A
And what their comment there says is that the TF state has nothing to do with the cluster teardown, because it doesn't know about the cluster's resources. So I'm not even sure what has happened to the Terraform under the hood, because this is not normal behavior—you would expect something different. But from the response I'm seeing here: yeah, metadata.json is all we care about, really.

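That lines up with how the teardown is invoked; a minimal sketch:

```bash
# Tearing a cluster down only needs the directory containing metadata.json;
# none of the installer's internal Terraform state is handled by us directly.
openshift-install destroy cluster --dir "$INSTALL_DIR" --log-level info
```
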
D
Well,
I
just
thought
of
something,
though
the
install
config,
I
don't
know,
maybe
that's
not
important
because
we're
putting
the
parameters
in,
but
if
we
needed
that
install
config.yml
for
debugging
reasons
or
whatever,
when
openshift
install
runs,
it
consumes
that
file.
It
deletes
that
file
for
some
reason
yeah
out
of
the
install
directory,
I'm
just
thinking
it
might
be
useful
to
have
later
on
too.
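A one-line safeguard sketch for that: copy the rendered config aside before the installer consumes it:

```bash
# openshift-install removes install-config.yaml from the install directory
# when it runs, so keep a copy for later debugging.
cp "$INSTALL_DIR/install-config.yaml" "$INSTALL_DIR/install-config.backup.yaml"
openshift-install create cluster --dir "$INSTALL_DIR"
```
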
A
So if you take a look at this install... there it is. So you can go back to it and do whatever you want with it. The Python does that right now, because I tried to disassemble the setup into multiple steps where each one is more or less atomic, so we can deal with each one separately rather than with a whole bunch of things at once.

A
What's in there is your GCP credentials—and I do believe there's... let me take a look at the template itself, it'll be easier to tell. No: the project ID, the pull secret is in there, and the public key is in there. The public key is not that big of a deal; the pull secret is something that we probably don't want to expose.

F
Everything else is pretty much public. I was just thinking: for the most part, when we spin up clusters, this is the file that configures—you know, represents—what it'll look like, right? So if we did your idea of having a branch per environment, this could be the file that changes per branch, and then we could compare branches and see what's different and how environments were configured. That would be a perk of that approach.

A
Precisely, yeah—that was kind of one of the things that comes out of the branch-per-environment approach: you can deviate your environments depending on where they are, if you need to. So say 4.9 right now looks like this, but 4.10 requires an additional setting in there; we can actually go and add that setting, and we don't have to... yeah.

F
And I think it's just a matter of permissions, but I've seen pipelines that were launched with these variables and they're hidden—it's all stars. So is that just a matter of permissions? Like, if you go to one of the pipelines, can you see all the configuration that was passed in for that pipeline? There's usually a little table on the right-hand side.

A
The
only
way
it
will
pop
up
it
may
pop
up
here,
but
it
does
not
yeah.
This
one
is
just
stretching
the
content
debug
that
one
just
fits
practically
everything
else.
F
Yeah—in that right-hand column you're looking at, I've seen pipelines where there's a table showing the key and the value for the variables that were passed in when we started the pipeline. I'm mostly thinking about observability: we launched an environment—what configuration was passed in? If that's not super visible here, maybe we should go the route of branching, and then that config file is part of the branch, yeah.

B
That table you're talking about, Mitch—for some reason, and there are issues open to fix this, it only shows up for triggered pipelines, and triggered only. It doesn't show up for the ones you run normally off a git commit, or the ones you manually trigger from the UI; for whatever reason, it only shows up for triggered ones.

A
What I was thinking is that the environment here kind of encapsulates everything that you would want to know about this environment—which pipeline fired it off, all the details are right there in the name. I specifically attached the cluster version at the end, so you know what you're dealing with. And then going to the environment is not a big deal either; the link is right there.

D
I'm thinking about dev clusters—like, we have dot-com and dev clusters that share the same version. How would that work if we went the branch route? Because we'd want that configuration to be in the dev repo, with the dev CI pipeline launching it instead of the dot-com one. Right now we basically launch everything via com and then use it in dev, but it might be useful to have separation.

A
We
could
probably
just
isolate
that
in
terms
of
where
the
openshift
cluster
is
going
to
be
instantiated
like,
for
example.
This
is
something
that
I
kind
of
kind
of
glanced
over,
but
the
gcp
project
id
is
actually
being
extracted
from
the
essay
credentials.
The
essay
credentials
json
contains
the
project
id,
so
I'm
extracting
it
there.
So
if
you
generate
it
for
a
different,
completely
different
project,
it
will
instantiate
it
there.
It
doesn't
have
to
that's
why
I
didn't
put
it
at
the
top
level.
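A sketch of that extraction, assuming jq and the standard service-account key format:

```bash
# The GCP service-account key file carries the project ID, so it never has
# to be passed as a separate pipeline variable.
GCP_PROJECT_ID="$(jq -r .project_id "$GOOGLE_APPLICATION_CREDENTIALS")"
```
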
A
One last question then, since Mitch brought up the subject: do we want to move it out of PoC and into some more permanent space? Or would it be more prudent for me to hammer on it a little bit further in my private space? I can knock out that ticket for the creation of the OpenShift 4.9 cluster with this—it doesn't really matter whether I do it on my workstation or through this pipeline—and then address any further ideas or issues down the line and not worry about it at this point in time. I'm open to any suggestions here.

F
My two cents: I would say as soon as you figure out how to get the password to not be spat out in the logs—it's all visible right now—you move it somewhere, and then we can collaborate on it. I don't really know the process for creating a project in ops, or who has permissions there, but that's what I would see as the next step.

E
I was going to say something similar: you just did a demo on it, so maybe give it a day or two for folks to churn on it, resolve the password thing—kind of a similar idea—and get just a touch more polish on it, so that when we get some more people looking at it, it makes sense and it's easy for people to jump in. That'd be...

B
Sorry, I interrupted—but if you could add a team task issue, just so we could discuss the ops-versus-dev-versus-everything-else question and get that... oh.

B
But once we have an idea—you can keep working on the rest of the stuff here in this repo—but once we have an idea, I can start looking at getting the right places opened up so we can move it over. That might take a while, so once we decide we're going to do that, don't wait on it.

F
And this is awesome for OpenShift. I know technically we would still want similar functionality for Kubernetes clusters, right? I know it's basically just clicking around the UIs and it would be much simpler, but technically we should have something more visible and traceable.

A
Yeah
and
plus
what
I
really
like
about
going
this
approach,
is
we
codify
our
setup?
So
it's
a
guarantee
that
the
next
cluster
is
going
to
look
like
previous
one
exactly
versus
when
it's
running
out
for
machine
we
may
miss.
You
know
the
error
message
and
we
think
that
something
is
installed
and
it's
not
it's
less
than
possible
here.
D
Yeah, that's coming up, right? Because we'll have FIPS clusters for testing at some point here. So that's... yeah.

A
Different
types
that
one
was
something
again
thanks
to
dustin
today.
He
who
pointed
me
to
some
of
the
documentation
that
is
not
so
obvious
with
the
openshift
installed,
but
there
is
a
switch
for
the
fips
compliance
in
openshift
cluster
deployment.
So
we
could
expose
that
as
well
as
part
of
this
entire
pipeline
and
just
say:
okay,
I
want
this
cluster
to
be
flipped
compliant,
for
example.
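That switch is a field in the install config; a sketch of exposing it through a pipeline variable (the variable name and default are illustrative):

```bash
# Append the FIPS toggle to the rendered install config; FIPS_ENABLED
# (defaulting to "false") is an illustrative pipeline-variable convention.
cat >> "$INSTALL_DIR/install-config.yaml" <<EOF
fips: ${FIPS_ENABLED:-false}
EOF
```
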
D
I think that's to-be-determined on that one. Thankfully, I looked into what operating system these cluster nodes are running, and it's already Red Hat CoreOS—RHCOS—so that's able to be FIPS-enabled; they're not Ubuntu or anything. I was just checking that.