From YouTube: Distribution Team Demo - Nov 12 2020
Description
The team demos the OpenShift cluster creation script for setting up environments to test/develop the new GitLab OpenShift operator.
A: All right, hey everyone, welcome to the distribution demo for November 12, 2020. My name is Dustin Collins and I'll be showing off some OpenShift cluster automation today. Before I get started, big thanks to Gerard from our team and Edmund from Red Hat for reviewing this work.
A: Okay, so you can see on the right side here that I have already started creating the cluster, because it takes anywhere from 25 to 40 minutes. I kicked this off about 20 minutes ago so we're not sitting here waiting for it. That said, we are keeping this a high-wire live demo; things could fail, so let's just continue forward on that.
So, while we're waiting for that, I can talk a little bit about the background for this project and go into the scripts themselves. We're working on a Red Hat OpenShift operator in this project here, and one of the things we need for this is an easy, reliable way to stand up OpenShift clusters for testing, both for developers to launch clusters locally and also for CI, so we can run some tests against this set of repositories here on the left.
A: Under openshift-installer, so basically this is a Go CLI that wraps Terraform and some logic to stand up an OpenShift cluster. They have a bunch of supported providers now; we're starting out with GCP just because that's the default for our team. We could add, you know, AWS as we go. I'm not sure if we'll run into cloud-provider-specific issues; we might, where something's working on GCP and not working on AWS, or vice versa.
A: So I guess we'll cross that bridge when we get to it, but it's nice that they provide this installer tool, because we don't have to write our own Terraform code to get clusters going. Which, you know, might seem simple, because we're just kind of standing up six instances and configuring them, but they all have to talk to each other. And the way that this installer tool actually works, you can see it here on the lower right.
A: It actually creates a bootstrapping instance. So Terraform stands up the bootstrapping instance, and then, I believe, the bootstrap instance also runs different Terraform resources to stand up the control plane and the workers. So by default, the OpenShift cluster has three master nodes in the control plane, which also run etcd, and then three workers. The workers can be configured, but as far as I understand, we will always need three master nodes in the control plane so etcd can reach quorum.
A: It would be kind of nice to just have a one-master, one-worker cluster; you know, that's pretty lightweight for testing. But in my experiments, anyway, running fewer than three masters always resulted in errors, so that's how we're going with it.
A: You can see that here on the right. It's a mix of OpenShift configuration and underlying cloud-provider configuration. So, for example, we're saying things like we want to use the OpenShift software-defined network as the network type. They have some pretty good documentation, which is linked from our documentation there, about all the customization you can do for the network and the machine nodes, but this is pretty much the standard install-config template to start with. You know, we can tweak it as we need.
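Rendered out, the template produces something in this shape. This is a minimal sketch of a GCP install-config, and every value below is a placeholder, not the team's actual configuration; the heredoc just stands in for however the template gets rendered.

```shell
# Minimal sketch of a GCP install-config.yaml; all values are placeholders.
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: ocp-demo
controlPlane:
  name: master
  replicas: 3                 # three masters so etcd can reach quorum
compute:
- name: worker
  replicas: 3                 # workers are the configurable part
networking:
  networkType: OpenShiftSDN   # the OpenShift software-defined network
platform:
  gcp:
    projectID: my-gcp-project
    region: us-central1
pullSecret: '<pull secret, kept out of the repo>'
sshKey: '<ssh public key>'
EOF
```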
A: For example, if we launch some CI clusters and have issues with, you know, resource consumption, we can always bump the machine type of these nodes up. By default, I think they are four CPU cores and 16 gigs, so they're not, you know, wimpy instances by any means. But there is a lot, as I'll show you here in a few minutes: OpenShift runs a lot of different software as operators, just as standard, so just running those operators could easily take up probably eight gigs of your memory.
A
So
we
need
to
use
bigger
machines
here,
so
really
the
shell
scripts
that
I've
written
for
this
this
create
cluster
and
destroy
cluster
shell.
A
Here
they
are
just
wrapping
the
open
shift,
install
tool,
but
one
of
the
interesting
things
about
this
tool
is
that,
like
you,
create
your
install
config.yaml
and
then
you
run
open,
shift,
install
and
point
it
to
that
open
shift,
install
that
cli
actually
consumes
and
deletes
that
file
once
it's
done,
which
I
guess,
I'm
not
sure
why
they
why
that
happens,
but
this
create
cluster
shell
script
is
basically
you
know
just
making
sure
that
you
have
the
open
shift,
install
and
oc
tools,
and
then
it
is
using
this
template
to
render
in
different
configurations.
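The create-cluster flow described here can be sketched roughly as follows. This is an illustrative reconstruction, not the exact demo script: the template filename, the `{{CLUSTER_NAME}}` token, and the function layout are assumptions.

```shell
#!/usr/bin/env bash
# Sketch of a create-cluster wrapper around openshift-install.
# Names (install dir, template file, {{CLUSTER_NAME}} token) are illustrative.
set -euo pipefail

INSTALL_DIR="${INSTALL_DIR:-install}"
CLUSTER_NAME="${CLUSTER_NAME:-ocp-demo}"
LOG_LEVEL="${LOG_LEVEL:-info}"

# openshift-install consumes (deletes) install-config.yaml when it runs,
# so render a fresh copy from the template on every invocation.
render_config() {
  mkdir -p "$INSTALL_DIR"
  sed -e "s/{{CLUSTER_NAME}}/$CLUSTER_NAME/" \
    install-config.template.yaml > "$INSTALL_DIR/install-config.yaml"
}

create_cluster() {
  # Fail early if the installer CLI is missing.
  command -v openshift-install >/dev/null \
    || { echo "openshift-install not found" >&2; return 1; }
  openshift-install create cluster --dir "$INSTALL_DIR" --log-level "$LOG_LEVEL"
}
```

Since everything has a default, a plain run stands up a standard cluster; overriding is just, for example, `CLUSTER_NAME=ocp-demo ./create-cluster.sh`.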
A: So you might have noticed, when I was showing the config template, we have this, like, cluster name, and pull secret, and SSH public key. So the variables for this file are secrets, you know, that you wouldn't want to commit to this file, like the pull secret, but also just, you know, kind of templated values that we might want to switch up.
A: These are all documented. Let me switch to the nice-looking one. This is the doc for these scripts; it's in the doc folder in the openshift repo, but it spells out how configuration is passed by environment variables and then rendered into the template. So we have that all documented here.
A: That said, as it says here, there are no required options, so if you just run this, it has sane defaults for everything. You only really need to start playing around with the configuration if you want to change things; you don't have to really even know about it if you're just trying to run the tool to set up a standard cluster. And so, after create-cluster renders that config file, then here it's actually creating the cluster, where it's wrapping the openshift-install tool, and that is what I've run here on the bottom.
A: On the bottom terminal, you can see I overrode the cluster name to call it ocp-demo. OCP is OpenShift Container Platform, which is the flavor of OpenShift that we're using. So that's running along here, and really, creating the cluster is a three-step process.
A: But this install directory also contains the kubeconfig and the kubeadmin password for when the cluster is up and running, so you can interact with it from, you know, k9s or kubectl or whatever you like. These files live here.
A: So, you know, while doing this, one of the main reasons the create-cluster script takes environment variables as configuration is because, and maybe someone on the call can tell me more about this, but I envisioned, you know, for maybe our first pass at setting up CI clusters, somebody could just run it from their local workstation and then put the secrets they get in CI variables, so the jobs can talk to the OpenShift cluster. But it would also be nice to run this create-cluster script from CI itself, you know, as sort of a one-off, manually triggered job. That way, at least, we have an audit log of what clusters have been launched and with what parameters.
B: So far, this all seems fairly straightforward. We have run similar things in the past, a long time ago, well, in GitLab years, when we were doing the work with one of our partners on Cloud Foundry. We actually had a specific pipeline that effectively did a very similar setup. It spun up a bastion and then exploded everything into place, and then later on, when you would shut the pipeline down, it would reverse that process.
A: Okay, thank you. One other, well, this cluster should be completed here soon, and then I can show the OpenShift console and kind of the resources it's created in GCP.
A: But while we're waiting for that, we also have this destroy-cluster script, which is a lot simpler than create-cluster; again, it's just wrapping the openshift-install tool here. One interesting thing here is that, you know, this install directory that holds all of the cluster information is gitignored, so at the end of the script, originally, I had, you know, `rm -rf install`, because it's no longer needed, right? You've deleted the cluster. But I ran into a situation where, you know, anytime you're dealing with cloud providers, there's the potential for timeouts, right, or just some network flakiness.
A
So
if,
if,
if
you
run
this
destroy
cluster
and
something
goes
wrong
and
then
it's
it
won't
report
like
it
won't
return
a
non-zero
exit
code,
sometimes
it'll
just
say
it's
destroyed,
moving
on
and
then,
if,
after
after
that,
you
remove
the
install
cluster
and
then
go
back
to
google
cloud,
and
you
know
verify
that
everything's
down,
if,
if
it
doesn't
actually
destroy
the
cluster
and
you
delete
the
install
directory,
that's
the
state
file
for
terraform.
So
you
can
no
longer
delete
the
your
openshift
cluster
automatically,
and
so
I
removed
that
line.
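In script form, that decision looks something like the sketch below; again, the names are illustrative rather than the exact demo script.

```shell
#!/usr/bin/env bash
# Sketch of a destroy-cluster wrapper; names are illustrative.
set -euo pipefail

INSTALL_DIR="${INSTALL_DIR:-install}"
LOG_LEVEL="${LOG_LEVEL:-info}"

destroy_cluster() {
  command -v openshift-install >/dev/null \
    || { echo "openshift-install not found" >&2; return 1; }
  openshift-install destroy cluster --dir "$INSTALL_DIR" --log-level "$LOG_LEVEL"
  # Deliberately no `rm -rf "$INSTALL_DIR"` afterwards: the directory holds
  # the Terraform state, a destroy can exit zero even when a timeout left
  # resources behind, and without the state you cannot re-run the destroy.
}
```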
A: You know, the install directory fills up with information when you're launching the cluster, but after you delete the cluster, it really only has this openshift-install log, which is, you know, just a very long log of everything that it puts to standard out here too, if you were in debug mode.
A: Yeah, we do expose this log-level environment variable, which is passed through to the openshift-install tool too. So if you are working with this and something goes wrong, the first step is probably to set the log level to debug, because it puts out a lot more output. So you can see here in the lower right.
A: I think we're almost done. If I had debug logging on, I would have a better view into what was actually going on, but since I've done this dozens of times, I know that these three waiting calls up here are the three steps that are taken to launch the cluster. So, first of all, you're launching the master nodes, the control plane, here, and waiting up to 20 minutes for it to be ready. Then you are launching the worker infrastructure and waiting up to 30 minutes for it to be ready. After the control-plane nodes and the worker nodes are up, the bootstrap resources, which is basically another instance with some associated service accounts, are then destroyed. And then this last step that we're waiting on here, waiting for the cluster to initialize, is basically just waiting for all of those different operators that are running in the cluster to resolve.
A: So they're just, you know, in a loop, waiting for other resources that they depend on to be ready, and that is probably the most time-consuming thing here, where we're just waiting on the operators to be ready. But it has completed, so let's go ahead and open up the OpenShift console here.
A: One of the improvements we can make here, and I don't know if they provide it out of the box, but it would be nice to have a Let's Encrypt setup, instead of having to, you know, accept the security risks as you go through, because it's using self-signed certificates. Hopefully there's an easy way to do that. It puts out the admin username and password here at standard out, but this password is also saved in the kubeadmin password file, so you don't have to save standard out.
A: All right, so, yeah, we have our OpenShift cluster. Here you can see "cluster version update is available." OpenShift is moving pretty fast; like, when I started this project, you know, like a week and a half or two weeks ago, the latest OpenShift version was 4.5.3, and they're up to 4.6.3 now, which is the latest. And so we've, you know, we've talked about what minimum version we'll support, and it sounds like 4.6.1 will be a pretty good bet, at least to start out. I did have a bunch of issues with this openshift-install tool launching a 4.5.3 cluster.
A: It was like one out of 10 clusters would actually complete the launch; otherwise there'd be timeouts and it would just fail out, like the workers wouldn't come up, and then I'd launch it again and they'd come up fine. But since switching to OpenShift 4.6, I haven't had any of those issues, and cluster creation actually went down from like 45 minutes to like 30-35 minutes, so that's nice, that that's improving. Yeah, really the only thing I want to show here in the cluster is, I'm looking at the pods here. Yeah, you can see it's just the same as you're used to seeing in k9s. You know, this is also a good way to debug why a particular operator might not be stopping or starting, because when you're launching a cluster, this kubeconfig file is created after the second step. So if you see that it's waiting for a while for the cluster to initialize, you can pop into k9s and start inspecting these different operators and their pods to see what the issue might be.
A: Helpful. Okay, I guess one last thing here is that when you are logged into your Red Hat account, there's like an OpenShift, and I'm not logged in right now, but I'll just describe it: there's a cluster manager in there, the OpenShift Cluster Manager. So I guess that's another thing to look into, if we care. But at the same time, each of us has a separate Red Hat login, and so that would only show our clusters; it wouldn't be a view of, like, every cluster that the distribution team has launched. So I would say its utility is limited just because of that scoping.
A: Okay, well, I'm open to any questions, concerns, comments. I'm gonna go ahead and destroy this cluster in the background while that's happening.
A: I don't know. I mean, one issue with state files is that they contain secrets, so checking them in might not be the best way to go.
B
No,
no!
No!
No!
No!
We
don't
it's.
That's
not
how
that
works.
We
don't
check
them
in
to
get
we.
We
have
a
terraform
state
management
component
within
gitlab.
Now,
what
I'm
saying
is,
do
you
think
it
would
be
possible
for
us
to
configure
this
script
in
such
a
way
that
we
can
convince
it
to
stash
it
through
that.
A: I didn't see that looking through the documentation, but it may be possible. There is a lot of activity on this openshift-installer repository; you know, it actually has issues and stuff, so we might be able to add that in if it doesn't exist. What benefit would that give us?
B: Basically, we would be able to have one centralized, in terms of our CI, we'd have one centralized location for where everything is. We wouldn't have to worry: did we ship this off before? Did we make sure that the CI's got the right access? Because we could be really touchy about access to a GCP bucket if it's got this kind of information.
A: Yeah, that'd be interesting. You know, I don't think it supports it now, but if it's something we want to do, I think we can get it merged in.
A: All right, there we go. Delete takes about three minutes, so that's not too bad. Yeah, so as a follow-up to this, we'll have an issue for actually creating some CI clusters, and that'll probably involve, you know, setting up an initial sort of smoke test for the operator: does it install fine on OpenShift without erroring out? And then we can kind of build off that. Yeah, that's all I had for the demo. Are there any last comments or questions?