From YouTube: OKD CodeReady Containers (CRC) Build Process Explained
Description
Interested in learning the OKD CodeReady Containers (CRC) build process? Join this live session led by Red Hat's Charro Gruver and UMich's Jaime Magiera as they walk us through the build process and show what it takes to build the OKD CRC for each OKD release.
Members of the OKD Working Group will be on hand to discuss the ins and outs of the process. We are hosting this session to recruit volunteers to help automate the process for new releases, so if you are interested in contributing to this area of the OKD community, please join the conversation and attend this session.
A
Sometime soon — we meet on Tuesdays at 9 a.m. Pacific, noon Eastern, and it's a pretty active group of folks. So we'd love you to come, and I think I recognize a number of your names from the working group already, so hopefully you're part of the conversation that's been going on about building and learning how to build these things. So again, thanks for coming and joining us, and here's Charro — back as I ad-lib wildly for a few minutes.
A
All right, well, I was hoping Jaime Magiera — Magiera, rather, not Maguire — was going to join us as well, but he might have gotten caught up in something else this morning at the University of Michigan. He's our other co-chair for the OKD Working Group, and hopefully he'll be joining us shortly. So Chris, if you are broadcasting this on Twitch, feel free to go live whenever you are ready.
A
Okay, now pause there for a minute and let's see if I can get a chat out of Chris Short for when you are ready to really start going on the Twitch stream.
A
All right, we're streaming now — cool, all right. Well, we're streaming, Andrew! Are we streaming on Twitch, or are we streaming on BlueJeans?
A
And here's Neil — there's a whole bunch of folks from the working group joining now, which is awesome, because this is one of those mysterious processes that everybody should know how to do, so that whenever one of us gets hit by a bus we can carry on. Let's see — Twitch and YouTube, we're live. So everybody, thank you for joining us. Today we are going to follow along with Charro Gruver and learn about the OKD build process for CodeReady Containers, and, as he has launched a new blog post, I'll...
A
...let him talk a little bit about that and introduce himself a little bit more deeply. And I can see now that Jaime has joined us as well — welcome, Jaime — and we're going to rock and roll and hopefully turn you all into CodeReady Containers builders by the end of the day. All right, take it away.
B
Excellent. Well, thank you, everybody, for joining. I think this is going to be a very fun and exciting session of our OpenShift Commons. I am Charro Gruver; I'm currently an architect with Red Hat Services. I've been with the company for about a year now.
B
Prior to that, I was a Red Hat customer through most of the 20 years of my preceding technology career, and really one of the things that drew me in this direction was the open source nature of everything that Red Hat does. That's what I'm going to show you today in the code and projects we're going to be dealing with: I'm going to show you how we take CodeReady Containers, the Red Hat supported distribution, and leverage it for OKD directly, because all of the code is out there.
B
It's on GitHub. It is purely open. There's no additional secret sauce that gets added to it. Red Hat is very passionate about open source and has been pretty much since its inception.
B
So OKD can benefit from the same capabilities that are available to our subscription-based customers who are using Red Hat OpenShift on the various hybrid cloud platforms out there and leveraging CodeReady Containers on their developer workstations.
B
I have written a blog post — that's what you see on the screen right now — that relates to exactly what I'm going to talk to you about today.
B
We'll post it in the chat later — actually, Diane's got the link to it, so she can probably drop it in there. And right before this, I created a Twitter handle for the blog that I started a few weeks ago at Diane's recommendation, so you can also find me there.
B
The blogs that I write are predominantly OpenShift and Kubernetes focused, and they are very, very home-lab focused. So if you like to tinker, if you like to build your own things, this is a place where I like to throw out all of the interesting things that I do at home. Today we're going to talk, like I said, about CodeReady Containers. It's the successor to Minishift. Many of you from the OpenShift 3.x days probably remember Minishift, which was itself a packaging of Minikube.
B
Its sole purpose is to enable you to develop applications for OpenShift, or to learn OpenShift, on your local workstation. It does require a fairly beefy environment to work. Your workstation really needs at least 16 gigabytes of RAM for CodeReady Containers to be usable alongside all of the other things you're going to be running — if you've got, you know, 50 browser tabs open, and you've got your IDE, VS Code or IntelliJ, whatever your religious preference is for an IDE.
B
The addition of CodeReady Containers is going to require some fairly beefy hardware, so make sure you've got a well-appointed workstation before you try to use this. And for building it, which is what our focus is going to be today, you're going to need a Fedora, RHEL, or CentOS based operating system on a machine to run this build on, because it is very opinionated toward a libvirt install. In fact, it's using...
B
...what is effectively installer-provisioned infrastructure — what we call an IPI installation — to build the cluster that is going to be the heart of CodeReady Containers. In this blog post I've got several links for you to go to, where you can see documentation and information about a lot of the underlying things if you're new to the OpenShift ecosystem. I suspect most of you on this call are not new to this ecosystem, and you're wishing that I would shut up and just get straight into how to build this thing.
B
So that's what I'm going to do right now. CodeReady Containers, at its heart, is a single-node OpenShift cluster — a single-node OpenShift cluster built with an opinionated installation process that strips out as much of the weight of a full OpenShift environment as it can, so that it can run effectively and be usable on a workstation.
B
If you're running on a MacBook, like I'm running right now, then it's going to use HyperKit to leverage that qcow2 image and spin up your CRC instance. If you're running on Fedora or another Linux OS, it's going to use the underlying libvirt and KVM.
B
So the whole heart of this thing is that single-node cluster that gets turned into a qcow2 image. It's a three-step build process that I've documented out here. I'm going to walk you through this real quick, and then I'm going to pivot over to showing you a build that I just finished, and a running instance that is halfway through the build process.
B
So you can see what that single-node cluster looks like while it's running, before the qcow2 step. Like I said, it's a three-step process that requires some initial setup, and I detail out the initial setup here. I'm using CentOS 8 Stream, and if you follow the instructions that I've got here with CentOS 8 Stream, there should be no modification required to these instructions whatsoever.
B
If you're on an upstream Fedora, there may be some nuances here in what I'm doing that will have to change, if the upstream operating system has changed out some of these features. I haven't run this on Fedora.
B
So I don't know — but if you do, reach out on the GitHub page where I've got this and let me know what your experience is. You're going to need a libvirt environment, you're going to need Golang, you're going to need g++ and the GCC compiler installed, and then all of the typical tools that go with that. I've detailed out here what else you need to add on top of just a minimal CentOS 8 Stream install.
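The host prerequisites just described can be collected into one setup script. This is a minimal sketch, assuming a dnf-based CentOS 8 Stream host; the exact package list is my assumption from the description (libvirt, Go, GCC/g++ and common tools), not a verbatim copy of the blog post, so check the post for the authoritative list.

```shell
# Generate a host-prep script for the CRC build box
cat > setup-crc-build-host.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# Virtualization stack: libvirt daemon with KVM/QEMU, plus client tools
sudo dnf -y install libvirt libvirt-daemon-kvm qemu-kvm virt-install

# Build toolchain: Go plus the GCC/g++ compilers and common dev tools
sudo dnf -y install golang gcc gcc-c++ make git jq

# Enable the libvirt daemon so the installer can create VMs
sudo systemctl enable --now libvirtd
EOF
chmod +x setup-crc-build-host.sh
```

Running `./setup-crc-build-host.sh` once on a fresh minimal install should leave the machine ready for the next steps.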
B
The next thing you need to do is create a firewall zone, because we're going to expose libvirt to listen on a TCP port. Especially if you're going to do this on, say, an EC2 instance or an instance running on Google Cloud, it's probably not a good idea to expose this port outside of your machine. So create firewall rules so that when we expose this libvirt port, you're not inviting other people to come and play in your virtualization environment. Important sticky tip there.
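A sketch of what that firewall zone might look like with firewalld. The zone name and the source CIDR (libvirt's default machine network) are my assumptions; 16509 is libvirt's standard unencrypted TCP port. Compare with the blog post before running it.

```shell
# Generate a script that scopes the libvirt TCP port to a dedicated zone
cat > open-libvirt-port.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# Create a dedicated zone so libvirt TCP (16509) is only reachable
# from the libvirt machine network, never from the outside world.
sudo firewall-cmd --permanent --new-zone=libvirt-tcp
sudo firewall-cmd --permanent --zone=libvirt-tcp --add-source=192.168.122.0/24
sudo firewall-cmd --permanent --zone=libvirt-tcp --add-port=16509/tcp
sudo firewall-cmd --reload
EOF
chmod +x open-libvirt-port.sh
```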
B
The
the
other
thing
on
on
centos
8-
and
this
is
a
this-
is
a
change
about
between
seven
and
eight
or
if
this
is
a
change
that
happened
to
me
on
an
update
of
centos
eight,
but
it's
no
longer
sufficient
to
enable
the
tcp
port
and
then
just
restart
libertd.
B
There's
I'm
going
to
call
it
a
sidecar.
B
My
head
is
full
of
kubernetes
right
now,
there's
effectively
another
systemd
service
that
you
need
to
enable
and
start
along
with
liberty
that
enables
that
socket
listener.
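That extra systemd unit is libvirt's socket-activated TCP listener. A minimal sketch; `libvirtd-tcp.socket` is the unit name recent libvirt ships, but verify it exists on your host with `systemctl list-unit-files | grep libvirtd`.

```shell
# Generate a script that turns on libvirt's TCP socket listener
cat > enable-libvirt-tcp.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# On newer libvirt, the TCP listener is a separate socket-activated
# unit; enabling libvirtd alone no longer opens the TCP port.
sudo systemctl enable --now libvirtd-tcp.socket
sudo systemctl restart libvirtd

# Sanity check: something should now be listening on 16509
ss -ltn | grep 16509 || echo "libvirt TCP listener not found"
EOF
chmod +x enable-libvirt-tcp.sh
```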
B
The next thing that I've done here is I took a whole bunch of boilerplate that you need in order to tell the project that you're building this single-node cluster for OKD. There's a bunch of environment variables that you need to set, and I've included instructions here to just drop those into a shell script, so that you can create the environment by running a single command before you execute the build. One of the things I'm going to point out as we walk through this is all of the opportunities here for automation.
B
That's
that's
one
of
the
things
that
that
we're
missing
is
with
the
okd
community.
We
don't
have
a
ci
environment
that
we
can
run
this
on.
I'm
sure
there
are
environments
that
we
could
leverage.
You
know
a
lot
of
the
a
lot
of
the
red
hat
upstream
projects.
They
have
ci
environments,
it's
not
a
matter
of
getting
the
environment,
it's
more
a
matter
of
getting
the
people
work
on
the
ci
so
that
we
can
leverage
an
environment.
There
are
fedora
resources.
B
There
are
resources
that
are
associated
with
the
upstream
open
shift,
maybe
even
the
engineering
team
that
is
behind
code
ready
containers
would
be
willing
to
help
us
out,
but
until
we
get
some
volunteers
that
say
hey
I
want
to
be
part
of
this.
We
really
we
haven't
started
any
of
those
companies.
B
Another
internal
red
hatter,
who
I
won't
call
out
by
by
name,
because
I
don't
want
to
embarrass
him,
but
he
he
and
I
recently
talked-
and
he
is
all
in
on
learning
to
do
this.
So
I
know
we've
got
a
couple
of
volunteers
out
there
already,
but
we'd
really
like
to
get
some
community
folks
too,
because
this
this
doesn't
need
to
be
just
a
a
red
hat
internal
effort.
We
want
this
to
be
community.
B
Okay
in
the
public
service
announcement
back
to
the
build,
but
these
variables
here
most
of
them
are
just
setting
up
the
fact
that
we're
going
to
build
with
an
okd
image
this
one
here,
this
terraform
variable
that
we're
setting
what
you
see
is
a
bootstrap
memory.
B
This
one
is
a
is
an
important
one
and
important
for
anybody.
That's
going
to
be
messing
with
single
node
clusters,
with
libert
outside
of
code
ready
containers.
This
was
a
battle
that
I
fought
for
a
while
banging
my
head
against
the
fact
that,
with
a
recent
fedora
core
os
release
the
default
memory
size
that
the
bootstrap
node
was
coming
up
with,
and
the
temp
file
system
that
it
created
out
of
a
subset
of
that
ram,
wasn't
big
enough
to
hold
the
os
tree.
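The environment boilerplate for the OKD build can be collected into one sourceable script, roughly like this. A sketch under stated assumptions: `LIBGUESTFS_BACKEND` is a real libguestfs setting and `TF_VAR_` is how Terraform reads variables from the environment, but the specific variable names and values below are illustrative — the blog post has the authoritative list.

```shell
# Write a sourceable environment file for the build
cat > crc-build-env.sh <<'EOF'
# Source this before running the build:  . ./crc-build-env.sh

# Tell libguestfs to run QEMU directly instead of going through libvirt
export LIBGUESTFS_BACKEND=direct

# Terraform reads TF_VAR_* from the environment; give the bootstrap
# node enough RAM for its tmpfs to hold the OSTree (name illustrative)
export TF_VAR_libvirt_bootstrap_memory=16384

# OKD version to build (illustrative value)
export OPENSHIFT_VERSION=4.7.0-0.okd-2021-08-22-163618
EOF
```

Then `. ./crc-build-env.sh` in the build shell sets everything in one command.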
B
The
other
couple
of
things
here
that
that
are
important
is
the
telling
the
lib
guest
fs
b
to
use
direct
back
end
and
the
open
shift
version
that
we're
going
to
build
here-
and
this
is
a
this-
is
just
a
cute
little
useful
bash
command.
That
folks
may
want
to
harvest
for
any
automation
that
they're
doing
where
they
want
to
do
something
with
the
latest
release
of
openshift
as
dropped
by
the
openshift
community.
B
Right now that's Vadim. So when Vadim releases a new build, if you execute this curl command, with the little pipe-to-cut, you will get the string of the current release. Then you can use that to go mirror the images, pull down the oc and openshift-install commands — whatever you need to do that leverages that version string. The last thing is cloning the repos, and these two repositories here are currently hanging off of my personal GitHub account.
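The "curl piped to cut" idea can be sketched as below. The network step would query the GitHub releases API for the openshift/okd repo; to keep the snippet runnable offline, the parsing is demonstrated on a canned response, and the tag value is illustrative.

```shell
# Network path (not run here):
#   curl -s https://api.github.com/repos/openshift/okd/releases/latest
# Canned response so the parsing step works offline:
response='{"tag_name": "4.7.0-0.okd-2021-08-22-163618", "name": "..."}'

# Extract the tag_name field with grep/cut, in the spirit of the
# pipe-to-cut one-liner mentioned in the talk
OKD_VERSION=$(printf '%s\n' "$response" \
  | grep -o '"tag_name": *"[^"]*"' \
  | cut -d'"' -f4)

echo "$OKD_VERSION"   # → 4.7.0-0.okd-2021-08-22-163618
```

The resulting string can then feed image mirroring or client downloads in a script.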
B
I've
got
a
couple
of
pull
requests
prepped,
but
I
haven't
submitted
yet
because
for
because
the
code
ready
engineering
team
has
already
moved
on
to
openshift
4.8
and
preparing
for
openshift
4.9
development,
the
the
current
crc
build
that
is
off
of
the
the
official
github
page
does
not
work
with
okd
there's
a
couple
of
changes
that
have
been
injected
in
that
need
to
be
fixed
so
that
we
can
still
build
okd
4.7
and
when
it
drops
okd
4.8.
B
One
other
thing
that
I'll
say
that
I'm
going
to
throw
out
soon
is
that
this
same
process
that
I'm
showing
you
here.
While
this
one
is
opinionated
toward
an
okd
release
with
just
a
couple
of
tweaks
to
what
I've
done
here,
you
can
also
make
this
build
nightly
releases.
B
So
if
you
were
into
running
some
automated
testing-
or
you
just
wanted
to
try
out
some
features
of
say,
okd
4.9
before
it
drops
or
an
upstream
4.8
release
that
might
have
a
fix
in
it
that
you're
looking
for
you
could
use
this
same
thing
to
do
that,
and
in
fact
you
don't
have
to
go
the
whole
last
mile
of
building
the
crc
binary,
a
usable
single
node,
cluster,
okay
and
I'll
show
you
that
in
just
a
minute
the
official
github
pages.
B
...I have those forked over to my personal GitHub, with the changes that are currently needed to build CodeReady Containers for OKD 4.7. So here, this is hanging off of mine, where I have my current fork — and, like I said, a couple of pull requests away from getting that back into the mainstream.
B
And that's what I show you here: you run the helper script I created, which sets your environment for you; you cd into the opinionated directory structure that I've created for you; fetch the latest; check out the OKD 4.7 build branch; pull down any changes; and then, when you run snc.sh, it's time to go.
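The sequence just described can be sketched as a short script. A sketch under stated assumptions: the helper-script path, checkout directory, and branch name are illustrative placeholders for the ones in the blog post, and actually running it needs the real fork and a prepared libvirt host.

```shell
# Generate a wrapper for step one of the build
cat > run-snc-build.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# Load the environment variables for an OKD build
# (helper-script path is illustrative)
source "$HOME/crc-build-env.sh"

# Enter the snc checkout and sync to the OKD 4.7 build branch
# (directory and branch names are illustrative)
cd "$HOME/okd-crc/snc"
git fetch origin
git checkout okd-4.7
git pull

# Kick off the single-node-cluster build
./snc.sh
EOF
chmod +x run-snc-build.sh
```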
B
A running cluster — I'm going to ask real quick: is the screen readable, or do I need to crank up the font on my bash shell?
B
Up we go, we go up, we go — continuing to scroll, because it's fun to have videos with lots of scrolling upward. Almost to the top — okay, good; my mouse was just about to run off of the mouse pad there. All right, so you see here: I ran my setup script, which created my environment.
B
I
fetched
and
I
checked
out
and
I
hold
and
snc.sh,
okay
and
and
what
that
does
that
that
then
starts
the
process
of
building
this
single
node
cluster.
So
the
first
thing
it's
doing
it's
interacting
via
that
port
16509
with
my
underlying
libert
and
it's
creating
the
virtual
machine
and
then
it
goes,
and
it
tells
me
that
it's
using
release
from
august
22nd,
which
is
our
the
latest,
that
we
currently
have
out
there
and
it's
pulling
down
the
client,
the
installer.
So
it's
got
the
oc
command
and
now
it's
got
the
shift.
B
B
I'm not going to read through all of this, because I know you guys don't want me to, but you'll see here it starts — once the cluster, once the API, is up and running, which will happen...
B
See
it's
it's
proving
it's
sitting
there
in
a
loop.
It's
probing
waiting
for
the
etd
api
to
be
up,
and
it
says:
aha
api
is
up
and
at
that
point
its
are
doing
some
things
to
the
cluster.
B
Here,
it's
setting
one
of
those
nice
fun,
unsupported,
config,
overrides
so
that
it
can
run
as
a
single
node
cluster
and
then
the
bootstrap
continues
to
run
till
the
bootstrap
is
complete
and
then
it
tears
down
the
bootstrap
resources
and
then
it
sits
and
waits
for
the
cluster
to
initialize
pause
here
again
for
a
minute,
because
there's
another
thing
that
we,
the
okd
community,
can
do
with
this,
I'm
envisioning
something
that
we
can
set
up
with
some
ci
that
will
allow
us
to
run
opinionated
and
automated
tests
against
builds.
B
One
of
the
pain
points
that
we've
had.
I
think
that
we
would
all
agree
to
as
an
okd
community
is
that
the
the
tests
against
the
nightlys
that
they're
not
as
thorough
as
we
would
like
to
really
uncover
any
places
where
a
you
know,
a
new
fedora
core
os
release
might
have
broken
something
we
don't
get
to
it
until
we
actually
try
running
this
thing
with
you
know,
with
a
full
install.
B
I
have
a.
I
have
a
hypothesis
that
if
we
built
some
tests
around
this
code,
ready
containers
build
that
we
could
actually
use
it
to
to
test
nightly.
Okd
builds
if
we
wanted
to,
or
at
least
test
our
releases
before
we
drop
them
as
a
release
so
that
we
can
validate
yes,
a
a
full
running
cluster
can
be
created.
The
bootstrap
process
completes
properly.
B
Okay,
now
I'll
get
back
to
the
code
ready
at
this
point
in
the
process
it
is,
it
is
completing
the
the
install
process
and
now
it's
starting
to
do
the
opinionated
activities
that
I
was
talking
about.
So
you
can
see
here
that
it's
it's
deleting
a
lot
of
machine
configs.
B
The
last
thing
that
it
does
is
cleans
up
all
of
the
completed
pods
because
again,
even
though
they're
using
ephemeral,
storage,
their
logs
and
things
are
actually
occupying
space
on
that
virtual
machine
and
so
by
cleaning
up
all
of
these
deleted,
pods
and
cleaning
up
other
things
that
are
more
ephemeral
within
the
cluster.
We're
preparing
that
virtual
machine
to
be
compressed
down
to
as
small
an
image
size
as
possible.
B
And
it
also
is
just
a
single
script
command
the
create
disk
sh,
you
pass
it
the
the
var
that
is
just
the
the
installation
in
the
snc
project,
where
it
put
all
of
the
information
about
that
single
node
cluster
and
where
that
cucu
image
is
going
to
end
up.
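Step two, as described, is one command pointed at the installer's state directory. A minimal sketch, assuming the snc checkout layout used earlier; the state-directory name is my assumption of what snc.sh leaves behind, so check the blog post for the exact path.

```shell
# Generate a wrapper for step two of the build
cat > run-createdisk.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# From the snc checkout: turn the stopped single-node cluster into
# a compressed qcow2 image plus per-hypervisor bundles.
cd "$HOME/okd-crc/snc"          # illustrative checkout path

# The argument is the installer state directory created by snc.sh
# (directory name is an assumption)
./createdisk.sh crc-tmp-install-data
EOF
chmod +x run-createdisk.sh
```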
B
This
will
run
for
a
very
long
time
because
after
it
creates
the
after
it
creates
the
initial
qmu
image
the
the
cucow2.
It
then,
is
going
to
create
a
bundle
with
that
image
that
is
hypervisor
specific.
So
it's
going
to
have
to
do
this
three
times
it
does
it
for
libert,
it
does
it
for
hyper
kit
and
it
does
it
for
hyper
v
and
and
it
compresses
each
of
those,
and-
and
this
might
be
an
area
where,
where
some
efficiency
could
be
added
because
it
does
it
serially.
B
So
it
doesn't
kick
these
off
in
in
parallel
it
it
does
it
serially,
and
it
does
take
a
long
time
because
it's
creating
significantly
sized
image
and
then
it's
dipping
that
significantly
sized
image.
So
you
go
from.
B
...that's 10 gigabytes' worth of stuff that it took and turned into a three-gigabyte bundle image, and it's doing that three times. Okay, I'm showing you this just to say: don't think something's broken. It's going to sit there for a long time — createdisk takes a while. You can go out for dinner while you're waiting for it to complete.
B
Unless
you
have
access
to
some
really
significant
hardware
and
some
really
fast
disk,
then
it
might
not
take
quite
so
long,
but
my
poor
little
nook,
8
i3,
that
it
runs
on.
It,
takes
all
right.
Well,
the
next
step,
the
third
thing,
which
is
building
the
crc
executable
that
actually
doesn't
take
too
long.
So
once
the
create
disk
shell
has
pleated
and
we're
back
from
dinner
and
we've
had
our
little
dram
of
dram
buoy
as
an
apertive
to
get
relaxed
for
the
evening
ready
to
build
our
code
ready,
container
image.
B
This
is
where
we
use
the
other
project,
which
is
erc,
and
it's
the
it's.
The
go
code
for
creating
code,
ready
containers,
the
binary
all
right,
so
we
go
in
there,
make
clean
and
then
a
make
embed
bundle
and
make
embed
bundle.
What
it
does
is
it
it
does
a
cross
compile
of
the
crc
binary
for
windows,
mac
os
and
linux.
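Step three can be sketched as below. The checkout path is illustrative, and the `out/` location for the resulting binaries is my assumption of the crc project's build layout — the make targets `clean` and `embed_bundle` are the ones named in the talk.

```shell
# Generate a wrapper for step three of the build
cat > build-crc-binary.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# From the crc checkout (path illustrative): build the crc
# executables with the freshly created bundle embedded.
cd "$HOME/okd-crc/crc"

make clean          # clear any previous build artifacts
make embed_bundle   # cross-compiles crc for Windows, macOS, and Linux

# List whatever the build produced (output dir is an assumption)
ls out/
EOF
chmod +x build-crc-binary.sh
```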
B
I don't have a Windows operating system anywhere in the house — not because I'm as bigoted as I used to be toward Windows or Microsoft; it's just because I don't have one. So I actually can't test the Windows version.
B
So
at
this
point
is
where
I
push
the
binaries
up
to
the
fedora
server
that
the
fedora
community
was
kind
enough
to
give
us
some
space
on
and
then,
when
you
guys,
go
to
okd
dot,
io
and
download
code
ready
containers,
it
is
there
and
ready
for
you
to
use,
and
hopefully
it
hasn't
been
30
days
since
I
was
able
to
build
a
release
and
push
it
up
there.
C
There have been a couple of questions — let me go over them for folks who are just watching here and aren't following the question widget. One of the questions was: do you need a pull secret from redhat.com as part of this setup? If not, what is used in its place, or what mods are needed? And yeah — there's actually an example in Charro's documentation, and I put something in the channel; there's also something in the OKD documentation: you can use a fake pull secret, just a JSON string.
C
Basically,
that
has
that
and
but
it's
worth
pointing
out
that
getting
a
red
hat,
pull
secret
isn't
hard
and
it
doesn't
cost
you
anything
if
you
log
into
the
urlconsole.redhat.com
and
navigate
to
the
open
shift,
part
and
click
on
any
of
the
installers.
There's
a
little
button
there
to
create
a
polls
secret
and
actually
I
can-
we
can
put
the
hyperlink
in
with
the
meeting
notes
and
it.
So
you
don't
have
to
pay
anything
to
get
to
get
that.
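For the fake pull secret route, a minimal sketch: a JSON string with an `auths` entry is all the installer needs to parse. The registry name and the dummy base64 credential below are illustrative; the OKD documentation has the canonical example.

```shell
# Write a minimal fake pull secret; the auth value is just a
# base64 dummy credential that is never actually used by OKD images.
cat > fake-pull-secret.json <<'EOF'
{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}
EOF

# Quick sanity check that the file is valid JSON
python3 -m json.tool fake-pull-secret.json
```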
C
If you use a Red Hat pull secret, you do get access to more operators in your OperatorHub within the cluster, so there is some advantage to that.
B
For the official CodeReady release, you do need to sign up for a Red Hat developer account, but those are also free — and with that you can get your official pull secret and use the non-OKD CodeReady, if you're so inclined.
B
No, I haven't — and I'll stop sharing, so I can see you guys and you guys can see me. No, I haven't seen anything around that. I'm going to speculate...
B
I
know
the
reason
for
me
is
so
darn
easy
to
do
it
with
libert
that
there
wasn't
a
reason
to
look
for
another
way
to
do
it
and
it
even
I
will
even
say
this:
it
even
works
with
nested
libert,
because
I
used
to
run
these
builds
on
a
virtual
machine
that
was
running
on
one
of
my
nooks
that
had
a
lot
more
cpu
and
ram,
and
so
I
actually
ran
the
I
would
actually
provision
over
nested
virtualization
and
then
it
would.
B
A
Brett Toefl is having a tough time with the Q&A widget, but he's asking — he was a little confused around the last part of what Charro was talking about: the tie-in to the 30-day timeout of the cert.
B
Yeah — and that's something I was actually thinking about over the weekend: there's a way we could fix that in the crc executable, because somebody posted their very clever workaround for it in one of the issues. What it is: the cluster, when it first comes up, has a cert that's only good for about 24 hours, and the snc logic in it...
B
...deletes that temporary cert, the 24-hour cert, and then waits for the certificate signing requests — the CSRs — to show up on the node so that it can approve them. At that point you have a 30-day cert. Well, after 30 days it isn't any good anymore, and if your CRC instance has been shut down, there's been nothing to create that CSR or react to the CSR — and so CodeReady Containers stops working after 30 days.
B
There
may
be
some
other
things
tied
into
that,
but
I
believe
that's
really
the
essence
of
it
and
the
workaround
is
during
the
bootstrapping
of
your
crc
instant.
You
run
crc
start
and
it's
coming
up
and
then
it
won't
come
up
because
search
is
expired.
Now
you
can
actually
export
cube,
config
oc
into
the
cluster
and
approve
the
csr,
and
if
you
approve
this
vsr,
then
crc
should
continue.
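The workaround just described can be sketched like this. The kubeconfig path is my assumption of where CRC keeps it (check your install); `oc adm certificate approve` is the standard OpenShift command for approving pending CSRs.

```shell
# Generate a script that approves all pending CSRs on the CRC cluster
cat > approve-crc-csrs.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# Point oc at the single-node cluster inside the CRC VM
# (kubeconfig path is an assumption; adjust for your install)
export KUBECONFIG="$HOME/.crc/machines/crc/kubeconfig"

# Approve every pending certificate signing request so the node
# can get a fresh 30-day cert
oc get csr -o name | xargs -r oc adm certificate approve
EOF
chmod +x approve-crc-csrs.sh
```

Run it while `crc start` is stalled waiting on the expired cert, then let the start continue.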
B
I
see
no
reason
why
we
couldn't
put
that
logic
into
the
go
code
of
crc
itself,
so
that
as
crc
is
starting
up
one
of
the
because
it
creates
it,
creates
ssh
keys
and
it
injects
the
ssh
keys
into
the
cluster.
It
does
a
whole
lot
of
things.
I
don't
see
any
reason
why
one
of
the
things
that
it
does
couldn't
be
to
use
the
logic
to
check
to
see
if
the
cert
is
expired
or
even
about
to
expire,
kill
the
cert
and
wait
for
the
csr
request
and
approve
the
csr
record.
C
There we go — excellent. We've got another question here: if snc.sh fails to run, can I just run it again? Do I need to do a clean operation?
B
No,
you
can't
just
run
it
again.
It
is,
unfortunately,
not
item
potent.
It
leaves
a
giant
mess
behind
if
it
fails
to
run.
Let
me
show
you.
Let
me
share
again,
though,
because
I
I
have
a
fix
for
that
too,
because
that
happens
to
me
a
lot,
especially
when
the
code
changes
here.
B
Okay, good. So at the bottom of my blog post you see this post-build cleanup. This actually also works if snc.sh fails. Same thing with createdisk.sh — if createdisk.sh fails, you need to do these steps.
B
And actually, since you asked that question, I'm going to modify this blog post and put that in here explicitly too, because that is something you will run into. Even with the good releases that successfully build, every once in a while you'll crash into a race condition.
B
I've
hit
this
a
few
times
when
you
tear
down
the
bootstrap
node.
There
is
occasionally
some
sort
of
a
race
condition
in
there
that
something
gets
messed
up
when
the
bootstrap
gets
ripped
out
and
the
install
can't,
so
it
will
crash
out
or
it
will
run
through
its
full
time
and
then
it
will
time
out
after
40
minutes.
B
So
so
what
you
do
and
I've
I've
got.
Shell
commands
here
drop
into
a
shell
script.
In
fact,
I
have
it
in
a
shell
script
that
I
just
run.
That's
basically
a
cleanup.
What
it
does
is.
It
finds
the
virtual
machine
that
starts
with
crc
dash,
finds
the
the
the
machine
network
and
the
pool,
and
it
does
a
destroy
undefined
on
all
of
those
for
the
bootstrap
and
the
master,
so
that
cleans
up
the
libert
resources
and
then
the
last
thing
is
wipe.
B
...the images that are sitting out there — which, in an opinionated way, it puts in an openshift-images directory under /var/lib/libvirt — and then remove the crc stuff from the snc directory that was created during the snc build or the createdisk.
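The cleanup just described can be sketched as follows. A sketch under stated assumptions: the "crc-" name prefix and the /var/lib/libvirt/openshift-images path come from the talk, but the network/pool naming and the snc state path are my assumptions — compare against the cleanup section of the blog post before running it, since virsh undefine and rm -rf are destructive.

```shell
# Generate the post-failure cleanup script
cat > crc-build-cleanup.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# Tear down every leftover domain whose name starts with crc-
# (bootstrap and master from a failed snc.sh/createdisk.sh run)
for vm in $(sudo virsh list --all --name | grep '^crc-'); do
  sudo virsh destroy "$vm" || true    # ok if already stopped
  sudo virsh undefine "$vm"
done

# Remove the matching libvirt network and storage pool
# (crc- prefix for their names is an assumption)
for net in $(sudo virsh net-list --all --name | grep '^crc-'); do
  sudo virsh net-destroy "$net" || true
  sudo virsh net-undefine "$net"
done
for pool in $(sudo virsh pool-list --all --name | grep '^crc-'); do
  sudo virsh pool-destroy "$pool" || true
  sudo virsh pool-undefine "$pool"
done

# Wipe the disk images and the state left in the snc checkout
sudo rm -rf /var/lib/libvirt/openshift-images/*
rm -rf "$HOME/okd-crc/snc/crc-tmp-install-data"   # path illustrative
EOF
chmod +x crc-build-cleanup.sh
```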
C
Excellent — and we've got a couple more questions filtering in here. Will this work on Fedora 34, George wants to know — and I'm assuming that means the build process.
C
Right, but that would be something for the community to check out, I think — just give it a shot on Fedora 34, and give it a shot on Fedora CoreOS; it would be really interesting to actually do that on FCOS. So that's something to explore. We can talk about that at the next working group meeting and see if we can get volunteers — and actually, I'll volunteer to try it on FCOS. Yeah.
C
Yes, all right, let's see what we have here. Did I miss any? There was a desire for a little more clarity on operators. So: operators that have Red Hat RPMs need the official pull secret; any other operators can be installed with the fake pull secret.
C
So
I
hope
that
clarifies
we
don't
actually
have
like
a
generated
list
where
you
could
compare
the
two
that
wouldn't
be
too
hard,
though
maybe
we
could
automate
a
script
that
actually
does
that,
but
there's
a
significant
amount
of
of
difference
in
operators
between
using
the
fake
pull
secret
and
using
an
official
red
hat.
One.
C
And
I
think,
that's
all
that,
if
all
the
questions
that
have
come
in,
I
want
to
take
this
moment
to
talk
a
little
bit
about
something
that
charo
touched
on,
which
is
sort
of
the
impetus
for
having
this
streaming
session
was
obviously
to
spread
the
knowledge
and
have
folks
familiar
with
this
process,
but
in
particular
to
get
people
volunteering
to
help
the
working
group
to
to
have
trc,
updated
and
available
on
a
continuous
basis.
Charo
is
a
redhead
employee
and
is
really
busy
and
has
graciously
provided
all
of
this.
C
But
we
can't
rely
on
on
that
and
I
think,
there's
benefit
to
multiple
people
working
on
a
project,
not
just
in
terms
of
time
and
resources,
but
innovation
possibilities
for
innovation,
possibilities
for
building
out
some
ci,
and
that's
something
else
that
that's
really
needed
here
is.
Is
it
doesn't
take
a
lot
to
script
this
out
into
a
ci
into
a
pipeline?
C
And
so
the
working
group
will
be
looking
at
this
in
the
coming
weeks
and
it'd
be
nice
to
have
folks
who
have
been
on
this
call
or
who
are
going
to
be
watching
the
video
reach
out
to
us
at
the
working
group.
Diane
can
post
our
information
in
the
chat
if
you're
not
familiar
with
it.
You're
watching
this
on
one
of
the
the
streaming
platforms
or
just
happened
to
it
on
youtube,
get
in
touch
with
us
come
to
our
meetings.
C
So please do get in touch with us. This video will be archived, and we'll be talking about what came out of this meeting at the working group session. We'll also be improving the documentation — we'll be taking what Charro does and constantly building on it, and our goal is that whoever contributes to this can also contribute to the documentation, so that it can be easily handed off to other community members in the future. I think that's it. Diane, do you have anything else you wanted to add?
A
Well, I just wanted to put out there: one, I put up the link to the OKD Working Group Google group, and you can subscribe to that and get notifications of the upcoming events. We usually meet on Tuesdays at 9 a.m. Pacific, noon Eastern. You can find all the details on okd.io, and that's a great place to do that. We would love to have folks who are watching this participating. Thank you.
A
All
for
your
questions
to
you
know,
take
a
look
at
the
ups.
You
know
paddling
upstream
blog,
without
a
paddle
blog
and
the
links
to
here,
and
if
you
want
to
post
that
url
again
in
the
chat.
That
would
be
great.
A
I
did
earlier
in
the
in
the
session,
but
we'd
love
to
have
you
test
this
out,
give
us
your
feedback,
let
charo
know
what's
missing
and
make
some
official
documentation
outside
of
the
blog
on
the
the
okd.io
site,
for
this
build
process
that
to
come
out
of
here
as
well.
You
know
and
anything
that's
missing
from
this
process.
I
mean
there's
a
couple
of
questions
about
docker,
compose
and
stuff.
A
All
of
those
are
opportunities
for
you
to
contribute
to
this
project
and
there's
more
than
enough
folks
to
help
coach
you
and
mentor
you
in
creating
that
content,
and,
if
that's
you
know
your
preference,
we
would
love
to
have
work
with
you
and
make
that
happen
and
get
that
documentation
up
and
available.
So
really
take
a
look
at
okd.io
wander
around
the
blog
post
test
it
out
and
join
us
on.
One
of
the
okd
working
group
calls
coming
up
soon.
B
Yeah, real quick, if we've got a couple more minutes — there's one more question that popped in on the chat about the resource requirements. Do we have a few more minutes?
B
Oh, all right — so, Carlos Santana — yeah, love your guitar playing, Carlos — asked a question. I will admit I like Joe Satriani, but — anyway, anyway, yes: CRC is, admittedly, heavy.
B
This is also an area where I think the community could help out, because obviously the vast majority of the engineering resources on OpenShift are focused on the data center. Because CodeReady Containers is a single-node cluster, it's a single-node cluster being built from pieces that are more or less unmodified from how they would expect to operate in a data center — and I have a hypothesis there that I think the community could dig into.
B
If we took CRC, running, and probed into it a bit — the worst offenders are the operators, right? We can get in there and find the operators that are taking up the most resources and dig into those operators a bit, because, again, all of their code is also out on GitHub.
B
My hypothesis is that it's really just a matter of some configuration around some of those operators, because they're currently sized to handle cloud-scale workloads, and on your laptop they don't need to be. So if they have quotas set, you know, minimum resource quotas set or something, those may not need to be as onerous for running on your laptop.
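As a rough sketch of the kind of experiment described above (unsupported, and purely illustrative: the operator name and namespace below are example values, not a recommendation), one way to make an operator's resource requests editable on a throwaway CRC cluster is to take its deployment out of Cluster Version Operator management via the `overrides` field on the ClusterVersion resource, then lower the requests by hand:

```yaml
# Hypothetical sketch: unmanage one operator deployment so its resource
# requests can be edited. Do not do this on a cluster you care about.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  - kind: Deployment
    group: apps
    # Example target; pick a real candidate from `oc adm top pods -A`
    name: cluster-monitoring-operator
    namespace: openshift-monitoring
    unmanaged: true
```

After that, you could patch the deployment's `resources.requests` downward and watch whether the cluster stays healthy. Results from experiments like this are exactly the tuning data the working group is hoping the community will gather.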
B
My hypothesis is that if the community rallied around that a bit, we could get in there and figure out some tuning that can be done that isn't obvious from just a YAML config file or something, right? Because these operators are built for cloud scale, a lot of the resources that they're asking for aren't necessarily exposed as something you can configure outside of the operator.
B
So
it's
going
to
take
a
little
work
but
yeah.
I
would
love
to
get
in
and
crank
this
down
a
bit
for
a
few
reasons,
one
because
it
would
be
nice
to
be
able
to
run
it
on
a
laptop
that
didn't
have
to
have
32
gig
of
ram,
but
also
because
oh
yeah,
as
as
the
arm
support,
starts
to
get
out
it'd,
be
really
cool
to
be
able
to
run
it
on
one
of
these
little
guys
with
eight
giga
ram
right.
B
So,
yes,
I
would
love
to
get
into
that
and
make
this
thing
run
a
little
slimmer,
because
I'd
also
like
to
have
a
openshift
cluster
running
on
some
eight
gig
pies.
C
B
Yeah, you know, I'm going to hypothesize a bit here, because on this screen...
B
You can add worker nodes to a single-node cluster. I don't know of any reason at all why you couldn't just stop after running snc.sh, not tear it down to create the qcow image, just run the single-node cluster here, and then create additional worker nodes with libvirt.
B
No reason at all that I can think of why you wouldn't be able to do that, and this is a very quick way to get an opinionated single-node cluster up and running. You do need to put an HAProxy in front of it because, remember, we created the firewall on the zone and everything, and it is listening on...
B
An internal network, a 192.168 network right here; it's listening on this 192.168.122.0/24 network. So to get to it from your workstation, you need to throw HAProxy in front of it and let HAProxy, with a couple of virtual NICs, bridge from the hidden 192.168.122.0/24 network to whatever your home network is, so you can get to the cluster. But yeah, a single-node cluster would be doable.
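The HAProxy arrangement described above could be sketched roughly like this (a minimal, assumed configuration: the VM address 192.168.122.10 is a hypothetical example; on a libvirt default network you would check the actual lease with `virsh net-dhcp-leases default`):

```
# Minimal haproxy.cfg sketch: forward API and ingress traffic from the
# host's external interface to the single-node VM on the libvirt network.
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend api
    bind *:6443
    default_backend okd-api

backend okd-api
    server sno 192.168.122.10:6443 check

frontend ingress-https
    bind *:443
    default_backend okd-ingress-https

backend okd-ingress-https
    server sno 192.168.122.10:443 check

frontend ingress-http
    bind *:80
    default_backend okd-ingress-http

backend okd-ingress-http
    server sno 192.168.122.10:80 check
```

TCP mode is used so HAProxy passes the TLS traffic through untouched and the cluster's own certificates keep working.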
C
All right, I think that's all of the questions that we have here, and we're just about to the hour, which I think would be a good time to stop. Oh, we've got something coming in, let's see: with the new 4.8 edge bootable single-node cluster, what's the difference with CRC? Is the bootable ISO a smaller footprint?
B
I don't know yet, because I haven't built one for OKD, but my hypothesis there, because the bootstrap node does not live in the resulting qcow image, is that that in and of itself isn't changing the footprint of CRC. CRC will still have the same footprint.
A
Let's hope that, out of all this effort that Charro has put into getting the blog up and walking us through this all, a few of you out there will be enticed to volunteer to start helping build these, especially if someone's inclined to do it for the nightlies and has some resources to automate.
A
That would be super, super awesome. So please do join Jamie and me and the rest of the OKD Working Group members next week. We'll probably discuss what we learned here and get your feedback there, but we'll also be posting this up on YouTube. It will be the live, unedited version, and it'll be there almost immediately once we stop talking here. So, Jamie, any final words? Charro, any final words here?
C
Yeah, again, encouraging people to show up at the working group meetings. They're very casual, and they're fun, actually, because folks get to talk about what they're experimenting with, what they'd like to see, and their projects, all different kinds of projects. If you're interested in something other than building CodeReady Containers, we have a variety of other projects that could use some helping hands.
A
And there's always somebody willing to talk with you and share their stories too. So I put in the link for the upcoming OKD/OpenShift Commons Gathering at KubeCon, which is going to be hybrid, so it'll be virtual and in person. And John Fortin, who is an amazing contributor to the conversations and the workloads around OKD, is going to share the marketamericashop.com production case study for OKD with us. So I'm psyched to hear that story.
A
We've got lots of people who are using OKD in lots of different ways, not just for your home lab anymore. So take a risk, answer the call to action, and check out the CRC build process. We'd love to hear your feedback.
A
Thanks again, both of you guys, for making this happen. Much appreciated.
A
Awesome. All right, everybody, that's the challenge. Let's see if we can coerce a few more people into doing this, and maybe find someone with some resources to deploy it and make it automated.