From YouTube: OKD Working Group 2020 05 05 Full Meeting Recording
A
Hello everybody, welcome to the OKD working group; we're going to just get started here. If you haven't yet, please add your name to the working group meeting notes. The link is in the chat, and we'll get rocking and rolling here. Today we have a couple of things. Thank you all for participating in Red Hat Summit last week. The summit content is online at the registration page; I'll throw that in the chat. Christian and Vadim gave an awesome "State of OKD4" talk.
A
We had a little bit of traffic in the OKD chat room. It wasn't as wonderful as I thought, but I think they tried to reproduce the booth experience and have one chat room pretty much for every single booth. So it's not as effective as in person, I think, but it was still a very interesting learning curve for everybody else. I don't know, buddy, I saw you in the chat room a few times.
A
I thought it was pretty amazing from an OpenShift Commons perspective. At one point we had 3,600-some folks logged into one of the presentations that we did in the morning. I haven't got all the details on the total counts of everything, but it was pretty amazing to get that many eyeballs on some of the stuff that's going on here.
A
I think the first day there were 8,000 people logged in for Monday, which was day zero, and which is when we host the OpenShift Commons. Of those, I think a lot of them were people just looking to make sure they were set up for the following days, but then on the next day it was insane. I think at one point I saw 70,000; it may have gotten up to 80,000 people registered. So it was pretty crazy. I still think, for me, you know...
A
I mean, there was a lot of upfront work on getting stuff there, but overall I was pretty impressed. It remains to be seen, though; I've got two more Commons gatherings coming up that are going to be virtual standalones using the same platform, so any feedback people have, I'd love to hear it. I'm still getting access to the dashboard behind the scenes to see what actually happened, so I'll keep you all posted when that happens. So today, I know Vadim and Christian are both here.
C
Yeah, sure. So, hi everybody. I think we're getting really close to merging the MCO fork branches. 4.6 development will open next Monday, and by then I'll have all the PRs needed lined up, and hopefully they'll get reviewed quickly and merged very soon after the opening of the branch, the unfreezing of the master branch, next week.
C
The dual-support PR I'm working on right now, the spec 2 to spec 3 dual support in the MCO, is quite large, so it may take a few days to get that reviewed. But yeah, eventually it'll be merged into the MCO, and then we can sort of rid ourselves of the FCOS branch fork in the MCO.
C
For the installer, we may need to carry the fork a little bit longer, but we've sort of come to the conclusion that we can actually release OKD GA with the installer still being forked, because it's easier to just maintain one fork instead of two, and there shouldn't be too many large breaking changes in the installer in that time frame anyway.
C
Damn, that's actually not quite clear when that switchover will happen, so until then we may have to carry the installer fork. But we may even get it merged sooner and sort of have dual support of the installer in one master branch and introduce a build flag or something, so we'd have two different, you know, binary builds of the installer. But yeah, for now OCP still defaults to spec 2, even though we will have dual support in the MCO already.
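For context, not from the meeting itself: a spec 2 and a spec 3 Ignition config are told apart by the ignition.version field, which is what dual-support code has to branch on. A minimal illustration, with the file names made up:

```sh
# Hypothetical files; only the ignition.version field matters here.
# Spec 2 configs (what RHCOS/OCP used at the time) report a 2.x version:
jq .ignition.version bootstrap-spec2.ign    # => "2.2.0"
# Spec 3 configs (what Fedora CoreOS requires) report a 3.x version:
jq .ignition.version bootstrap-spec3.ign    # => "3.0.0" or "3.1.0"
```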
E
Unlike CRC, it would be a proper full-blown OKD cluster with the operators enabled. It would use a bootstrap node, of course, but you would be able to put all the parts onto one single machine. Of course you won't be able to upgrade it, but other than that it's as good as any other OKD cluster. That pull request is not yet merged, but I think it should be merged eventually.
C
Cool. So there are a few other issues we've had in the past, and they're also getting close to getting resolved, and that is the missing OpenStack support we have right now. We're still waiting for the new Ignition 2.3 release, I think it is, for spec 3.1, and the PR to sort of finish that release is open right now and is going to get merged this week. So the next Fedora CoreOS base image will have that support, and we will sort of re-enable the OpenStack support with that as well.
E
I don't expect a lot of changes for that right now, since it has just been out of the door, so we can delay this particular piece by two weeks there; we won't miss much, I guess.
C
Before we do GA, we should have at least one, if not two or three, beta releases off of that release branch. Essentially, right now the plan is to get all the dual-support stuff into the new master, which will be 4.6, and from there we will do a backport into sort of the 4.5 branch, which will be released.
B
Right, and that's why I'm asking, because all of this conversation about stable branches and whatnot is confusing to me, because we still don't have an OKD 4 release. From my perspective, the way I see this is: if we can get to having the MCO merged, what stops us from just shipping the MCO from the future, with everything else being a stable tree?
C
Essentially, that's what the plan is. So we plan to release OKD at this point together with OCP 4.5, okay? And so once OCP 4.5 goes GA, we can be sure that's stable. So on top of that, we will just backport the very few things that are already in master at this point into the 4.5 branch, and that'll be the GA release. So yeah, I'm expecting to release OKD together with OCP 4.5.
B
GA, right. Well, I mean, so here's the other thing: we still don't have something on okd.io to say, "hey, you want to do this now, go do it." I don't care if it's GA or beta or whatever, but we still have oc cluster up, because we don't have a CRC that works for OKD. We still list OKD 3.11 in there, and we still list Minishift. Like, at this point I don't care whether it's beta or GA.
C
I do agree with Neal here, though, that having the oc cluster up and all the old stuff on the front page is not a good look. And also, that's another thing: we don't have the CRC yet. That's what Vadim mentioned earlier, the PR that allows for single-host, single-node clusters, and that will essentially unblock CRC builds of OKD as well. Is that right?
B
So, for me, I'm just gonna say: two months from now having GA, okay, fine, whatever. My more immediate concern is that I want a beta that people can use in every mechanism that we support for OCP, and we already have almost all of them; we don't have the CRC. And honestly, I don't care if it's a little janky. What can we do to help make this happen is what I want to ask. Yes, Neal.
F
I haven't finished writing up the instructions on how you do it, but even with the current beta you can get a full single-node cluster up and running. There's a couple of things that are a little wonky you have to do during the install process, but you can get one up and running, and it's barfing a few things until that PR that Vadim was referring to goes in, but it's perfectly functional and you can try everything out, you know, on your eight- or twelve-gigabyte box.
A
If we have a CodeReady Containers build, then we can replace this Minishift stuff, or just shift this Minishift container stuff over to the downloads page, and emphasize, you know, the CRC and the single-node cluster stuff here. That's my game plan. I've been, you know, a bit overtaken by Summit, but I've also been waiting for something to give people to replace Minishift.
F
I think Praveen was probably doing his builds on a RHEL 8 machine, or maybe an upstream Fedora, but the single-node cluster came up and running. So what I did from there is realize that what it was doing was not that far different from how I was building my bare-metal UPI clusters sitting on libvirt using KVM. So I hacked up the tutorial I had put together and used it to build a single-node cluster, then realized that I wasn't thinking it all the way through.
F
I didn't need a load balancer with the bootstrap; I could actually use DNS temporarily. So I've redone it again, just using DNS A records to, you know, poor-man's load balance between the bootstrap and the master while they're coming up. Once the master is up and bootstrapping is complete, you just remove those DNS records and destroy the bootstrap node, if you're using libvirt.
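A minimal sketch of that trick; all hostnames, IPs, and VM names here are placeholders, not from the meeting. Two A records per API endpoint round-robin clients between bootstrap and master, and the bootstrap records get dropped once bootstrapping finishes:

```sh
# Placeholder zone and addresses; adjust for your own domain and network.
cat >> /var/named/okd.example.com.zone <<'EOF'
api.okd.example.com.     IN A 192.168.126.10 ; bootstrap
api.okd.example.com.     IN A 192.168.126.11 ; master
api-int.okd.example.com. IN A 192.168.126.10 ; bootstrap
api-int.okd.example.com. IN A 192.168.126.11 ; master
EOF
systemctl reload named

# Wait until the bootstrap node has done its job...
openshift-install wait-for bootstrap-complete --dir install-dir
# ...then delete the two bootstrap A records, reload named, and
# destroy the bootstrap VM:
virsh destroy okd-bootstrap
virsh undefine okd-bootstrap --remove-all-storage
```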
F
Once you get it to that point, you can shut it down, edit your... you know, virsh edit your host, and add the RAM to that master that you were using for the bootstrap. So if you're doing this on a little box with 32 gig of RAM, you can easily create a single-node cluster that has 24 gig of RAM, and that's actually pretty usable.
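Roughly what that reclaim step looks like with libvirt; the domain name and sizes are placeholders:

```sh
# Stop the master, give it the RAM freed up by the destroyed bootstrap VM,
# then bring it back up. 25165824 KiB = 24 GiB.
virsh shutdown okd-master-0
virsh setmaxmem okd-master-0 25165824 --config
virsh setmem okd-master-0 25165824 --config
# Alternatively: virsh edit okd-master-0 and raise the <memory> element.
virsh start okd-master-0
```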
F
What it effectively would do is allow somebody to create for themselves what you actually get with CRC. Because really what CRC is (and I may be speaking a little out of turn, because I've only tinkered on the edges with it) is basically a pre-built virtual machine that is then bundled up so that you can pull it down and run a script that configures your local environment, whether that's Hyper-V or libvirt or, I forget at the moment, the one that's native to the Mac, to run that virtual machine. So you get a locally running instance. For anybody that wants to do some significant work with it, I'm not sure running it on a laptop, even a nicely beefy laptop, is something someone's going to want to do, so running it on a little sidecar server is probably a better option, and that was the approach I was taking.
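For comparison, this is the CRC workflow being described; the two subcommands exist in the crc CLI today against OCP, and the OKD bundle is the part the group says is still missing:

```sh
# One-time host setup: checks virtualization support and prepares the
# local hypervisor (libvirt, Hyper-V, or the macOS-native one).
crc setup
# Unpack the pre-built single-node VM bundle and boot the cluster.
crc start
```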
B
Yeah, the problem is that this stuff eventually has to, in some form or fashion, work on people's laptops, which is, I think, one of the things that the MCO changes and all these other changes for CRC have been working towards: trying to shrink the Minimum Viable OpenShift so that people can do it on there. Yeah.
F
And that's some surgical work we could really start doing after the fact. Well, I think, at least personally, what I've observed is that one of the challenges we have is that we have to go back into the OpenShift code itself and undo things that were written into it for a three-node environment, right? And I think the pull request that we've got open around the etcd quorum is a good example, right? The etcd quorum guard, because it was built for a data center.
B
The flip side of it is, you could do clever things like have three of them running in containers inside of there and have them form quorum on one node. It's stupid and you shouldn't do it in production, but those are the kinds of things that I did for making OpenShift 3.x work on a single machine when I wanted it to pretend to be production. Yep.
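A sketch of that kind of hack, with placeholders throughout (the image tag is illustrative), and, as the speaker says, not for production: three etcd members on one host, each in its own container on its own ports, forming quorum locally.

```sh
# Three-member etcd "cluster" on a single host. Member i uses client port
# 2379+i*1000 and peer port 2380+i*1000 so the containers don't collide.
CLUSTER="m0=http://127.0.0.1:2380,m1=http://127.0.0.1:3380,m2=http://127.0.0.1:4380"
for i in 0 1 2; do
  c=$((2379 + i * 1000)); p=$((2380 + i * 1000))
  podman run -d --name etcd-$i --network host quay.io/coreos/etcd:v3.4.9 \
    etcd --name m$i \
      --listen-client-urls http://127.0.0.1:$c \
      --advertise-client-urls http://127.0.0.1:$c \
      --listen-peer-urls http://127.0.0.1:$p \
      --initial-advertise-peer-urls http://127.0.0.1:$p \
      --initial-cluster "$CLUSTER"
done
# Losing the one host still loses everything; the quorum here only
# satisfies software that insists on three members.
```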
A
It's post-Summit; we can fix this. But I was hoping that we would fix this with either this single-node cluster install process or CRC, and then announce the rerouting of this page or something. And I'm happy to have you take a look here; I think everybody can see this. If you see broken links, this is where this page lives; let me throw that in the chat, if you haven't done so already.
G
If possible, I just want to revisit something Neal said a few minutes ago about, you know, deploying OpenShift to, like, a fully containerized installation. This is something that I'm really curious about, and I've been looking at it from the perspective of the Cluster API work that we do, because there is a way to make Cluster API look at a provider that's, like, a container-based provider, and I'm curious if there is... like, there's a lot of work.
E
For OpenShift, theoretically, the possibility is that you kind of create a container from an Ignition file.
C
Single-node cluster is a topic that will follow us along a little bit more, because there are a few different proposals in OpenShift upstream going on right now for how to solve that, and we don't really know how that's going to end up. And of course we'll have to follow whatever upstream decides eventually, so whatever we do right now may change in the future. But I think it's definitely good to get that working on the 4.4 code already.
A
That would be a great thing, in my humble opinion. So let's see if we can get that going in the next couple of weeks, and don't worry too much, Charro, about your grammar or things. And then, if we can have a list of any variations on themes that we have to do to make it work, like I found on other platforms.
F
I did actually have one more comment on one thing I forgot to mention: building the single-node cluster using the CRC things that Praveen has done. The FCOS instances that it spins up for both the bootstrap and the master node by default only have a gig of space allocated for sysroot, which is grossly undersized to fit everything that goes on a single-node cluster. So I actually had to modify the terraform config and then build a custom installer; I modified the code for the terraform in the installer to create a 34- or 32-gigabyte disk, and then it successfully ran. Doing it the way that I'm doing it with the UPI install stuff, I didn't have to do that, because I'm building the libvirt VM and telling it, you know, how big a disk it has. I don't know if anybody here has any insight into how, with the IPI method that CRC is using, it knows how big to make its sysroot.
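A sketch of the kind of change being described; the resource and size attribute come from the terraform libvirt provider (size is in bytes), but treat the exact file and resource names in the installer tree as approximate:

```sh
# In the installer's libvirt terraform, raising the volume to 32 GiB
# looked roughly like this:
#
#   resource "libvirt_volume" "master" {
#     ...
#     size = 34359738368   # 32 GiB instead of the undersized default
#   }
#
# then rebuild the installer with libvirt support compiled in:
cd installer
TAGS=libvirt hack/build.sh
```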
A
Yeah. Once we update ourselves... all right, the beta is available, but there's still a blocker on OpenStack, just going through the list here, and so that still is true. And we haven't done anything around documenting how an OKD 4 release is built yet, and that was one of the things that we wanted to do. But I think getting...
C
This is where all these CI jobs in the openshift organization live, and that's also where OKD builds come from; they then get promoted to beta releases from there at the moment. And then we could also have documentation about rebuilding everything on your own somehow, but maybe we should already link to the release repository, because you can just, you know, check out the files there and all the jobs and dig through that. It's not super easy to see what lives where, but you can.
A
I think, rather than asking people individually to dig in, we just need some sort of short documentation about what all the pieces and parts are. But personally, I'd rather get this single-node stuff done in the next coming weeks, yeah.
A
Is there anything else we should be covering? I know I was gonna mention that KubeCon went virtual on us, so we're not gonna host an OKD working group meeting physically at that place at that time, but we may do something in the background. I'll try and figure out what possibilities there are when I meet with the CNCF folks later this week; maybe they'll make a chatroom available there for us to blather on about wonderful OKD stuff.
E
There are several talks which are related to OKD, but not directly. For instance, the cgroups v2 talk from Giuseppe is what we're planning to have, you know... Once we rebase to Kubernetes 1.19, I'll find a few more slightly related to OKD as well.
A
I think I've got a talk scheduled on Friday to talk about the KubeVirt stuff with The New Stack, so if there are things that we want to promote or something, I can sneak it into that conversation as well and get the word out. Is there anything else that we should be talking about today? We've got 10 more minutes left. I'm not going to show anybody my COVID haircut that I let an 18-year-old kid cut, but I'm not going to be on video for a few days.
C
So, just to check: the next meeting should be two weeks from now. I think we shifted the cadence by one week because of the virtual Summit, at least that was my understanding.
C
Yeah, that's fine! So let's do it next week, same time, and then two weeks after that again. Cool. So, by next week, just to answer Giuseppe's question about the procedure that leads to GA: I've written up an enhancement proposal, the OKD enhancement, that I will put up in the enhancements repository in OpenShift very shortly, and that will sort of be the living document where everybody from the engineering and architecture side in OpenShift will chime in to sort of nail down the GA definition of OKD.
C
Of course, we will have our recommendations in there, and yeah, that should be up tomorrow. I just want to get some code ready first, so I can just say, you know, it's not a lot of work, it's already done, just merge this PR and we're there. But I will post a link to that in the OpenShift dev channel and all the Slack channels where we're usually communicating.
A
Charro, is there any way possible that by next Tuesday you could have that write-up added into the README.md? That would be great, and then we could look at that and just focus on getting a single-node installation onto okd.io, updated over the next week. That would be my priority list for everybody. So if you can, make a pull request against the okd.io website if you see anything wrong or have suggestions, or just throw it in the issues on okd.io, and I'll try to adjust and address them.