From YouTube: OKD Working Group 2021 02 16 Full Meeting Recording
A
So, yeah — welcome, everybody, to the OKD working group meeting this week. We've had some really good progress on the okd.io site that we'll talk about a little bit later, but I'd like to have Vadim talk about the latest release, and maybe take over the screen share as well and walk through the open issues too. So if you want to take it away, Vadim, that would be great.
B
Sure, certainly — not sure if I can do the screen sharing, but [I can] in voice. Let me — so, I usually create a tracking issue where we list all the issues we fix and some unresolved issues we're hitting. The biggest problem fixed in this release was payloads that couldn't be mirrored.
B
That
was
the
largest,
probably
problem
fixed.
Now
we
also
picked
up
quite
a
few
ocp
fixes.
Most
notably
tana's
component
has
been
using
way
too
much
memory
that
was
resolved
and
we
also
have
a
payload
with
sudo
and
kernel
cv
fixes
coming
from
fedora.
B
I
think
the
most
problematic
parts
remaining
here
are:
oh
wait.
We
also
fixed,
finally,
the
installation
on
vsphere
over
then
openstack,
at
least
it
passes
an
rci
that
was
the
remaining
parts
we
needed
to
fix
in
systemd
years
old.
The
fixes,
of
course
we
need
an
additional
confirmation
on
that,
because
we
still
don't
trust
ci
entirely.
B
Image
should
be
properly
mirrorable
yep,
oh
okay,
great!
We
cannot
do
this
retroactively.
We
cannot
change
the
previous
releases,
unfortunately,
because
that
would
mess
up
the
manifestations
and
we
would
have
to
do
the
whole
signing
again
and
we
would
effectively
have
to
release
a
new
payload
for
the
previous
version,
but
maybe
it's
all.
Oh
sorry,.
C
[We] have a delay — maybe you remember that I wrote a few scripts, and it works. I beautified them, and currently I'm upgrading an air-gapped cluster in my company — not my production one — but I only had to disable the repositories, and I used my scripts to mirror and fix the images on Quay.
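The mirror-and-fix workflow described here is usually built around `oc adm release mirror`. A minimal sketch, with a placeholder local registry and an illustrative (not real) release tag; the command is echoed as a dry run, since a real run needs credentials and network access to both registries:

```shell
# Placeholder values -- substitute your own mirror registry and an actual
# OKD release tag; this tag is hypothetical, not a real release.
LOCAL_REGISTRY="registry.example.com:5000/okd"
OKD_RELEASE="quay.io/openshift/okd:4.6.0-0.okd-hypothetical"

# Mirror the release payload and rewrite its image references into the
# local registry (dry run: the command is printed, not executed).
echo oc adm release mirror \
  --from="${OKD_RELEASE}" \
  --to="${LOCAL_REGISTRY}" \
  --to-release-image="${LOCAL_REGISTRY}:4.6.0"
```

To actually run it, drop the `echo` and supply a pull secret with push access to the local registry.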
B
Sure, that's acceptable — as in, all of your manifest hashes are different, so you have to override the signature check, and it's not the latest stable release, so it's definitely not recommended to be installed anyway, right. But going ahead, we probably won't have to change a lot, and things should be relatively simple. In any case, we now have a way to fix it manually, so that's the way to go.
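Overriding the signature check for a mirrored, digest-pinned payload can be sketched like this. The digest is a placeholder, and `--force` skips exactly the verification being warned about here, so this belongs only on a cluster you can afford to break; shown as a dry run:

```shell
# Placeholder digest -- a real one comes out of the mirroring step.
MIRRORED_RELEASE="registry.example.com:5000/okd@sha256:0000000000000000000000000000000000000000000000000000000000000000"

# Dry run: --force bypasses signature verification, and
# --allow-explicit-upgrade permits jumping to an image outside the
# normal update graph.
echo oc adm upgrade \
  --allow-explicit-upgrade \
  --force \
  --to-image="${MIRRORED_RELEASE}"
```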
B
I haven't closed that issue, because I want to make sure that our build farms are constantly producing valid images, but we can count it as half-fixed, basically. The most [notable] unresolved issues are probably two very long-standing problems with OVN and openshift-sdn. Various people are commenting there, and I think we're mixing several bugs together. So if anybody understands SDNs — or anything else — both of those GitHub issues should have Bugzilla counterparts, and we should be commenting there and providing the necessary information.
C
Vadim
you
mentioned
yesterday
that
there
is
a
network
manager
problem
with
current
release.
This
is
fixed
or.
B
Network
manager
in
fedora
has
been
updated
and
due
to
our
due
to
the
way
we
deliver
packages,
so
we
deliver
dora
from
it
stable
and
we
have
to
add
it.
The
matching
version
of
the
network
manager
obs
on
four
six
upgrades
is
noticeable,
but
it
breaks
4.5
to
4.6
upgrade.
B
I
will.
Hopefully,
we
will
release
another
four
six
table
this
week,
maybe
next
week
with
the
fix
included
but
direct
four
foot,
five
to
four
to
six
upgrades
not
yet
possible
for
this
release.
Hopefully
that
will
be
fixed
next
time
and
speaking
of
the
future,
we
are
preparing
4.7
release
candidate,
not
yet
unstable,
but
something
would
be
great
to
to
look
into
it's
not
yet
done,
because
we
need
to
fix
the
sh
authentication
problem
in
mco.
First,
otherwise,
installations
would
be
undebugable.
You
would
not
be
able
to.
B
So
once
that
resolved,
we
would
have
an
out
of
out
of
channel
basically
for
seven
release
candidate
deployed
and
hopefully,
by
the
time
most
cp
switches
releases
for
seven
stable
will
go
247
as
well.
B
Yeah,
that's,
I
think,
pretty
much.
All
I've
got
for
now.
A
So I know I've been talking with Joseph a lot, because he's been doing a lot of work on the okd.io site for us — so everybody should take a look at that. But one of the issues — and I know Joseph's been struggling with this too — is that, because of the fast cadence of the 4.6 releases (which I hilariously think is amazing, to say the word "fast" after all the delays getting to 4.0), [we want] to see about getting a little bit more stability.

A
I think he's having some issues with the releases, because each time there's a release — and maybe Joseph can say this better than I can — you know, how are we getting better? How can we get better stability in each of these releases, so that they don't break people as they try to update? Maybe, Joseph, if you want to add in a few more words here, because I know this was a big concern of yours.
C
Yes,
there
is
no
big
big
news,
but
I
think
okd,
where
openshift
brings
so
much
features
with
it.
Yes,
that
you
don't
have
to
care
about
the
host
system,
that's
one
of
the
top
features
for
us,
and
if
exactly
that
is
a
problem
in
okd,
I
think
it's
not
yeah.
C
We
should
get
a
workaround
for
that
test.
More
spread
the
tests
among
the
community
members,
if
possible,
I
don't
know
because
in
4
5
upgrading
was
so
was,
was
almost
fun
to
do,
because
it
always
worked,
and
I
it
would
love
to
get
in
this
situation
back.
I
know
that
fedora
had
a
few
problems
that
they
were
not
synchronized
with
okd.
B
4.7 is much smaller and already in review. So — I forgot to mention the most important news ever: our enhancement has been merged, so we're now officially part of OpenShift, and folks will start reviewing our [work] for inclusion in 4.8.
B
What
why
we
suffer
so
many
instability,
fedora
cores
33,
definitely
has
has
added
some
more
to
that.
But
the
main
reason
is
we
like
tests,
as
in
we
have
ci
runs
for
overheard
openstack,
and
I
don't
know
if
I
could
trust
them,
because
they
show
that
things
are
passing.
I
don't
see
bug
reports
after
a
couple
of
weeks.
We
probably
should
be
able
to
to
just
to
trust
them
that
they
are
so
valid,
and
this
is
the
configuration
our
users
are
using.
B
As
for
vsphere,
we
have
plenty
of
exteriors
and
we
get
results
there.
So
I
trust
this
here
ci,
because
it's
pretty
much
close
to
what
we're
seeing
as
for
the
cadence
openshift,
does
releases
weekly
on
four
streams,
three
and
then
in
the
good
days,
and
we
promote
in
like
two
or
three
days
from
candidate
to
stable.
So
that's
like
four
times
faster
than
no
kitty
and
instability
and
stability,
on
the
other
hand,
doesn't
come
just
because
it's
just
been
lying
there
and
nobody
used
it
and
after
a
couple
of
weeks
it's
ready.
B
It
comes
after
the
bugs
are
being
fixed.
So
if
we
delay
the
cadence,
we
will
have
bugs
coming
from
ocp
being
fixed
and
stable
much
longer.
Like
thanos
thing.
If
we
had
weekly
releases,
we
would
have
it
fixed
sooner.
But
if
we
didn't,
we
would
say
I'm
sorry,
we
have
a
guardians
of
one
month,
so
you
all
have
to
wait
or
use
nightlys,
so
testing
nightlys.
More
often.
A
Sorry
guys,
because
we
had
a
bit
of
noise
going
on
everybody
again,
just
self-mute,
if
you're
not
talking
so
one
of
the
things
that
we
could
do,
I
I
think
is
get
better
documentation
around
what
it
takes
to
test
on
the
different
platform,
because
I
know
joseph
you
were
talking
about
testing
on
azure,
because.
C
Yeah — so, because... you have a delay, and I hear myself — I think all these developer features of OpenShift and OKD are mostly used on premises, because there is the source code, and on premises you have vSphere. Normally... things may change, but nowadays I think vSphere is the most commonly used platform on premises.
B
We've added vSphere tests for almost all critical OKD components, and once we're totally sure that we trust them... because right now a couple of conformance tests are failing, and we need to find out if it's random noise, the noisy neighbors, or some OKD instability — very likely it's just some noise — and we need to figure that out before we make it a blocking job and are sure of it. Luckily, John was [around] the last two weeks; that was perfect.
B
John has been able to test quite a lot of changes very, very rapidly; that was incredibly fruitful. I'm a bit worried about other platforms and the whole bare-metal UPI thing, because there is no recipe which fits everyone — meaning almost every bare-metal UPI bug is unactionable, because it depends on the local setup.
D
But I think that is actually — let me just jump in here very quickly — I think that is actually the one issue with most UPI setups, not just bare metal but also vSphere, and also the other platforms, because UPI is just that: the user provides the infrastructure. We can't really have one piece of code account for all the possibilities there. So I think it would be much, much easier for us if people were to go to the IPI installation, because we know what we're getting there.
D
Obviously,
we
have
folks
that
have
vsphere
systems
that
are
too
old.
I
think
neil.
That
was
the
problem
with
your
company
and
other
folks
that
just
don't
have
that
infrastructure,
so
that's
not
feasible
for
everybody,
but
as
a
recommendation
I
think
yeah
it
should
be
ipi,
because
we
can.
C
Yeah — I can speak for my company: we also must use UPI, because we have an external load balancer and lots of specialties.

C
But I think if you can manage to just define a core system — a UPI setup that always is tested — then you have to ensure that your load balancer works and so on; you have to take care of that alone, and DNS and so on.
D
We still have one incompatibility issue with the vSphere UPI test flow that we currently have for OCP: it actually still uses the ifcfg files for defining the network config, and that is not supported in Fedora CoreOS. I actually have a PR open — on the installer repository or the release repository, one of those; I think it's on the installer — to change that, and then we should be able to run the UPI test for OKD as well. We should probably just ping [them].
C
This is great. Do you think it's possible that we can run your tests on our local setups, or is it too complicated to set that up as a test environment — at least to the point that a running cluster is here and I throw manifests at it? ... I understand.
B
Sorry — that's very easy. We run a subset of the Kubernetes conformance tests, and at this stage we don't look too deep into why they are failing. Honestly, I'm mostly concerned about install, because if some important part breaks during install, it most likely will not let it finish at all. When it comes to conformance tests, they usually verify the Kubernetes parts — the API server can respond, and so on. All of that is shared with OCP anyway, and it's really hard [to break].
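The conformance subset mentioned here can be run with the `openshift-tests` binary shipped in the release payload. A sketch with a hypothetical release tag, echoed as a dry run, since a real run needs a live cluster's `KUBECONFIG`:

```shell
# Hypothetical release tag; dry run (commands are printed, not executed).
RELEASE_IMAGE="quay.io/openshift/okd:4.6.0-0.okd-hypothetical"

# Pull the openshift-tests binary out of the release payload:
echo oc adm release extract --command=openshift-tests "${RELEASE_IMAGE}"

# Run just the Kubernetes conformance suite against whatever cluster
# the current KUBECONFIG points at:
echo openshift-tests run kubernetes/conformance
```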
B
We have tests for upgrade, which is effectively running "oc adm upgrade", but as a Golang application, because we need to watch for disruptions of the API server. Verifying that on nightlies, and giving us fast feedback on whether we broke something — or some important fix landed in OCP and we need to do a release now — that would benefit OKD most, I think.
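The disruption-watching idea — the real OKD check is a Go application — can be illustrated with a simple polling loop. The API endpoint below is an assumption; against an unreachable host, the probes just report the API as unavailable:

```shell
# probe_api URL COUNT: poll URL/readyz COUNT times and report each probe.
# A toy stand-in for the Go-based disruption watcher described above; it
# degrades gracefully when the endpoint is unreachable.
probe_api() {
  url="$1"
  count="$2"
  i=1
  while [ "$i" -le "$count" ]; do
    if curl -ks --max-time 2 "${url}/readyz" 2>/dev/null | grep -q '^ok$'; then
      echo "probe ${i}: api ok"
    else
      echo "probe ${i}: api unavailable"
    fi
    i=$((i + 1))
  done
}

# Hypothetical endpoint -- point this at your own cluster's API server.
probe_api "https://api.example.okd.local:6443" 3
```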
C
So you mean, if we could find an automated setup that maybe installs OKD, upgrades it with nightlies, and that's it? Or do you run tests on the installation? No?
B
I think [we can] also make that a setup — we have it in CI, we might want to extend it, and since our enhancement is merged, we now have full rights to do that. The thing is, we cannot trust CI if it does not match what our community sends bugs about.

B
So if you have some throwaway cluster, you would use your own configuration — the way you want it to look — and we would ensure that our CI results are actually valid and have the same failures as an actual user setup, because without establishing that trust, all that CI is useless.
B
Say,
for
instance,
that
sdn
issue
we
are
passing
from
release
to
release
it
was
originally
reported
for
gcp,
but
folks
with
bare
metal
jumped
in
and
also
said.
I
also
have
this.
We
contacted
sdn
folks
and
they
said
it's
gcp
specific
setup.
Some
health
checks
were
not
set
correctly.
It
was
fixed
later
in
installer,
and
now
I
don't
know
what
to
do.
Should
we
close
it
because
it
was
reported
originally
for
gcp
and
apparently
fixed,
or
it's
actually
a
long-standing
bug
somewhere
else
and
gcp
just
shows
it.
A
So what is the next step here? Because we talk about this a lot — about trying to get the community to step in and do some of this testing on a regular basis for each of the releases. And Jamie — I don't know if you want to speak up — but he's written some automated setup for UPI and vSphere. Is this—

A
Is it helpful if someone like Joseph, or Neil, or Jamie, or Bruce sets up a testing pipeline just for their platform, and every time we do a release does a nightly run of the testing? Is that something that we should be aspiring to?
A
So — and Joseph and I have been... I'm just going to be honest: Joseph and I have been talking a lot on the side about how we do the okd.io site upgrade, and what if Jamie gets his automated setup and UPI stuff documented and available?
A
Would it help if we did sort of — I have talked often about hosting sort of a hackathon for building out the operators and stuff, but instead, if we did a hackathon in which the morning was a walkthrough of this automated setup for testing UPI or whatever and how to do it, and then the rest of the afternoon was sort of coaching sessions — everybody could have their breakout room and be trying to do it on their system, whatever that is, so that they could get some—

A
—you know, one-on-one help if they were crashing or burning or had questions. Because I can set up a date somewhere out there in the next month or so to do that, and ask everybody who's interested in doing these sorts of nightly setups, or [testing] each release cycle, and we could just get that going, so that we had those tests. So: is that something people would like to see happen — Neil, Bruce, Joseph, community members, non-Red Hatters?
A
Would you use that if, between Jamie and I, we got a date together that worked for everybody and hosted a morning session — Jamie maybe explains things, and Vadim and Christian were available — and then in the afternoon, maybe a two-hour session after that explanation, with breakout rooms where people could come and coach you if you were having problems?
F
Ah, that's a generally really nice idea, and it would help make OKD and OpenShift appear a lot less daunting to people.
A
Yeah — and, as Bruce is saying in the chat, we need to get the documentation done first. And again, next week we'll have another doc section, because we were so successful with it last week, on the Tuesday. So maybe each week we can work through a different set of docs that we need to get done.

A
That's what I was doing, and I can use either BlueJeans — which everybody obviously can use here; we'd use the Primetime version, and we can have breakout rooms in it — or set up a Hopin, where we have a main stage and a bunch of pseudo-tracks, and figure that out.
A
I'm thinking that's — because we talk a good game about it, and I think now we're at the tipping point where we really need to get the stability and these testing cycles in, so that it's not always on Vadim, Christian, and the other folks to do this. And I would really love to get those 63 bugs — or issues — that are out there down to something more reasonable sooner rather than later. That would be very helpful. So I will work with you, Jamie. Jamie?
F
Yep — I will be back in action.
A
So that's my goal — and, you know, all of you non-Red Hatters who are here: I'll do some sort of calendar thing and find a date that works for everybody, and, Jamie, once we have the docs done — it probably will be a weekend day, I'm thinking, just because then everybody won't have the distractions of their work, if that's okay with folks. But I'd really like to get through this issue, get this done, and get this set up.
C
A day — one day where we tell about our environments and set up test environments, ideally a common one — this would be great, yeah. Because this is a black box to me: if OKD is running, it runs very, very smooth, very stable, but sometimes you are searching for the missing link here, and I'm always pestering Vadim — sometimes in the night — to get one hint, and I think we can help each other much more.
A
And that would be great. And then anyone who doesn't want to work on that — we can hang out in another chat room and work on documentation, or something even more fun, during the day. So, yeah — and, Josh, I might tap you to give me a hand with that as well, because, you know, that would be—

A
That would be fun. And Josh is still working without sound — or maybe I muted him. If you guys don't know, Josh is on the open source team here at Red Hat, in the CTO office — so Mr. Kubernetes these days, among other things.
A
So
it's
wonderful
to
have
you
here,
even
if
you're
not
fully
on
camera
yeah
thanks
that
that
would
be
great,
so
yeah,
so
we
we've
beaten
that
horse
and
he's
not
dead
yet
so
we'll
get
there.
I
was
gonna
reiterate
and
I
put
the
tweet
in
there.
It
is
on
thursday
our
session
at
devconf
cz
so,
and
I
think,
vadim
and
and
christian
you're
on
tap
with
me
to
do
that.
It's
5
30
central
european
team
time.
A
So
just
a
reminder
and
I'll
send
a
note
out
to
the
mailing
list
shortly
reminding
everybody
who
wants
to
come
and
if
you
see
my
tweet
and
retweet
it
and
you're
going
to
come,
then
we'll
get
more
people
at
the
party.
So
that
would
be
wonderful
all
right.
Let's
see
what
else
is
on
our
agenda
today.
Did
I
promise
anybody
of
like
five
minutes
of
fame
to
talk
about
something?
I
know
I
every
once
in
a
while.
I'm
I
mess
up.
A
Oh, yes — thank you. I'm going to stop sharing again and let Joseph drive, if you would like to.
A
Okay — okd.io. So, based on last Tuesday's working group on docs, we've made some changes — "we" being Joseph and I — and just put them into place this morning. We tried to simplify the navigation here: What is OKD, Installation, Documentation, Community — there's still more work to be done — and the FAQ. What I'd like everybody to do is to test that. We did put the Surgeon General's warning for 3.11 in.
A
Thank
you
very
much
and
if
you're
still
looking
for
okd,
it
takes
you
to
this
page,
which
is
looks
very
similar
to
what
it
looked
like
before.
So
try
to
to
keep
people
from
from
from
using
311.
If
we
can
help
and
then
there's
the
okd
section
we
do
and-
and
oh
that
looks
very
nice,
I
like
see
what
you
did
there.
We
took
out
the
video
that
was
the
road
map
video
here
for
now
and
put
in
the
the
impression
slider.
A
I
would
like
to
ask
that
we
update
the
what's
on
youtube
for
the
okd
for
update,
so
because
we
need
it
one
for
kukan,
may
and
red
hat
summit,
and
so
I
will
tap
probably
vadim
and
charo
and
and
christian
to
set
up
a
time
to
re-record
a
little
video
for
here,
as
well
as
the
slider
and
work
through
updating
some
of
this
verbiage
here.
So
if
you
have
comments
on
the
verbiage,
we've
stuff,
this
guy's
floating
over
I'll
have
to
fix
that.
C
You must press Ctrl+F5 — it's in the cache. Okay.

C
Oh, the jumping images have gone — they all have the same size now.
A
Okay, there we go. So we've changed this a little bit here — there's still some more tweaking to do on this — but we've tried to clean up this section so that people just come straight to the community. There's a little section on contribution that we're going to try to give a little more verbiage over here.

A
But the structure is pretty much here — and I have, for some reason, lost all of my images here, so I think I probably did something in the build — but there should be images here that click through to the different projects, and these two pieces are going to be merged.
A
Yeah — so we've rejigged this structure, and now it's about rejigging some of the content as well. Some other sites, I've noticed, have — around this end-user section — a section in the GitHub repo where folks can add their names if they are using OpenShift, or using OKD, or using the open source project, and I'd like to do that rather than relying on metadata tags here, so that people can add themselves as using OKD.
A
So
I'm
going
to
work
through
that
in
the
in
the
github
repo
on
the
community
side
and
pull
it
from
there
instead.
But
this
is
the
structure
and
the
one
other
there's
everything
now
looking
better
looking
better
the
one
other
thing
that
we
were
going
to
add
and
maybe
joseph
is
to
add
a
blog
yes
and
the
one
trepidation
I
have
about
the
blog.
A
Is
I'm
a
big
opponent
of
documenting
by
blogging,
so
I
will
try
and
coach
people
when
things
look
like
they
should
be
pieces
of
documentation
that
we
need
to
turn
them
into
real
documentation
and
maintain
them
and
speaking
of
documentation.
A
When
you
select
the
version,
it
says
the
latest
and
311
is
there.
So
I'm
thinking
we
need
to
do
an
ask
of
the
docs
team
to
have
like
a
four
listed
below
here
so
that
people
here,
because
the
latest
takes
you
to
the
latest
version
here
of
four,
but
it
doesn't
reference
four
in
here,
so
there's
a
little
jig
that
we
have
to
do
for
that
to
look
better
too.
C
You know, the one thing still missing is — I think we need a central place, ideally on okd.io, where we can write things like migration hints. As an example: for people that are sitting on 4.5 and want to migrate to 4.6, there are a few easy steps they must do, and then they are on 4.6 — but [today] you have to search through several issues and be very brave to do that.

C
I think a little bit more documentation in a central place — maybe only with links to issues — would be great. I'm missing that a lot.
G
Sorry — should we also call out that the openshift/okd GitHub repo exists? I know in the beginning we were trying to push that as the place to get OKD-specific documentation, whereas the actual doc site was just adapted from the [OCP docs] — just like a reference.
B
No, I don't think so. I mean, okd.io has a link to both the official documentation and our GitHub repo, where we can easily push some things like Joseph suggested — blog updates, or some kind of a microblog with fresh changes — and there should be a GitHub icon which leads you to openshift/okd.
C
We could also use the OKD GitHub repo, because GitHub has these GitHub Pages; we could use this. Then you have to write — I think — okay, no, not "okd": "openshift". That's not good, because you need the organization name — yourname.github.io — and then you have a page where you can set up a blog, a microblog.
B
We
have
github
pages,
enabled
on
open
shift
organization
and
github.
This
is
essentially
probably
we
would
have
to
jump
through
a
few
hoops
if
we
really
want
this
so.
A
Like in the repo where OKD lives, then, to try and jump through those hoops — to be quite honest, I will try, but...

A
Yeah — so, if people don't know, the Commons GitHub — OKD and Project Quay and a few others — lives over here with okd.io; they live here with us, with the "-cs" after it. And that way I don't have to jump through those hoops with the engineering team.
A
That's where we're at — so: feedback? Thoughts?
A
I think we've covered this off. Joseph, I'm sorry if I shortchanged you on what was the most valuable place to test — I think we covered that with vSphere over bare metal and AWS, and that's going to ease that. And the other piece: looking at the agenda here, we've hit everything that was on the agenda. So, is there anything else that Vadim, Christian, or anyone else wants to bring up today? And I'll stop sharing my screen now.
D
I can tease some of the things that are going to come to OKD in the coming months, but I think I want to first hear from Neil and [unclear] about their enhancement proposal for multi-arch [clusters].
F
Y'all see my screen? Yep. All right — so [unclear] and I had been talking about this for a few weeks or so now, and we started writing this yesterday and then realized we have no idea what we're doing, which — eh. We were talking—
F
We talked, a few weeks back, about the idea of being able to use OKD with mixed architectures in the cluster, where you optimize by having most of your things be one architecture and then some of the things be another — or an even balance, or whatever. In this case specifically, we were interested in the idea of having aarch64 for most of the nodes and then x86 where it is wanted or needed, or vice versa — you know, cost-optimization and performance-optimization reasons — and we looked around and didn't see [anything].

F
Exactly — so we kind of started trying to write this, and then it turned out we have no idea what we're doing. So I think, [unclear], you had some questions in particular — better questions than I did — about what we're supposed to do here with this enhancement-proposal stuff, because I have no idea what we're doing.
G
Yeah — like, there was — oh, Christian, I don't know, go ahead first — no? Yeah. There's a lot of stuff that's very specific to the workflow — like, if you scroll down now, there are [sections] like "what are the risks and mitigations" and "what are the design details" — and I think, for a proposal like this, it sort of fell to me, looking at examples of other enhancements, that for something like this it would need to be rather more involved than either of us really has the knowledge to speak to.
D
Yeah — so I think the best way to do this is to just put something up, and then, during the review, fill in the missing parts as they are reviewed. I don't think it has to be perfect from the beginning. The one thing — where you were just explaining how you were thinking about this, kind of having an ARM cluster and then adding x86 nodes to it—

D
—I would start from the status quo, where the standard cluster would be x86 and you want to add a node with a different architecture to that. But I would formulate it in an agnostic way, where you just say: we want to add worker nodes that have a different CPU architecture to the cluster, yeah.
F
This is specifically [about the fact] that running an OpenShift cluster is too freaking expensive. One of the things — you know, I did a back-of-the-napkin analysis, so nothing particularly concrete or useful to put out there — but, like, running most of the nodes as aarch64, and then only having things that absolutely need to be x86 using that — for example, if they're virtualization nodes or whatnot, or if they're edge nodes or something like that — cuts the cost by more than 60 percent.
D
I know — I don't doubt that. I do think that is a different goal, though, cutting the costs. So I think this has to be agnostic, so it can be used — so, in the end, you can essentially say "I'm going to install my ARM cluster and then add a few x86 or Power nodes to it", or whatever. But we don't yet even have ARM here.

D
This is actually one of the pieces I was going to follow up with: the multi-arch effort is going on, and I don't have any specific dates yet, but we are going to be starting an ARM effort soon, and that effort is going to be OKD-first.
D
I think that is great, and I think that work will go hand in hand with this proposal — but I don't think you should... because I think you want too much here when you say you want this to be ARM-based master nodes, which isn't really—

D
—at least, it shouldn't be the concern of this proposal. This should be just, like, heterogeneous — heterogeneous cluster architecture, where some of the worker nodes have a different architecture, whereas the normal setup is homogeneous: master and workers share the same architecture. And this should be agnostic enough that you can say: whatever architecture we have for the master nodes, we can still add a machine config pool, essentially, and machine sets of worker nodes of a different architecture. So I think, for the implementation details here—
D
—this is mostly something where the MCO and the Machine API Operator will have to be adapted to account for this, yeah. Okay — but I don't think you should be saying "we want this to run on ARM masters", because that is kind of the ARM multi-arch effort, which, yeah, I think is much broader than what this enhancement proposal should be.
G
Yeah, and that makes total sense. We initially did have it as, like, you know: be able to run workers of multiple different architectures paired to the same set of masters. But then Neil and I were doing a little bit of digging, and it seems like Red Hat CoreOS has ARM-based builds, so we were wondering if OCP already had ARM versions of all the necessary containers ready to go — so you could run, like, a whole ARM cluster — because we couldn't find any information one way or the other.
D
So, yeah — no, we only have the base operating system — RHCOS, and we also have FCOS builds — but we don't have any containers yet, right.

D
And so they're being built, but I don't think they're being distributed anywhere yet — we don't upload them to any of the clouds yet. That is probably a thing that will be done quite soon, because this effort is now kind of starting to roll. This will be one of the first things we'll do: because we already build those images, we'll start with distributing them as soon as we kind of start the actual work on it.
B
Yeah — I have to mention: if you're planning to submit this enhancement — which is very ambitious, I have to say — it needs to be renamed from "multi-arch" to "mixed arch", because multi-arch means you can have an x86_64 OpenShift cluster and an arm64 one, or whatever — but these are separate [clusters] that don't touch each other.
B
What
you're
proposing
is
to
have
mixed
large
workers
or
notes
in
the
cluster,
meaning
all
the
workload
this
year
needs
to
be
able
to
be
available
in
two
both
versions
and
work
with
each
other
perfectly,
which
is
a
very
complex
task.
F
I have to say — wait, what? Why does that distinction exist? Because multi-arch has traditionally meant you're in a heterogeneous-architecture environment. That's been the case with OpenStack; that's been the case even with VMware, when they introduced ARM ESXi and things like that. So why is this different with OpenShift? Because even upstream Kubernetes calls multi-arch clusters being mixed architectures in the same cluster — multiple architectures in the same cluster.
B
That means some [nodes] are actually ARM and some are x86_64, and these are built from the same source, but they're still different — meaning they might show up bugs. And in OpenShift we don't allow customizing this: we cannot set two different pull specs; these have to point to one single image.
B
This is fixed by introducing manifest lists, and oc adm release has to understand those manifest lists properly, build them, and embed them. For every single image it has to include two copies, so your payload grows from five gigs to at least ten if you just want to support two architectures. There are tons and tons of problems like that.
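The manifest lists mentioned here correspond, in current registries, to an OCI image index: one per-architecture manifest referenced under a single pull spec. A rough sketch of the shape, with placeholder digests and sizes:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:aaaa...",
      "size": 1234,
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:bbbb...",
      "size": 1234,
      "platform": { "architecture": "arm64", "os": "linux" }
    }
  ]
}
```

The registry serves whichever per-architecture manifest matches the pulling node, which is how a single pull spec can cover both architectures, and also why the payload roughly doubles for each architecture added.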
F
That's fine; I just wanted to know. I understand the complexity of actually doing this. My question was mainly about the terminology, because upstream Kubernetes, OpenStack, Linux, all of these refer to the idea of heterogeneous environments as multi-arch rather than mixed arch. So I went with that term because that's what everyone else was using.
A
I have a quick question for Christian about the folks working on the multi-arch work internally at Red Hat. Is anyone from that team currently coming to the OKD working group, or is that something we could ask someone to come and give us an update on?
D
Currently
joining
these
meetings,
so
yeah
I'll
I'll
get
somebody.
I
I'm
let's
talk
about
this
after
diane
quickly
and
I
think
we'll
we'll
get
some
updates
from
from
them
soon
all
right.
Okay,
I'm
here
from
the
power
side,
obviously,
but
I
thought.
D
What I wanted to say about the enhancement, Neil: I think it actually could be very useful to have an Arm worker and then use that to build Arm containers on it, and the same goes for Power.
D
So we wouldn't need an entire Power or Arm cluster for building these images. That's a possibility, yeah; we could reuse our CI system and build Arm containers, or multi-arch containers, with that same system, possibly. That would be great to have.
F
That was some of the motivation for me as well, because at work we have an Open Build Service instance for building packages and such, and it already does this kind of mixed-architecture scheduling. We started exploring Arm-based hardware for some workloads and it turned out to be really nice, but we have no actual way of building containers and applications on that architecture platform right now. So I wanted to try to get ahead of that and see if we can get things in place for it.
D
So yeah, I would definitely suggest you just open that PR as a work-in-progress draft and tag all the architects on it, me, everybody you know, and we'll work on it as a group and also get the attention and input of the architects.
A
So we're at the end of the hour with two minutes to spare, and Joseph has asked: can you spend two minutes talking about the future of OKD? Which I think is a much longer conversation than two minutes.
D
I can actually try to make it short, 30 seconds. I think the biggest part is that multi-arch effort; that's the most interesting part for us. Then, obviously, we have the OpenShift roadmap out, and all of that is going to be included in OKD. And now, specific to OKD:
D
We've been missing the IPI bare-metal platform in OKD; we haven't been supporting it, but we've made some progress towards supporting it. There's one PR missing that has to go in, and then we can start building Ironic, so OpenShift IPI bare-metal installs with Ironic. I think that might be another thing folks could actually use: if you have machines that support IPMI or another supported BMC interface, you'll be able to install your own cluster on bare-metal hardware.
So we hope to merge that last outstanding PR soon, and then builds will start first in 4.8, but we'll try to backport that at least to 4.7, which is also going to be released soon. So yeah, that's kind of my update.
A
That was good, in less than two minutes; well done. Then I'm going to use the last minute because, as Josh asked, Red Hat Summit and KubeCon are coming, so we need to refresh the demos that we have and do an updated "What's OKD". I think I had a 60-second one, or just under 60.
A
You
know
two
minute
one,
so
I'm
just
gonna
tap
on
charo
and
vadim
and
christian
to
to
help
me
with
that
and
if
folks
have
short
demos
of
things
that
they'd
like
to
reach
out
to
me
and-
and
I
can
always
host
a
session
like
this
blue
jeans
and
record
it
and
edit
it
into
something
that
we
can
use
in
the
in
the
demo
as
well.
So
we
can
have
a
thread
on
that
in
the
mailing
list
on
google
groups.
A
So
that's
that's
my
ask
and
I
think
josh
was
here
because
I
think
he
wants
them
all
by
friday
because
that's
when
they
always
want
them
six
months
ahead
of
every
event.
So
so
I
will.
I
will
work
on
the
what
is
opd
and
getting
the
site
content
up
to
snuff
so
that
it
syncs
and
we
can
talk
about
how
to
get
a
roadmap
and
what
is
okd
two
minute.
Video
done
sometime.
A
Two
minutes
isn't
hard
to
do.
It's
just
finding
the
one
hour
to
record
the
two
minutes
in
everyone's
schedule.
That's
hard
to
do
so
I'll
work
on
that
with
you
guys
in
slack
and
then
jamie
go
to
the
beach
with
your
kid
go
back
on
vacation.
I
don't
know
where
you
are,
but
I
wish
I
was
there
because
the
beach
would
be
nice
right
now.
A
So,
thank
you
all
a
really
good
conversation
today
and
I
will
work
with
jamie
and
everybody
else
to
find
that
time
for
a
testing
hackathon
and
getting
the
documentation
on
that.
We
will
have
another
meeting
next
week
on
docs.
So
if
you're
interested
in
that,
please
come
and
any
feedback
on
okd.io
or
typos
or
things
that
should
be
there.
A
That
aren't
send
a
note
and
maybe
put
docs
in
the
title
and
the
mailing
list
on
the
google
group
or
send
them
directly
to
me
or
get
there
get
it
to
me
somehow
all
right,
any
final
words
vadim
christian,
all
right
good!
Thank
you.
D
Let me just thank Vadim for his work on OKD. It's been a lot lately, I think especially on his shoulders, so I really, hugely appreciate your work, and I think we all do. So yeah, thank you.
A
Yeah,
I
think
this.
This
is
where
we
can
push
and
try
and
take
some
of
it
off
the
shoulders
in
the
next
little.
While
now
that
we've
gotten
through
this,
because
4.7
is
coming
soon
and
it
going
to
be
another
game
changer.
So
looking
forward
to
working
with
you
all
on
that
all
right,
vadim
you're,
the
best
take
care
everybody.