From YouTube: OKD Working Group Meeting 11-22-2022
Description
The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group includes the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group produces supporting materials and best practices for end-users and provides guidance and coordination for CNCF projects working within the SIG's scope.
https://okd.io
B
Thank you for coming to the OKD Working Group meeting for November 22nd, 2022. This is our first meeting on Zoom. Folks can check out the updated Fedora calendar, and there's also a new page on GitHub in the okd-working-group repo, as part of the OKD project. Something was also sent out over the Google discussion group, and next time I can even put something in the chat in the Slack channel.
B
So let's go ahead and jump into the meeting, because we do have someone who wants to present. Please put your name in the attendees section so that we know whether you were here, in case there's any information that needs to get to you. Now, agenda review: let's take 30 seconds to look over the agenda, and please let us know if there's anything you would like added, changed, or removed.
B
Any modifications, anything we've missed? All right, folks seem happy with that, so let's move forward and start with our OKD release and CI/CD updates from Christian, Vadim, Luigi, etc. Take it away, Christian.
C
Sure, hey everybody. Vadim cut a new OKD on FCOS release this weekend. That's about all I can say right now; I haven't had a chance to actually look at the feedback. I think there are a couple of comments, let me quickly check.
C
Yeah, I think there is still one issue with the Machine Config Operator not initializing, and Vadim has a PR open on the installer to fix that. It's a race condition, so if you hit it upon installing, you might get lucky if you just try again. Hopefully we'll figure that out and it will be fixed in the next release. That's it for the release news.
C
I can also touch on the SCOS updates from the internal OKD engineering group. Last week we had hack week, meaning everybody was able to work on things they were interested in, and we had a couple of folks internally joining or rejoining the OKD effort, along with Zach.
C
They worked on a Tekton pipeline to automate our release procedure, which is going to be immensely helpful going forward for both the FCOS releases and the SCOS releases, and that is really awesome. That's in preparation for the internal CFE (Customer Focused Engineering) team that Luigi leads, which will take over release engineering in the long term, and there will be, I think, a stricter release cadence.
C
Every three weeks, at least for the OKD on SCOS releases. We'll still have to sync with Vadim a bit on whether he wants to hand his releases over to the CFE team as well; if he does, then the cadence would also be three weeks for OKD on FCOS. We plan to do a sprintly release, that is, every three weeks.
C
It's important to note that for the official SCOS release stream, the actual payload will continue to be built in Prow, but we are also working on the OKD release pipeline, which will enable anybody to build their own custom release. If we really get that into good shape, then we might also change some things internally and make use of it. That is more of a long-term plan, though.
C
So there are a lot of interesting things coming. I've also been working on getting the OKD CoreOS pipeline released to Tekton Hub; it'll probably take a couple more days for that to get released, but once we have it, we'll really have a discoverable entry point for these pipelines, and hopefully we'll get more people using them and giving feedback on how to improve them. So, a lot of changes coming there.
B
Excellent. Luigi, did you want to add anything in terms of what was discussed here, anything you want to throw on there?
C
I actually forgot to talk about what you worked on last week, Luigi, which is also really exciting. Did you want to talk about the Build-to-Shipwright conversion?
C
So essentially, our payload pipeline currently uses OpenShift Builds to do the container builds of each component in the payload.
C
That is obviously an OpenShift API, so you require OpenShift to build OpenShift, which is a bit of a chicken-and-egg problem. There is also essentially a successor to this Builds API, the Shipwright Builds API. Shipwright is an operator, so you can install it on any Kubernetes cluster, and that really allows you to do these builds on a kind cluster or any other Kubernetes cluster.
C
So we've been working on converting those Builds v1 resources to the Shipwright Builds API, and to help with that, Luigi has crafted a tool to automatically convert, or translate, those resources into the Shipwright API. Shipwright isn't quite there yet, though.
C
So we will be able to use Shipwright for the builds, and then there's another thing, a stretch goal, that either we were going to work on ourselves or the Shipwright folks would (they have actually picked up that work): using Shipwright Builds as custom tasks within Tekton pipelines. That means we'll eventually be able to run the entire OKD payload pipeline (which currently isn't really a pipeline, because it's only build tasks that aren't Tekton tasks) within a Tekton pipeline too.
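As a rough illustration of the conversion just described, a single payload component's container build might map onto a Shipwright Build resource along these lines. This is a hedged sketch: the component name, repository URL, and output image are placeholders, and the resource shape follows Shipwright's v1alpha1 Build API, not OKD's actual converted resources.

```shell
# Hypothetical sketch of one converted payload-component build as a
# Shipwright Build resource; all names and URLs are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
  name: example-payload-component
spec:
  source:
    url: https://github.com/example/payload-component
  dockerfile: Dockerfile
  strategy:
    name: buildah
    kind: ClusterBuildStrategy
  output:
    image: registry.example.com/okd/payload-component:latest
EOF
```

Because Shipwright runs as an operator on plain Kubernetes, the same resource works on a kind cluster, which is what removes the chicken-and-egg dependency on OpenShift mentioned above.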
C
So really, the goal here is to move everything into Tekton. And I forgot another thing, which we'll hear about in this meeting as well from Robert and Marco (Marco is possibly here too): the cert tool. Luigi has actually worked on a pipeline to automate running the certification tool against a release that you just built.
C
So you essentially build a release, deploy it somewhere, and then run the certification tool on that cluster to run end-to-end and conformance tests, which should give you a really good signal on whether your build was a success. And that, again, can be run by anybody; it doesn't need Prow to run the end-to-end tests, which is the way we're currently doing it. So that's another thing loosening the grip Prow has on us and making things more approachable. Really cool.
D
No problem, I had a lot of fun doing it. Just a quick note on the provider certification tool: you guys have done an outstanding job. It was really easy to implement, wrap it up in a Dockerfile, and use it in Tekton. So kudos to you guys, well done.
B
Excellent. Let's now move on to our Fedora CoreOS updates with Timothy.
G
Hey, this should be quite quick. The first item is that we're moving Fedora CoreOS to Fedora 37 now that it has been released; I think we released it two weeks ago, or maybe last week. Testing is now based on Fedora 37, stable will follow in a week or so, and there are no major issues so far.
G
Please give it a test. It will probably end up in some OKD release at some point; maybe not 4.12, but later. All right.
G
The second point I had is an example of what you can do with CoreOS layering. I'm not sure whether layering is fully enabled in OKD yet; it should be in 4.12, we'll see the status of things there. But an example of what will be possible in the future with CoreOS layering is, for instance, how to get ZFS support on top of Fedora CoreOS. So if you want ZFS support in your OKD cluster, that could be possible with this.
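As a hedged sketch of how that layering workflow looks in general (the registry, image names, and package below are placeholders, not an official OKD procedure): you build a derived OS image from a Containerfile and rebase a host onto it.

```shell
# Sketch of CoreOS layering: derive a custom OS image from Fedora CoreOS
# and rebase a host onto it. All names here are illustrative.
cat > Containerfile <<'EOF'
FROM quay.io/fedora/fedora-coreos:stable
# Layer extra packages (e.g. an out-of-tree filesystem driver) on top
RUN rpm-ostree install some-extra-package && ostree container commit
EOF

podman build -t registry.example.com/custom/fcos:stable .
podman push registry.example.com/custom/fcos:stable

# On the host (in a cluster this would go through a MachineConfig):
sudo rpm-ostree rebase ostree-unverified-registry:registry.example.com/custom/fcos:stable
```
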
H
Okay, so we had our meeting last week; let me get the notes. A few things: we've got a volunteer to actually write up the single-node OpenShift process, so Dwayne's going to do that with Luigi.
H
We asked for views on what would be the best design, and I think everybody chose a different design, so I'm currently making mock-ups and will then ask for votes on which of the mock-ups we want to go forward with. Then we had quite an interesting discussion around video tutorials. Some of the posts in the discussion forum are people saying they get stuck, and then posting links from OpenShift 4.6 or 4.8, and we think that's largely due to them following out-of-date YouTube tutorials.
H
So we are looking to work out what's out there: what are the popular YouTube tutorials for the various install options, the different platforms, UPI, IPI, etc. Then we want to work out whether we can get people to actually update them, or whether we, as a community, should be proactively managing a set of tutorials that are always up to date. The first part of that is just finding out what's there.
H
So if you go to the HackMD for the community, we have set up a discussion thread (I'll post that into the chat for this meeting once I finish) where we're trying to harvest resources. If you know of a good resource, a good YouTube video, a good blog, a good website where people have taken the time to actually write a good install video or set of instructions, we're just trying to find the best resources out there, so we can then work out...
H
What's
the
next
step.
But
we
think
that's
one
of
the
problems
that
people
are
hitting
they're
getting
old
access
to
Old
tutorials
and
then
things
have
moved
on.
Things
have
changed
or
they're
trying
to
actually
install
a
4.6
openshift
today,
where
we
obviously
want
them
to
be
updated.
So
we
had
quite
a
lively
discussion
around
that
and
we
also
talked
about
the
community.
Okay,
the
catalog.
So
this
is
do
we
want
to
create
an
okd
catalog?
H
How do we go about doing that? What's the process for it? What's the governance for anything that's in that catalog? We have an issue on the OKD project for it, and the note says Jamie's going to add content for that. I think that's it in terms of what we discussed, unless I'm forgetting anything, Jamie.
B
End of April, okay. Does anyone know, have they closed down submissions yet or not?
B
All right. Well, I think moving forward we're going to want to stay on the ball for that and watch for opportunities for submissions. Just rewinding for folks that aren't aware: Diane is basically, for the most part, done, and Luigi is going to be our Red Hat contact.
B
And at that point, since Diane has essentially stepped away, or is slowly stepping away over the next couple of weeks, we're all going to want to contribute towards filling in the things that she would do, like keeping our eyes out for opportunities for presentations and for collaboration and whatnot. We don't want it all to fall on Luigi's shoulders; Luigi's already doing so much on that side, right? So we're going to have to pick up the slack a little bit, and I'll have more time, hopefully, in the next couple of weeks to start doing some of that stuff. But keep your eyes open, keep your eyes peeled for opportunities. Christian, you have your hand up, yeah?
C
Yeah, I just wanted to ask, now that Diane is stepping away: do we know who the main organizer for OpenShift Commons events is going forward? Because the KubeCon CFP (that is, for the official KubeCon talks) was usually also the opportunity to do talks at an OpenShift Commons, which would typically be a kind of day-zero, pre-KubeCon event.
C
So we should definitely sync up with her, get her to always keep us informed, and have her invite as many of us as possible to the Commons. I know I've met Karina, and she's awesome, so I think we should definitely get you, Jamie, and Karina in touch, if you haven't met.
C
And
yeah
feel
free
to
if
you
you
know,
want
to
make
this
a
broader
meeting
feel
free
to
to
add
me
there
as
well
otherwise
yeah
she's,
she's,
super
nice
and
I
think
it
would
be
good
if,
if
you
knew
each
other
and
just
to
make
her
aware
that
we're
always
interested.
Obviously
in
this
working
group
to
to
join
the
comments,
conferences,
yeah.
B
It's funny, I wanted to ask the exact same question: who would be the responsible contact person now for the OpenShift Commons gathering? So do we have an email address for Karina somewhere, or what would be the appropriate way?
B
Or actually, let's just invite her here to the next one. Jack, I will share the email with you, but let's also put it here: the email is going in the chat. There we go.
B
On
that
topic,
before
we
move
on
to
our
presentation.
F
Hello, let me try to share my screen here.
F
So today we will quickly run through what OPCT, the OpenShift Provider Certification Tool, is, and I will share some information about what works right now. We are building OPCT, the OpenShift Provider Certification Tool. It is a tool to automate, and make easier, running the end-to-end conformance tests:
F
Code
confirms
and
openshift
confirms
in
clusters
that
it
was
designed
to
to
test
clusters
that
is
not
fully
integrated
with
openshift,
for
example,
if
you're
a
community
Builder
would
like
to
install,
pin
shift
in
your
provider
and
if
you
have
a
hardware
and
you're
selling
the
hardware,
if
you
would
like
to
install
openshift
and
certify
it,
you
can
do
it
using
agnostic
installation
platform
known
well,
no
as
a
platform
known,
and
then
you
can
run
these
two
through
understand.
F
If
it
will
pass
in
how
conformance
Suite
it
will
keep
the
focus
on
the
tests
end-to-end
tests
for
confirms,
which
means
that
we
are
running
cubicon,
Farms
Suites.
This
is
official
official
suite
for
search,
kubernetes
certification
and
it
will
also
run
open
shift
conformance
Suite.
It
will
run
more
than
3000
tests,
conformance
tests
for
openshift
and
will
provide
the
feedback.
F
Basically,
the
two
is
split
in
four
components:
it's
a
CLI.
We
have
a
CLI,
we
have
a
plugins
end-to-end
test
itself
and
documentation.
F
The
CLI
is
built
on
top
on
top
of
Sono
boy,
so
you
are
not
building
the
two
from
scratch.
We
are
reusing,
Community
tools
like
snowboard
that
is
also
used
by
kubernetes,
confirms
and
basically
the
CLI
we
will
implement.
The
specific
nuances
for
openshift
should
provide
the
certification
program.
F
So
we
will
automate
the
execution
of
the
end-to-end
test.
We
will
filter
some
results
and
provide
a
better
feedback
and
we
will
Implement
house
some
fresh
flights
tests
to
avoid
running
the
end-to-end
tests
in
closer.
That
is
not
healthy,
for
example.
So
the
plugins
for
someone
that
do
not
does
not
know
sonobi
so
no
buy
is
a
very
extensive
tool.
It
tells
what
it
runs
by
the
food,
it's
on
a
boy
and
the
kubernetes
in
Twin
test,
but
it's
possible
to
extend
some
by
implementing
two
games.
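For context, a Sonobuoy plugin is declared as a small manifest describing a job that runs your tests and drops results where the aggregator can collect them. This is a hedged, minimal sketch (the image, command, and plugin name are placeholders, and the results contract should be checked against the Sonobuoy docs); it is not OPCT's actual plugin definition.

```shell
# Minimal sketch of a custom Sonobuoy plugin manifest; all names are
# illustrative. The container writes its results under /tmp/sonobuoy/results
# and signals completion via the "done" file, per the plugin contract.
cat > my-plugin.yaml <<'EOF'
sonobuoy-config:
  driver: Job
  plugin-name: example-e2e
  result-format: junit
spec:
  name: plugin
  image: registry.example.com/example/e2e-runner:latest
  command: ["/bin/sh", "-c",
    "run-tests; cp junit.xml /tmp/sonobuoy/results/; echo /tmp/sonobuoy/results/junit.xml > /tmp/sonobuoy/results/done"]
EOF

# Schedule it on the target cluster alongside (or instead of) the defaults:
sonobuoy run --plugin my-plugin.yaml --wait
```
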
F
The end-to-end tests themselves are defined in openshift/origin; the origin repo is where the end-to-end tests for OpenShift live. If you would like to know more, there is a very good README there that explains what openshift-tests does, how to run a specific test, and so on. In the provider certification tool, we automate the execution of the openshift-tests utility using Sonobuoy. Basically, this is the...
F
Certification
tool,
so
sorry
we
are
implementing
custom
plugins
and
we
are
providing
a
documentation
for
the
users
that
single
documentation
will
guide
you
through
how
to
download
the
two.
How
to
what
is
the
prehexit
to
implement?
What?
What
is
the
topology
of
the
closet
that
you
should
create,
install
openshift
and
run
the
tool
we
are
running
by
the
dedicated
environment?
It
will
be
changed
before
the
next
release.
F
It
is
basically
will
isolate
the
certification
to
running
inside
the
cluster
in
a
specific
node,
applying
things
to
avoid
description
in
this
education
environment.
But
if
you
would
like
to
try
just
follow
this
user
documentation
and
it
provides
very
detailed
how
you
can
download
it
check
if
the
cluster
is
in.
F
Let
me
share
more
details
about
the
tool
itself
under
the
hood,
to
explain
when
you
use
him
so
on
this
here
lie
as
I
commented.
The
tool
is
basically
splitting
in
Project
CLI,
including
the
CLI
extends
sonobi
leave.
So
we
are
using
all
the
features
functional
by,
but
implementing
some
specific.
F
Needs
for
certification
pool
as
I
call
it,
and
you
are
embedding
the
Manifest
for
the
plugin,
so
we
are
avoiding
to
the
users
who
send
a
lot
of
CLI
comments
when
trying
to
customize
some
buy,
for
example.
So
that's
the
main
goal
to
create
a
CLI
instead
of
using
directory.
So
nobody,
let
me
know-
and
the
plugins
itself
that
I
comment
that
we
implemented
the
plugin
for
openshift
test
utility.
F
We
will
extract
the
openshift
test
utility
available
on
the
running
cluster,
so
in
the
runtime
it
will
extract
the
openshift
as
utility
and
run
the
end-to-end
test
for
information
how
the
end-to-end
test
is
defined
inside
that
binary.
For
that
reason,
we
are
doing
that
and
with
that
strategy
we
can
ship
the
operations
provide
certification
to
without,
regarding
the
release
of
the
the
open
shift,
so
we
can
run
the
two
in
any
version
of
openshifts
because
we
are
reusing
the
existing
and
we
are
extracting
in
the
runtime,
the
operation
status
utility
yeah.
F
This
is
the
under
the
blocks
that
we,
the
previous
open
shift,
subscription
tool
have,
and
this
is
the
basic
activity.
Sorry
yeah,
I!
Guess
it's
better
image!
This
is
the
building
blocks
of
the
Arctic.
Perfect
operations
provide
certification
tool
as
the
user.
They
will
run
the
CLI
in
their
machine
in
your
own
machine.
You can download it and run it from your machine against the target cluster. When you run it, the Sonobuoy aggregator (the Sonobuoy server) will be created, and Sonobuoy will schedule...
F
...the plugins that are available and created by OPCT. Those plugins run as jobs in the cluster, and the specific plugin that we implemented for openshift-tests will start running the end-to-end tests as normal. If you ran openshift-tests directly (you can download it and run it yourself), the plugin does the same thing automatically; there is no difference in that flow.
F
When all the conformance tests have finished, the aggregator will aggregate all the data and provide a single tarball file to the user. That's the main goal, and the main automation for which we are using Sonobuoy: we can define all the logic we would like to run on the target cluster in the CLI, and Sonobuoy orchestrates everything that needs to run. After that, the user collects the artifact and sends it to Red Hat to be evaluated.
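The end-to-end flow just described can be sketched roughly as follows. This is hedged: the binary name and subcommands reflect the tool's documentation at the time and may have changed, so treat them as illustrative rather than authoritative.

```shell
# Sketch of the OPCT flow: schedule the Sonobuoy aggregator and plugin
# jobs, wait for the conformance runs, then pull the result tarball.
./openshift-provider-cert run --watch        # create aggregator + plugin jobs
./openshift-provider-cert status             # check plugin progress
./openshift-provider-cert retrieve ./        # download the aggregated tarball
./openshift-provider-cert results <tarball>  # summarize pass/fail locally
./openshift-provider-cert destroy            # clean up the test environment
```
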
F
The goal is to evaluate the certification: whether the cluster is conformant or not with what OpenShift supports, and conformant with Kubernetes. But, as Christian and Luigi are doing, you can also process that data yourself and print it in a pipeline, failing the pipeline if the run is not conformant. The future idea is to send the results automatically into the support flow, because, as the name says, this is a tool created for the certification program.
F
So
we
are
creating
some
specific
information
for
certification.
We
need
to
process
this
data
in
our
back
end
and
provide
a
better
feedback
to
our
partners,
who
would
like
to
certify
the
year
cluster,
but
it
can
be
used
by
community
on
so
run.
The
cluster
collect
it
and
evaluate
it
without
contacting
Red
Hat
without
sending
the
data
to
our
head.
C
Thank you, Marco. I do have one question. First of all, for the community it's probably less interesting to provide that data to Red Hat for a certification; really, the tool is intended for new platform providers that aren't currently supported, so they can self-certify.
C
If we are using this kind of tool to run end-to-end tests on release payloads that we build ourselves in one of the pipelines, can it also be used with a specific platform set, not just platform "none"? Like, could we build a release payload, deploy it to AWS, and then run the cert tool there, or is it only supported if platform "none" is set?
F
The
two
there
is
no
large
key
of
the
platform
integration,
so
the
the
openshift
test
utility
will
have
all
the
logic
of
which
into
industrial.
So
there
is
no
nothing
specific.
If
you
are
running
a
class
in
AWS,
the
two
will
run.
If
you
are
running
platform
now,
the
two
will
run
so
that
is
not
no
locking.
J
Hi Marco, I have a question. I've tried to run version 0.56 of the tool on our OKD clusters, and we actually deploy our clusters without a container registry, because all the clusters share a central registry. It looks like the tool has a pre-flight check early on where it essentially says it can't find a registry and then bombs out. Is there a way to run this tool with an external registry?
E
I
think
we
feel
yeah
I
think
we
fell
out
on
that,
because
the
end-to-end
tests
would
fail
actually
I'm
trying
to
remember
this.
We
set
that
we
set
that
up
a
long
time
ago.
Maybe
it
was
because
the
toll
wouldn't
start
properly.
F
Remember
specifically,
there
is
here
in
this
image
the
two
will
get
the
open
shifts
test
from
the
running
cluster,
so
we
need
to
extract
from
the
internal
resist
that
we
should
start
to.
Take
yeah
currently
is
not
possible
to
run
in
an
external
registry,
but
yeah,
it's
not
possible.
It
could
be
done.
A
Yep
yeah
so
Leroy
part
of
this
as
well
like
when
we
were
designing
the
certification
tool.
We
were
designing
it
also
to
certify
like
partner
clusters
that
might
be
trying
to
create,
like
deployments
of
openshift,
that
we
could
then
support,
and
so
the
official
way
that
we
deploy
openshift.
A
My impression is that you would be able to use the Image Registry Operator to have it point to your external registry, and then these tests should work transparently, because in the end, what we'd like to do is enable partners to have the flexibility to change how that registry works. But all the necessary internal changes aren't quite there yet, so that's part of what you're seeing with the way the cert tool is running.
C
To
very
quickly
add
to
what
Mike
just
said:
there
is
the
install
flexibility
effort
and
an
adjacent
effort
called
composable
openshift,
which
will
actually
and
I'm
not
sure
whether
it's
targeted
at
4.13
or
for
14
right
now.
C
That
is
going
to
allow
you
to
really
pick
and
choose
from
from
all
the
car
components
that
are
currently
essentially
always
in
your
install
mandatory.
You
can't
not
choose
them,
you'll
be
able
to
essentially
unselect
things,
possibly
even
the
yeah,
the
internal
registry
saying
you
don't
want
the
internal
registry
to
be
deployed,
or
you
don't
want
the
builds,
API
or
whatever
to
be
installed,
and
so
that'll
help,
obviously,
with
shrinking
the
the
footprint
for
openshift,
installs
and
yeah.
C
C
J
B
B
I think what we'll do is, maybe, if I can just throw in a quick question: how long do these tests run on a typical cluster? And can you pick and choose different, I don't know, areas of the tests? Maybe just to give some context on my background: we're building our own OKD clusters with various customizations and stuff layered on top, and then we're doing integration tests, and at the moment all of those integration tests are written from scratch, by hand, by us. So it would be interesting for us (we don't really want or need to be certified) to take maybe some bits that are relevant for us and run these tests. But if it's one thing that runs for six hours, then it's maybe not super suitable.
E
Here, I'll link an actual test run. Excuse me, I had a call this morning; sorry, I'm trying to put it in the chat. We run the tool periodically in Prow, in OpenShift CI, and I put a link in the chat.
F
And if you would like to run a specific subset of tests, you can download the openshift-tests utility directly and run its commands. So you can run, for example, machine tests or storage tests by applying a regex, and run openshift-tests directly to get the feedback, because the certification tool is more tied to the certification program. But if you would like to run specific tests directly, you can do that. I can show them.
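The regex-filtered subset run described here follows the pattern documented for the openshift-tests binary in the origin repo; the [sig-storage] filter and file names below are illustrative choices, not the only option.

```shell
# Sketch: list the conformance test names, filter to one area by regex,
# and run only that subset with the extracted openshift-tests binary.
openshift-tests run openshift/conformance/parallel --dry-run \
  | grep -E '\[sig-storage\]' > storage-tests.txt
openshift-tests run -f storage-tests.txt
```
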
I
All right, that's already super interesting, super exciting news actually. Yeah.
D
And also one other thing, sorry, one other thing is that you can also target a specific node by labeling the node. The documentation is very good at showing you how to do that, and then for an SNO type of thing, you could label the node, taint the node, and run the tests against that single node, which is very helpful.
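That node-targeting step would look roughly like this; the label and taint key is an assumption based on the tool's dedicated-node setup, so check the current documentation for the exact key before use.

```shell
# Sketch: dedicate one node to the test run by labeling and tainting it.
# The key "node-role.kubernetes.io/tests" is illustrative.
oc label node <node-name> node-role.kubernetes.io/tests=""
oc adm taint node <node-name> node-role.kubernetes.io/tests="":NoSchedule
```
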
B
All right, we have about 10 minutes left. Is there anything else that folks in the community want to talk about before we end the meeting?
C
So, I don't think Alessandra is here today, but there is definitely some work in progress there, and we are hoping we can share it soon. We have definitely already submitted a talk for a conference in February with that as a topic, so until then we'd better get it done. So yeah, just as a kind of heads-up: Arm support is not too far away now.
C
Both
really
so
apparently
with
four
point
12,
which
I
think
we're
going
to
release
in
in
January
I,
think
it'll
still
be
kind
of
separate
payloads
for
each
for
each
architecture,
but
with
4.13
and
going
forward
and
we'll
have
that
earlier
in
okd,
we'll
have
truly
multi-arch
manifested
manifest
listed
release
payloads.
C
It might make sense to still provide single-arch payloads, or at least provide a way to download only single-arch payloads, so you don't have to mirror all the images you're not using. But yeah, the goal is definitely to make that manifest-listed and support multiple arches in one payload. There are, I think, still a couple of blockers for that. We are going to be able to build it earlier for OKD than for OCP, though.
C
I'm just seeing "can't wait for mixed x86, Arm, and Windows". I'm not sure about the Windows part, but yeah.
B
As Jack says, hyper-converged infrastructure, exactly. All right, I think we've got everything. Oh, one last thing is that Brian and I have taken control of the Twitter account, but it's not going to do us any good if folks don't retweet the stuff that goes out. There were two tweets, I think yesterday or the day before; please like and retweet that stuff. Twitter is the platform that people are still sort of using right now. Yes, we need a Mastodon, and...
B
Do
and
then
IKEA
style
set
of
directions
will
come
with
it,
I'm
sure.
So
if
we,
if
someone
wants
to
to
bring
that
up
at
the
community
meeting
next
week,
we
can
talk
about
Mastodon.
B
Lots
of
folks
are
going
there,
but
my
joke
is
related
to
the
fact
that
a
lot
of
like
non-technical
folks
are
sort
of
struggling
with
Mastodon.
So
that's
it's
understandable,
like
I,
think
for
people
that
are
used
to
dealing
with
this
stuff
it's
easier,
but
for
folks
that
aren't
like
it's
not
as
easy
yeah,
social.okd
dot,
IO,
exactly
you're
right,
jack
yeah.
B
We
do
still
need
Twitter
until
it's
dead,
so
please
retweet
and
if
there's
something
that
that's
relevant
to
okd,
that
you
want
to
tweet
out,
please
let
Brian
or
myself
know
and
we'll
make
sure
that
it
gets
out
there
and
Luigi
I.
Think
at
this
point,
you're
you're
the
point
person
with
red
hat,
because
Diane
has
sort
of
disappeared.
So
we'll
we'll
be
relying.
D
Yeah, I have every intention of bugging her and saying: listen, give me all your contacts and all your working documents and everything there is, so that I'm not left in the dark. But I'll hook up with her, get some info from her, and be able to continue without any problems, hopefully.
B
All right folks, well, thanks so much for coming to the meeting. Did Zoom work for everybody, other than the passcode issue? In general, though, are folks comfortable with this moving forward? Because I haven't changed all of the calendar invites yet, but if you're good, we can do that. Yeah? Okay, great. All right, thanks so much folks, and we'll talk to you next time.