From YouTube: OKD Working Group Meeting 07-05-2022
Description
The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group includes the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group produces supporting materials and best practices for end-users and provides guidance and coordination for CNCF projects working within the SIG's scope.
https://okd.io
B
Welcome, folks. This is the OKD Working Group meeting for July 5th, 2022. I've dropped the agenda in the chat; we'll drop it one more time for folks that have just come in, so feel free to take 30 seconds to look it over. Let me know if there's anything that's been missed that you would like added, moved around, or anything like that.
D
So the agenda mentions CentOS Stream CoreOS updates from Timothy, but he's not here, so I don't know if somebody else can speak to that or if we should remove it from the agenda. Yeah.
E
We probably need a little bit more time, because there are — and Timothy is probably the best person to talk about this — but there are a couple of blockers on the CoreOS side. He's now leading that, which is great; it's great to have him in the working group as well. But yeah, we might need a little bit more time for that. Not sure if we make it — we certainly won't make mid-July; maybe end of July, but probably August.
B
Right, all right then — let's move into OKD release updates.
E
So again, I don't really have a lot to say about this. I was out last week, but there is a new release from the 24th of June. I haven't been following through on any issues with it, though, so if you have any, please make sure to file them. And I just want to throw the link in here: if you're interested in following what we're doing internally, that is the kanban board for our ART issues. Yeah, and that's it for me.
E
I think before, we usually did the release just before the OKD Working Group meeting, and now we do it in the alternating week, so it's again nine days old. Maybe we can change that in the future, because at least for FCOS we'll move to a sprintly release, which is every three weeks — currently it's every two weeks — and we might align with the FCOS release cadence in the future and just add a release every three weeks. I don't think I want to ask Vadim to change his cadence at the moment, though; for now this will just be the way it is, yeah.
C
And one quick comment: thank you for doing the Dublin event. The video is up — it's unlisted, but public — and I will share the link with everybody here in a few seconds. We could also put up a little blog post with that video of the talk embedded in it. So thank you both for doing that in Dublin.
E
That was an absolute pleasure, and it was great meeting you, Brian. Really cool, yeah.
B
All right, let's move on now to FCOS updates, which Dusty says there are none of, so we can skip that. Does anybody have any questions for Dusty, actually, or any feedback on FCOS stuff while we're here?
E
Dusty, I have one question, but it might actually be for Timothy. One of the blockers for SCOS is the lack of a couple of RPMs — namely CRI-O and, I think, cri-tools — and I think those are the only ones that aren't actually built from OpenShift code. Oh, one of them might be from the fast datapath repository.
E
Do you have any idea how these RPMs are going to be built for CentOS Stream, or whether anyone can help expedite this? I'd be happy to reach out to the maintainers and help them set those up for CentOS Stream. But if you don't know, that's fine as well.
B
If whoever has the clicky keyboard could mute yourself, that'd be fantastic. Let's move on now to documentation subgroup updates with Brian.
F
Okay, so last week we had quite an interesting discussion. On the technical documentation side we're going to look at doing a couple of tidy-up things. One of them is the guides: we actually want to work out what a guide is, because what we've currently got down as guides are more example setups, so I think we're going to turn those into blog entries, and then we're going to look at what the strategy is for creating guides.
F
Where do we want them? Who do we want to create them? And what should they contain? We'll create a template and have some standard guides in terms of how people onboard. Alongside that, we're talking about the repo move: as soon as we line up the necessary Red Hat resources that can actually do the DNS updates, we will be looking to move the openshift/okd and the okd.io repos into the okd-project GitHub organization, so we're actually moving them outside the openshift repos.
F
That way the community can take greater ownership of those without stepping on Red Hat internal requirements.
F
The other thing we talked about was the technical documentation. I've been trying to get some technical documentation together and keep falling over a number of issues, in places where things aren't quite as clear as they want to be. I noticed there was an item put in from Jack, and yes, we are going to cover that, hopefully as a discussion today. It's really about how we enable the community to be more active, to build and customize, and also the possibility of the community building an OKD operator catalog. We've been talking to Red Hat engineers for several months about this, but it never seems to get to the top of their to-do list, and there are quite a few active community members that want this to happen.
F
So if we uncork the ability to build, we should let the community actually build that catalog with the okd-project Git repo. We had quite a conversation around that, and I want to pick up on those points later in the meeting. I'll stop there — any questions? If not, I'll hand it back to Jamie.
B
A couple of other things. Specifically, some of the posts that were guides and are getting moved to blog posts are the home lab guides from Sri and Vadim. Both of those are individual descriptions of home labs, which weren't really guides, right — they didn't really explain how to follow a process, installation, day two, or anything like that. So those are the ones that are going to get moved to blog posts.
B
Glenn Marcy, who you've probably noticed in the Slack channel, has been doing a lot for SNO, and is going to do a blog post for us describing some basics of getting OKD SNO up. And one other one is coming: someone else is doing a blog post as well, so we're actually going to start building content.
B
That's going to be available as blog posts, and I think it's going to pull in a lot of people. One other thing from the documentation subgroup that wasn't mentioned: Diane did reach out to Red Hat to get details on modifying the MX records of the okd.io website, so that we can actually manage our own email and start creating some email addresses specific to okd.io, which would be helpful for a lot of stuff — like, say, Twitter and things like that.
C
Diane here — I found the hand-raising; thank you for your hints in the chat. Two things. On the MX records: I did make the request to get that changed so that we could have okd.io addresses, and legal is reviewing it. There's precedent — Dusty and everybody asked for the same thing for Fedora — so I don't think it'll be a problem, but before the infrastructure people will do anything they need legal sign-off. Anyway, the request is in. Now, on the hackathon on creating our own operator registry:
C
There's a gentleman at Red Hat, Austin McDonald — some of you may know him; he's kind of the community lead architect for the Operator SDK and Operator Framework — and I asked him if he would be willing to do that jointly: bring in someone to talk about OLM, maybe give a little intro, and hop in on that, and he's totally game. What I would like to do, Brian, is connect the two of you, because you had a hit list of operators you wanted to get done, and he has a good sense of what the setup for the day would need to be. So if the two of you could get together — what I'd like that hackathon to be, once you figure it out, is something that brings the Operator Framework community to the OKD community to help us do that work. I think you had five or six operators that you were keen to get — and you can be selfish here, because you know you're going to help drive this —
C
— then maybe find the people who wrote those operators, at Red Hat or elsewhere, and invite them to come and be the ones in the room helping us hack on that, and just get some engagement there. The other thing I was going to say is: if the OKD CentOS Stream CoreOS work is going to get delayed, maybe we should do this in August. August is everybody's vacation month all over Europe, but we could use it as a hackathon opportunity — after tomorrow, when Brian is talking for us in London at the Gathering, I don't have any events that I have to host and organize.
E
Just a very quick mention: here at Red Hat, in the OpenShift org, we have a hack week — or "shift week" — next week, meaning we'll be given some free time to work on the things we personally want to work on, essentially. So we were thinking this might be an opportunity to motivate Red Hatters to help with the operator building. With SCOS, that's very much with the CoreOS team and we can't really expedite it, but on the operator side we can really do a lot already. So if there's a specific operator you want built, and you know — or know of — the person that maintains it, please reach out to them, ask whether they have time next week, and tell them: come on, you have shift week! And reach out to us too; we can help organize meetings for specific operators.
F
...the okd-project Git repo — and I want it so it builds on OKD, with Tekton pipelines and GitOps, so it's totally outside of Red Hat's internal systems and the community can actually do it on their own clusters.
F
So we need that infrastructure defined and sorted out. Again, that's something I was hoping the hackathon would cover — but if there are people looking for work to do next week in their free time...
E
On the infrastructure we want to use, I think there are two options. We could either use GitHub Actions, and just kind of ride on GitHub's infrastructure for this, or we use the Operate First cloud, which we'll leverage for more things in the future as well — an update graph has been proposed into that instance, for example. That would also be the place where we could potentially deploy an OKD cluster for this working group, which we would then maintain and own ourselves.
E
So yeah, I think that might be a good starting point. Depending on what you want to do, GitHub Actions might be the path of least resistance here, but I think eventually having properly defined Tekton pipelines for this is preferable. So yeah, I think I can help with that.
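To make the Tekton option concrete, here is a minimal sketch of what a community-owned pipeline definition could look like. All names here are placeholders, not the working group's actual pipeline, and the `git-clone` and `buildah` tasks are assumed to come from the Tekton catalog:

```shell
# Minimal, hypothetical Tekton pipeline sketch: clone a repo, build an image.
# The git-clone and buildah task names assume the Tekton catalog tasks.
cat > okd-build-pipeline.yaml <<'EOF'
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: okd-image-build
spec:
  params:
    - name: git-url
      type: string
  tasks:
    - name: clone
      taskRef:
        name: git-clone
      params:
        - name: url
          value: $(params.git-url)
    - name: build
      runAfter: ["clone"]
      taskRef:
        name: buildah
EOF

# The portability point made above: this same file applies to any Kubernetes
# cluster with Tekton installed, e.g.:
#   kubectl apply -f okd-build-pipeline.yaml
grep -c 'taskRef' okd-build-pipeline.yaml   # prints 2 (one per task)
```

Because it is plain Tekton, nothing in the definition ties it to any one hosting provider, which is the vendor-lock-in concern raised later in the meeting.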
C
I'm just going to raise my hand and try to be quick and succinct. Brian, if you can write up that list of operators you were looking for and send it to Christian — I don't think Brian has the ability to target the people who are on the shift week, the hack week, next week. So if you can — and maybe CC Austin McDonald too, Christian — make that connection, because if we use that time, we could get a few of them done in advance.
B
Excellent. I want to keep things moving along, because we do have a guest — actually multiple guests now. I'll hand it over to Brian Cook to talk about the build service and the Red Hat hybrid application cloud. Brian Cook, take it away.
G
Maybe this is related to the previous conversation — I wasn't quite catching the context there — but if I heard right, there might be a third option for you for doing some of these builds. I'm a product manager at Red Hat; primarily, in the past, I have worked on our internal container build infrastructure, and we have put together a team to rebuild pretty much all of it in a way that lets us build secure containerized software in a managed service — for Red Hat to ship to customers, but also in a way that customers can sign up and use that very same infrastructure, that very same managed service, to build their own software. And we have tried to be very, very thoughtful in how we design it, so that it scales from very small projects up to very large ones.
G
We have done a lot of investigation into how the OpenShift build process works today and how we could generalize it to work for things besides OpenShift — and also, one day, onboard OpenShift onto our service. To that effect we have a very specialized controller which handles a lot of this and might make your job a lot easier; in fact, under the hood we implement Tekton pipelines. It basically provisions opinionated Tekton pipelines for you, sent to your Git repo as a pull request; you merge it, and after merging it you're able to modify it. But you're what we call a tenant — so if you folks did this, you would have your own tenant — and there are tenant policies that can put restrictions on how you're able to modify things.
G
You can always modify it however you want; the restriction comes in when you release. For example, if you wanted to do the kinds of things that we do with OpenShift, you have to do all of your builds in a network-disconnected, hermetic environment, right? You could remove that step from your pipeline, and if that policy is in effect for your tenant, the build would still succeed — you could test it, you could run it in a staging environment — but you wouldn't be able to release it, because your tenant policy would prevent that. So our goal is to allow people like yourselves — community folks, or our customers — to use this service to build SLSA level 4 compliant software, to generate provenance for those builds using Chains, and also to write custom policies beyond SLSA, in order to put guardrails around what is allowed to be released.
G
Diane and I were talking about this off to the side a while back, and she mentioned you were looking for a new home to build your software. What we're working on might be a really good fit for you, and it might be a really interesting test for us — a nice challenge to prove that we can scale to the things we've so far only designed for on paper. So that's why I wanted to see where this would go.
C
So if folks have questions about that — I think my primary first ask, on behalf of this group, is: will you have to sign on with a Red Hat ID to interact with it? That's what I remember from this conversation, which happened a couple of months ago — it wasn't something that was totally open. Could you explain that, or maybe it's changed a little bit?
G
It's a managed service, so we would provision you a tenant, like I said, and you would have to sign on with some Red Hat SSO-enabled ID — that's correct. I think that's about the only restriction around it. We'll be using compute from various clouds connected to it, so we'll be able to connect, say, Amazon clusters via kcp in order to provide places for testing and compute, and things like that. The thing that I think is going to help you out the most — knowing what I know about how we build OpenShift — is if you want to run integration tests against these OKD builds, with their 100-plus containers.
F
What actually gets built is a bit of a nightmare for us, because a lot of the images that get used live in repos behind the Red Hat firewall. I think that's where a lot of the community hit problems: we don't know how to create an equivalent build that produces what the internal system has.
G
So this will be completely transparent to you. To be clear about what it will not solve: it won't make the OpenShift builds completely transparent to you, but these builds — your builds — will be completely transparent. We will still have to figure out how to build those images in your pipelines, right? There may still be transformations we have to work out in order to put them into these pipelines, but once they're in there, they're your pipelines; there's nothing hidden from you. I don't want to promise this, but I would say that if it comes down to trying to build OKD on, say, GitHub Actions or the Operate First cloud versus this, we might get more interest from the OpenShift engineering team, because this will be a future build target for them. For them it might be interesting to prove that OKD can build at scale, efficiently and reliably, on this as a precursor to them moving to it — whereas you going to, say, Operate First probably wouldn't be that interesting to them, because they know they would never move the OpenShift build stuff there, right?
C
Yeah — and from that conversation, you had two other small projects ahead of us that you were testing with; I forget exactly which ones they were, but it sounded like early-September-ish you might be freed up to be available to work with us on something around OKD on FCOS.
G
I think it's still possible to start then. Just to be completely transparent with you: you'd be kind of on our bleeding edge, but that might be kind of fun. We are planning to onboard our first customers in August — that's true — and that will be kcp and the Apicurio service registry service. We have support for building Golang — well, I would say we can build anything, but we have these very nice starter pipeline templates for Golang and Java, and we'll follow up with Python and npm right after that. I think some of the npm stuff might be necessary for some of the OpenShift images.
C
So, Jamie, let me make us take a pause here: to me this sounds like a really viable option — more viable than, say, GitHub Actions. We could do it on Operate First, but that seems a little theoretical, and there are not a lot of resources available to us to do it. So this was the third option I wanted to bring to the table, to see if the group would be amenable to doing something with Brian Cook's team around getting a community build service going.
B
Yeah, and I think that — well, let's do a quick straw poll, real quick. Christian, I saw your hand; real quick. I'll just go across my screen. Bruce, how does this sound to you? What are your thoughts?
H
Trying to find my unmute button — no, this sounds really interesting. I guess the boundary is an interesting question, because on one hand you've got rebuilding OKD from scratch, which is technically interesting but in the end doesn't give you any additional capabilities, because it's already being built for us; and then you've got the delta between OKD and OCP. I would like to push beyond just OKD as it currently exists, but whether or not we ever get there doesn't really matter — you've got to get started.
F
I think it sounds like a really good option, especially if this is where Red Hat is going — if we can stay in sync with Red Hat. My primary driver is, as you say, the bits we probably need to figure out no matter which way we go: how we do the Prow stuff that's sort of hidden in the open, and get that figured out. I think we're going to have to do that whichever way we go — whether it's Actions, the Operate First cloud, or this — that's going to be the challenge. But if we're going to use the same pipelines that Red Hat OpenShift will eventually take on, I think that's goodness; let's not diverge if we don't have to.
B
Okay, and Muhammad — what are your thoughts?
B
Muhammad, we can't hear you. Let's try Miko. What do you think?
A
I tend to agree with Brian — I think it'd be really cool for us to set this up. Unfortunately, I think we're going to need to bite the bullet: we're probably going to need community members who build up some Prow experience and understand how to operate it for us. I ultimately agree that we need to set up our own CI infrastructure, and if to begin with that just looks like a copy of what Red Hat is running internally, I think that's okay. But ultimately, if we want to be able to do true experimentation in the OKD community, and really innovate in a direction that's out in front of OCP, I think we need to own enough of the process that we have community members who understand how to do things like: okay, we want to build a fork of OKD that does X, Y, and Z.
B
Okay, great. And Diane, we know sort of how you feel, since you found Alessandro, our guest here. Did you have any thoughts?
I
As Miko was saying, it's challenging to test other ways of building. One concern that I shared in past meetings is about the possibilities of building OKD for Arm, and there's the heterogeneous version that will be shipping in a while. The heterogeneous stuff could be easier with Tekton or even other CI tools; the rest will probably need infrastructure, and that still needs to be achieved.
B
Great, thank you, Alessandro. We have someone, Chuba, who dropped in, but I don't know that we've ever heard anything from Chuba — do you have any thoughts on this question?
B
Okay. Kristoff?
J
It doesn't necessarily need to be a full-blown CI; maybe looking into some of the currently still closed-source — or closed-binary — operators would be an interesting take, to figure out how we can build those ourselves.
E
Yeah, just on what Jack said: it's all open source. I think the problem is that it's hard to reproduce, because the Prow system we currently have is very obscure. It is all in the open, but you have to know where to look, and it's really not easy. And I just want to make sure — I'm all for trying this new build system; I think that sounds really exciting, especially if there's multi-arch support already planned as well.
E
I think we should definitely try it, but I do have the concern that it doesn't improve the situation regarding the kind of vendor lock-in with Red Hat that we currently have — which we might not escape if we move from one very obscure system like Prow to another one that you can't just deploy yourself. Ideally — and this is why we're talking about Tekton so much — you have a Tekton pipeline, and you can deploy that pipeline anywhere, on any Kubernetes cluster. You don't need a Red Hat service; you don't need any special version of Tekton or anything. If we can make it so that this new build service Brian has presented is able to consume something like a standard Tekton pipeline — or something in a format that a normal Tekton installation could also consume — then I see absolutely no reason not to do it. If it's essentially hosted Tekton — Tekton as a service — then we should.
G
Yeah, let me give you a few more details. It is Tekton — specifically, it's OpenShift Pipelines. But the reason it's a managed service: one, there are a lot of pieces to integrate in order to get people where we want to get them. Our goal is for somebody to be able to create SLSA level 4 compliant software in about 15 minutes, from a repo that we can build. It can produce pipeline-level attestations that get stored as signed OCI artifacts, and all the things that are necessary for SLSA level 4 compliance. Wiring all of that together yourself is a lot of work — we had a solution architect try to go do it, and it's crazy. So our long-term goal is that we would have...
G
It will require us to build an operator that can do all of this and deploy it; at that point it would become an open source thing — there would be an open source version of this operator — and then, if you wanted to remove yourselves from our managed service, you could take that operator wherever you wanted, pick up your stuff, and move it. But for now, the managed service is the most expedient way for us to get this available and keep developing it. Just to be as honest as I can: I'm going to guess that for the next two years you're probably stuck with the managed service as we iterate on it. We definitely understand there's a desire for something else — especially since people who want this stuff often want to run in a disconnected environment, so there's a mismatch there, right?
G
We know that, so we want to make this available for people running in air-gapped environments as well — it's just not there yet. You can try it this way, and I think it'll give you a massive head start on where you want to go. Then maybe you like it and you stay with it; maybe you want to run it yourself later, and when that becomes available, you do. But there won't be anything special in the mix here: it's Tekton, and Tekton Results, and Chains, and cosign, and the Sigstore project, and all that stuff.
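For context on what those signed attestations buy a consumer: verifying the SLSA provenance on an image is typically a single cosign invocation, along these lines. The image reference is hypothetical, and the sketch only prints the command rather than running it, since actually verifying needs the cosign CLI, the publisher's public key, and network access:

```shell
# Hypothetical image reference -- substitute a real, signed image.
IMAGE="quay.io/example/okd-component:tag"

# Verify the SLSA provenance attestation that Tekton Chains attached to
# the image (printed, not executed, in this sketch).
cmd="cosign verify-attestation --type slsaprovenance --key cosign.pub ${IMAGE}"
echo "$cmd"
```

The point is that nothing on the consumer side is proprietary: cosign and the attestation formats are stock Sigstore tooling.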
B
All right, I want to be mindful of our other topics and the guests who are going to be discussing them. Brian, what would be our next steps if we wanted to move forward?
G
Next steps, I think, would be for us to gather a list of the people who should get access to an initial workspace when we onboard you. Like Diane said, it'll be a little while — likely at least September — but we would have that list and we'd create a workspace. You folks should pick a subset of images that you might want to use as a test project: a few images that you know how to build, and maybe that you know how to test as well. If you can find ones that can be built, run, and tested without building all of OpenShift, that would be nice — then you have a few images and you can say: okay, this is the scope of how we're going to evaluate whether this is what we want to use, and we can work with you to get those stood up when we're ready.
F
Yeah, a very good question there. One of the things we're looking at quite quickly is some of the missing operators — like the Pipelines operator, like the OCS operator. Could we use those as sub-components? They're fairly self-contained, and we can build them and test them on a standard cluster.
G
That will work, as long as you're not in a super hurry. Our first goal was to deliver non-operator-controlled services, and the reason is that testing operators requires automation that can set up a brand-new cluster, right? We're actually building that automation in Q4, and we'll be testing it in Q4. So if you want to ride along with us while we build it, and test those operators with it — okay, awesome. We're going to be replacing some of the Prow workflow that gets done for OpenShift with provisioning based on HyperShift, in order to save time and avoid using hibernated clusters, which is ultra complicated.
C
Real quick, what I would suggest, Jamie, is that — like the docs group — we create a subgroup of people who want to work on this project, put a note out to the mailing list, and see if there's a center of gravity; and if I have to recruit, say, Tekton and DevOps open-source people to help us, I will. That's what I would suggest: just like docs, another little subgroup of people who are interested in learning about this.
B
I was thinking the exact same thing. Okay, thank you, Brian Cook; we will touch base with you in a couple of weeks and let you know where we're at with our efforts. I very much appreciate you coming. Let's move on now to — did Marco show up? I don't think Marco's here, so we can skip the Ansible topic.
B
Operators — I think we know where we're at with operators, right? Is there anything else we need to touch on there? I think we sort of know where we're at with that. Yes?
H
Yeah, just very quickly: I put a note in the operators bug list, because it's sort of worse than I had thought. When I was looking into the Knative part, there were some repositories, previously unbeknownst to me, that aren't in the normal chain of repositories that Red Hat uses to build some of these things. Anyway, you can have a look at that, Brian.
B
Is there an example in the discussion? Could you put a link to that particular message you posted, with the example, in the meeting minutes, under the discussion for this agenda item?
B
Let's put it in the meeting notes so that it's there. We have nine minutes left, and we do have — okay — Jack, to talk a little bit. Jack actually had "customizing OKD: how to figure out source repositories for images". Go ahead.
J
This nicely ties into the discussion we had before — basically, knowing how OKD is built as a whole. Just today we had an issue where some of our source image builds failed due to a Git update: remote repositories now require a newer Git version, something along those lines, because in April there was the Git CVE, and then some of those things were patched on the server side and others on the client side. Long story short, the source-to-image build failed because of the container that executes those steps. So I was asked to write this blog post about how we customize OKD and what some of the challenges are, and I would say — surprisingly enough for an open source system — that sometimes one of the hardest things is figuring out which image you need to touch. Sometimes it's very easy.
J
Say you want to change something in the console: you look at the console operator, and you know exactly which image you need to touch, because it's called console-operator, and you just go to the repository, which is github.com/openshift/console-operator. Easy. But sometimes you have these cases where you know: okay, I have this image in my cluster that does something, and it is used by some component; now I need to figure out how I can even replace that image.
J
So what we are doing is our own OKD release, but we're just taking the upstream releases and then replacing some of the images. That is done with the oc adm release new command, where you can specify the images that you want to replace. Now, sometimes it's not even trivial to figure out what the name of the image would be, because, for example, for the cluster Ingress Operator it's just cluster-ingress-operator, but then I have another example here.
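[Editor's note: as a sketch of the workflow described here, a custom release that swaps a single image might look like the following. The release tag and the quay.io/example pull specs are illustrative placeholders, not real images.]

```shell
# Sketch (assumed pull specs): build a custom OKD release payload
# that replaces one component image with your own build.
oc adm release new \
  --from-release=quay.io/openshift/okd:4.10.0-0.okd-2022-06-24-212905 \
  --to-image=quay.io/example/okd-release:custom \
  cluster-ingress-operator=quay.io/example/ingress-operator:dev
```

The positional name=pullspec argument is what overrides the tag of that name in the release image stream.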
J
That is the OpenStack machine controllers image, which is actually coming from the cluster-api-provider-openstack repository, or the associated image.
J
Just figuring out those connections is sometimes surprisingly hard, and we had the same thing today with this source-to-image builder image, where it took us a while, longer than it should have, to figure out where this image is coming from. In the end we had to download the image, look at the artifacts that are inside and which image it was built from, and maybe look at some of the metadata that is in there. It's just, unfortunately, sometimes very hard to figure out. And, for example, this source-to-image
J
builder image is actually built from openshift/builder, which, well, I guess if you know it, it makes sense. But you would never look for that, because you would maybe look for something like source-to-image or docker-image-builder, which is actually the name of the image that you need to replace when you're doing the release.
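[Editor's note: one shortcut for the "download the image and look at the metadata" step: OpenShift-built images usually carry build-source labels in their config, so inspecting the config can reveal the source repository and commit. The digest below is a placeholder, and the label names are the conventional OpenShift build labels, not guaranteed on every image.]

```shell
# Sketch: read the build-source labels from an image's config.
# skopeo talks to the registry directly; the pull spec is a placeholder.
skopeo inspect --config \
  docker://quay.io/openshift/okd-content@sha256:PLACEHOLDER \
  | jq '.config.Labels
        | {source: .["io.openshift.build.source-location"],
           commit: .["io.openshift.build.commit.id"]}'
```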
J
So sometimes these kinds of connections, going from "I have a component that I want to change, now I need to figure out which image I need to replace, and then where that image is coming from", are very hard, and it would be great if we could find a way, maybe some documentation, maybe some additional metadata, to make that a bit easier. This is just the topic that I wanted to bring up.
F
Jack, maybe I can help out with a little bit of that, because I'm actually trying to pull this documentation together. On okd.io there's a new section called OKD Development, and that's where I'm trying to pull this content. One of the things that I found out is, if you look at oc adm release, you can actually pass --commit-urls, which shows the actual GitHub commit URL for every component included in a release. So I think that answers part of your question.
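[Editor's note: for reference, these flags live on oc adm release info; the release tag below is illustrative.]

```shell
# Show every image in a release payload with a link to its exact commit.
oc adm release info --commit-urls quay.io/openshift/okd:4.10.0-0.okd-2022-06-24-212905

# Related views mentioned on the okd.io development page:
oc adm release info --commits   quay.io/openshift/okd:4.10.0-0.okd-2022-06-24-212905
oc adm release info --pullspecs quay.io/openshift/okd:4.10.0-0.okd-2022-06-24-212905
```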
F
So, how do you get from what you see on screen to the actual component responsible? I don't think we have an answer for that at the minute, but certainly, once you identify what the image is, you can get the link to the exact Git commit. And again, if you look at that page, the overview of the OKD Development section, at the bottom there's actually commit URLs, commits, and pull specs.
F
So that gives you the exact information in terms of how you identify the image used in a specific build.
J
Yeah, that definitely helps a lot, but maybe we can, as a community, work on that documentation a bit. Because, like I said, for some components it's literally crystal clear, because everything is consistently named and you can trace it back. And of course I cannot expect that we're now going to fix all of the, let's say, legacy or weird naming things that are in there.
J
F
Yeah, I mean, any volunteers that want to help create that technical documentation, you're very welcome to come and join in.
C
Yeah, I think that's what we're just missing: the volunteers. I'm just going to reiterate that. And Jack, if there's anybody in your world that might have some expertise, even if they would just do a talk on it that we could transcribe to get a starting point, that would be great.
B
F
Go right ahead. Okay, so one of the challenges is, once you've worked out what image you want and you want to build it, Prow actually does a replace on the image that's in the registry. So I actually went through every image used in an OKD release, and believe it or not, there are 50 different images specified in repos within the Red Hat internal registry.
F
So, as someone who wants to build, you have to then work out what to replace them with, because you can't build them; they're not accessible outside the firewall. One advantage of getting to know Christian last week was that I now feel like I can bug him with questions, so I've been trying to get Christian to find the source of truth for what's in those images, where the Containerfile is, and again it comes out:
F
there is a Containerfile, but then there's some Prow stuff that goes and changes the Containerfile, so working out what's actually built and what the image uses just seems to me way over-complex. As I said, it's all in the open, but there's the number of hoops you seem to have to jump through. So I'm thinking: can we actually create a standard Containerfile in GitHub?
F
The idea is that OKD will use it to build its images. We want the base image and the builder image; they're the two main ones. Then we basically use those: we have the source for them, we have them built in quay.io/okd (I'm sorry, Diane, it is pronounced "key"), we have it in the repo, and then we just build it, and the community can just use those images. We don't have to go through this torturous process of actually trying to work out what's in them.
J
Yeah, I think that's a good point, especially since so many of the images that get built have a static Go binary in them anyway, at least 80-plus percent, I think. So it's not like you need a crazy complicated base image, and that's in fact exactly what we are doing for the images that we are replacing in the custom OKD release, because most of the time you don't really have a lot of requirements for the environment or for the base image.
J
E
I totally agree with what has been said here. I just want to add: these 50 different images are probably going to be replaced internally by just one or two different ones, because unfortunately the FROM directive in the Dockerfile in each repo isn't the canonical reference; it's being replaced on the fly by the Prow build system.
E
There is a bot that tries to update that reference in the repository by opening PRs, but those are often not merged in time, or just aren't timely. So I do think it's very valuable, yeah.
E
The builder image is essentially just, I think it's UBI or RHEL-based or CentOS Stream-based, with the golang binary in it so it can build the binaries; additional dependencies sometimes have to be installed there as well. And then the binary that is built is put in a minimal container, which is the base image, which doesn't have to have anything in it. It just has to run that binary, so it can be very minimal.
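[Editor's note: the two-stage flow described here maps directly onto a multi-stage Containerfile. A minimal sketch follows, assuming CentOS Stream-based images; the quay.io/example image names and the operator name are placeholders, not an agreed convention.]

```dockerfile
# Stage 1: builder image (base OS + Go toolchain) compiles the binary.
FROM quay.io/example/okd-builder:golang-1.18 AS builder
WORKDIR /go/src/example-operator
COPY . .
RUN go build -o /tmp/example-operator ./cmd/example-operator

# Stage 2: minimal base image that only needs to run the binary.
FROM quay.io/example/okd-base:stream9
COPY --from=builder /tmp/example-operator /usr/bin/example-operator
ENTRYPOINT ["/usr/bin/example-operator"]
```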
E
I know that Mike mentioned in the chat that there is an open version, a freely distributable version of the builder image, which he linked here, the release golang-1.18 image. But again, we don't build that ourselves, and it's very obscure where it is built, because it's all in the ocp-build-data repository in the openshift org, but it's still very obscure; there are different branches, and then that gets taken into our internal system.
E
It goes through the ART build system, gets pushed out to Prow, is used there as a base image, gets transformed; so it is very obscure. And if we could just provide an open definition for a builder image and a base image based on, let's say, CentOS Stream (because that will work everywhere, on FCOS and on OCP, just universally), I think that would be really valuable.
E
Because then, in our own build pipelines, we could just use those as the build and base images, and if we can keep that in sync with the internal one we have, obviously that would be awesome. And yeah, the deep mysteries of the ART team, that seems to be a common theme, because that is kind of a different thing.
E
Within Red Hat, in OpenShift, we mostly deal with Prow, but then we have this other build system, ART, that actually builds our releases and also builds the base images for CI, and it's very obscure to us. I wouldn't even know where to check for that; that's a different team. Just to add that as a little bit of context, and I realize we're already over time.
B
Yeah, let's go ahead and wrap this up. We'll continue this discussion in two weeks, and asynchronously, and it sounds like we're starting to get all of the players and the pieces together to tackle this issue across operators, across the builds, the cluster builds, etc. So let's carry this conversation on, and we'll talk to each other online and at the next meeting.