From YouTube: OKD Working Group Meeting 08-03-2021
Description
The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group will also include the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group will produce supporting materials and best practices for end-users and will provide guidance and coordination for CNCF projects working within the SIG's scope.
More info at https://okd.io
A: All right, let's get started, first with an agenda review. If you could look at the agenda real quick — I posted it in the chat, and it's also available in the calendar invite — take a quick look at it and let me know if there's anything you want to add, remove, or shuffle around. We'll take about 20 seconds to do that. So let me know if there's anything that needs to be...

A: All right, looks like folks are good. Let's start out with introductions. I'll just go across my screen, starting with Bruce.
A: Excellent. Diane?

C: Diane Mueller. I've been a long-time OKD Working Group co-chair, with everybody else here, and I'm also the community development person for the OpenShift cloud platform. So — been here for a while.
D: Hi, I'm Vadim. I work for Red Hat and I'm a technical lead on the OKD project.

E: Hey, I'm John Forten. I work for Market America here in North Carolina — senior systems architect — and we're doing a lot with OpenShift and OKD in our environment, so every week's a little different.
F: Hi, I'm Terry. I work on the CoreOS team at Red Hat, so I'm mostly working on Fedora CoreOS.

G: Yeah — Mike McKeon; you might see my handle as "elmico", people call me that online. I am an individual contributor from Red Hat; I'm an engineer working on cloud infrastructure stuff, and, yeah, I just love the OKD community.

A: Excellent. Brian?
I: Sorry, sorry — hey, everybody! I'm Christian Glomek; I'm an engineer on OpenShift at Red Hat, and I work on Arm enablement.

A: Excellent — he's enabling Arms. I'm Jamie, and I am a co-chair here of these meetings, at the University of Michigan, where we have multiple OKD clusters doing various tasks. All right, let's now move on to release updates with Vadim.
D: There have been a couple of rather small ones found, so those will land in the OKD 4.7 nightlies. Our major blocker now is that OVN is disrupting services, and our CI is very unhappy about this, so we need new fixes merged in. To prevent that from happening, we merged 4.9 fixes, so we're waiting for them to be cherry-picked back.

D: There has also been an OVN issue related to all the workers not joining. It has also been fixed in 4.9; we're waiting for confirmation so that it can be cherry-picked back, so helping with this would really help. Most likely this weekend I will release a new stable based on 4.7, to get all these things we get from the kubelet and from Fedora CoreOS, because we didn't have a release for quite a while, and we're also lining up the releases.

D: We're also gathering folks from engineering internally to get more volunteers, so hopefully we'll get more fresh faces here. And I believe that's all related to the release updates from me.

A: Thank you, Vadim. And if you could, particularly in the notes, notate the things you mentioned wanting help on — the things for the community to help on — underline or star those in the meeting notes.
D: Other tasks are very complex to start with. For instance, we want NetworkManager 1.32 in Fedora 34, or we migrate to Fedora 35 sooner, because the 1.32 NetworkManager has critical fixes for us, and currently we pull them from Copr instead of having a tested release from Fedora. So other tasks are pretty large; most are related to our internal infrastructure, like fixing the release controller to have proper channels, and so on.

D: I'm not sure how to structure all those to-dos. We've published them internally, so I'm hoping engineers can hop in on that, and I'm not sure how to involve the community in that. Should we just dump this list and find some assignees, or carefully curate it and give out more easy-to-start-with jobs? That's something that would be interesting to discuss.
F: And yeah — the second point is mostly a continuation from last week and the week before. We have our ongoing work to bring aarch64 support to Fedora CoreOS, and we're also still looking into whether we ship by default with Kubernetes-focused defaults or not. That should not really directly impact the OKD community — it's much more a Fedora-side discussion — but essentially it will involve those changes and some configuration changes.

A: And now, let's — are there any questions on that? Actually, I should ask: any questions from folks here on what was mentioned?
D: I don't think so. We can cherry-pick whatever is in stable Fedora CoreOS whenever we want to; it's just a manual action now. We previously had an automatic import, but with the new system we have to do this manually, so we can do this anytime. I'm planning to do this on Friday, so it shouldn't be an issue, really.
I: Yes, not really a question, but yeah — for OKD on Arm, the AWS AMI, an aarch64 AMI for FCOS, will also be required. There is actually a Jira card — I'm not sure if it's public, but it should be the one where we can follow that work. I'm just going to link it; if it's public I'm going to put it in the notes as well. I just posted it in the channel, yeah.
F: Yeah, just one minor comment: there's a test AMI that I've posted a link to in the notes from the previous meetings, so you can get it there if you want to try out Fedora CoreOS on AWS. And about the systemd situation: the systemd version right now is an unstable one, but it comes with the fixes for the CVE.

F: So that's why we currently have that in stable, and then the next time we promote testing — so in about one week or something — we'll be back to a fixed version of systemd, and a new version of the kernel as well.
A: Right — now moving over to our issues section. There's only one new issue that's been added since then; it was opened nine days ago by proto-sam, and it's "update the OKD-based CRC build". Who's our point person for CRC right now?

C: Well, it has been Charles Gruver for the most part, but I think he's gotten busy, yeah. So we may need to find someone in the actual CRC product management and engineering loop, as opposed to a working group person, to do that.

C: Which is — is the link in the notes? I'm looking at the notes; hold on one second.
A
So
it's
just
this
one
and
now
looking
at
discussion
items
looks
like
we've
had
and
brian.
We
will
be
getting
to
that
that
overall
topic
and
that
discussion
sort
of
later
we've
got
bootcube
service,
not
starting.
A
All
right,
update,
okay,
decide
broken
yes,
broken
due
to
n
dots.
A
Oh
yeah,
that
was
the
red
head
support
so,
but
if
you
want
to
close
790
since
it's
since
it's
ocp
or
do
you
want
them
to
close
it
once
they've
gotten
support
from
red
hat.
D: They don't take into account that you can have OVN installed, and so on. We fixed this in 4.9, but we never managed to get it verified — CI has been pretty rude to us. I'll give it a couple more tries, and if we confirm that this fix works in CI, we'll cherry-pick it back to 4.8, even 4.7. But if we could make sure manually that this works, that would be even better.
H: I'm going to try it on my own system after the meeting. Is it in any of the dailies? Because I noticed they're failing on some of the other platforms. Is there a specific nightly that you want me to test, or just the latest one?

D: Oh, it's already set to verified — perfect, cool. So we can take it right away, but any additional confirmation that we're not missing anything else would be great, thanks.
D: That's about "release not present in stable", right. So that's one of the tasks we wanted to pass to the release controller owners. Currently, everything which matches a regex lands in a particular channel; so if you use stable-4, you would have anything with the matching tag in it. But in Red Hat's system, called OSUS, we have real channels where we can manually move releases around.

D: And if you install a nightly, you can still choose a stable release, and eventually the nightly would be pruned by the release controller, and there's no way of identifying that you've installed a nightly. So what we want is to make sure that you get a proper notification that it's a nightly.

D: It can be pruned, and users could decide on their own if it's a testing thing, whether they want to migrate to something stable, or whether they have mirrored the release — some more information that this release may go away. That would be the ultimate fix for this issue. But at this point you would probably have to either reinstall the cluster or hope that the images are not yet gone from the registry, and hopefully these are not pruned on the image registry. So most likely upgrading to stable would work, but it's not really guaranteed.

D: I'm not sure how we should phrase this in the discussion, because it's a very, very long topic and a pretty complex one. We probably should have...
A: All right, that's about it for the discussions that are up there; all of the other ones are pretty straightforward or we've talked about them previously. There are a lot of questions on certificates. I wonder if we should somehow document the certificate process better — or certificate management and the process better — just because I'm noticing there's been a lot of stuff in... well, some of that was one person hitting multiple places of communication, but there does seem to be a lot of stuff on certificates.

A: It might be helpful to shed some light on that process at some point, and on how folks can manipulate certificates better. It's an observation.
I: One interesting thing here, I think, is that, first of all, OKD manages its own certificates itself — but I've also seen the question of whether you can do that yourself, or trigger a renewal before OpenShift would do it itself, and I don't have an answer to that. And the second thing I think is noteworthy is that there is a cert-manager operator now on OperatorHub. I mean, cert-manager is probably almost the default way to manage certificates on Kubernetes in general, and you can already just install the standard cert-manager on an OKD cluster, but there is now also the operator, which will do that for you and keep that cert-manager deployment up to date for you. I'm not sure in which catalog it is, but I think it should be in one of the public OperatorHub catalogs.
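For readers who want to try the operator route mentioned above, here is a minimal OLM Subscription sketch. This is a hedged example, not something confirmed in the meeting: the package name, channel, and catalog source are assumptions and may differ, so verify them on your own cluster first.

```yaml
# Hypothetical Subscription for a cert-manager operator from a community
# catalog. Verify the actual package and channel names first, e.g. with:
#   oc get packagemanifests -n openshift-marketplace | grep cert-manager
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cert-manager
  namespace: openshift-operators
spec:
  channel: stable             # assumption: channel name may differ per catalog
  name: cert-manager          # assumption: package name as listed in the catalog
  source: community-operators
  sourceNamespace: openshift-marketplace
```

As the discussion goes on to note, this covers user-level certificates (Ingress and the like); it does not touch the cluster-internal certificates that OKD rotates itself.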
D
Yeah-
it's
certainly
it's
certainly
in
community
operators.
The
problem
is
that
search
manager
is
creating
and
taking
care
of,
and
user
level
certificates.
Like
things
you
would
use
for
your
ingress
and
things
like
that,
but
this
issue
discussed
the
internals
of
where
and
which
certificates
this
cube.
Api
server
has
what
does
cube
api
operator
does
with
them.
How
do
they
pass
it
on
to
each
other?
And
things
like
that?
D
So
that's
very
hardcore
technical
stuff,
and
I
believe
it's
already
described
at
least
the
expected
result
is
already
described
in
enhancements
repo,
but
it's
very,
very
technical,
so
I
don't
think
we
have
a
real
short
just
how
to
do
how
to
trigger
their
update
sooner,
but
that's
the
starting
point.
At
least
I
I'm
pretty
sure
that
api
server
folks
should
have
a
shorter
description
so
that
other
teams
would
be
able
to
like
understand.
What's
going
on
and
oh
yeah.
I: Absolutely, absolutely — I agree, and I should probably make that clearer. The distinction is between certificates owned by OpenShift itself, which OpenShift itself manages, and then service certificates, which can be handled by cert-manager — but those are for user-deployed services on top of OpenShift, just like it would work on top of any Kubernetes cluster. But yeah, I agree.

I: We should definitely find out what the case is for those OpenShift-owned — OKD-owned — certificates: how one could trigger renewal and manage those, or change how OpenShift manages them itself.
A
Excellent,
all
right,
let's
we're
almost
at
the
halfway
point,
so
let's
jump
into
new
business.
So
this
came
up
in
the
documentation
meeting.
It
came
up
in
a
discussion
item
that
brian
posted
and
I
wanted
to
to
foster
discussion
with
the
group
about
the
fact
that
there's
kind
of
a
mix
of
working
group
activity
and
sort
of
for
lack
of
a
better
term
support
activity
in
the
various
channels
of
communication
and
should
those
maybe
be
separated,
should
there
be
a
boundary
of
some
kind.
A
A
You
know
channel
basically
or
sub
group
right,
and
so
I
wanted
to
run
that
by
folks
and
so
what
are
folks
thoughts
about
this
says?
Brian?
Maybe
you
can
lead
the
discussion
because
I
know
you
did
a
lot
of
thinking.
H: I mean, this really came out of the confusion I had when I sort of started following the community and wanting to get involved. Just looking at the various threads that are there, you end up with quite a list of places where you're told to go to look for information, and I know that we've had a few sort of frustrated people that have just posted the same question everywhere, hoping to get a response.

H: And for me it would be easier if you said: if you're looking for somebody to help you out, go here; and then have a place that — if you're looking to sort of join the community, get involved, and learn about what you can do and what help's needed — go here. But at the minute we sort of have to filter through; everywhere we go seems to be a mixture of people asking for support and discussions about work in progress, or work that we'd like to get done.

H: And so I think, from a new-user point of view, or a casual user of the community, having a place — and being specific that this place is for this activity — would help the general community.
C
Well,
I
I
would
love
to
see
a
separation
in
the
google
group
between
the
working
group,
administrative
release,
kind
of
updates
and
conversations
and
bug
fixes
in
an
addition
and
a
wreath
to
have
a
separate
place
for
people
to
post
what
we
are
avoiding
the
topic
of
technical
support,
kind
of
questions,
so
some
sort
of
separation
without
giving
the
idea
that
it
is
a
supported
release
of
openshift
by
red
hat.
C
So
it's
that
kind
of
conflagration
issue,
but
I
also
am
wary
of
creating
yet
another
channel
to
monitor
for
all
of
us
out
there
in
the
universe
paying
attention
to
the
kubernetes,
openshift,
dev
and
openshift
user.
C
So,
but
I
think
if
the
instructions
were
clearer
on
the
okd
working
site
about
that,
we
could
do
that
and
leverage
the
issues
list
or
the
discussions
list
better.
That's
just,
I
think,
leftover
from
the
docs
meeting.
A: Yeah — and Diane actually linked to... I'll have to look back in the notes... the site that you linked to: for their community link — Brian, you were at the docs meeting — their link to the community stuff goes to the discussions on GitHub.

A: If we did that, that would resolve this, and it allows people to link directly to tickets and other discussion items, and it's one clear place for linking to code, it seems.

J: It would make sense, and it would certainly make it a little less cluttered when it comes to the working-group mail that I get now, because it's like: hey, this is tagged as a working-group mail, asking about OpenShift deployment stuff — like, I don't know what I'm supposed to do here.
J: Yeah — so, hello, I'm Neil. I'm a senior DevOps engineer at Datto, working on software engineering and release engineering, focused on packages and containers and that sort of stuff, and I run one of the OKD deployments internally at Datto. I'm here mostly with my Datto hat off and my Fedora hat on: I work on technology and development in the Fedora community and provide a bridge between the Fedora community and the OpenShift community.

A: Excellent — thank you, sir. Okay. So, if we were to do a straw-poll vote right now — a raise of hands — on switching the community... like, just having a community link that goes to the discussions, and making the working-group Google Group just for working-group communications: show of hands.
I
If
I
may
just
jump
in
here
very
quickly,
I
think
brian
suggested
removing
the
google
group
entirely
even-
and
I
think
we
we
created
it
specifically
for
for
notifications
around
our
meetings
and
maybe
even
announcements
of
new
releases.
I
do
think
for
that.
We
should
keep
on
using
it,
but
we
should
clearly
or
make
it
clear
that
user
related
issues
and
and
yeah
just
anything
usage
related
shouldn't
be
posted
there
that
it's
and
yeah,
that's
really
only
for
for
the
working
group.
C
We
can
update
the
description
to
reflect
that
for
the
group
and
then
you
know,
keep
nudging
people
to
the
right
places.
I
think
that's,
I
do
think
you
know,
as
the
person
behind
the
the
screen
on
the
google
group
thing.
We
need
that
for
administrative
stuff
and
and.
A
This
would
not
be
to
push
anyone
out
who
might
be
interested
in
participating
in
work
group
stuff,
so
we
will
probably
want
to
make
that
clear.
Like
hey
yeah,
this
is
just
for
working
group
stuff.
But
if
you
are
interested
in
working
group
activities,
please
join
us.
I
think
we
don't
want
it
to
seem
like
it's
a
walled.
You
know
space
and
and
like
it's
only
for
like
a
select
few
special
people
or
anything.
I
Absolutely-
and
we
could
even
make
it
like
in
the
fedora
devel
list,
where,
if
you're
new
to
the
working
group
or
you
want
to
join
the
working
group,
you
can
send
your
your
own
self-introduction
to
that
list
and
just
you
know
say
who
you
are,
what
you
do,
why
you're
interested,
but
just
not
have
it
as
a
like
problem
solving
channel,
because
that
just
gets
very
spammy
and
we
have.
I
I
think
we
should
really
focus
that
to
github
and
for
like
ad
hoc
questions,
you
can
still
use
slack
for
that
as
well,
but
on
github,
we'll
we'll
have
we'll
be
able
to
track
it
and
link
to
it.
While
it's
yeah,
it's
get
getting
very
unwieldy
very
quickly
on
on
those
google
groups.
C
Yeah,
so
I
think
if
we
had
sort
of
a
boilerplate
that
everybody
could
cut
and
paste
or
their
their
own
variation
of
oh,
this
is
a
great
question
here.
Please
go
and
post
it
over
here
in
discussions
and
feel
free
to
just
cut
and
paste
your
email
and
place
it
here,
but
do
check
and
see
if
someone
else
has
had
a
similar
thread
already
going
so
we're
not
repeating
ourselves
too.
C
A
Vadim
and
other
folks
working
on
issues,
because
now
it's
right
in
the
discussion
and
then
the
team
can
say:
oh,
this
is
really
something
that
needs
to
have.
An
issue
opened
up
just
go
right
here
to
this
other
tab
and
open
up
an
issue.
So
it
simplifies
that
process
as
well.
So
excellent.
G: That's a growing thing here, too — and I think about this almost in the same frame as Fedora: when we get to the point where we have people who want to start contributing code and want to get into the developer space around OKD, we're going to have to set up separate spaces for those discussions to happen. Because, as Vadim kind of hinted at earlier, you know, syncing up with the engineering effort...

G: ...that's happening inside of Red Hat is no small task, and if community members want to start contributing to the separate pieces, then this working-group meeting is probably not the place to have those discussions. We're going to need developer-oriented spaces where we'll be able to connect engineers from within Red Hat, who know the specific components, with community members who would like to contribute to those components. So that's just another angle we're not talking about — and maybe we're not at that tipping point yet, but I'm hopeful.
J: I don't think we're there yet; it'll be a little while. You know, one of the bigger challenges right now is that, when it comes to contributing specifically to the OpenShift platform, there is no straightforward way for someone to figure out if they even can yet. And that's a starting problem: once that is more directly resolved, it becomes easier to say "hey, you want to help make OKD better, from a code perspective" — then we can start directing people towards that and doing that sort of thing.

J: I know that right now, whenever I've tried to build even a small portion of OKD, it's just a rabbit hole of crazy, because figuring out actually how to do it, and getting it to work, is not trivial at all.

E: Yeah, you're absolutely right.
G: I mean, I think there's a pushback associated with this, though, right? Like, I know for a fact that my team would be happy to accept a patch from anyone in the community, but the problem is: how would you know that? You know, aside from our CI testing, it's unclear that anyone approaching one of our projects would even understand how to set up the test environment to build it properly.
J: I mean, for sure. I have certain things that I'm interested in that I'd like to see built into OKD and the OpenShift platform — for, you know, admittedly somewhat selfish reasons, but also because I want to be able to do that kind of stuff — but right now I don't even know how I would do it.
E
The
problem
is
that
this
is
a
commercial
product,
you
know
and
we're
getting
the
open
source
part
of
it,
but
we
can't
necessarily
add
to
the
open
source
part
of
it
without
also
affecting
the
commercial
product,
and
that's
where
I
think
you
know
part
of
the
pushback
is
you
know
you
put
something
in
that's
going
to
affect
the
commercial
product
and
everything
like
well
wait.
You
know
that's
an
outside
person,
I
don't
know
whether
we
can
really
do
that
or
not.
J
Well,
that's
that
that
would
be
very
so
like
if,
if
anyone
in
red
hat's
teams
were
having
had
that
particular
attitude,
I
think
that
would
be
a
problem
because,
like
that's
pretty
much
all
of
of
red
hat's
stuff
and
and
that's
definitely
not
what
came
to
my
mind,
it's
my
like
what
came
to
my
mind
is
that
just
the
mechanics
of
actually
making
a
change
to
openshift
is
unbelievably
undocumented
and
to
a
level
that,
like
someone
who
even
has
some
kubernetes
expertise
like
kubernetes
contribution,
expertise
is
probably
going
to
have
a
hard
time
working
through
openshift.
C
And
there's
another
there's
another
level
to
this
too.
You
know
the
kubernetes
we
contribute
into
from
red
hat
engineering
perspective.
We
contribute
a
lot
into
the
upstream
of
kubernetes
itself.
So
if
some
of
the
things
you're
looking
for
are
kubernetes
things
contributing
into
the
kubernetes
work,
streams
is
probably
where
you
should
be
making
those
contributions.
It
depends
on
what
you're
trying
to
you
know
to
accomplish
too,
but
so
there's
like
there's
multiple
levels,
but
as
as,
as
you
pointed
out,
neil
and
probably
mike
too
is
you
know.
C
Everybody
here
is
looking
for
patches
and
pieces,
but
it's
the
build
process.
I
think,
and
the
testing
of
it
that
gets
incredibly
complicated
and
really
the
reality
is
okd
is
a
sibling,
a
stream
rather
than
an
upstream
to
open
shift.
So
that's
you
know
that
that's
the
caveat,
but
creating
an
issue
or,
or
you
know,
making
a
patch
to
open
shift
or
any
of
the
pieces
that
are
under
the
hood
of
what
is
now
called
openshift,
whether
it's
you
know
something
that
came
through
the
k
native
stuff.
C
A
lot
of
it
happens
in
the
very
very
upstream
of
this
and
then
gets
pulled
into
openshift.
So
not
that
I
want
you
all
to
disappear
and
go
into
the
kubernetes
working
groups
or
the
k
native
or
envoy
or
whatever
istio
folks.
But
that's
where
a
lot
of
like.
If,
especially,
if
there's
these
core
changes
to
something
that
you're
looking
for
or
support.
I: Yeah — just reading the chat here: Vadim said, "make a PR and CI will do the rest." That is, unfortunately, the way it currently is. It's the only way to really test your changes, because — and I don't want to sugarcoat this — setting up your own Prow deployment, which is the build system we use, would be an incredible lot of work. And that, I think, is the real issue.

I: Some of the builder base images we use aren't freely distributable, because they are RHEL or they have some RHEL parts in them. So we probably have to provide a pure CentOS- or Fedora-based alternative builder base image, so we can actually do those builds locally, even for just single components. Because once you have built a single component, you can then take any release...

I: ...payload and replace that specific image with the one you built — you have the ability to replace the image in the release payload with the changed custom build you made — and then deploy that to test it. But still, even for the single components...

I: ...it's not always possible for non-Red Hat folks to do those builds, because some of those builder bases just aren't available to them. And I do think that, for community developers, that is a real barrier. So I think that is the other thing we definitely have to do at some point: provide a freely distributable, CentOS- or Fedora-based builder base, so we can actually build those components locally, everywhere.
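The single-component test loop described above — build one image, splice it into an existing release payload, deploy that — can be sketched with `oc adm release new`. This is a hedged illustration: all image names and the release tag below are placeholders, not real pullspecs.

```shell
# Build and push your modified component image (placeholder names throughout).
podman build -t quay.io/example/machine-config-operator:dev .
podman push quay.io/example/machine-config-operator:dev

# Assemble a new release payload from an existing OKD release,
# swapping in the custom build for that single component.
oc adm release new \
  --from-release=quay.io/openshift/okd:4.7.0-0.okd-placeholder \
  --to-image=quay.io/example/okd-release:dev \
  machine-config-operator=quay.io/example/machine-config-operator:dev
```

The resulting payload can then be installed or upgraded to like any other release image to test the change end to end.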
A
Okay,
I
want
to
wrap
this
thread
up
because
we
do
have.
Thank
you
christian.
I
do
want
to
wrap
this
up,
because
we
do
have
a
fair
amount
of
items
in
the
last
16
minutes
to
get
through
vadim
okd
operator,
catalog
still
work
in
progress.
Do
you
have
anything
to
say
about
that?
And
actually
there
was
a
crash
question
about
brian.
What
operator
were
we
talking
about?
At
the
docs
meeting
pipelines,
pipelines,
yeah
status
of
pipelines,
yeah.
D
All
of
that
is
in
the
works
we
didn't
have.
We
men
haven't
seen
anything
really
happening,
but
we
draw
attention
to
this
every
couple
of
weeks.
I
think
it's
almost
daily
this
time.
It's
also
related
to
the
status
of
getting
volunteers
and
so
on.
Hopefully
we'll
have
it
soon,
but
to
the
teams
some
particular
teams
themselves
already.
We
just
need
the
catalog
and
we
can
start
filling
it,
but
the
ball
is
on
mauland
dev
site
and
we're
doing
our
best
to
push
them.
No
estimates
so
far.
A
And
another
one
that
came
up,
this
came
out
of
the
docs
group.
Maybe
I
should
have
put
this
a
little
bit
earlier,
but
name
and
scope
of
the
install.md
and
clarification
on
the
documentation.
A
This
sort
of
came
out
of
that
same
discussion
and
what
brian
brought
forward
in
his
observations
and
wrote
out
for
us
vadim.
Is
there
a
better
way
that
we
could
do
the
install
md
and
and
sort
of
separate
stuff
out
that's
install
versus
building
versus
testing
versus,
etc.
It
seems
like
install
that
md
is
kind
of
a
mix
of
everything
right
now.
Isn't
it.
D
Yeah,
I
think
I
would
rather
pass
this
to
dog
steam
or
other
dog's
work
group.
Oh
because
I
want
to
cram
a
lot
of
things
in
one
single
readme
and
that's
probably
the
worst
way
imaginable.
I
can
think
of
really
how
to
structure
this
properly.
D
There
are
tons
of
information
we
want
to
cover
there
and
at
least
have
some
links.
We
could
refer
people
straight
to
so
I
don't
know
how
to
how
to
properly
structure
this
I'm
hoping
dog's
team
would
have
better
insight.
D: Yeah, I mean, I don't have any great thoughts on this, really. I would rather have some options to choose from, like different layers, and I'm thinking the more distinct links we have the better, so that we can point folks to particular parts. It could be parts of one single document — it could be headlines in one single markdown file, or there could be different markdown documents — so I don't really have any preference on how to structure this either way.
A: ...about it, yeah — let's talk about how, for sure. We just didn't want to make your life harder — or anyone else's from Red Hat who ultimately gets, you know, the brunt of a lot of the questions and has to do a lot of it; it impacts you a lot. Okay, so, next thing — this one Mike wanted to bring in. Mike, take it away: migration from in-tree to out-of-tree cloud controller managers.
G
Yeah,
so
this
is
a
big
change.
That's
coming
in
the
kubernetes
community
and
you
know
it
might
be.
I
might
be
skating
way
ahead
of
the
puck
here.
You
know
considering
that
we're
kind
of
working
on
the
4.8
release,
but
in
4.9
ocp
we're
going
to
start
releasing
a
tech
preview
and
I
guess
for
azure
stack
hub.
It
will
be
ga
we're
going
to
start
releasing
these
out
of
tree
cloud
controller
managers.
Now
this
is
a
big
migration,
that's
happening
in
upstream
kubernetes.
They
think
they'll.
Do
it
they'll
finish
the
migration
around
1.25?
G
But
these
are
the
controllers
that
talk
between
like
the
kubelet
and
the
cloud
and
do
things
like
you
know,
set
up
services
and
routes
and,
like
you
know,
handle
node
life
cycle
terminations
and
whatnot
right
now,
all
this
code
is
handled
in
tree
in
the
in
the
kubelet
and
it's
all
being
moved
out
of
tree
to
separate
repositories,
which
means
that
there
will
be
separate
deployments
for
the
controllers,
and
these
will
be
done
like
kind
of
per
cloud.
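The in-tree versus out-of-tree split described above can be illustrated with the `--cloud-provider` flag from upstream Kubernetes. This is a simplified sketch — OKD and OpenShift wire these flags up through their operators rather than by hand:

```shell
# In-tree (legacy): cloud-specific code is compiled into the core
# components, selected by name.
kubelet --cloud-provider=aws ...

# Out-of-tree: core components are told the provider is external, and a
# separate per-cloud cloud-controller-manager deployment takes over node
# lifecycle, service load balancers, and routes.
kubelet --cloud-provider=external ...
```

The migration is therefore both a code move (to per-cloud repositories) and a deployment change (a new controller workload on the cluster).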
G: I wanted to bring this up in the OKD group, because this will be something that will be hitting OKD in 4.9, and I know how much everyone here loves to tinker and play around, and this is one of those areas where we'll probably need lots of soak time in terms of testing these things out. Now, to begin with, it's just going to be AWS — and I think OpenStack and Azure Stack Hub — that will be released in 4.9; but then in 4.10 we'll see IBM Cloud, potentially Alibaba, others, and there will be more and more of these — vSphere as well. So, as these start to get rolled out...

G: ...this is going to be a change in the way that OpenShift or OKD is deployed, and probably by 4.11 or 4.12 we're going to start looking at this as the standard way. The in-tree stuff will go away and be deprecated at some point, but that deprecation probably won't happen until, like, 4.13 or 4.14.
J
Is
this
again
like
I,
I
think
I
missed
the
a
bit
of
this.
What's
changing.
G
The
so
this
is,
if
you've
been
following
upstream
coop,
there's
been
this
talk
of
entry
to
out
of
tree
cloud
controller
managers.
Oh.
G
So
that's
that's
what
that's
what's
happening
here.
The
entry
cloud
controller
managers
will
go
away,
we're
going
to
start
releasing
dev
preview
and
tech
preview
in
4.9
and
then
that'll
kind
of
roll
forward.
So
I
just
wanted
to
bring
it
up
here.
Hopefully
I
offered
to
do
a
demo
about
some
of
the
stuff
at
some
point,
so
hopefully
we
could
show
it
off
in
okd.
G
You
know.
Ideally
it
won't
make
a
big
difference,
but
I
know,
since
people
here
are
working
at
the
cloud
layer,
I
imagine
you're
going
to
run
into
problems
with
it
so
yeah.
That's
it.
G
Oh,
and
just
I
guess,
by
way
of
engineer,
I
put
two
links
in
the
document.
One
is
the
out
of
tree
migration
enhancement,
which
describes
like
the
real
nitty-gritty,
technical
details
of
how
we're
gonna
handle
this
in
openshift
and
the
other
one
is
a
link
to
a
new
operator
that
we've
created,
which
will
manage
the
deployment
of
these
cloud
controller
managers.
A
Very
cool:
well,
thanks
for
helping
us
get
ahead
of
the
game,
as
it
were
good
and
we'll
check
in
with
you
periodically
on
where
things
are
with
that,
like
maybe
quarterly
or
something
like
that.
G
Yeah
no
that'd
be
awesome.
I
think,
like
you
know
part
of
the
thing
here
too,
and
this
is
kind
of
speaking
to
what
you
know.
There's
this
awesome
conversation
happening
in
the
comments
here.
This
is
kind
of
speaking
to
something
that
john's
getting
at,
which
is
that
you
know,
at
least
from
my
team's
perspective.
You
know
cloud
infrastructure
team.
G
We see the value that this community brings to the work that we're all doing, and what we would like to do is figure out how we, as engineers inside Red Hat, can make this more successful. How can we get the community more involved in what we're doing? How can we get these components that we're working on closer to upstream, closer to what's actually happening at the head of development?
G
So really, I would like to help solve exactly this kind of disturbance that John is talking about. We don't want to be seen as a community hostile to changes coming back from outside in. We would like to accept those, especially for the work that our team does, where we might have...
G
You know, bugs at the provider layer, whether it's these cloud controller managers or the Machine API controllers. Our team is working on both of these things, and there is no way that we'll be able to handle the number of clouds that are coming in, and that we're planning to bring in. So what we want to do is help build out the community side of this.
G
Right, and I know for a fact, to what John's saying: if John showed up to the Machine API Operator repo, or one of our providers, and opened a PR, it would get attention. We would not just be like, "no, you're from outside." Now granted, if you're trying to put a feature in, that might take longer to get merged. But anyways, soapbox disabled.
A
Okay,
excellent,
okay:
we
have
five
minutes
left,
and
so
I
want
to
make
sure
diane
has
time
to
talk
about
what
do
we
have
next
on
our
list?
It's
kubecon
n,
a.
C
Well, I also wanted to put in, John: the more we can teach stuff... if you want to work with me and do a briefing or a video on, you know, how to teach that, or a workshop on teaching that, I would be totally on board with that — giving you the space, creating a hop-in or whatever it is
C
we need to do, and recording it and breaking it down into the steps. Because yeah, the more people we can get doing this... This is the constant ask that I have from product management and engineering: you know, what can we use OKD to do? And most of it is testing and deployment — testing and deployment is what I keep hearing.
C
But you know, ARM is coming soon, so we'll make Christian talk about that next week, or the week after, when he comes back from PTO and has it all done and working.
C
So that brings me to KubeCon. I will put an email out on the sort-of administrative Google group, and I'm asking here as well: if anyone is planning on being at KubeCon North America in person, I would love to know, because even at Red Hat it's severely limited who can travel. So, having people who are OKD-savvy who are going to be there?
C
That would be great to know, and I can use you, either in the booth, maybe with an exchange of more than t-shirts or something this time. But it's going to be very limited attendance, I think, at KubeCon North America for OpenShift expertise.
C
So I am definitely interested. Yeah, Amy — I think, Amy, didn't you get a talk accepted? And I'm pretty sure having a talk accepted gets you up a notch in permission to leave if you're a Red Hatter. I'm behind a border up in Canada, and I don't even have permission to travel to the US yet.
C
Pushing for you, Amy, and so is Chris Morgan, your boss. So we have a come-to-Jesus meeting tomorrow about who's going to get to go, but I'm looking more for, like, external folks from the OKD working group, or Fedora. Timothy, in your community, if you hear of someone who's going — we just want to make sure we have experts in the booth, as well as in some of the upstream working group meetings too.
C
We may do it... we will do an OKD working group office hours again, virtually, so you know, look for that. But right now it's severely limited who's going to KubeCon in Los Angeles. So if you are, and you're watching this recording, reach out to me and let me know, because I will use you and give you swag.
C
Well, and one more: I put in the link — how is it, "Paddling Upstream With"... or, Upstream Without a Paddle (upstreamwithoutapaddle.com) launched, with Charro Gruver, who's a working group member, and some great tutorials are on there: as he says, all the home lab and CRC stuff. And in parallel with this meeting, I've been chatting with him while he's been supposedly working on something else and not available to attend.
C
He will attempt to do a rebuild of the CRC with the current release, to address that issue that we talked about earlier. So he's out there in the ether, he just can't show his face. But if you get a chance, take a look at his new blog.
C
Yeah, and he said he would write up the process to do the build and create some documentation. It's in his own personal repo right now, but he'll make a .md file in the actual OKD repos, so that if someone else wanted to build it, or he got hit by a bus... let's go with a nicer one: he won the lottery, much better. Charro, if you're watching this, you're winning the lottery soon. There we go. All right folks, thank you so much.
I
Sooner or later, we'll actually automate those builds — hopefully, hopefully sooner rather...
I
...than later, and have some CI and build automation for CRC as well. We've definitely brought that up internally now. Yeah, we'll let you know.