From YouTube: OKD Working Group Meeting 10-12-2021
Description
The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group includes the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group produces supporting materials and best practices for end-users and provides guidance and coordination for CNCF projects working within the SIG's scope.
https://okd.io
B: We're at three minutes after and we've got ten people in here, so let's go ahead and get started. Hey, Neil! Let's do a quick review of the agenda. Take a look at the agenda and let me know if there's anything I've missed before we get started, anything you want to modify, change, etc. Put it in the chat as well.

B: All right, all right, let's start out with the release updates with the team. Oh, sorry, and folks, don't forget to put your name in the attendees section. That lets us know who is here, and maybe who might have missed some information; it's important for some of the stuff we do.
C: It took us quite a while to push it through, mostly because of our strict upgrade scenario where there is no way for us to manually add an edge, so we have to pass it through CI, and CI runs disruption tests, meaning the test will fail if we disrupt services, the API, or end-user workloads using the router more than one percent.

C: During the upgrade, due to an OVN bug, we were disrupting it around 10, 13, 15 percent in some cases, so we tried to push it through and prioritize it, but unfortunately it hasn't been fixed. So what we do now is run the very same tests against it, but without the disruption test, so now we can add an edge from 4.7.

C: Okay, to 4.8. Any further upgrades, like from one 4.8 build to the other, don't need that hack, and disruption is within the acceptable limits and so on. Moving forward we would be more strict about this, but this is a one-time hack we had to do, and thus we now finally have a more or less updated kubelet, version 1.21.
C: Unfortunately, we don't have 4.9 nightlies yet; we need to push a few more changes to CI, and we'll have them available soon. Another topic we would need to discuss, not necessarily today but soon, is cgroups v2 in 4.9. We have all the pieces in place, and we just need to decide how exactly we want to enable them. Probably the safest way would be publishing a guide on how to do this; today, all you need to do is apply one MachineConfig.
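For reference, the "one MachineConfig" being described is most likely a kernel-argument change along these lines. This is a minimal sketch, assuming the standard systemd switch for the unified cgroup hierarchy; the exact manifest in the eventual guide may differ:

```sh
# Hedged sketch: boot worker nodes with cgroups v2 via a single MachineConfig.
cat <<'EOF' > 99-worker-cgroupsv2.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-cgroupsv2
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - systemd.unified_cgroup_hierarchy=1
EOF
oc apply -f 99-worker-cgroupsv2.yaml   # the MCO then rolls nodes one by one
```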
C: Then we can enable it for fresh installations, and probably in 4.10 we can automatically add this manifest during an upgrade. But again, it all depends on the testing feedback we get, and I believe that's all we have for today. There is some work to rebase 4.10 onto Fedora 35, but we haven't really started it. It's just in the plan, so not much to report on, yeah.
D: We should consider... the 4.9 nightlies have just started, right? So we should consider making the 4.9 nightlies just do cgroups v2 by default. I think that, before we start getting to the RC phases, while we're just doing nightlies of 4.9...

D: Let's just do it and see how it goes, because I don't know how else we will get the requisite feedback to make sure that we're getting this right in time for deciding, say: do we want to do cgroups v2 for new installs in stable? Do we want to do it on upgrades, or do we want to defer upgrades to 4.10? I think the best way to get that is to just start doing it in the nightlies and see how that goes.
C: Yeah, the main difference is OCP is late in the 4.9 cycle, so we can enable cgroups v2, but if we find some bug in Builds, the kubelet, or something like that... the kernel's implementation is probably very good now, it's been around for like 10 years or something, but all the other places are not that well prepared for cgroups v2. So if we find a bug there, it would take us a while to actually get it backported to 4.9, because we have to wait for the freeze before the GA and things like that.

C: So that would take a couple of weeks. Next, how exactly to enable this? We can differentiate: fresh 4.9 installs get cgroups v2, while the rest remain on cgroups v1. We can do everyone gets cgroups v2 unconditionally. We can do "here is a guide on how to enable it," and we can have a dedicated CI job to do this.
C: I'm thinking fresh installs is probably the safest way; we will automatically get this tested in CI, yeah. And my main concern is that existing things like Builds would probably take the biggest hit. The kubelet is probably well covered, and it would work with containers and security, but things like Builds are probably the riskiest part, and limiting this to new installs would be the best solution. Okay, so I think we have a tracking ticket somewhere; I will post an implementation guide there and we'll see where it takes us.
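As a hedged aside the speakers don't spell out: a quick way to confirm which hierarchy a node actually booted with, assuming cluster-admin debug access:

```sh
# Prints "cgroup2fs" on the unified (v2) hierarchy, "tmpfs" on legacy v1.
oc debug node/<node-name> -- chroot /host stat -fc %T /sys/fs/cgroup
```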
D: Okay, I mean, I'm fine with us doing it for new installs with 4.9 now if we want to. I just want the button to be pushed for cgroups v2 basically as soon as possible, because part of the problem that I've seen so far with the cgroups v2 stuff is that we're now stuck in this catch-22 cycle of pain.

D: We can't shake out the remaining issues with cgroups v2 until people just straight up start using it, because that's what happened when Fedora switched to cgroups v2 by default: literally zero of the container tooling, virtualization tooling, etc. supported cgroups v2 until the forcing function happened. And because CoreOS reverted that when they made their releases...

D: We never got that back pressure into the Kubernetes world to fix this, because nobody was really doing anything with it, and I just really don't like the fact that this has been stalled out for so long. A lot of the underlying runtime stuff has been fixed over the years, and, you know, Kubernetes 1.22, which I think is what OpenShift 4.9 is actually based on, has support for cgroups v2. So at this point I just kind of want to...

D: I want us to be a proper forcing function here, because otherwise I don't know how it's actually going to get over the hump. And realistically, I think we understand at this point, given the amount of UPI problems, that our CI is not a good substitute for real-world interactions. So while the CI definitely covers a strong subset of it, I want us to just start having users.
B: That's the only way to get through it. So I think one of the things, if we look at... and actually Timothy is going to be talking about this in a second.

B: If we look at, say, the FCOS group, they have a testing day, or testing week as the case may be. It might be helpful to actually organize that for something like this: call out to the community and say, hey, here's something we particularly want to test. We'll create a little matrix; if you click on the link to what Timothy's going to talk about, there's basically a matrix where people can check boxes for installation, etc. So that might be one way to approach it.
D: Yeah, for sure, yeah.
B: Who wants to be in charge of that effort? Who wants to take the initiative of creating a little grid or penning something that we can send out to the community?
D: Yeah, it's a great idea. I just don't have time right now to do it; I'm already stretched super thin right now. It's a great idea, so someone has to do it. Yeah, it's a great idea; I just hope my enthusiasm will make someone else also enthusiastic about the idea. Like, I just got my first demo OKD 4.8 deployment, a proof-of-concept deployment, working just this past day, right, in our OpenStack.

D: So it's been a rough ride so far. My colleague is going to start filing issues and submitting documentation PRs for some of the stuff where, I would say, the words turned out to be not quite right, because we encountered some interesting quirks in our OpenStack-based deployment trying to use IPI, mostly in the realm of odd wording, incomplete information in some cases, and just some missing coverage and stuff.
E: Yeah, in general, I totally agree with Neil's kind of optimism, enthusiasm, etc. here. I like the idea of getting cgroups v2 kind of out the door and getting people using it. The hesitancy, or I guess the only thing that would be kind of concerning to me, is that, yeah, we get it out there and get people installing it, and then we get a glut of people who start to have issues based around something there, which is what we want, but then, like...
B: Vadim has volunteered to be the point person on this, so Vadim and I will do the setup for it, and since I actually did a little bit of work on the FCOS working group's testing day stuff, I can sort of duplicate the same thing for our group. We'll go from there, and we'll check in at our next meeting and let you know where our efforts are. Okay, Neil, we're gonna get you at some point to do something, though; we will rope you in.
D: Oh, I'm sure. I mean, there's already a couple of things I'm already working on, and then the fact that I'm finally getting an OKD deployment working in our internal OpenStack, you know, with IPI... that's basically been the starting blocker that we've just struggled to get over, and so now I can have this.

D: I need to make this deployment reproducible, so I can blow it away and make it again and again, and then, once that's all shaken out, I'm hoping that I can actually start doing some more interesting stuff, because part of the blocker for me has been that it's a little difficult for me to do more than just a kind of cursory look at things, try to, you know, guide people and help, that kind of stuff.
B: Keep us posted on those efforts, for sure, so that we know that you've got that. Well, ideally, we'd have a matrix of: okay, so-and-so has access to such-and-such resources.
D: Well, yeah, that's kind of where I think we want to go here, because, yeah, I haven't had time to bask in the glory; it literally started working at like eight o'clock last night, and we were just like, okay, we're done for now.

D: If we can get, you know, community resources, a community matrix of some kind, of, like, people are using OKD on this particular platform in this configuration, then when there are things that we need to test, we have a fast track to make sure those things can be evaluated. For example, my hope is that, with our new OpenStack deployment that we have internally, which we're running OKD on in our proof-of-concept setup right now, and which I'm hoping to figure out how to productionize eventually...

D: That will make it easy for me to go into the future and say, hey, we need to test this thing; can I just, you know, YOLO a few resources in our internal OpenStack to do some testing and stuff, and I'll blow it away afterwards? And they'll be like, yeah, sure, why not? That's the kind of thing I'm trying to move towards, because that way I can say, hey, you know, for OpenStack IPI stuff...
B: Keep us posted, and let us know if you can do more than log in and automate that; that will be helpful. In similar news, I actually have access to vSphere again, so I'll be doing my vSphere-based testing again, using my OCT stuff, which pretty much automates vSphere UPI.

B: Let's see, who else had a hand up? Someone else have a hand up? Anyone? Nope, okay. Vadim, did you have anything else? Maybe, as we transition into FCOS, did you want to talk about issue 210 in okd-machine-os, the conversation that's going on between Jay Levin and yourself, and sort of what that points to?
C: Yeah, well, we circled back to... the Fedora CoreOS team has reached out and told us that rebuilding the Fedora CoreOS OSTree is a bad idea. I don't know; sometimes we must have different package versions, which is certainly not ideal, so we would need a way to overlay the necessary RPMs, and we don't want to go back to unpacking RPMs and layering them as files. That was a terrible decision.

C: So now what we do is take the list of packages Fedora CoreOS installs, patch them, add additional ones, and make our own Fedora-CoreOS-like-ish operating system. That's also a terrible solution, but on a global scale a better fix would be if OpenShift, or rather the Machine...

C: ...Config Operator, would one day learn how to layer different images which contain those OSTree commits, pack them together, and you would have a properly functioning system. But we're pretty far away from that, and the short-term goal is to reuse as much as possible, to reuse artifacts built by the Fedora CoreOS project as much as possible, so that we could just simply add a couple of packages from us and some configuration files. But there are lots of hurdles in the way. I would appreciate some looks at today's discussion.

C: We might want to move this to the okd repo, because we also want to have a fail-safe way to pin some particular packages, like we have to do now due to Kubernetes issues, but I'm also not very excited about rebuilding the operating system on every pull request.

C: So that's going to be a long discussion, but it feels like it would be productive, and it would help us shape the whole of OpenShift in a new fashion, because it was mostly the question of: why on earth are we building an OS on every pull request? It's us and Fedora CoreOS as well; we're changing how OpenShift would eventually start overlaying different OSTree commits onto one system dynamically, just building them from a bunch of images.
D: So I recall, I think it was like six months ago, rpm-ostree gained support for being able to enable and layer modules onto it, so the modular cri-o can now be layered per matching versions and stuff like that. And I think a few weeks ago, I want to say with the last rpm-ostree release, you now have the ability to replace base packages with layered packages. So if you want to delete a package or replace or swap out something, there is an experimental feature.

D: I forgot what the subcommand is, but basically the API now exists for being able to do mutations like that without having to rebuild the entire base OSTree image, which I think is necessary for certain particular configurations with OpenShift, especially if you want to, you know, switch from a non-modular to a modular version of a component in Fedora CoreOS, which may be necessary for something like using particular versions of runc or crun or whatever that have both modular and non-modular variants.

D: So that should actually be in place now. I guess the remaining effort would be to wire it up with the MCO, if I'm right, Vadim.
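The subcommands being half-remembered here are probably rpm-ostree's override commands and its (experimental) module support; a hedged sketch, where exact flags and the module stream name vary by release:

```sh
# Run on a node; each change takes effect on the next boot.
rpm-ostree ex module enable cri-o:1.22     # enable a module stream (experimental; stream is illustrative)
rpm-ostree override remove foo             # drop a package from the base image
rpm-ostree override replace ./cri-o.rpm    # swap a base package for a local build
rpm-ostree override reset --all            # undo all overrides
systemctl reboot
```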
C: Yeah, but it still doesn't... it's great and it might find its uses, but that's not the problem we're facing right now. So, we initially hit the modularity problem; we kind of worked around it by fixing the upstream cri-o builds on opensuse.org, and we build all of them as plain repos, and we just mix the repo in there. So switching back to modularity might be a solution we would pick, but that just feels like additional steps, because the very same people build it upstream and the very same people build it in Fedora.

C: So why would they bother running two builds instead of just one? When it comes to replacing a package, that might find its uses, but again, some packages are not that easy to replace; for instance, we're now hitting a problem where we have to roll back the SELinux policy.

C: So these are the problems we would need to discuss in this ticket and find a decent solution. Perhaps we might have to fall back to rebuilding in some cases, and the usual happy path would be using Fedora CoreOS artifacts. So all the cards are on the table; we just need to pick which ones, which features we want, based on quite extensive experience.
F: First, okay, so the main idea is that we're trying to move to a model where we have a base image and where we allow people to have customizations baked as layers that you would ship, just like you ship container images with a base image and layers on top. And so the idea is that our infrastructure would be able to pull this image and then apply the layers on top, on the OS directly, which fits really, really well with the MCO model, where we essentially ship the OS as a container, and we would just ship specific layers on top. So one layer would be, for example, cri-o and everything like that, or potentially it could be any replaced packages or things that would be of use for OKD.
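As a rough illustration of that layering model (a sketch only; the image reference and commands are assumptions, not the project's actual build pipeline), the OS becomes an ordinary container base that customizations layer onto:

```sh
# Build a derived OS image the same way you would any container image.
cat <<'EOF' > Containerfile
FROM quay.io/fedora/fedora-coreos:stable      # example base image reference
RUN rpm-ostree install cri-o && ostree container commit
EOF
podman build -t registry.example.com/my-custom-os:latest .
```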
F: So we're doing that both for Fedora CoreOS and, of course, Red Hat CoreOS, for OpenShift in general, but yeah. The idea is to make this generic, but not fully generic: you won't be able to do all the changes you want in the OS, but things like replacing packages and things like that would be entirely possible.

F: So yeah, the basic idea is not that we don't like people rebuilding Fedora CoreOS; it's that when people do so, well, they lose all the testing that we do, and essentially we lose all the testing that everybody else does. It splits the thing in two, and that's not great for us. So that means that we essentially never... well, we never ship anything to OKD directly, and OKD never uses Fedora CoreOS directly, so we don't...

F: We cannot really use OKD to test Fedora CoreOS in CI. So yeah, the basic idea is that we try very much to bring this to clusters so that we can, in the end, make sure that we run OKD end-to-end testing in CI, for example on Fedora CoreOS, at least for the releases, maybe not for everything, but at least for a release: making sure that when we do a release it passes end-to-end CI, which would be like the basics, so yeah.

F: So that nicely turns into the notes after this meeting, which is, yes, mostly around testing. So we have a testing week, right, which is happening right now, where we're focusing on the Fedora 35 changes. We're rebasing onto Fedora 35, and this time we're going to try to rebase really, really close to the Fedora 35 release, so it's going to happen within something like the day or the week.

F: I don't remember the details, but it's going to happen really, really close to the release of Fedora 35. So the next stream is based, sort of, on Fedora... the next stream is based off F35, and the testing stream will be rebased some time later.

F: So any help here to test would be great, and hopefully that will shake things out in OKD too; that's where we need the most testing, probably. And yeah, we have full aarch64 support right now; you can find the artifacts on the download page and everything like that. And on the same theme of testing...

F: We are trying to bring Kubernetes... so upstream Kubernetes is doing end-to-end testing on Fedora CoreOS nodes with cri-o, and we're trying to bring that into our own CI, so that we can, at least for the releases, maybe at the beginning, run the tests that the upstream community tests us with, with cri-o on Fedora CoreOS.

F: So all that should bring us much, much closer to making sure that most of those things work in the default OKD installation, for the Fedora CoreOS changes of course, yeah, and then potentially having full OKD testing in Fedora CoreOS CI. So yeah, the goal of this is very much not to shame anybody for rebuilding Fedora CoreOS.
B: Excellent. So what I'd like to do is keep this ticket, and this is, again, number 210, issue 210 in okd-machine-os; it's in the notes.

B: I want to keep this in our meeting regularly, so we talk about this regularly, so that it doesn't slip by and we can continuously address it. And also I'll make sure that folks have ample warning when the FCOS testing is coming up, so that we can leverage our resources to test FCOS underneath the various OKD releases and try those together. Timothy, do you want to move on to the rest of what you have?
G: Okay, so we've done most of the work to switch the MkDocs site over to okd.io; now we're just waiting for Red Hat to change the DNS over. So most of it has gone to main, the GitHub automation has kicked in, that's all in place now, and the CNAME's been added to it. So it's all ready just to switch over. As it says in the notes, we've incorporated how to change the official docs.

G: So that's docs.okd.io, the product documentation, and we've also got a section on how to update the okd.io site within the new content. If you want to have a look at it, go to the repo and go to the GitHub Pages site being served by GitHub; you can actually see the new site, but hopefully soon we'll get that switched over in the DNS.
B: Another thing that came up, and I think Michael was on the call... yeah, Michael's still here. So Michael is helping us with an issue where there are 4.9 references in the OKD docs. So, Michael, is there any update on that?
B: We'll take a look at that. And then, folks, if you run into any issues or you want to see something different, let the docs team know, and then we'll bring it up at the docs meeting and talk about it there. And Diane grabbed a code of conduct, and this is pilfered from the Ansible folks, because they actually ran it through a bunch of legal. So check that out, and then we'll take a vote on it at the next full group meeting.

B: Basically, the idea is that we have this code of conduct in place on the website and also announce it, just like CNCF does, at the beginning of our meetings, just to make sure that people are aware that any event or thing related to the group adheres to the code of conduct.

B: So take a look at it. If there are changes you want to make, suggestions, if something's unclear or anything like that, be prepared with those; maybe write them out and send them to the larger group, then we can talk about them at the next meeting. Any questions?
H: Oh, we actually had that. I'm sorry, it's in the Jira issue, OKD-244.
B: Okay. And Sandro is not here, but you can see the updates there, and I'll read these off really quick just to... okay, for folks that don't know, this is the OKD Virtualization special interest group of OKD. Actually, Brian, why don't you go ahead and read these off, since a lot of this deals with stuff that you're dealing with in terms of the website and everything.
G: Okay, so they're moving their docs to okd.io, and they initially did set up their own site, so they have raised a pull request to actually move that over.

G: That actually did raise a couple of issues around conversations we had at this meeting two weeks ago, around social media, because they've created their own social media communities. So there was a question, as this group discussed: are we okay with that? As we sort of decided not to do social media for OKD, is having a working group that has a social media presence something that we want to support? So that's sort of an open issue.

G: So the page stays pure markdown; they've decided to actually remove that tracker. There is an open question on the pull request: are we okay with the social media links? So that's an open question.

G: Other than that, within the work they're doing, they've successfully tested an OKD 4.8 installation on bare-metal UPI and they've got a guide for it, which they're going to put onto a page on the okd.io site. They're working with the rook.io community to get the Rook Ceph operator into the community operators catalog, which I think is going to be goodness all around, and they're working with the Assisted Installer project to support OKD Virtualization as well.

G: I don't know if anybody else knows any more about that. And then they're also adding automation for testing the HyperConverged Cluster Operator with OKD, and they've got a link to that, which is in the notes. Have any questions come out of that?

G: Ideally, we want an answer: do we want to support social media? Are there any objections to that, or how do we feel about that as a group?
B: Well, the docs group sort of decided against doing one of our own, and instead having it go through the OpenShift Twitter and whatnot, just because we don't really have people to man something like that ourselves right now, and plus there's a wider audience if we go through the OpenShift Twitter and Commons and stuff like that.
E: Yeah, I think a lot of this depends on: are they already using that Twitter? Is that already an established communication channel for them that their users are expecting? Because if that's the case, I don't see the harm in having a section on their docs that says, you know, for updated information go here. But I agree with Brian that we don't necessarily need to have it embedded.
B: They do have their Twitter established. They have 62 followers; all of those followers sort of came at once, and it looks like they post once a week, or a couple times a week, or something like that since they started it, but there's not a lot there. So, yeah, I don't know that we'd want a scroll, because if you have a scroll of social media that doesn't really have any updates, it actually doesn't look good, you know, which is what Diane's concern was about ours, right?
G: Brian and then Mike. Okay, as I said, I also think that in a way it's disconnected that group from the rest of us, because they don't use any of the communication channels that the other working groups use, this main group or the documentation group. I think several people in this meeting wanted to be involved in the project, and nothing comes through any of the other OKD channels.
B: Well, we can fix some of those things. So, for example, I have access to the calendar, the Fedora calendar, and a couple other folks here do too. We could add a Fedora calendar event for them, which would post to the working group mailing list that automatically gets sent to the working group.

B: We could... I don't know, Diane hasn't gotten back to me in terms of how timely forwarding stuff through the OpenShift Twitter would be, but we could theoretically forward either a fresh message or whatever, or it could be posted directly from the OpenShift Twitter, or their Twitter could forward what they have, either way.

B: It definitely does feel splintered a little bit, but some of that might be on our part, being overwhelmed and not doing enough to pull them in, sort of, yeah. I think once their website is within ours, then it changes things a little bit. Yeah.
G: Yeah, it just feels that we obviously want to support them, and we want to get people using and testing and playing with their stuff. I just don't know how people find their stuff unless they're on the Slack channel or listening to the Google group mailing list or anything.
B: Well, let's... I'll touch base with them. Diane was talking about setting up a meeting with them, another meeting... let's set up another meeting, let's invite them again, because I think Diane invited them before. Let's try again to have them get some representation here. And this is another thing: all of their members are from one particular region, right? So it's kind of the inverse of the situation that we have right now, a little bit. Timothy, it's a little bit later...

B: I think it's like, what, 8 p.m. there, or something like that, for you, you know? So it's a little bit later for some other folks here, but generally we can all sort of attend; with them it's very different availability. So let's invite them again and then we'll go from there. Does that seem like a plan, just to get more conversation going?
E: Yeah, I was just gonna say, given what Brian's kind of talking about and the discussion here, I think it's right: there's less value in including their Twitter stuff in the official docs. You know, it's perfectly fine if that's the way they're gonna... you know, they want to have their Twitter and kind of do stuff there, but my preference would be to see, like, yeah...

E: How do we do outreach to that virtualization group so that we can get kind of the support of the full OKD community behind them? So, rather than saying, well, you guys have your Twitter and whatnot, how do we be more inclusive, so that we can get your message out along the channels that, you know, we're preferring to use? Yeah.
B: All right, well, let's do that! Is anyone interested... anyone else interested in being in on a meeting with them? If we have to do something sort of outside of the bounds of this meeting to accommodate their time, anyone else want to be in on that? It'll probably be Diane, myself, and Brian. Anyone else?
E: Yeah, I mean, I'm happy to join and help facilitate however I can. I don't have a lot of specific technical knowledge going into that virtualization group, but, you know, yeah, I mean, I'm happy just to help from a community perspective. I think, from a community perspective, you're good at facilitating.
B: So let's do that. All right, let's move on now to...
E: Before we move on, I had my hand up before; I just wanted to ask Brian a question, a technical question about the docs. Are the workflows there working properly? Because I tried to clone something out of the workflow, and one of the directories was not looking quite right to me. So I just wanted to ask; I, you know, I didn't open up a bug or anything, because I wasn't sure, and your toolchain was a little bit different than the one I was using.
G: As far as I'm aware, yes. So, obviously, the automation is fully on GitHub; it uses GitHub Actions to do all the builds on PRs. If you find a problem, please let me know, because obviously I'm aware people use different operating systems, and then we may have to update the instructions. So yeah, just ping me.
B: And this actually came up at the docs meeting: how do we document how folks can contribute to the documentation, and, you know, being able to use a container, with something like Podman, to run the software and generate the site and stuff like that. So docs on that are forthcoming, so that everyone can sort of participate.
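A hedged sketch of what that container-based workflow could look like for a MkDocs site (the image name and port are assumptions; the contributor docs being written may pin different ones):

```sh
# Serve a live preview of the MkDocs site from the repo root, no local install.
podman run --rm -it -p 8000:8000 -v "$PWD:/docs:Z" \
    docker.io/squidfunk/mkdocs-material serve --dev-addr 0.0.0.0:8000
# then browse http://localhost:8000 while editing the markdown
```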
B: Okay, moving on to the next piece of business: the release changelog. Vadim, you weren't at the last meeting, but this came up, and if you're still around here, can you explain a little bit about what happened with the changelogs? We received several issues related to the changelogs, where the commits look like they disappeared.
C: Yeah, they did disappear, in fact. The problem is we have two forks, which are the installer and the MCO.

C: So every time we make a new MCO release, we pull in changes, rebase ours on top, and push them. So the old commit, unless it gets tagged or referenced somewhere, eventually gets pruned by GitHub, and the changelogs are dynamic; they are kept in a cache. So when I check them, they look fine, because GitHub hasn't pruned the commits yet, and when I look back after a week, GitHub has pruned the commit.

C: The changelog has been evicted from the cache, it tries to fetch it, GitHub doesn't find the commit, and we get literally nothing.
C: The current solution is... well, we got a fix for the installer merged in 4.9, so that problem would go away eventually. The MCO issue still remains. My current workflow is that I tag with some nonsense tag, like 4.7 plus the current date, say october-12, push the tags, and so on. So the solution is manual, just to make sure that GitHub keeps all the commits. But going forward, what we need is to either onboard our fork correctly, although that would prevent us from rebasing our changes on top...
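The manual workaround being described presumably amounts to something like this (the tag name is illustrative, following the "release plus current date" pattern mentioned):

```sh
# Pin the rebased commits with a throwaway dated tag so GitHub keeps them
# reachable instead of garbage-collecting them after the next force-push.
git tag 4.7-okd-2021-10-12
git push origin 4.7-okd-2021-10-12
```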
C: There is a CLI command which can build you the difference between two images, oc adm release info or something like that; you can build a changelog between two releases, so it's never gone, but it might be unavailable for some time in a release controller. So there's not much we can do about this, because of the hacky way we've been building the machine-config operator and installer for some time.
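The half-remembered command is most likely oc adm release info with its --changelog flag; a hedged sketch, where the release pullspecs below are placeholders rather than real tags:

```sh
# Clone the component repos under /tmp/git, then diff two release payloads.
oc adm release info --changelog=/tmp/git \
    quay.io/openshift/okd:4.7.0-example \
    quay.io/openshift/okd:4.8.0-example
```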
C: Yeah, probably it would be a good idea to file a PR which mentions this in the known issues, so that we won't get duplicates. We can also come up with the manual workaround finally codified properly.
B: Okay, good, all right. There wasn't anything... did anyone want to pull anything out of the discussion section of the repo? Did we have any discussions come in that stood out in any way? I didn't see anything to pull out, but if folks saw something they wanted to discuss real quick, we've got a few minutes.

B: Okay, on to new business, and this brings it up: the location of the main community, Vadim. We talked about this at the docs group; we talked about this at the main group meeting two weeks ago. The idea is that there's a lot of push to get a separate repo that we can all participate in, right, as opposed to the current one.

B: Basically, it's you and Diane and some people who aren't really involved much, you know, and Christian, I think. So, is there any downside to just a new git repo and moving this stuff over? Obviously, Diane's going to do some looking in terms of any legal stuff; she said she was going to do that.

B: But do you see any downside to just moving issues and discussions and the other sections into a repo that we can then control access to, so that more people can participate?
C: It's just a matter of naming; maybe we should move to GitLab or some...

C: The name, yeah. In the openshift-cs organization, Diane can find us somebody who can create repos; they'd just add a small postfix there. So, well...
B: So, in the docs meeting we talked about that a little bit, and apparently the openshift-cs stuff is two people who gave her access and ability, and those people have moved on, and there are other people who've moved on, and they've changed their name multiple times. So what the docs group, in the discussion there, and the main group in the discussion two weeks ago, landed on was just a completely fresh org and repo, not connected to those. Do you see any downside to that?
C: No, there shouldn't be. If OKD... if okd.io is an organization, that would be nice, but we would still have quite a long confusion period where people don't know which repo to file tickets at and things like that. But other than that, I think it should be fine.

B: Yeah. Does anyone else have any thoughts on that?
A: Yeah, yeah... did I mishear? I thought I heard Vadim say GitLab. I'm using GitLab as well, a self-hosted version, but the world seems to be against us, so maybe Vadim could say a little bit more about what was in his thoughts.
C: My only concern is naming; all the other problems I am pretty sure we can tackle. If we can use a different git hosting for our repos, that sounds good; I'm pretty much open to suggestions. It's just a matter of getting a cute name, so that we could list them all on our pages and people would not get confused about where they file issues and where they discuss different stuff.
A: So there's a little more to it, yeah, because they also have a Trello-like task board, which I haven't found on GitHub yet.
D: The GitHub one is just bad; that's the only problem with it. It just kind of sucks, because it's super hard to connect it to issues and things like that, and it's not easy to connect work items to task items to schedules. It's a very rudimentary and, frankly, quite annoying implementation, as opposed to both GitLab and Packer, which tie cards on the board to the actual issues themselves.
B: All right, I don't want to spend too much more time on this; we've got four minutes and we've got a couple more things to do. But it sounds like everyone's on board. At the next meeting we'll devise a plan for starting to move things to a new place, and at that point Diane can chime in with what she learned from legal. CRC subgroup... sorry, Timothy, yeah.
F: Okay, so we will move essentially all the non-build repos to this org, and keep, like, okd...
C: ...be the owner of the repo. So I'm not feeling excited about OKD CI testing every single PR we make for our working group, so Prow is probably just not a good fit for us. But anything code-wise, anything which lands in the OKD release image, absolutely must use GitHub; it absolutely must come from the openshift repos and must use Prow, yeah.
B: Exactly. Okay, CRC subgroup: Neil, do you have an update on that?
B: That's fine! That's fine. Bare-metal testing, CI group: there was discussion about that; we'll talk about that next week at our next meeting. The office hours are tomorrow at 5 p.m. Eastern; promote the link.
B: Oh, I don't have the link there, but I'll put the link in the document. Share it out via your social media; Diane has shared it out via Twitter and whatnot. It's myself and Vadim... you're gonna be there... who all's gonna be there? It's Charro, myself, Timothy, and I think there's like four or five of us that are going to be there. So it'll be lots of fun. It's only half an hour, but hey, it's something during KubeCon, which is cool, and this will be the last time...

B: ...I think, that we have Chris Short sort of narrating for us, since Chris Short is moving on to greener pastures. Okay, I think that is it; we're at time. Anyone have any last-minute things?

B: No? All right, well, cool. Yes, please promote the event tomorrow, and Vadim and I will talk about the things that we have on our task list. There will be a new task list written up based on this, which will be added to the notes, and then I'll send an email out with the tasks and who's responsible for them, so that we can be a little more timely with our tasks and a little more on top of them.