From YouTube: OKD Working Group 2019 11 26 Full Meeting Recording
Description
next meeting details here: https://github.com/openshift/community/projects/1
A
Well, hello, folks, and welcome to another OKD working group meeting. We're running a little behind here; there's a few folks that are on another call, and we're waiting to see if Christian can join us. But I will kick it off here a little bit. You should be able to see my screen, hopefully, everybody; if you can't, please let me know. We're using the GitHub openshift/community projects/1 board as the agenda-driving portion of the day.

I just wanted to let people know that if you're coming to DevConf in Brno in the Czech Republic at the end of January, we did get a talk accepted. The talk is called "The Road to OKD 4: Operators, Fedora CoreOS and K8s," and I'm pretty sure Danny, myself, and Christian Glombek are the three speakers for that, but we will happily incorporate other people who are present as well, as we are wont to do.
A
I also have requested an OKD working group meeting room, and I'm pretty sure I'm going to get it, but they haven't told me the location and date yet. So hopefully we can bring together some of the Fedora CoreOS folks, the OKD folks, and other people who are attending that event, and have a little bit of time together, because we did have a really nice working group meeting at the KubeCon event last week; we had, I don't know, probably seven or eight people in the room. It helped.
A
So we can get the word out. And the big ask, really, at both of those things was for people to give feedback and test OKD 4 based on the documentation and the alpha release that we have. And we got some buy-in from a couple of folks from GE and the University of North Carolina, and a couple of others who said that they would test it as well. So that's been helpful and moved things forward, I hope.
B
So that did kind of call out that we'd been open to pulling together kind of an overarching roadmap. So Derek is helping me review kind of a first draft, and I'll share that with the working group when there's something there to review. It's still kind of in proto form, so it's not really a complete thought yet. There were a couple of other topics.
B
So then, I just think there's a need to queue that up — a need to have a broader discussion about, now that we at least have some of the elements of Fedora CoreOS working, and folks who want more flexibility in how OKD is deployed, what the concrete path might be for that. It would be a good topic for a future discussion.
A
I'll read what he said. He said he's not able to attend, but he's managed to deploy Metal³ in his Intel NUC lab environment, and he's ready to set it up, do some testing, and build a cluster with a bit of guidance and assistance. And he's asking a question that I asked in the meeting as well: how do we want to get feedback, and what do we want tested with this current release? And I don't think that's totally clear. We did kind of say we're going to use the issues list in GitHub for that, but I was hoping that Christian had done a little bit of work clarifying that, and I don't see him on the call — I pinged him a little bit here. So I'm kind of curious as to...
B
I mean, at this point, if people want to open issues, let's just do it in openshift/okd — that was referenced, and then, I had to put something into the readme, so I just put openshift/okd as the target. So the issues on that should be sufficient, and then we would just sort them out. Honestly, I think we're going to have better long-term outcomes, with the infrastructure integration we have for code across components, if we can turn them into bugs.
A
So, for now, we can land on this. I'm curious, just for other people who are on the call, though — and Christian's joining now, I can see that popping up — what's the best thing to ask people like Charo or UNC or GE, besides "please just try and deploy it and spin up a cluster and give us your feedback on that"? Is there something beyond that that we should be asking people in this initial alpha release to be testing?
B
We run the standard test suites for the cross-repo stuff, the same ones that we had in 3.x, just more expanded; those are run right now. I think the biggest challenge is that the release branches for Fedora CoreOS need to get merged into the master branches, and those are some of the items that are on Christian's roadmap.
B
If we can't get those things merged, then they just tend to drift, and then we break somewhere between, you know, three to four weeks in, as changes go into the installer or whatnot. So that's the Ignition spec 3 work, a number of elements around Go mod support, and a couple of other PRs. I think once those are merged into a master branch, then we move into that next phase.
A
The other thing that I've been asking people — since there's not really a workload there, or anything to request, or specific things for people to test running on it — is to tag where they're deploying it, so that we know that it deploys on bare metal with the MetalKube stuff, or on Amazon: to get a variety of places where people have actually deployed it, so we know that it has successfully run on different clouds and in different places. So that's the only other ask that I have, really — to get that variety in.
B
Also, we should talk about this too: we had a couple of issues that required Fedora CoreOS changes, and we need to work out that process — kind of the handoff there, and what someone would do. Vadim — I don't know if he took a record of this — went through the process of trying to get an nfs-utils package bump, because it didn't work in Fedora but it did on RHEL, and so the upstream e2e tests failed. That process of identifying what needs to be changed in Fedora CoreOS, who to talk to, where to go to open bugs, and then how to make sure it makes it back in — I think he would be a good person to document what he did there, and we can use that as the basis for some iteration on the doc while we're still in the preview phase.
D
So that would be — and that's the thing that may be out by, like, two weeks. Maybe we'll get there next Friday; I don't think the MCO team will have time this Friday to review it. So what I've been doing today is just rebasing the FCOS branches on top of current masters, but we could just cut another alpha release that is, yeah, just more up to date — now with current master, current OCP. Yeah, that's sort of what I'm doing.
B
And, you know, part of the reason for that is there's like three or four big things merging in a very short timeframe. So the cluster etcd operator is merging, and that's going to change the bootstrapping flow somewhat. I don't anticipate it affecting FCOS, but, you know, that's the nature of that beast — it probably will. And then I know there's a couple of things queued up that might be somewhat dramatic, including IPv6 support and so forth, as those start to come out.
D
So the next big thing, then, is the installer repository — the rebase of that — and there's also one thing I want to get into master really soon, which is the migration to Go modules in the installer repository. It would be really nice to get that into master as soon as possible as well, so I'll work on that after this initial rebase. But yeah, I hope in the future those rebases won't be that much work.
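For readers following along, the Go modules migration mentioned here centers on adding a go.mod file at the repository root that declares the module path and pins dependencies. A minimal, hypothetical sketch — the module path and version pin are illustrative, not the installer's actual ones; the Ignition dependency is shown because spec 3 support ships behind a /v2 module path, which GOPATH-era tooling cannot resolve:

```go
// go.mod — a hypothetical, minimal module file of the kind such a
// migration introduces (bootstrapped with `go mod init` and
// `go mod tidy`). Paths and versions here are illustrative.
module github.com/example/installer

go 1.13

require (
	// Ignition spec 3 types live in the v2.x releases of the Ignition
	// library, behind a /v2 module path — one reason a repo needs Go
	// modules before it can consume spec 3.
	github.com/coreos/ignition/v2 v2.0.1
)
```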
D
What I'm doing right now is: we have these FCOS branches in the machine-config operator and the installer repositories, and they sort of have diverged from the master branches, because all the new PRs go into master, but they don't go into the FCOS branches. What we need to do right now, as a stopgap, is rebase the FCOS branches on top of current master, to maybe cut another alpha release. And then, in the future, we want to really make that divergence as small as possible.
C
Unfortunately, the main system I have access to for being able to do things like this is an oVirt system, and we don't have oVirt supported in the OpenShift installer yet, so I don't have that yet. For me to test an OpenStack-based deployment, that's going to be a couple of months out, but that is something that's on my to-do list.
B
Yeah, I mean, I know that the metal stuff should work, oVirt is coming in hot, the GCP stuff... but I think we're at a spot where there's a lot of little details, and a lot of room for folks to help close some of those gaps. I think, maybe, Christian, this is a place where we could do a better job of listing out the things that are just known not working and need someone to go investigate, with a pointer to where they could investigate.
B
Because, I see, you know, in OpenShift dev Slack and on the Kubernetes Slack we were definitely listing a lot of these, but folks following along who aren't in the select channels might miss those.
B
We use the OKD issues list for now. We should maybe, at the end of this call, get in the habit of triaging that and then triaging the board — or maybe we can do that as a separate follow-up, like via chat — so we can help keep those lists up to date, or at least make sure that folks don't feel like they don't know where to jump in.
C
I mean, I know for a fact one of the reasons why GCP is broken for FCOS is because there's a bug filed against me for the google-compute-engine package, that I need to update and then split up some udev rules stuff, and that's been a thorny mess for me to try to fix for, I think, four or five weeks now. Because in the Fedora project, we don't have any resources for properly testing these things on the various clouds.
B
That's another good point: you know, the places where — like, we don't use the GCP packages in RHCOS, deliberately, because we don't want all the complexity there, and there's equivalents already in OpenShift to provide the core of the function, since it's not a general-purpose operating system. So there's a couple of discussions on those where we need to make sure we can connect the right folks, because somebody like Abhinav would have been like, "yeah, you don't even need that package."
B
We should be bypassing that during both of those phases, because we already provide something out of the box. So that's a "how do we make sure those kinds of discussions are possible, and people catch it when they hit it." And then the other part of it — well, I was going to say, I was assuming we'd do an AWS and metal CI promotion process.
B
But as you start getting into UPI and different metal configurations, we're just never going to have the CI infrastructure to test it all. And so I think that's another area where, from a broad community perspective, the only way that we'll be able to know that certain hardware configurations work is by someone trying it and there being a record. Do we have records like that in other parts of Fedora today for different...?
C
CoreOS is so totally disconnected from the rest of Fedora's overall process that it doesn't take advantage of any of that. And to make matters slightly worse, Fedora CoreOS has been operating in its own bubble and life cycle, in that it's not participating in the general QA validation process that is in place for Fedora releases. Not necessarily that that's good or bad, because the release cadence is different, but that means that they need to develop...
F
I can speak to that a little bit — this is Dusty, though.
F
Yeah, I mean, obviously Fedora CoreOS doesn't get away from all of those problems. We still have issues — like, for example, right now we're building images for Google Compute, but we don't have accounts that we can use for uploading them into Fedora infrastructure and stuff like that. So the artifacts exist, but you can't just take an image and use it.
B
I actually think maybe what we should do is have an issue in OKD for each of the platforms, including metal, and then a list of current status, where people can add comments. I think it's probably accurate to say, for any place where someone hasn't tried it, we should create an issue and say — you know, if it's a platform that anyone even cares about — "platform readiness for preview," and then we say it isn't working yet, and folks can add comments if they do get it working, or workarounds.
D
For Azure specifically, for sure — the problem there was that Vadim had to rip out the Azure support, because it couldn't be moved to Go modules in the installer repo. So, yeah, that's actually the next thing I'll look into, so there may be news very soon — or there will be news, but it may not be good news. So yeah, right now it's not possible, because those dependencies weren't working with Go modules, which we need for Ignition spec 3.
G
I can create them. For me personally, I would just, you know, try to match what we have at the moment — and I know, you know, AWS is fine over there; maybe that'd be the first to get in, and then, yeah, metal. I think that would be a good starting point, maybe.
A
I wouldn't hesitate in adding things, because then — like, Philip is just coming in and saying he'll test on vSphere tomorrow without NSX-T — so just add them, and then if people come up... For me, the visibility of the different platforms is very helpful, because otherwise I'm going to be stuck with notes in my meeting here to track where we're at when people ask.
A
I disappeared, didn't I — I had too many things on my screen, as you can all tell; just having one of those days. I've been camping for the weekend, and this is my first meeting back, so I apologize. I escaped after KubeCon — yeah, Joshua Tree, which was absolutely gorgeous, but now getting back into it is tougher than you think. So, where were we at?
F
Yeah, I don't have any updates there. There was something weird — we were going to try to get in touch to see what the status was on the outstanding feature requests. I think he got back to us and said something about hopefully, by DevConf, they would be able to put out — oh, that wouldn't satisfy our request, but I'm not a hundred percent sure on that; priorities have changed. Gotta catch up with him, at least by then.
A
Link it back somewhere, so we can track it better, and maybe rewrite this open task here a little bit — that would be great. Driving right along here today: I think that's it. The other open topic still is resourcing OKD, and I still think that's on my to-do list, to talk with engineering, so I'll hit that up.

I got a little bit of feedback: so I set up 4.3 on AWS several times and only had, like, one minor issue, and I think it was just waiting for the API server, which, I'm guessing, occasionally happens — but that was only the first time, when the installer timed out standing it up. The next two times I set it up, I didn't have a problem at all, so it could have been a fluke. The only other two questions, or issues — suggestions, maybe — I have: I think I've got an issue out there around the documentation. Right now the readme links to the OKD 3.11 documentation, which may be a bit confusing, because we don't have any OKD-branded location for documentation yet, so I didn't know where we were on that. And then the other question was around, like, the update streams.
B
Ooh — that sounds like something subtle; I don't know what that might be, the lack of a channel. I'd have to go back and take a look at that. I think, like, for now, the update stream stuff — without the branches being merged into master, effectively we can't really take updates from the rest of the stream safely. So I did say, the moment that we have the FCOS stuff tracking master...
B
...we should try to get to the point where we are pulling all of the changes into the OKD stream, and then, the moment we have that, I do think the upgrade stuff should try to get the stream from OKD working. It's probably something small and subtle — it's just that without being able to easily kick off new builds, it's kind of a not-very-useful tool.
B
There may be some period where upgrades do not work — up to a week or two — and so we'll need to sort through what that means for, like, the OKD streams and how we track them. But I think those are all probably things we won't hit really until 4.4 or 4.5, because I just think, based on the progress we're making, we land those branches on master in the next three-four weeks, but we might actually miss the first kubelet rebase and the control-plane rebase.
B
Oh, you know, upstream broke something with upgrades, or, you know, something subtle is broken — so there's that period, the week or two around those more impactful rebases landing, where we have skew tests and all that. I think it's just: as we slowly get the OKD streams running at the same pace that the OCP and CI streams are, we'll basically be in the spot where we can bring all that automation to bear, and then it will be much less of an issue.
C
Maybe this isn't the right question, or the right time to ask this, but it just came up, since it's come up a bunch of times now: what are we looking at, like, in terms of the timeline of bringing Ignition spec v3 even into regular OCP, so that we don't have this divergence anymore between OKD and OCP?
B
It's a question for Christian. I thought he had said, you know, the goal would be to try and get it in soon, in which case it's joint toleration, and we'll just have enough CI that we don't break or regress it until the RHEL side lands — and the RHEL work is basically predicated on Christian's work anyway. So Christian is effectively doing the Ignition v3 work, for all intents and purposes.
D
Yeah, yeah, exactly. So I think the machine-config operator team has put it on the agenda for 4.4, but I'm not sure how committed they are — it may be pushed back to 4.5 — but I think we, like the entire group, want to get it in with 4.4.
D
The first step, at least in the MCO repo, is to get changes in that would allow the MCO — the master MCO — to run on both, without the additional requirement of migrating from two to three that OCP has. That's an additional step. The first step is just to have one MCO that runs with both but doesn't know how to translate.
D
So if it gets v2, it'll just operate on v2, and if it gets v3, it'll operate on v3. And then the additional step is to add a translator and migrate existing configs from v2 to v3, which is needed for OCP but, you know, not for OKD, of course. But I think we'll be there very, very soon — maybe in a week or two, or maybe three weeks, but yeah.
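For readers following along, the dispatch step described above amounts to probing a config's declared spec version and routing it to the matching code path. A minimal sketch in Go, with assumed names — this is illustrative, not the MCO's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// versionProbe extracts only the ignition.version field, which is all
// that is needed to decide which spec parser handles the config.
type versionProbe struct {
	Ignition struct {
		Version string `json:"version"`
	} `json:"ignition"`
}

// dispatch routes a raw Ignition config to the v2 or v3 code path.
// The later, OCP-only step discussed above would add a v2-to-v3
// translator for existing configs.
func dispatch(raw []byte) (string, error) {
	var p versionProbe
	if err := json.Unmarshal(raw, &p); err != nil {
		return "", err
	}
	switch {
	case strings.HasPrefix(p.Ignition.Version, "2."):
		return "operate on v2 (sufficient for OKD, no translation)", nil
	case strings.HasPrefix(p.Ignition.Version, "3."):
		return "operate on v3", nil
	default:
		return "", fmt.Errorf("unsupported ignition spec %q", p.Ignition.Version)
	}
}

func main() {
	out, _ := dispatch([]byte(`{"ignition":{"version":"3.0.0"}}`))
	fmt.Println(out) // operate on v3
}
```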
D
So that's definitely on the backlog. I haven't really prioritized it, because there's just too many things right now we need to work out, but once we've sort of settled a little — I think, yeah, it's probably not going to happen before we get the first proper release; or maybe we can get it out with the GA release, the first one of OKD — but definitely not in one of the alpha stages, I think.
D
I'm not sure how the actual support is, even in OCP, with it — because CodeReady Containers is essentially a one-machine cluster, and there were definitely some operators going degraded, or at least the MCO went degraded in there. I'm not sure if that's fixed, even in OCP, but yeah, we don't have to block on that for OKD. But it'll definitely be nice if those issues were fixed, I think.
B
This is just a bigger challenge, which is: CRC is a lot of work to tread water, because the core enablement — like, you know, single-node clusters — aren't real clusters, so we have to do the minimal stuff. I mean, there's definitely backlog work and initiative work tracking cleaning up the install and bootstrap process so that we can eventually shrink down; there's just a bunch of steps between here and there.
B
So this is, I think — if folks want to commit a lot of time to making CRC work on FCOS, that could be great, but it's going to be a lot of work and a lot of effort. I'd caution that if people find it more valuable than the core OKD distribution — I think, you know, there's certainly reason for it, but I think we should walk into it with open eyes: it's going to be a lot of work, and it's going to continue to be a lot of work for the next while.
A
And this is my naivety, in some sense: CodeReady Containers, in some sense, is really replacing the Minishift and Minikube easy-to-deploy-and-test process, because we heard — not a lot, but a few times — people asking, "where's the Minishift?", and my sense is that CRC was supposed to help with that.
B
Trying to brute-force Minishift on top of OKD and OCP — there's still a number of underlying things, which is, you know: OKD is about self-hosted OpenShift; in general, it's about self-hosted clusters that survive individual machine failures, and that's, like, almost completely orthogonal to what... yeah.
B
The work to get to the point where we could reduce the expense of flipping that switch — I think that's the stuff they'd certainly be interested in folks helping out on. It's a lot of deep changes to the bootstrapping and install flow of the cluster, which is another cost. And so it's not that it's a bad idea; it's that it comes with a lot of costs while we're still trying to get OKD up and sustainable.
A
Thank you for that. So the other thing that I have open that's not an engineering task is writing an article for Fedora Magazine, and now that KubeCon has passed — Neal, I'll tap you in, and Christian — I'll start trying to draft something there and share it with you, maybe in the coming week or so, so we can at least get an outline done for Fedora.
A
Anything else? I don't want to shortchange you on your last ten minutes in this meeting, even though everybody can see all the other meetings that are popping up in my window here. Someone is asking: do we still need to build it ourselves for OKD to work? And I'm pretty sure the answer is yes to that, but Christian is saying no build...
D
So you don't need to build that separately; that's included in the payload, in the release — the alpha. And if you're happy with using the alpha version, which is sort of last week's state, then you shouldn't need to build it yourself. Of course, you can, and you can update the cluster with the current version, but right now, at least, the FCOS branch of the MCO hasn't really changed from last week, so it wouldn't really make sense.
A
So the only other thing I have in my head is taking a look at the installation documentation here, and whether there's feedback on it — you know, questions like the MCO stuff. Are there other things that we should be adding in here, or things that people want clarified? And if so — you don't have to answer that question now — if you can, let us know if there's more that we need to add into the getting-started stuff.
A
Alright, folks, that's all we have for now. We'll put the video from this up shortly, once it gets processed by BlueJeans, and if you haven't added your name to the attendee list, I'll try and capture that as well. Thanks, everybody — and, really, I will publicize the video that Christian did with the lightning talk. That's really my goal right now, besides trying to keep track of all of these moving balls: to get more eyeballs on OKD 4 from end users who are deploying it, too.