From YouTube: OKD Working Group Meeting Sept 17 2019
Description
OKD Working Group Meeting Sept 17 2019
co-chairs: Diane Mueller (Red Hat), Christian Glombek (Red Hat) and Daniel Comnea (Synamedia)
https://groups.google.com/forum/#!forum/okd-wg
A: All right, if you could just add yourself to the attendees list, and we'll give everybody a few more minutes to join. Like I said, there are multiple calls going on at the same time. So that first agenda item up here is an answer to some folks' questions on the mailing list about the meeting cadence.
A: Meeting notes, so everybody should have the ability... I had proposed, and I think at the first meeting, to try and meet every two weeks, all the communities, and so that sort of gave us this week from the last one. I mean, we had agreed to meet today, but I did not post it on any calendar anywhere. So I think one thing I'm going to suggest is that on the community page we post the meeting dates, so that it's in the GitHub repo.
A: That would be me, and I could handle it, so far mostly. So why don't I do this: I'll make an issue on the repo and people can respond to it.
A: Okay, so, could you say who that was speaking? Because I don't have... Neal Gompa? Okay, you know, you're kind of important to this conversation, so I will put it in, if Christian can get him an excuse slip from his meeting, although with the later time then.
C: So this is really just a few spacing fixes and really, yeah, nothing that's really important. It's just spacing and semantics and just a few fixes. So I was thinking, instead of doing another call for agreement for this, we could tentatively approve this in the meeting per the charter process, and if anybody has an issue with it, they can voice up, yep, voice up and do that on the PR. I think we don't have to do another call for agreement for this.
C: Yeah, I just removed the WIP because there was actually another fix coming in yesterday from Steven, who I think is here today, but so yeah, I just pushed that and it should be in. If you find another fix, please do point me to it; I'll fix it. And if you're all fine with it, if we can agree on that, maybe you can all just put a plus one in the chat, so we have it formally approved, and then I'll merge it tonight.
C: Yeah, I don't really have anything to share on the screen right now, so, from the FCOS side...
C: It's not really an update coming from the FCOS team, but rather from me working on getting the missing RPMs into FCOS. I've been hacking on that for the past few days; those are the oc and hyperkube RPMs, which will build on the Prow CI, which we're also using for OCP. So yeah, I'm just getting sort of a separate pipeline set up inside there at the moment, and I hope I'll have something for review by the test platform team, that'll handle this or review this, sometime this week. Yeah, this is sort of the state we're in at the moment.
C: Are there any questions about this at the moment? So, the current plan is to get some... I haven't fleshed out the details yet and I'll probably need to sync with Clayton about it. My current thinking was: maybe we don't need a per-PR test for the RPM angle, but just master commit builds, but yeah, we could do per-PR as well.
F: Because I think this happens all the time with our RPMs: people are like, "I've got these RPMs," and then someone refactors the build system, and yeah, every three months for the last ten years. So I don't know that we're going to stop doing it, so maybe that one's probably the easiest; like, we should start with PRs, and if you want to bring in another, we can absolutely do that. It's all good.
C: Right, yeah, I definitely have some detailed stuff that I want to hash out with you, Clayton, but I'll probably write up something for you tonight; that's not really for this discussion. I just need to get some details on this. But yeah, the next step is getting the RPMs; after that we'll tackle getting all the operators and the installer ready for FCOS, which is mostly the installer and the MCO, the Machine Config Operator.
C: So, I am not sure the FCOS team has set a date in stone yet. From what I've heard, they are definitely planning it for, let's say, the first of January, so that's the tentative date, the first of January, but as I said, I can't really give a guarantee on that. And as we in OKD will, for our initial release, depend on the FCOS GA release, we will release after that, I hope very soon, but right now I can't really give anything concrete, yeah, nothing concrete.
C: I don't think we need any huge changes to the roadmap. I was thinking, because now I have a clearer view of what's actually needed to make this happen, I might put a few more details into the roadmap, but other than that I think it pretty much stays the same.
C: If there are no more suggestions from the community at this stage, I was going to make a call for agreement on the roadmap. We could also agree on it in the working group, but I was thinking it might merit a formal call for agreement.
A: So, maybe tomorrow, after the gathering, you and I can sit down and do that and send that out, yeah.
F: I think there's, and this is almost related to the next one, but I think one of the things is, as we start getting FCOS up, and I know this was something Michael brought up on the email thread, around both trying with other operating systems and potentially not requiring the OS to be coupled, there are a couple of things going on right now, just doing in various bits of the code base, to make it easier to omit things, in preparation of things that we might need for OKD as well.
F: As you know, upcoming, supporting like thinner profiles of OpenShift, or something like that, that omits some of the operators, as well as making testing easier. So, like, work that I've been doing to do skew testing, to test, you know, 4.2 OCP with 4.1 nodes, that we would also want to do with FCOS as well.
F: There's another effort, or another discussion, going on; folks here may not be as familiar with planned work around the cluster etcd operator. The long and short of it is, the opinionated... the installer today is setting up etcd. We're looking to change that architecture early, so that there's an operator that manages membership in the etcd quorum for masters, which would essentially mean once you have a singleton cluster up, you can then grow to three, and the installer...
F: This is over the 4.3 and 4.4 time frames, so over the next six to nine months. That operator would manage, you know, essentially during install you'd start a bootstrap node and it would be a full node you could grow. Now, there's a bunch of other things that assume that there's more than one master, but there have been a lot of discussions around getting to the point where you can run a single FCOS control plane node, as well as RHCOS, and, well, that's not in the, like...
F: It kind of overlaps, I think, with the medium-term bucket for the roadmap, but it's going to be somewhat complex to plumb that into the installer. But if there are folks who want to follow along, I can give an update for that, or have Sam actually come on and give an update for that, maybe the next one. But that's probably the biggest change that I could see impacting some of the objectives that we'd have for OKD, making it easy to do singletons and so forth.
F: There's kind of still a hole there, but so, a single node is not resilient to anything, even rolling updates, and there are various parts of the infrastructure, like operators that have node exclusion rules, the ingress controller, etc. So there are kind of the two parts we're looking at: one is, rather than the installer setting up etcd and that being kind of an unmanaged thing.
F: What I think, like, just from wearing the OCP hat, the plan was there was no intent to support anything less than three masters until the three-master flow was, like, totally rock-solid, so that you could do, like, you know... the end goal is that masters are machine sets or machine deployments, and they have rolling updates, and you can throw away any of the masters and get bigger masters and have it all managed transparently. So it's like a big architectural change. One of the consequences of that is...
F: We at the same time wanted to ensure that bootstrap feels like going from one to three. That means one has to work briefly, and then the next step after that, as that starts to stabilize, could be making one work really well, versus the kind of compromised, it-kind-of-works-accidentally state today. That enablement is kind of structured around the idea that we want to move the etcd quorum under active management, which provides all those big operational wins and scale at a three-master or four-master setup, but so there's some overlap.
F: There, I just wanted folks to know that that's some of why the single master, the single node cluster, has just been kind of a... it is something we can do; it doesn't really work the way that you would expect it to. Libvirt kind of works, but doesn't quite work the way you expect it to. I'll have Sam come for the next one, I guess, and do an update on where the cluster etcd operator is and give folks places to go look where they can follow along.
F: Along with that, it seems very natural that the singleton would come along that path, because the goal would be that the bootstrap node becomes a singleton and then adds masters and then gets rid of itself. I think we'd be in a spot there where we could potentially discuss, you know, just letting you stop at the bootstrap node, but we haven't... like, it's still really early on.
C: This actually reminded me, this discussion reminded me of one thing that I wrote, that I added to the roadmap recently, which was in, I think, phase one: adding libvirt support. And I would actually be in favor of dropping that statement from the roadmap altogether and instead putting something in that we will try to adapt CodeReady Containers at some stage. I don't...
F: CodeReady Containers is kind of waiting for some of this work too, so maybe... There is a real problem with libvirt today, which is that the design of the libvirt golang library requires you to basically have the shared libraries available, which works for some use cases but not for others. So I think I would kind of agree; it might be like a, we should have, we need to have something in the short term.
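For context, a minimal sketch of the coupling being described here, assuming the upstream libvirt-go bindings (package path libvirt.org/libvirt-go) and an example qemu:///system URI: the bindings are a cgo wrapper, so anything that imports them needs the libvirt C headers and shared libraries (e.g. libvirt-devel) at build time and at run time.

    package main

    import (
    	"fmt"

    	libvirt "libvirt.org/libvirt-go" // cgo binding: compiling this needs libvirt headers/libs on the build host
    )

    func main() {
    	// Opening a connection goes through the C library, so the shared
    	// object must also be present at run time, not just at build time.
    	conn, err := libvirt.NewConnect("qemu:///system") // example URI
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// List all defined domains just to exercise a round trip through libvirt.
    	domains, err := conn.ListAllDomains(0)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("found %d libvirt domains\n", len(domains))
    }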
F: The current plan is that it lands very early in 4.3, which branch is supposed to happen soon; as stabilization of 4.2 finishes out, we would branch 4.3, and that would come in very early so that it has time to soak in the limited mode, not the full management mode, but basically replacing what we do at ignition for etcd today with something that's just a single controller. Again, Sam's probably a lot better than I am to talk about exactly where it is, so we could...
C: Right, yeah, my thinking about this was as a bullet point there. I can't really tell how much work it would entail, because I haven't really used libvirt in the past, and if I were to be the one responsible for implementing that, I would definitely ask for more precise steps that I could follow.
F: I'd just like to add to this, there's a bunch of things that the MetalKube folks have been working on that are intended to help make the single... where they may need things like, you know, load balancers when you have L2 connectivity; they're working on, you know, bootstrap DNS that doesn't require an external DNS service. It might actually be that after Sam, we also should have the MetalKube folks here to talk and kind of go over what they've done. So that's something Diane and I can help with.
F: Some of the components that are more optional these days, like Elasticsearch and so forth, took a lot of those RPMs with them when they got kicked out to be their own operator. Jenkins is the same way: it's currently in the payload, but it's getting the boot for an operator, in which case it'll kind of contain all that complexity on its own. There's still a chunk of RPMs that are still coming from effectively the RHEL base; they're redistributable right now as part of the UBI setup.
C: Yeah, that sounds good to me. Just to get started off, I was thinking I could reuse them, and yeah.
F: They should be published in the OpenShift channels, but it could be, because of the way that they're coming out, that they're not a one-to-one mapping between... So Christian and I will take a follow-up on that, but I think the end result probably would be.
D: Yeah, that makes sense to me, and I'd definitely like to see us use the Fedora content as a source for this kind of stuff, because then, that way, the hope would be that it would make it easier for us to take advantage of new userspace functionality as it arrives within the Fedora distribution, as opposed to UBI and RHEL, where it's much, much, much slower.
F: Fedora channels are probably the natural ones for those, and I think all we would probably say is we would just have parallel pipelines. I think the best part about it would be, like, what we should probably do is, look, the moment we have FCOS stuff up, we can start drip-feeding these in and slowly rolling them out. They're very simple Dockerfiles across the board, so it should be pretty easy to go through and get them.
F: A problem, because Kube is highly tied to a particular golang. Like, note, none of the core libraries will move to Go 1.13 until it's been adopted by upstream. So that's going to be, that's going to be like this, I think. The tension between OKD and a Fedora-based OKD is making decisions based on Kube first but Fedora second, especially when it comes to things like golang version. Now, I mean, that may be fine if we're just using the, what's it called, the EL variants, yeah.
F: Golang version. Every golang version that has gone out has had serious high-scale regressions in subtle ways, including security issues, and so at this point I'd say, like, if you're running a different golang version than what upstream is running, it's probably going to bite someone who expects it to just work. Doesn't mean that it's going to break commonly, like the obvious stuff, yeah, but things like the module boundaries... hopefully that trends down, but 1.9 through 1.13 have been particularly problematic.
F: I think this again gets at that who's-in-charge question: is Fedora driving or is Kube driving? And I mean, my bias, just historically, has always been that Kube gets to drive; versus us driving from the OS up, driving from Kube down has been best. And I think that's just, though, like, Neal, you and I obviously represent the extremes on these particular arguments, so, well...
E: Okay, can I jump in and change the subject a bit, if you don't mind? So, just, you know, taking the benefit of having Clayton around: if I'm looking at phase zero, the first bullet point today is ensure the installer itself, respectively... Now, in the installer git repo there is a PR raised by the OpenShift team, the OpenStack team, with regards to a design doc around wrapping the bootstrap ignition, and over there...
E: So my question to you is: you know, today we leverage the terraform ignition provider to, obviously, generate the state, the metadata and everything. Now the team over there suggested to kind of come with something written in Go and no longer rely on the ignition terraform provider. So my question to you is, yeah...
F: To be absolutely nuanced here, but there's no long-term future for us using terraform extensively; terraform is an implementation detail that happens to be convenient. It's not what we like at this point. Like, we've come close to discussing this; there's just no reason to force it right now, but, like, completely tearing out the AWS terraform flow and using the CloudFormation template will probably be a better end-user experience. I don't think terraform is giving us a lot.
F: Sorry, we don't trust it for destroy, because we have requirements above and beyond the destroy flow. So I think there's a general pattern, for folks working on the installer, that terraform is just something that we happen to use, and its importance is certainly being de-emphasized. You know, up until now everybody's done a new platform by adding terraform, but I actually don't think the AWS terraform stuff is better than the CloudFormation template that you can use outside the installer.
F: Azure will more extensively use resource groups in the future, or resource templates, because the Azure team wants them to, and it fits better with how they run and there are more features. So Azure currently is not as far in that direction as it could be, but I would expect it to go more in that direction over time as well. Okay.
D: Does that mean that for every cloud, every target platform, so that's AWS, Azure, OpenStack, you're reimplementing a mechanism in which you are going to support provisioning, managing and destroying clusters? So the dirty secret is that terraform doesn't work across clouds like that.
F: It's more of... the initial implementation, right, was the Tectonic installer, and it was very heavily like, we're just going to do some terraform, and then we grew to a point where, like, there are things that it's almost great at and things that it's not great at. The terraform got relegated more to being an implementation detail of each provider as we have gone on; like, I did...
F: I did a quick pass on doing the conversion from AWS to GCP, to do the initial GCP, and I would say I probably copy-pasted it and then changed every line, and then, as we've gone further, we've had to do even more. So I would say there might be some wins for people to have, like, a generic terraform, like... I think it might actually be better for someone to go move to an outside terraform that copies... I just don't see it being generic, because none of it's generic, so yeah.
F: The basic answer is people have started by seeing it, realized the limitations pretty quick, as we move more stuff under control, like the Machine API for all the clouds. I think the Machine API is the future for machines, and then everything else that's left is like security groups, load balancers; all that is special and custom per cloud anyway. So it's kind of: we've standardized the common stuff into the Machine API, again the credential minter. Everything else is ad hoc and custom, but there's less need for terraform across that, because they're on the same API.
F: And we haven't... we say that sometimes, but I think it's more of just like, we're not going to just go rip out all the AWS stuff tomorrow or next week or next month. It's just, in that long-term arc, if it works we're probably not going to touch it unless we need to, but the first opportunity there is for a big, like, refactor for some other use case, like we get full machine management.
F: Like, the moment we have full machine management for masters, once the cluster etcd operator lands, I think there's a strong desire to just go, and potentially we would tear out all the instance-related stuff, and at that point you're talking more about the shell rather than the cluster. And even stuff like what's coming is adding stuff to the installer so you can bring your own VPC. We're having a document with all that; the bring-your-own-VPC stuff is for the IPI flows.
F: That documentation is likely to look a lot like a CloudFormation template with some docs, and, you know, here's what the default config might be; you can choose to do whatever you want or bring your own. When we do that, that's even another chunk of stuff where terraform makes even less sense. That's, yeah.
D: So, the CodeReady Containers thing: I took a quick glance at it, and just kind of looked at it, and I didn't see a way to use it with, like, a Red Hat developer subscription or anything like that. It doesn't actually seem to provide a way for it to work in that manner, similar to, like, how you can get a RHEL developer sub or something like that. So I'm not entirely sure what people are supposed to do with this.
D: So, have we moved on to the OpenShift 4.2 code stream now, at this point? Because I saw there was a blog post about developer previews of this, so I'm not sure what that means from the OKD perspective. Are we starting from 4.2, or are we still working from 4.1, presumably?
F: With the telemetry servers and everything, I mean, telemetry can technically be opted out of no matter what version you're running; you just don't get the magic like, oh, we saw that you broke, we go fix it, you didn't even know that it broke and we go fix it. And I think we want that, I mean, honestly, I think in the long run that's like a goal for OKD, for people who opt in to also get that kind of experience. But yeah, the air-gapped case is about content mirroring.
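For context, the documented way to opt out of remote health reporting in OCP 4.x is to remove the cloud.openshift.com credential from the cluster's pull secret, so nothing can be uploaded to the telemetry endpoint. Below is a minimal sketch of that edit on a local copy of a pull secret; the file names are examples only, not from this meeting.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    func main() {
    	// Read a local copy of the pull secret (example path).
    	raw, err := os.ReadFile("pull-secret.json")
    	if err != nil {
    		panic(err)
    	}

    	// A pull secret is a dockerconfigjson document: {"auths": {"<registry>": {...}}}.
    	var secret map[string]map[string]json.RawMessage
    	if err := json.Unmarshal(raw, &secret); err != nil {
    		panic(err)
    	}

    	// Removing this entry disables uploads to the telemetry endpoint.
    	delete(secret["auths"], "cloud.openshift.com")

    	out, err := json.MarshalIndent(secret, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("pull-secret-no-telemetry.json", out, 0o600); err != nil {
    		panic(err)
    	}
    	fmt.Println("wrote pull-secret-no-telemetry.json")
    }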
F: What we've started doing is the OCP cuts go out to anybody who has signed up at try.openshift.com. So it's basically like a rolling beta, but it's a rolling beta of the entire lifecycle, the master streams. So, like, the differences between the two code bases are minimal, or the differences are basically zero, except the RPM sources and the base images. For 4.3, the first thing that will happen when 4.3 opens is the Kube rebase to 1.16; there'll be a brief period of instability.
F: So I don't know how much value there is in staying behind, other than, like, while we're iterating, other than maybe a brief period of stability, but we generally just revert whatever broke, even if it's RHCOS or whatever. So even for FCOS and OKD we can get in on that, especially if, Christian, you think you're going to want to have all that stuff in faster anyway.
C: Yeah, that was... I don't know how it got into the roadmap or why everybody's talking about it. I was... I was definitely thinking that we were waiting for FCOS GA, and I think the FCOS team suggested that, but of course, I'm happy if we can make it work before FCOS's GA; I'll definitely take that as well, yeah.
F: And I mean, I think it depends on what the definition of done is. I don't know that FCOS GA means FCOS is done in any real sense; I don't know how much more done it has to be. If you're just looking at that work that you've been working on, I don't see any reason to not be working on the preview, basically, from the beta releases, so, well...
A: I was waiting for the panic till the end of the call, but there you go. So yeah, I'll try and get to it. Do you think, because we should get a MetalKube team member, that Daniel, the person you mentioned, for the next meeting or for the following meeting? Is that overloading?
I: I'd like to throw out, as a curiosity: in the 3.x environment, the process for building from source all of the artifacts that were necessary to install an OpenShift cluster was pretty straightforward. It's obvious that for 4 that process has changed, because the builds don't work anymore if you just follow it.
F: I'd be happy to do a deep dive on how OpenShift CI works today for 4 and record that, and maybe we can do that at a Commons meeting, or we could do it here. The Commons meeting might actually be a bigger venue; I mean, there's probably a huge set of people who are interested. If you folks want to ask questions, though, we could do a separate session or something.
F: I think we could; like, for that specific question I could do a five-minute answer, but I think without the rest of the pipelines that kind of hangs there. And I think, you know, for OKD to be successful, we want OKD to be able to inherit a lot of that release automation, yeah, for all the bits that carry over that aren't, like, Fedora-specific, because that, I think, is one of the true innovations in 4, or innovations is a strong word.
A: Alright, we're at the end of the hour, of course, of people's time. Our next planned meeting would be two weeks from now, on a Tuesday, if we keep to the same cadence, but I think, Christian, you said you were not available on the first, or you want PTO that week, yeah.
A: But let's try for the 15th, everybody, and I'll put a note in and I will update the README.md with the next upcoming meeting. Alrighty then, thank you, everybody. It may take me a little while to get this video uploaded, because the Wi-Fi here in Italy is not great, so maybe Monday morning, just be forewarned, before I get back to fiber optic, maybe Saturday if we're lucky. All good, thanks.