From YouTube: OKD Working Group Meeting 06-07-2022
Description
The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group includes the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group produces supporting materials and best practices for end-users and provides guidance and coordination for CNCF projects working within the SIG's scope.
https://okd.io
A
Welcome to the OKD Working Group meeting for June 7th of the year 2022. The agenda is in the chat and it's also available on the calendar invite, and we'll post it again to make sure any folks who have just joined have it. Take a moment to look over the agenda and see if there's anything we missed; we do have an action-packed meeting. Give me one second here: actually, Dusty is gonna pop in as well, he just needs me to hand him...
A
Dusty will be here in a second, and we have guests today, so we're going to keep things at about 15 minutes per guest (that'll fit our guests) and then 15 minutes for the various updates that we usually have here.
A
So don't forget to put your name in the agenda doc as an attendee, just so we know that you were here. That allows us to keep track of whether there's important information that someone needs to know and they weren't here, so we can get that to you. All right, let's start out with OKD release updates with Christian. Take it away.
B
Yeah, sure, I think this is a short one; there's not really news anymore. We cut another release the weekend before last, and I haven't seen any bugs particularly reported for this version. If you do have any, please file an issue on our tracker. Other than that, I think that's it already; nice and short.
C
Short feedback: I did install it and it does seem to fix the Ceph/Rook issue. At least, all of my PVs were there with no touching or handholding, and things seemed to work great. So, thank you for that.
B
Yeah, I think we unpinned the kernel because a fix was merged and backported into Fedora, so that issue is hopefully gone. Yeah, awesome.
A
Anything about this issue of incorrect URLs to Docker images in the samples? We've been getting reports of it, but no one's really delved into it yet.
B
I haven't really seen it on a live cluster. I've seen something similar in CI recently, where we were hitting the Docker Hub pull limit, because some of those images might still be coming from Docker Hub. Someone posted another issue today, and it seems like all of the images that couldn't be pulled were CentOS 7 base images. Yeah, they might be deprecated; I'm not sure about the state of them. And there is a workaround in the docs; I also linked the docs where you can manually remove deprecated images.
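For readers looking for the workaround referenced above, the usual approach is to tell the Cluster Samples Operator to skip the image streams that fail to import. A minimal sketch, assuming the standard configs.samples.operator.openshift.io/cluster resource, a logged-in `oc`, and placeholder image stream names (check your cluster for the ones actually failing):

```python
import json
import subprocess

# Hypothetical list of CentOS 7 based image streams that fail to pull;
# replace with the names actually reported in your cluster.
skipped = ["httpd-24-centos7", "python-27-centos7"]

patch = {"spec": {"skippedImagestreams": skipped}}

# Patch the Cluster Samples Operator configuration so it stops trying to
# import the listed image streams (requires cluster-admin rights).
subprocess.run(
    [
        "oc", "patch", "configs.samples.operator.openshift.io", "cluster",
        "--type=merge", "-p", json.dumps(patch),
    ],
    check=True,
)
```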
B
Don't really know; I think we'll have to... who should we file a bug on this? There should be a samples operator component. Okay, if there isn't, please ping me.
A
All right, let's move on then to Fedora CoreOS updates with Dusty. Let's start out with Dusty.
E
Hey y'all, can you hear me? Cool, yeah. I don't have anything too groundbreaking. Just wanted to re-note that all streams of Fedora CoreOS are on Fedora 36 now, as of a few weeks ago. We actually have the second update for our stable stream that's based on Fedora 36 going out later today.
E
So basically, you know, this is our second round of updates for Fedora 36 on our stable stream. Hopefully people haven't had too many issues with that, but let us know if you see any. We also updated our Nutanix...
B
Okay, sorry Dusty, just a quick note: OKD is still on Fedora 35, even though we're rebuilding FCOS with the Fedora 36 manifests using Fedora 35 packages at the moment. So it's a bit, yeah, it's a bit messy right now, yeah.
E
That's okay, yeah, that's perfectly fine! This is kind of like, you know, if OKD were trying to test against Fedora 36 too, here are some changes. So, also, our Nutanix artifacts that we're shipping in Fedora CoreOS are now using qcow2 internal compression instead of being externally compressed. So instead of a .qcow2.gz, or .xz, I forget which one it was, it's now just a .qcow2, and this update was made at the request of Nutanix so that people could import directly into their platform from the URL.
A
Right, any feedback or questions for Dusty in terms of Fedora CoreOS?
F
Absolutely, thank you. So, as folks know, OKD runs on FCOS, and it's a pretty awesome, stable base for OKD, but we've been playing with this idea of SCOS, which is really looking at CentOS Stream and kind of doing something in between, say, the RHEL CoreOS and the Fedora CoreOS, and I know there's kind of rebuilding that happens today with OKD.
F
You know, as Christian pointed out there. So I wanted to kind of ask the question; you know, this is something we're still kind of looking at, just our own engineers playing with an idea, but I wanted to broach it more so here to ask: does that sound like something that the community would be interested in, either from the point of view of testing, you using helpers like...
G
Maybe try turning off your video.
H
If I could try to... I'm Michelle, I'm Steve's colleague, in case you're wondering.
F
And all I did was reconnect. I had no idea; not sure what came through, but I'll kind of take a step back here. You should be able to do video now, by the way, because you've got... let's try it, let's try it. Okay, cool. So, effectively:
F
An
idea
we've
been
kind
of
toying
with
is
the
idea
of
s-cos
or
coreos
stream
core
or
I'm
sorry,
simple
stream
core
os,
and
then
you
know
calling
that
s
cost,
but
part
of
that
is
part
of
our
own
work
of
wanting
to
make
sure
that
the
work
we
do
is
represented
in
the
normal
stream
for
fedora
in
decorah
stream
into
rel,
but
that
also
kind
of
raised
the
question.
Would
this
be
something
that
could
be
useful
for
other
communities
such
as
the
okd
community,
which
currently
is
built
on
fcos?
F
You know, consistent building, consistent releases, that kind of stuff, a lot of it automated, very similar to FCOS, but then provided out as that base, or whatever, you know, for our own testing, as well as any other community or group. I'll kind of stop there, because a lot of the deeper, related stuff, you know, we don't know, right? We haven't really dived that deep into it yet, but I did want to bring this earlier rather than later to the community, to kind of gauge interest and see what folks think.
A
So what is the status of the project outside of OKD? Like, is it separate from OKD and integration with OKD, is this something that's happening anyway, or is it reliant on integration and building along with OKD?
B
Yeah, please do correct me if I say something that's not okay... awesome. So I do think we will kind of do it anyways, because we will have improved testing, a better testing story, essentially testing on all the platforms. Currently, as some of you may know, OKD on FCOS isn't really tested end-to-end on all the platforms, just because we haven't been able to get it working and get the resources.
B
So this we will use internally as testing, but the idea is that the community can benefit from that as well, by us just making those builds available, as we've done before, but this time SCOS-based. And I also want to mention we currently don't have a plan to replace FCOS, or OKD on FCOS, with this new version. It's going to be an additional release, and so you'll have two variants to pick from, essentially, going forward.
B
If we go through with this, if there's interest here, you could say: okay, I'll try the OKD on SCOS instead of the FCOS edition; and then further down the road we may kind of flip the default over to SCOS, if the community wants that. If the community would like to stay on FCOS as a default, we're not going to flip that switch; if they do, we might do it. It's really just an offer from our side.
B
If there's interest, we will go through with it, but I think internally we will be doing these builds for testing anyway, because this is essentially our early CI testing as well. Our builds on the master branch would be the next-release testing, but we can then also replicate those builds from the master branch on the release branches and kind of build a stable release on the SCOS base, which is what the OKD community would probably consume, not, obviously, the master builds.
B
We don't want to give OKD our release as an experimental version; it's still going to be the stable code bases from all the payload components, as well as, then, the core operating system, where we have this new CentOS Stream CoreOS.
A
I'm just gonna pipe in here that the two folks who are the most vocal about things of this nature, that I was hoping to have on the call, are not here today. I pinged Neal Gompa and John Forton to see where they are, but I know they're both very busy folks. So I think, now that we've talked about it a little bit, we'll be able to survey some of the group, and most people watch the recording after we post it. So yeah, so Bruce, Bruce...
I
Maybe if I can put you on the spot: at BCIT, you're currently using OKD with FCOS?
C
Yeah, and one of the difficulties with the Stream version is that I sort of reluctantly went from CentOS x.y.z to CentOS Stream, with the understanding, based on the discussion, that there were not going to be any further versions, but it was just going to be one stream. But then, the last I looked...
C
I'm not sure for what reason, but the difficulty is that with previous CentOSes there was sort of an upgrade path from one version to the next, and with Stream there is no upgrade from one version to the next. So it seems that if you actually tried to use it, you would then get locked in to a version that became quickly obsolete, and you couldn't upgrade without reinstalling from scratch.
F
I think in this case... oh, I'm sorry, go ahead. So, I say, I think in this case, you know, the upgradeability in terms of CoreOS is a little bit different. You know, I don't know, personally, the state of CentOS Stream upgrading between things, but the way that CoreOS does the upgrades, with rpm-ostree, and kind of does pivots and that kind of stuff through containers...
F
I
think
it
is
a
little
bit
of
a
different
story,
so
you
know
looking
at
different
versions.
If
you
look
at
fedora
core
os
how
we've
gone
through
multiple
versions,
there,
people
can
keep
upgrading
forward.
You
know
in
the
in
the
community
of
befcos
and
then
with
our
costs
with
an
open
ship.
Similar
thing
of
people
can
keep
upgrading,
even
though
it's
going
between,
you
know,
rel
versions.
F
Why
versions,
but
still
rel
versions?
I
think
it's
it's
a
little
bit
different
because
it
would
be
more
cluster
focused
rather
than
the
operating
system.
I
install
the
operating
system
and
I
maintain
the
operating
system
and
then
have
to
do
an
upgrade
there,
but
but
yeah.
I
think
I
think
it's
a
little
bit
different,
but
but
I
hear
your,
I
hear
your
statement
for
sure.
E
Yeah, since, I mean, it would still be delivered as an ostree, and so, theoretically, just like Fedora CoreOS today, there's a history and you could roll back to a previous version in the history and whatnot. So you're not quite at the mercy of, like, just yum repositories getting updated and going away or something like that.
B
And we actually determine that upgradeability through our end-to-end testing when we do a release, and with SCOS we're planning to actually do more testing than we've done with FCOS in the past, so we can have even more confidence that the upgrade is going to succeed. And yeah, as Dusty mentioned, it's the same mechanics as currently used: essentially rebase to a new version, or, yeah, pivot from one OS to the next.
B
So you can always roll back, and obviously we would have tested that new OS ostree commit before. As for what kind of upgrade graph we would be setting up for this, whether it would be a real Cincinnati graph or just, as we have now, a release controller that essentially runs the end-to-end tests and, if it's determined to be upgradable, creates that edge, which is only like a stem...
B
We don't really have a tree in that release-controller graph in FCOS currently. How we do this for SCOS, or how we're going to do this, we haven't decided yet. If there's really a lot of interest, we might even set up a Cincinnati graph, although we still have to figure out what kind of resources we have available for doing this, and obviously what the interest from the community side is.
A
Christian, could you talk a little bit more, for those of us who are interested (and this would also be for people who will be watching the video later), those that are interested in sort of the testing and the CI aspect: how does a CentOS-based OS improve your testing abilities? Can you provide a little bit of specifics on that?
B
It just fits in more tightly with what we currently have already. We can essentially just reuse the tests we already have in place, and it's not this difficulty of rebuilding the FCOS base, pushing it somewhere and then consuming that in our CI system; we just build the image in CI and then we consume it immediately. That means we could have end-to-end tests running for OKD on the OS definitions, on the OS repository, which is the equivalent to okd-machine-os.
B
So we could test more easily. Right now, we don't really test FCOS changes continuously.
B
We upgrade the submodule from time to time, the Fedora CoreOS submodule as well as the openshift/os submodule, and so we only have, like, discrete testing: whenever we bump that, we test again, but we don't get the per-commit testing each time, and that creates skew between the versions we test, and that makes it sometimes very hard to trace back what change caused what issue. Yeah, and we think that, with the new model, that's going to be better.
E
But following up on Jamie's question of why the new model is better: you mentioned having to build the Fedora CoreOS payload before you test it. What about CentOS Stream CoreOS, is that payload already built for us somewhere, or not?
B
We will still have to create the payload ourselves, which is going to be exactly the same process as we've been doing it with FCOS. It's just that the core operating system we get straight from the CoreOS team's build pipeline, essentially, once it's set up, and we don't need this rebuild of FCOS on the outside. And this is actually possible without CoreOS layering.
B
If we had CoreOS layering, we could also move the Fedora build pipeline back into our Prow system, and then, yeah, we wouldn't need the external CI builds anymore. In general, I think for us it's testing, because it's closer to what the next RHCOS release looks like, and it's also, what was I going to say, it's also...
B
It
may
be
more,
maybe
procedures
more
stable
than,
of
course,
depending
on
on
on
the
user.
I
guess-
and
it
also
will
offer
a
lot
of
feedback
loop.
You
can
if
the
community
can
contribute
directly
to
s-cos
the
changes
will
land
in
in
ocp
in
the
product,
and
obviously
you
know,
okay
as
well
much
quicker
than
than
in
the
current
model,
where
you,
where
the
community
first
of
all,
doesn't
really
have
a
point
of
contact.
B
It's hard to contribute directly to a component of OpenShift, and contributing to anything in Fedora isn't direct either: you then have the Fedora compose in between, and you possibly need to wait for the next Fedora CoreOS release until the change lands in your OKD payload. With SCOS that feedback loop is shorter, and we actually have a proper point of contact for the community to contribute to, because the CentOS Stream community is not just OpenShift.
B
It's obviously all the, kind of, yeah, big partners of Red Hat, automotive and so on; they all contribute to CentOS directly, and with that the OKD community would be in a place to also contribute to CentOS Stream directly and have a, yeah, have a shorter feedback loop, really.
A
Okay, I wanted to be mindful of time, because we do have other guests. So let's spend, we've got three more minutes, let's say, to answer any further questions, and then we can schedule more time, maybe at the next meeting; so I'm going to be respectful to our other guests. Yeah, Brian, you want to...
F
Oh sorry, go ahead. I did want to bring it a little bit back to the question of, you know: is this something that sounds interesting to folks in this community, and something that we'd like to look at together? That's kind of what I'm hoping to get from this talk, and then Timothy, when he rejoins, back up over on the next call, can, you know, either start working with the community more on details, or, you know, work in other directions around SCOS.
A
My sense is that the community, or the people on this call, will probably want to have some async conversation, other than just sort of deciding in the next two minutes, like, you know, what we're thinking and stuff like that. Am I feeling the room right here, folks? Yeah, I'm seeing lots of nods.
I
Yeah, I think we need to give people time to socialize this and do that. And Brian, Brian, you had something, yeah?
J
Having a better-tested solution is going to be a better end result for the community, but I just wanted to check: what is the implication for us, the community, in terms of documentation updates? We spent quite a lot of time getting the docs on okd.io updated. We want to make sure that the community can build this distribution; we're doing a lot of work with Fedora trying to make sure that we enable a community, and we're trying to create the technical documentation.
J
So I just want us to have that side of the conversation as well, so that as we launch this, we have the documentation in place, we have the transposed images (so, for the internal registry, we know what the equivalent images look like for the SCOS releases), and things like that. So I just want us not to lose that side of the discussion as well.
I
There are other folks on the call, and Jack, who's going to give, I think, the next talk, but I know this is new to everybody. So, you know, maybe on the next call, when Timothy comes on and can answer more questions, and we can socialize this on the mailing list as well, and, you know, get the word out there and reach out to folks like John Forton and Neal, folks who abandoned us today. So if you're watching this: this is why you have to come to meetings. So, carry on.
B
All right, I just very quickly wanted to note: this isn't anything we want to push onto you as a community. If you don't like it, we're not going to do it; it's just an additional option. We, you know, you can definitely stay on FCOS if you're comfortable and if that's what you want to do. We really want to just gauge: is there interest here in the broader community, would there be potential users in this group?
B
So
we
don't
just
do
this
work
and
end
up
without
anybody
using
it,
then
we'll
likely
just
do
it
on
our
master
branches,
for
ci
testing
and
not
not
actually
build
the
release
branches
for
stable,
okay
d
release.
B
All right, great! Nobody needs to worry that we're gonna, you know, throw the FCOS-based OKD in the trash or anything. That's going to stay, and this is just an additional variant, as an option.
A
Awesome, thanks Christian, okay. Let's now move on. Steve, we'll get back to you and the other folks once we've had a little bit of asynchronous conversation and folks have had a chance to sort of flesh out their thoughts, for sure.
A
Let's go now to Jack, to talk about OKD at CERN. I know we've got a lot of people that are interested to hear these details, so take it away, Jack.
L
Hey, hello, thanks for having me. Can you hear me well? All right, excellent. So yeah, we met at KubeCon, actually, like two weeks ago or so now, and yeah, in this community meeting I just wanted to share a bit of what we're doing with OKD, or specifically our use cases. I've been bothering some people about the kind of weird deployment scenario that we have here, but I'm not going to go into that this time, and will focus more on the use cases.
L
So we've been running OpenShift for quite a while at CERN now; I think it was since like 2016-17, somewhere around there, and the initial draw was really the fact that the deployment pipelines and the build configs, so deployment configs and build configs, were integrated, because at the time we were looking for something that we could use together with GitLab, and GitLab did not have the integrated GitLab CI. So the question was: how do people build containers, and then how do they deploy them?
L
But OpenShift, or OKD in our case, our deployment, is really more targeted at users who just want to run some simple, or sometimes also not so simple, web apps: any kind of application that is less power-hungry. So this is what we are calling, or what we have as, the platform-as-a-service cluster flavor. Then, in addition, we also have what's called WebEOS. So CERN has its own file system that is called EOS, which is backed by tape drives in our data center.
L
Actually, this is used heavily at CERN for storing large amounts of data, and people are also using it to host websites, and this WebEOS cluster is then kind of serving as the front end for this. And we decided to go with the operator approach here, so that we allow users to create these custom resource definitions that are served by our operators.
L
That
basically
say
something
like
hey.
I
have
a.
I
have
a
folder
here
in
the
on
the
eos
file
system,
and
I
want
to
have
it
available
under
this
host
name
and
then
some
other
parameters
and
then
basically
the
website
becomes
online
and
the
user
doesn't
need
to
take
care
of
any
kind
of
web
hosting.
We
are
doing
it
all
for
them,
and
that
includes
both
static
websites
as
well
as
dynamic
websites
with
php
or
other
kinds
of
cgi
things.
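To make the operator pattern concrete, a custom resource of the kind described here might look roughly like the following. This is a purely hypothetical sketch: the API group, kind, and field names are invented for illustration (CERN's actual CRDs are not shown in the meeting), written as the Python dictionary you would hand to a Kubernetes client:

```python
# Hypothetical custom resource: "publish this EOS folder as a website".
# The API group, kind, and field names are illustrative only.
website_cr = {
    "apiVersion": "webeos.example.cern/v1alpha1",
    "kind": "WebSite",
    "metadata": {"name": "my-project-site", "namespace": "my-project"},
    "spec": {
        "eosPath": "/eos/project/m/my-project/www",  # folder to serve
        "hostname": "my-project.web.example.cern",   # requested host name
        "siteType": "static",                        # or "php", etc.
    },
}

# With the official `kubernetes` Python package, such an operator-watched
# resource could be created like any other custom resource:
#   from kubernetes import client, config, dynamic
#   config.load_kube_config()
#   dyn = dynamic.DynamicClient(client.ApiClient())
#   api = dyn.resources.get(api_version="webeos.example.cern/v1alpha1", kind="WebSite")
#   api.create(body=website_cr, namespace="my-project")
```

The idea, as described above, is that the operator reconciles each such resource into the underlying hosting objects, so end users never have to manage the web serving themselves.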
L
Then our third use case is the Drupal use case, because Drupal is the most widely used CMS at CERN; so, for example, if you go to home.cern, you will land on one of the pages that is managed by this cluster. And my colleagues from the Drupal team really went all in with the operator approach, so it's also an OKD cluster, but in addition there are several operators deployed on top of it.
L
That
really
give
give
this
cms
a
fully
managed
experience,
because
drupal
managing
drupal
is
a
relatively
complex
procedure,
because
you
need
to
not
only
keep
track
of
the
application,
which
is
already
difficult
enough,
because
it's
not
really
a
cloud
native
application.
It
has
a
lot
of
state
that
you
need
to
take
care
of.
You
cannot
just
randomly
do
upgrades
or
updates,
but
then,
in
addition,
you
also
need
to
do
things
like
managing
your
database,
schemas
managing
migrations
there
etc.
L
So we are working on providing more application templates to the users, such that they can really easily get going, and then sometimes it also happens that we find that a user has too specific needs to be hosted on this cluster, or we would need to add some really, really specific configuration settings to an application, which we don't want to add there, because it's really supposed to serve the 80 or 90 percent of users and not, like, the 10 percent of ugly ducklings.
L
So: here are the instructions to go set that up yourself, and then they again have their own project and they can do whichever kind of modifications they like. But in this app-catalog cluster we are really only allowing the creation of the custom resources; we are not allowing any other modifications, so users are not allowed to modify their deployments, their services, their pods. They can see that they are running there, but they are not allowed to modify them. And this approach...
L
Actually
works
works
quite
nicely
for
us
because
most
people
are
actually
fine
with
with
what
you
get
what
you
give
them
out
of
the
box.
Sometimes
we
get
some
feature
requests.
Then
we
implement
new
things
and
if
we
see
that
it's
too
obscure
or
that
it's
too
specific,
then
we
just
redirect
the
user
into
our
general
purpose
cluster
and
all
in
all.
So
we
have.
We
have
these
four
very
large
production
clusters
and
I
should
also
say
rather
high
density,
so
the
clusters
themselves
are
not
super
large.
L
They have around 60 worker nodes, but each of them is hosting around 1,000 user projects, so that's individual user namespaces. And as a result of this we've also seen some interesting challenges actually scaling all of the operators to handle that workload, because especially we've seen that the memory consumption can get quite large.
A
Excellent. What are some of the challenges that you've come across in terms of building your own OKD, in terms of automating the process, in terms of getting components to rebuild it? What are some of the challenges you've come across?
L
Well, it's not quite the way you have it when you, for example, deploy a Red Hat OpenStack: mainly we just have the OpenStack compute part, and we have a little bit of OpenStack networking, but this is also not fully standard, mainly due to the fact that, well, CERN has had the same kind of flat network layout in the data center for the last 30 or 40 years, and of course it's very hard to change that.
L
So,
for
example,
our
openstack
network
has
no
sdn
and
that,
for
this
reason
we
cannot
deploy
a
regular
okd
and
just
tell
it
to
to
deploy
to
openstack
platform,
because
then
it
will
start
to
to
set
up
the
whole
sdn
machinery.
L
...which are backed by CephFS, but the OKD installer actually expects the NFS back end; all of these sorts of things which you would just have if you had, like, a regular, vanilla OpenStack deployment, which we don't have. So we really mainly have the compute, and everything around it we need to kind of integrate ourselves; so it's mainly around storage, logging, networking, and also some authentication parts.
A
Well, Christian has his hand up. Christian, go ahead.
B
Yeah, let me just start by saying I find this super cool. As an OpenShift developer, it makes me proud that CERN runs OKD; this is awesome, and thank you for coming here and presenting this to us, I really enjoy this. I wonder about your OKD build process: do you just consume a standard payload, or do you switch out any images? What's your build process, or your preparation process, for making an OKD payload?
L
Yeah, so you already hinted in the right direction. We take the OKD releases from GitHub, basically, and then we start switching out some of the images that are inside, which is luckily relatively easy: we can just use the overriding of the images, and then the images that we need to have replaced we just build ourselves.
L
But I would also say that a good amount of infrastructure we are actually just deploying ourselves, because we are basically telling the OpenShift installer to install to platform "none", and then you don't even get that much out of the box, so we kind of need to deploy it ourselves. Examples here would be the OpenStack cloud controller manager.
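For anyone curious what "overriding the images" in a release payload can look like in practice, one common mechanism is `oc adm release new`, which rebuilds a release image while substituting individual component images. This is a minimal sketch of that general mechanism, not CERN's actual pipeline; the release tag, component names, and registry locations below are placeholders:

```python
import subprocess

# Placeholder pull specs: the OKD release you start from, the components
# you want to swap, and the images you have built yourself.
base_release = "quay.io/openshift/okd:4.x.y-0.okd-yyyy-mm-dd-hhmmss"
overrides = {
    # payload component name -> replacement image
    "machine-os-content": "registry.example.cern/okd/machine-os-content:custom",
    "cluster-network-operator": "registry.example.cern/okd/cno:custom",
}
target = "registry.example.cern/okd/release:custom"

# `oc adm release new` assembles a new release payload, replacing the named
# components with the given images (assumes `oc` is installed and you can
# push to the target registry).
cmd = ["oc", "adm", "release", "new",
       f"--from-release={base_release}", f"--to-image={target}"]
cmd += [f"{name}={image}" for name, image in overrides.items()]
subprocess.run(cmd, check=True)
```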
A
That's cool, great to hear how you're doing that. Any other questions from folks here? We've got about five more minutes left, a little under five minutes.
I
I'll just, this is me just curious: from what you heard earlier today about the SCOS, the CentOS Stream CoreOS, would that be anything that you think (and I know it's the first time you've heard about it) would help your processes, with having something more stable?
I
Is that anything you think you would be interested in, or is it, you know, too much work to switch now that you've got that build process and handcrafted everything?
L
Yeah, so, I'm gonna say not super excited, simply because of the fact that we don't usually need to touch the machine OS image, which is, of course, a good thing; that's how it should be, it's just underlying infrastructure.
L
We also probably, unless we have to or we see any immediate benefits, would not switch, because we already have our current deployments. But I can say that at the end of last year there was this issue, I don't recall the details, where kernels had deadlocks, where one of our clusters was quite heavily affected, and that was a kernel bug.
L
Basically, we had to actually roll back to one of the previous images, or kernels, and that was relatively painful in CoreOS, simply due to rpm-ostree; you cannot just easily monkey around. So it took us quite a while to then figure out how we can build our own machine OS image with a custom kernel inside.
B
Yeah, I do think that the CoreOS layering, which will eventually get into both FCOS and SCOS, will make this easier, because it'll just be much easier to make your own machine OS: you don't need to do the rpm-ostree compose.
B
You can kind of just layer your changes on top of the existing one using a Dockerfile, and, like, the dev experience is amazing already. So this is Steve's team, and Timothy, or the CoreOS team, and they've been doing amazing work, and Dusty obviously, on that front; I'm really looking forward to landing that in OKD.
B
So
that
would
be
one
thing
that
will
make
it
easier,
but
that
will
be
valid
for
both
s-cos
and
also
acquisition,
yeah.
E
Definitely interested in the bad experience with the rollback, though; like, theoretically, the rollback should be pretty straightforward, at least at the rpm-ostree level. The only reason you'd need to build your own machine OS content is, like, if you wanted to diverge for a period of time, right? So, like, let's say there's a bug that doesn't get fixed for a month and a half, and you want to update everything else.
L
We
didn't
specifically
roll
back,
but
we
wanted
to
have
a
a
specific
kernel
version,
basically
in
the
in
the
image,
and
that
was
kind
of
the
painful
part,
and
it
was
of
course
mainly
painful
because
we,
let's
say
kind
of,
didn't,
know
what
we're
doing,
because
we're
not
chorus
developers
like
that's,
not
not
all
bread
and
butter,
and
then,
once
you
figure
it
out,
you
kind
of
understand,
okay,
why
this
works
that
way,
and
but
at
the
time
also
we
couldn't
find
a
lot
of
documentation
how
to
build
your
own
machine
os
image
so
that
that's
why
it
was
really
quite
painful.
E
Yeah, oh well, you should have been able to just do it all client-side, so you could have done an override replace for a specific kernel, and the client would have kept that, right? But yes, CoreOS layering will make it much easier just to override, like, a specific package or whatnot, and then carry that delta until, you know, your particular problem gets fixed.
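For reference, the client-side mechanism mentioned here is rpm-ostree's override and rollback support. A minimal sketch of what that can look like when scripted on a single node, with placeholder kernel package names; on an OKD/OpenShift cluster you would normally let the machine-config tooling drive changes like this rather than touching nodes by hand:

```python
import subprocess

# Placeholder kernel packages: replace with the exact NVRs you need,
# downloaded locally onto the node.
kernel_rpms = [
    "kernel-5.xx.y-200.fc36.x86_64.rpm",
    "kernel-core-5.xx.y-200.fc36.x86_64.rpm",
    "kernel-modules-5.xx.y-200.fc36.x86_64.rpm",
]

def run(args):
    """Run a command and fail loudly if it errors."""
    subprocess.run(args, check=True)

# Option 1: go back to the previous ostree deployment entirely.
# run(["rpm-ostree", "rollback", "--reboot"])

# Option 2: pin a specific kernel while keeping the rest of the OS updatable;
# rpm-ostree carries this override across later upgrades until it is removed
# with `rpm-ostree override reset`.
run(["rpm-ostree", "override", "replace", *kernel_rpms])
run(["systemctl", "reboot"])
```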
A
So I want to get to Michelle, who had a comment to share, and then we have a hand up from Alessandro.
H
Yeah, I'm happy to hear folks talking about layering. If that is something that folks would be interested in, we definitely are interested in your interest. There is a GitHub repository of examples; you know, we would very much love your feedback, even if it was just visually...
H
...looking at the workflow, adding another example, requesting another example, or even if this would be something that is so interesting that you'd love to be on the leading edge for it. It is a pretty significant change, and there's an active conversation about how to release this and be able to get feedback as we go along.
H
So if there's enough interest in the community, then it could possibly be something we would consider offering on OKD first; that has been raised before, but if there's interest, that would be interesting. So I just posted some examples, and feedback is very much welcome.
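The layering examples being referenced follow the pattern of building a derived OS image from a published CoreOS base with an ordinary container build. Below is a minimal, hypothetical sketch of that workflow scripted around podman; the base image tag, the added package, and the registry are placeholders, and the exact Containerfile conventions may differ from the official examples repository:

```python
import subprocess

# Hypothetical derived-image build: layer an extra package on top of a
# published Fedora CoreOS base image using a normal container build.
containerfile = """
FROM quay.io/fedora/fedora-coreos:stable
# Install an extra package into the OS image (placeholder package name).
RUN rpm-ostree install htop && ostree container commit
"""

with open("Containerfile", "w") as f:
    f.write(containerfile)

# Build (and optionally push) the derived image; assumes podman is installed
# and you can push to the target registry.
image = "registry.example.com/custom/fcos-derived:latest"
subprocess.run(["podman", "build", "-t", image, "-f", "Containerfile", "."],
               check=True)
# subprocess.run(["podman", "push", image], check=True)

# A machine could then be rebased onto the derived image with something like:
#   rpm-ostree rebase ostree-unverified-registry:registry.example.com/custom/fcos-derived:latest
```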
A
And Alessandro, I think you had a question for Jack?
M
Hello, nice to meet you. So yeah, it's a small thing that I was wondering about. So I imagine that in your clusters, you have four clusters with 60 worker nodes, which is at least not too much, but it's great, as Christian was saying before. And, as I can imagine, there is a lot of sensitive data in the logs and in the metrics that you have about the cluster.
M
From CPU and memory to networking, especially. And there are projects that are rising, like network observability ones, that could help with collecting data about the cluster, anonymizing it, and making it available. Some projects help with that, and it could be very interesting for a lot of universities, a lot of people around the world, that want to improve things in those kinds of environments.
L
So, I don't think there's overall online documentation available anywhere, simply because, well, we didn't really see a need for that. But in terms of sharing some of the data anonymously, I think it would surely be possible, though I didn't understand in which format this should happen. I know that, like, OpenShift clusters are sending some structured logging somewhere to Red Hat if you enable that; are we talking about this, or are we talking about some general-purpose data dump?
M
No, I'm not talking about the telemetry; I was talking more about the data in Prometheus, or the data that the logging operator has, and it comes from the workloads that you run, not from the control plane. Okay, so you say you have hundreds of workloads running on top of OKD; that data would be, I can say, gold for people in the desert.
M
Community,
okay-
and
there
are
very
few
repositories
that
provides
data
of
this
kind
like
alibaba
cloud,
which
is
providing
their
name
in
the
format
of
a
csv,
but
they
are
very,
very
poor
in
terms
in
terms
of
quality
and
the
same
options
for
some
data
that
comes
from
2018
from
the
googleborg
billions
today.
They
are
also
in
the
csv
format
in
that
case,
but
I
mean
also,
the
format
can
be
whatever.
M
Whatever,
usually,
people
use
csv
in
the
community,
but
I
can
also
think
that
hundreds
of
workloads,
thousands
or
logs
in
production
can
be
very
huge,
and
so
other
formats
can
be
tout
and
informant
could
be
very
good.
That
is
extracting
it
from
images
and
any
other
operators
that
do
that
does
consider
a
billion
tops
on
top
of
this
cluster,
and
I
don't
know
maybe.
L
Yeah, I'm not generally against it; we would need to look into it, and, I mean, we cannot just give a full dump of our Prometheus instance, whichever data is in there, and of course the same for the logs.
I
So, next, in two weeks' time we'll have Marco back, and we'll also have Timothy back, to do more of a deep dive and answer any questions that we come up with around that stuff, as well as I'm gonna try and get Brian Cook to come talk about building a community-hosted and managed process for OKD and FCOS, to move that conversation forward as well. So yeah, and again, Jack, you made my day talking about CERN; I've known in the background that you've been doing that for ages, and it's just nice to hear the details. So thank you for sharing that today.
A
And we'll have you back on for an update, to tell us what new things you've discovered and new things you're working on.
A
All right: the docs meeting is next week, same time and channel; the main meeting is two weeks from now. Meeting minutes are now getting posted, as time permits, up on the OKD website, and videos are coming; we've got a little bit of a backlog, but they're making their way. So thank you.
I
I'll just say: I'm gonna get today's out to you right away, as soon as it's rendered, so that we can socialize this. And for the folks that weren't on the call: please do give us your feedback on the mailing list and in the Slack channels, and come two weeks from now with your questions.
A
Oh, one last thing: the docs group did sign off on the survey, so the OKD survey is ready to go out. We'll be promoting that through the various channels, and hopefully we'll get some feedback from OKD users to help make it a more welcoming community and maybe steer things moving forward.