From YouTube: OKD CentOS Stream CoreOS Build Pipeline Explained - Luigi Zuccarelli & Christian Glombek, OKD Working Group

Description
OKD CentOS Stream CoreOS Build Pipeline Explained
Guest speakers: Luigi Zuccarelli, Christian Glombek, Diane Mueller
OKD Working Group
https://tekton.dev
https://okd.io

Recording
Diane: Hello everybody, and welcome to another OKD Working Group briefing on some of the exciting new things that we're doing to make OKD work on lots of different platforms. A whole bunch of folks at Red Hat have been working diligently to get an MVP out for us all in the working group to test and deploy, and I think it's really interesting, because it's showcasing, shall we say, a tectonic shift to Tekton, and a more flexible build pipeline approach that we're working on with OCP and the OKD Streams initiative.

I'm going to get Luigi and Christian to introduce themselves, and then Luigi's going to walk us through this. We'll have some Q&A, and then we'll come back and talk about some of the next steps after we're done. So Luigi, take it away, and thank you both for being here today.
Luigi: So the ask was basically to get a pipeline going, and to have it community-facing as well. The natural choice really was Tekton. Tekton is an open source, Kubernetes-native CI/CD build platform based on containers that are totally configurable.

The great thing about Tekton is its flexibility. It has notions of tasks, pipelines, and pipeline runs, which we'll go through later. Basically, what we wanted was a pipeline that you could use to build locally, as well as on a dedicated OpenShift platform or a Kubernetes installation with multiple nodes, so that you could get some real hardware behind your pipeline. That was the main requirement. And just to give you an overview of the architecture:

I don't know if you can see it. Yeah. So the real core of the pipeline is the actual tasks over here. What I'm trying to do is show you a very high-level overview. Basically, you have the notion of a task: it will launch a container in a pod, within that pod you'll have one or several containers, and then you can embed a script. You can do a simple bash script, or some really funky things with other languages. We based ours on bash scripts, and on some of them we enable Go, so you can actually run Go code natively.
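As a rough sketch of what Luigi describes, a Tekton Task whose step runs an embedded bash script in its own container might look like the following (the names and image here are hypothetical, not taken from the actual pipeline):

    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: hello-build                 # hypothetical task name
    spec:
      params:
        - name: message
          type: string
          default: "building..."
      steps:
        - name: run-script              # each step becomes a container in the task's pod
          image: registry.fedoraproject.org/fedora:latest   # hypothetical step image
          script: |
            #!/usr/bin/env bash
            set -euo pipefail
            echo "$(params.message)"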
Christian: Luigi, before you run to the next slide, let me quickly jump in and elaborate a little more on the requirements we have for running the pipeline. What you also see here, connected to the nodes, is the KVM device plugin. We run the CoreOS builds, in this case the CentOS Stream CoreOS builds, and as mentioned, we can run them in any environment. One requirement for doing that is KVM virtualization being enabled on the platform, and we do that both locally, for example in a kind (Kubernetes in Docker) cluster, and on an OpenShift cluster or a standard Kubernetes cluster.

The enablement of KVM is done with the KVM device plugin, which is also used in a similar manner by the KubeVirt initiative, for example. It just exposes the KVM device to the pods where the actual builds run. And then there is CoreOS Assembler, which is the build tool for building CoreOS.
Luigi: Yeah, okay, cool, good one. So, for that to happen (and I've specifically labeled this as Operate First here at the top, so this is an OpenShift Dedicated cluster), what you'll see here is a MachineConfig and MachineConfigPool that help us, as Christian mentioned, deploy the KVM device plugin and allow for nested virtualization on each node.

On the cluster there are something like 26 nodes, and obviously we can't have our KVM plugin on every single node. So we have a DaemonSet that deploys the KVM device plugin on specific nodes only, and the MachineConfig also takes care of labeling those nodes.
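A sketch of how such a DaemonSet can be pinned to the labeled nodes (the label key, plugin name, and image are assumptions; the talk only names "the KVM device plugin"):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kvm-device-plugin
    spec:
      selector:
        matchLabels:
          name: kvm-device-plugin
      template:
        metadata:
          labels:
            name: kvm-device-plugin
        spec:
          nodeSelector:
            kvm-builds: "true"          # hypothetical node label applied to the build nodes
          containers:
            - name: device-plugin
              image: quay.io/example/kvm-device-plugin:latest   # hypothetical image
              volumeMounts:
                - name: device-plugins
                  mountPath: /var/lib/kubelet/device-plugins
          volumes:
            - name: device-plugins
              hostPath:
                path: /var/lib/kubelet/device-plugins           # where plugins register with the kubelet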
Luigi: Later on in the demo, when I show you the web front end for Operate First, we'll go into the KVM plugin and the DaemonSet that's deployed. That just makes it more efficient and effective; we don't have to overload the actual Operate First cluster. So that sits together with the tasks and pipelines.

A pipeline consists of several tasks. You could have, say, 10 tasks, and in a pipeline you can say "I only want to use tasks 1, 3, and 9", or something like that. So it gives you great flexibility to set up and deploy different pipelines. Then you have the notion of a pipeline run, and in the pipeline run you set specific parameters: you can tell it "run this pipeline with X parameters" or "run this pipeline with Y parameters".
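A minimal sketch of that relationship: a Pipeline that selects specific tasks, and a PipelineRun that supplies the parameters (all names and values here are hypothetical):

    apiVersion: tekton.dev/v1beta1
    kind: Pipeline
    metadata:
      name: coreos-build                # hypothetical pipeline name
    spec:
      params:
        - name: release
          type: string
      tasks:
        - name: init                    # pick only the tasks this pipeline needs
          taskRef:
            name: cosa-init             # hypothetical task names
        - name: build
          runAfter: [init]
          taskRef:
            name: cosa-build
          params:
            - name: release
              value: $(params.release)
    ---
    apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      generateName: coreos-build-run-
    spec:
      pipelineRef:
        name: coreos-build
      params:
        - name: release                 # "run this pipeline with X parameters"
          value: "4.12"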
Luigi: So you can see there is great flexibility, and together with that we have RBAC (role-based access control): we have service accounts and secrets tied to the actual pipelines. What I've done here is show you an overview of the actual layout. We make use of Kustomize, which is a great little plugin, so you can use kubectl to deploy this whole application. Instead of saying kubectl apply or kubectl create -f, you use the -k directive; that invokes Kustomize, which will look into each folder, see if there's another kustomization file, and load whatever that second or third or fourth embedded kustomization file says.
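The nested layout Luigi describes looks roughly like this (directory names are illustrative, loosely modeled on the overlays mentioned later in the talk):

    base/
      kustomization.yaml                # lists tasks, pipelines, RBAC as resources
      tasks/
      pipelines/
    overlays/
      local/
        kustomization.yaml              # pulls in ../../base plus local-only patches
      operate-first/
        kustomization.yaml

    # overlays/local/kustomization.yaml (illustrative)
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - ../../base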
Luigi: At the bottom here, and Christian will go into more detail on COSA, the CoreOS Assembler tasks, and explain more about them, you can see we have a couple of tasks that we're going to be using. We start off with the RPM artifact task, then we do a cosa init, then we do a build, and then we add on all the different other tasks as we go along.
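Those task names mirror the usual CoreOS Assembler flow. Run by hand outside Tekton, the equivalent sequence looks roughly like this (the config repo URL is illustrative):

    cosa init https://github.com/openshift/os   # illustrative OS config repo
    cosa fetch                                  # fetch the RPMs the manifests declare
    cosa build                                  # compose the ostree and build the images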
Christian: Me again, just very quickly jumping in. The great thing about using essentially GitOps here, with Kustomize, is that all of the complexity of the infrastructure setup, enabling KVM and everything, can be set up in a cluster with one command, kubectl apply -k, and then you essentially choose one of the overlays for your environment.

So, for example, in a kind cluster you would just apply the local overlay and it'll set everything up for you: the KVM device plugin and everything. KVM virtualization will be enabled in your local kind cluster with just one command, and it'll create all the tasks for you. Then it's just up to you to create one pipeline run, and your pipeline, with the default config, will be running.
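Putting the pieces together, the local quickstart amounts to roughly three commands, matching what Luigi demos next (the first URL is Tekton's documented install manifest; the repo paths are illustrative):

    # 1. install Tekton Pipelines
    kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
    # 2. deploy the tasks, pipelines, and KVM device plugin via the local overlay
    kubectl apply -k overlays/local
    # 3. kick off a build
    kubectl create -f pipelines/overlays/local/pipeline-runs/build.yaml   # illustrative path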
Christian: So it's really nice to have all of this in just a Git repository: apply it, run it, and it'll be running in essentially one minute. Essentially no setup time for first-time users. It's really great to do it that way.
Luigi: Yeah. So for the demo, I'm going to escape from the actual presentation and go to, well, this is basically the okd-coreos-pipeline repository in the okd-project GitHub organization; that's the name of the project.

Cool. Obviously I've already cloned this repo on my local machine, so what I'm going to be doing is deploying this here on a local instance, on my kind cluster. Let me just do this quickly: kubectl get nodes, just to show you that I'm on a kind cluster and I have one node. Is this font okay? Can you guys see it? Okay.
Luigi: Should I just go a little... let me see. I think I've reached my font-size limit here. Yeah, I've reached my font limit. Okay, so you'll just have to bear with me. Then we can have a look here: I'll run the actual install of Tekton. So Tekton is going to... basically it will... oh sorry, why is this?

And we can see over here we have Tekton being deployed. It's going to deploy some controllers, a webhook, and some services; that's the deployment.
Luigi: While that's going on, I'll do the second part, which is to go ahead and deploy our application that Christian mentioned earlier. If you notice, it uses the kubectl apply -k directive, or flag, and it's going to deploy, well, we have a notion of overlays. We have two overlays for now: one for local development and one for Operate First. So I'll go ahead and deploy that; it goes ahead and configures the application. Let's have a look at the namespace, and there, it's deployed.
Luigi: Let's have a look: kubectl get all. Nice. It deploys all the relevant parts, plus the actual DaemonSet for the device plugin, and you can see there is a device plugin installed. And I've cheated a bit: I've already done a pipeline run, and you can see those have completed. So basically now we're at the point where we can actually go and have a look.
Luigi: I've done a build that has succeeded, and what we can do quickly is have a look at the pipeline run, which is this one here. We can say we want to describe this pipeline run, and it'll give us an overview of what actually happened and what tasks were run.
Christian: And what you're doing now is using tkn, the Tekton CLI, which has these convenience functions.
Luigi: Yes, over and above installing the Tekton pipelines, you would need to install the client. And if you can see here, it actually tells you what parameters are going to be used in the pipeline run we're about to initiate. To initiate it, you simply do something like a create -f, and we have pipeline runs already set up for you in our overlays.
Luigi: So there's a pipelines/overlays/local/pipeline-runs folder, and we'll do the pipeline run for the build. And that's it; that's as easy as it gets. Your pipeline run has started. What I'll do is just show you: I'll do a pipeline run list, and we should have a pipeline run that has been initiated. We can follow it by tailing the log.
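The tkn invocations behind those steps look roughly like this (the run name is illustrative):

    tkn pipelinerun list                        # show runs and their status
    tkn pipelinerun describe <run-name>         # parameters, task statuses, timing
    tkn pipelinerun logs -f <run-name>          # follow ("tail") the logs as the run proceeds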
Luigi: And we can have a look at the actual logs for each build. But while this is running in the background (we can come back to that), I wanted to show you the web front end for our OKD build. Let me see; I think this also needs to be expanded. Is that okay?
Luigi: Can you see this? Yeah? Great. So here we have the actual KVM device plugin namespace, and you can see that with our MachineConfig and MachineConfigPools it is deployed, and with the DaemonSet that was used we are running these pods for the plugin.

Christian, you can explain the details of the way that /dev/kvm is used to speed things up for us, because it is nested virtualization: there's a QEMU and, I think, other stuff that gets run in the COSA build.
Christian: COSA is the CoreOS Assembler, the tool to build CoreOS, essentially any flavor of it, and it uses a VM to actually do those builds. So we need those virtualization capabilities exposed to the pod where the build runs, and that's essentially what the KVM device plugin does: it passes the KVM device through from the host into the pod and makes it available inside the pod to the workloads running there.
Luigi: And those are the compute nodes that our workloads are going to be running on. So if we go back, and this is Operate First, we've actually also had a couple of runs here; we've had one or two failures, but the point here, for the pipeline runs, is that it's just a great visual to show you exactly what the pipelines are doing. We can have a look at this build: it lists the actual tasks that were run. If we go back to the pipeline run itself over here, you can have a look at each one of the logs. You can see what happens with the artifact copy, the...
Christian: Just very quickly, to elaborate on the artifact copy step, the RPM artifacts copy step here: we actually have the OpenShift client and the OpenShift hyperkube RPMs. They are part of CentOS Stream CoreOS, and they are being built in our OKD build system, which is a Prow cluster that we also use as CI for OpenShift. We pull them from our CI and build system because we don't have them in the Fedora or CentOS repositories yet, and we have this artifact image in which these RPMs are shipped. So this step extracts them and makes them available to the real CoreOS Assembler compose that we're running.
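One way such an extraction step can be done, as a sketch only, since the talk doesn't give the actual image or paths, is with oc image extract:

    # pull the RPMs out of the artifact image into a local directory
    oc image extract quay.io/example/okd-rpm-artifacts:latest \
        --path /srv/rpms/:./rpms/               # hypothetical image reference and paths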
Luigi: Yeah. And then we have another one highlighted for the client, and for the hyperkube as well. I also just quickly wanted to show the different logs; we can go view the logs at any given time, and there are quite a lot of them. But the great thing that we never showed in the overview of the architecture is this notion of a persistent volume claim.

What we do is mount the volumes in a pipeline so that we can access all the artifacts that actually get built. So you get your ISO image at the end, and you'll also have a container that gets pushed to Quay, so we have it on a registry. And this is really neat: once the jobs have completed, we can go and access these PVs and extract all the relevant artifacts.
Christian: In Tekton terms, we use that PVC as a workspace, and a workspace is accessible to all the tasks within the pipeline. We can have one task initializing the config repository, where the manifests that define the operating system live, and then another step that has access to that same file system within the workspace to do the build. It'll do the ostree build in the first step, store it in the workspace, and then pick it up in another step, in another task.
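A sketch of that workspace plumbing: one workspace declared on the Pipeline and handed to successive tasks, so the build task sees the files the init task wrote (names are hypothetical):

    apiVersion: tekton.dev/v1beta1
    kind: Pipeline
    metadata:
      name: coreos-build
    spec:
      workspaces:
        - name: shared-ws               # one filesystem shared by all tasks below
      tasks:
        - name: init-config
          taskRef:
            name: cosa-init             # hypothetical task names
          workspaces:
            - name: ws
              workspace: shared-ws
        - name: build
          runAfter: [init-config]
          taskRef:
            name: cosa-build
          workspaces:
            - name: ws                  # sees the same files init-config wrote
              workspace: shared-ws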
Christian: So we can keep building on it without having to create a new volume each time for each step or each task; we have one pipeline that has access to the same file system all the time. That is really neat. In Tekton this is a workspace; we're mounting the workspace.
Luigi: Yeah, that's it. Good one there, Christian. So the workspace is available to all tasks across all the runs, and this workspace, if you have a look in the pipeline run, is where it gets initiated. I don't know if we can show it here, but in the YAML we create a template. Let me just see if I can find that.
Christian: Right. In the pipeline definition we only specify that there is a workspace, and then in the actual pipeline run we further configure how that workspace is created. It could be an emptyDir, which is what we do for the local runs in the kind cluster, or it could be a PVC on a real OpenShift cluster, like here.
Luigi: We tell it what the access mode is and what storage to use, and then the provisioner within Operate First will pick this up, automatically create that PV for us, and then start the task. It does all of this automatically; we just have to specify it.
Christian: What we do here is use a volume claim template, which means we actually create a new volume, a new PVC. We could also reuse an existing PVC by specifying it explicitly, so over time, across different pipeline runs, we could keep working on the same file system. Right now we're using a volume claim template to set up a new PVC, but we could also reuse one.
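The three workspace bindings discussed here, as alternatives in a PipelineRun spec (claim name and size are hypothetical):

    # local kind runs: throwaway scratch space
    workspaces:
      - name: shared-ws
        emptyDir: {}

    # a new PVC provisioned per run (what the demo uses on Operate First)
    workspaces:
      - name: shared-ws
        volumeClaimTemplate:
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 50Gi           # hypothetical size

    # reuse an existing PVC across runs
    workspaces:
      - name: shared-ws
        persistentVolumeClaim:
          claimName: coreos-build-ws    # hypothetical claim name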
Christian: And maybe, Luigi, you could go into the pipelines and start a new pipeline run through the UI... ah, no, actually: on the OpenShift cluster on Operate First, the Pipelines section, under resources. Oh, and we have logs now.

It will take you to a very nice UI that shows all the parameters, and if you scroll all the way down it also lets you define what to mount as a workspace. We can either use a persistent volume claim, which would be an existing volume, or what we did there, a volume claim template, to create a new one, or the emptyDir, which is what we do locally. So you can also trigger these pipeline runs through the OpenShift UI here. Though I guess that's not strictly necessary: you can do all of that just by applying the YAML, as you've shown in your local cluster.
Luigi: I know it's very difficult to see, but what is so nice is that the Tekton command line gives you different colors for different tasks. So what I can do here is just break this and show you the one that succeeded; you get different colors so that you can view the logs better. Let's have a look here, and we can do more.

Yeah, there's a ton of stuff happening, but you can see basically the first task in green, then copy-rpms, then the init; then it does a couple of things, and then it does the fetch and build. Which is nice: it gives you a great separation of concerns in the logs, which you can use for debugging and for looking at different places in your logs.
Luigi: So for the actual demo, I think that's about it that we have to show. What I can do is share some useful links that you can go and reference. The Git account is... and I think Diane will have this slide deck available for you all. Yep. So you can go there; PRs are welcome. This is community-facing: we want you to get involved, get stuck in, play around, and use it on your local kind cluster.

We have tested this (I just have to say this) on an OKD single-node cluster, we've tested it on kind, and we've tested it on a vanilla Kubernetes, a five-node vanilla Kubernetes cluster, and then obviously on OpenShift. We still have to try MicroShift; I think, Christian, that might be next.
Christian: Yeah. And again, right now this is very much geared towards building CentOS Stream CoreOS. As we mature this pipeline it'll become more agnostic and more generic, and in the future we'll be able to, for example, rebuild Fedora CoreOS, or just build your own OS with your own manifest list of RPMs.

We already have internal customer interest here, from colleagues who want to build their own OKD stream just to test out new features and have essentially a specific build for their feature, where they can cut out all the things they don't need and build only what they need. And this is essentially one part of the future OKD Streams pipeline.
Christian: That pipeline will combine this base OKD CoreOS pipeline with another pipeline that builds the images that make up the components of the cluster. So this one builds the base operating system; that's the OKD CoreOS pipeline. And then there is another pipeline, in that same okd-project GitHub organization, that can be used to build all the images that are in an OKD payload. What we really want to get to eventually is making it really easy for anybody to build their own OKD payloads. We will have two officially supported streams; at the moment we only have one, which is OKD on FCOS, OKD on Fedora CoreOS.

Soon we will have a second one, OKD on CentOS Stream CoreOS, which is what this pipeline builds now. And in the future, anybody will be able to create their own stream. I think that is one of the key takeaways here: you can build your own, very much bespoke thing for your purposes, your own base operating system with this pipeline, and then, with the payload pipeline, the payload components.
Diane: Yeah. So this is one of those things where, over time, having been with OpenShift and OCP through all the evolutions, we've always done these dev previews, which have been, you know, "let's get it out with the release": some crazy, wonderful new feature that a customer or an engineer or a community member wants to test and play with. In my mind, what this Streams project lets us do is give us a lot more freedom to innovate, whether it's engineering within Red Hat or engineering resources outside of Red Hat, both at the OS level and at the OCP/OKD level too. You mentioned earlier there are a lot of internal clients for this at Red Hat.

So there's a lot of demand internally to use this, which is great, because that means we have lots of eyeballs on it too. But we've also heard from outside of Red Hat that there's lots of interest in custom OKD builds, and this, I think, is why this project is so exciting: this paradigm shift to focusing on Tekton pipelines and creating things that the community can build and fork and use to create OKDs for whatever bespoke purpose they have.

You touched on something else, Christian: we can continue to have the Fedora CoreOS build of OKD, because there are some bespoke things on the Fedora side, but the CoreOS layering, you touched on that a little bit. Can you talk about the kind of freedom that now gives us, and how this showcases some of that?
Christian: Absolutely. What this pipeline that we just saw does is run CoreOS Assembler, and CoreOS Assembler is already the tool that is used to build Fedora CoreOS. The official Fedora CoreOS release isn't quite ready to run Kubernetes on; it doesn't have the dependencies needed to do that and to run OKD.

So what we do, and what is now possible, and what we will be using with OKD 4.12 and going forward from there, is take the official Fedora CoreOS release and then, using CoreOS layering, change that base a little bit.
Christian: We add a few changes and a few RPMs on top that we need as dependencies for running OKD, the cluster, on top of that base operating system. The great thing about CoreOS layering is that it works just like any container build you're used to: you have a Dockerfile or a Containerfile where you just install more packages using rpm-ostree. You have a FROM directive importing Fedora CoreOS, because Fedora CoreOS is now available as a native container image; you import it through the FROM directive, then you just rpm-ostree install a couple more RPMs on it, and then you commit that to a new, derived container image. CoreOS layering in the generic sense is really useful: you can manipulate the base operating system. You have a container image that wraps the entire operating system; it has everything in it.
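A minimal sketch of the Containerfile pattern Christian describes (the package list is hypothetical; the base reference is Fedora CoreOS's published container image):

    # Containerfile: derive a custom OS image from Fedora CoreOS
    FROM quay.io/fedora/fedora-coreos:stable
    # layer extra dependencies on top, then commit the new ostree layer
    RUN rpm-ostree install cri-o kubelet && \
        ostree container commit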
Christian: The bootloader, everything; it's the whole operating system, but shipped as a container image. And there's a whole bunch of benefits to this: we can manipulate it in a container build, which is what most developers are used to nowadays, but we can also distribute it through standard container registries, and I think that's another great benefit here.
Christian: We can essentially ship a bootable image as a container image, which makes this much easier than uploading different boot images for each platform. rpm-ostree now also supports creating these container images and rebasing to these container images.
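On a running rpm-ostree system, that rebase is a single command; a sketch with a hypothetical image reference:

    # point the host at a derived container image, then reboot into it
    rpm-ostree rebase ostree-unverified-registry:quay.io/example/custom-coreos:latest
    systemctl reboot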
Christian: We haven't talked a lot about rpm-ostree today. Just to note very quickly: CoreOS Assembler essentially wraps rpm-ostree and runs rpm-ostree compose, which is the command that creates the operating system, and that is what we run in Tekton. I didn't want to do a whole show on it...
Diane: We're going to go even deeper with Colin Walters at the upcoming OpenShift Commons Gathering. He's going to give us a lightning talk, which I think will be amazing (getting him to say anything in less than 10 minutes), but he's going to dive deep on that, and we'll have a recording that dives deep on the CoreOS layering for you all shortly, sometime in early October.

The other thing, and I just want to make sure everybody's clear on this, is that this is the CoreOS pipeline. So the next thing that Christian and Luigi and everybody has to do is tuck this into the existing Prow pipeline to do all the testing and everything. Maybe you could talk a little bit about that, because this is part of the OKD working group initiative, but there's this whole other thing that has to happen now to get the MVP out the door. So maybe a little update on what the status is and how that's all going to work.
Christian: Absolutely. Currently, as you may know, we have the OKD Prow instance, which is also used as the OpenShift CI system; it's really the OKD build system that we use as CI for OpenShift as well. But from experience we know that Prow is a bit difficult to work with, and it's not a system that has a big community, so it may be hard to find resources on how to set it up and how to use it. It is a very powerful system, but it's also not the easiest to work with.

That's one of the reasons we chose to implement this pipeline with Tekton primitives: the community is already there, and we will fit right in. What we will be able to do with this is use Tekton tasks and pipelines and integrate them with Prow at first; we may call Tekton pipeline runs from within Prow. But eventually we want to migrate over to a more Tekton-native setup, where most or more of the OKD build system is Tekton instead of Prow, because we think that is more composable and more obvious to work with. So our goal really is to make Tekton the future OKD build system and move away from Prow over time.
Luigi: Yeah. The nice thing I was just going to mention about that is that the barrier to getting started is so minimal. As we've shown you with kind: once you've installed kind, I think I issued three commands, to install Tekton, install our pipeline, and actually start the pipeline run, and you're up and running. Then you can go ahead and change the actual scripts within each task. So it is extremely flexible and extremely easy to run, and we're hoping that because of that ease of use, momentum and adoption of the pipelines will take hold. That's a good point you bring up, Christian.
Diane: Yeah, because we've seen people building OKD for bespoke versions of OpenStack and other things. And I think the community's hope, in the OKD community, is that once we have these Tekton pipelines, people will either leverage the Operate First community, as we are, or use these Tekton pipelines locally or on their own clouds, and start giving us a whole bunch of feedback on OKD, and on things that we can add on top of OKD, so we start innovating in the upstream as opposed to waiting for things to trickle down from OCP.

This opens up so many possibilities, and in some ways gets rid of what I would call some technical debt that we probably have in the engineering departments across Red Hat, trying to keep these dev-preview things going beyond just dev preview. So I'm really excited about the possibility of driving more innovation into OKD and OCP and Kubernetes and all of the other related projects, including Tekton, because it's wonderful to be able to collaborate with yet another community to make this happen.
Christian: Absolutely. And one thing I wanted to say is: stay tuned for our SCOS MVP, the OKD on CentOS Stream CoreOS minimum viable product. Our first version, to prove that it works, is coming soon. We will also be releasing this pipeline and the payload pipelines from our okd-project GitHub organization to Tekton Hub, to make them available to a broader audience and gather more feedback. And obviously PRs are welcome: if you see something that isn't ideal, or you want another task that does something related, please feel free to open a pull request; we will be very happy to take a look at it. If you already have a lot of Tekton experience, please also just review the code we've put in the repositories. We were all very excited to finally get to use Tekton for work, but I, at least, hadn't had a lot of experience with it before starting this MVP, so we'd be very glad to take your reviews on what we have.
Diane: Yeah, absolutely. The OKD working group meets on Tuesdays on a regular cadence; you can get that from the Fedora calendar. You can find us on Slack as well, in the Kubernetes Slack; #openshift-users, I believe, is the correct channel. So look for us there, and we'll probably start popping into the Tekton Slack channels, or IRC, or whatever the Tekton community is using these days. We're really excited to do this and to bring it to the forefront.

I can't thank Luigi and his team, and Zach and Michelle and everybody else on the customer-facing engineering teams, enough for all the work they've done here, and Neal Gompa and Jamie and a bazillion other people in the OKD working group for making this collaboration happen. There's also another whole group to thank: the CentOS Cloud SIG, for taking this project on with us. We're looking forward to all this wonderful cross-community collaboration, to seed some future innovations, to really streamline the build processes for OKD, and to make it available everywhere.

So Christian, thank you again for your time. This is just one of those projects that highlights why open source rocks so much, and...