From YouTube: OKD4 Release Update and Road Map Clayton Coleman (Red Hat) - OpenShift Commons Briefing
Description
OKD4 Release and Road Map Update
Kubernetes Distribution
Clayton Coleman (Red Hat)
OpenShift Commons Briefing
https://okd.io
https://commons.openshift.org
https://try.openshift.com
A: Everyone's on board right now, so we're gonna get started. Hello and welcome, everybody, to another OpenShift Commons briefing. This time Clayton Coleman, who is our lead architect for OpenShift and all things Kubernetes at Red Hat, is going to walk us through an update on what's going on with OKD4, and the vision that we're looking for input on from the community.
B: Good morning, everyone. As Diane said, my name is Clayton Coleman. I've been involved in OpenShift for a very long time, and in Kubernetes for a very long time as well. There have been a lot of discussions recently, with a lot of people talking about this in various forums, so I wanted to cover where we're at with OKD, how we got to this point, what I personally can do better in the future, and what we as a community can do better.

If there are any comments or feedback, there are lots of ways to reach me. I've talked to a lot of people in the last year about where OpenShift is going, but this will serve as a summary of where we're at, and then we can take steps forward together. So first off, I will start with an apology; as I said in the email, there are really two apologies here.

The first apology is that there's nothing novel and juicy here that isn't in the email I sent out, although we can certainly generate it as a group, and if there's anything that isn't captured I will make sure to relay it in other forums. But the real apology is that OKD kind of got away from myself and others.

We didn't communicate as openly and as transparently as we should have and have historically done, and so one of the things I'd like to do is use this as a reset: okay, we've gotten into a bad habit with OKD. What can we do a little bit better to communicate more, be more deliberate about the kinds of feedback we take, and open up better channels of communication and collaboration for OKD going forward?
For a lot of folks who care about OpenShift, probably most of what I say shouldn't be a surprise. In February of last year, Red Hat acquired CoreOS. For us it was a really exciting company: a lot of great people worked at CoreOS who believe very strongly in open source, in Linux, in Kubernetes, and in the tools and patterns around the ecosystem.

For a lot of folks who worked on OpenShift it felt like a very natural fit, but there was a lot of early discussion about how we had an opportunity here to really shake things up and do better. There was a lot of energy on the CoreOS side and a lot of energy from folks involved in OpenShift. Over the months following that, as with any acquisition at Red Hat, we tried to open source everything; we always do, but sometimes it takes time.

There were a few things CoreOS had, specifically around Tectonic, that couldn't be directly open sourced, or that we didn't want to open source without thinking it through. So we went through this period where the plan was: acquire CoreOS, develop a brilliant plan, and then ship that plan. That's not actually what happened. As most of you can probably guess, the real world is more complicated than that. So we spent a lot of time in April and May trying to pull things together.

We had a lot of things that everybody liked across all these different technologies, things like Atomic. There are people out there who like things in RHEL, people who like quote-unquote pure upstream Kubernetes, and people who like the opinionated OpenShift approach with its security posture. There were a ton of tools being developed in the ecosystem, and we were starting to see the blossoming of the Kube ecosystem.

So a lot of time in March, April and May went into exploring what we could do, and then reality set in. When CoreOS was acquired we were at about OpenShift 3.9, if I recall correctly, and we ended up shipping 3.10 and 3.11. The CoreOS folks had been working on operators and the operator pattern, which is really just the natural evolution of the Kubernetes mindset.
You have some API that says what you want the world to be like, and a controller that goes and makes it so. Sometimes that world is "I want a Postgres database to exist," and sometimes it is "I want all of my applications to have secrets, or private keys they can use to communicate securely." So the operator work took some time, and there were a lot of things we wanted to do in 3.10 and 3.11 to continue evolving OpenShift, so we got a later start than we would have liked on the question of where we go next.

But that was good, because it gave us time to really talk through operators and understand where the community was going. There was a ton of work that actually had to be done in Kube, and I'll talk a little bit about that later. We got 3.11 out the door and then said, okay, let's just get this thing wrapped up before December. That did not happen. We did do a KubeCon demo in December, on the 5th, or I think it may have been December 9th; I might be misremembering.

At that KubeCon demo we showed an OpenShift 4 with all the pieces put together. It wasn't quite duct tape and baling wire; a lot of the pieces were actually solid and fundamental, but we were still closing out the details. So the next day we all got together and said: okay, let's actually turn this into something that's stable and supported, finish all the edge cases, and close the gaping holes that we think still exist.

Some of this was a challenge of time and focus. Some of it was changes that happened during the evolution of OpenShift, and I'll talk about some of them. Anyone on this call can absolutely call me out and say we could have done a better job; that is absolutely 100% true, and going forward I want to make sure that we at least make a better effort as a community to periodically checkpoint a roadmap. There were also some other changes that came along with the CoreOS acquisition.

The other admission is that we all got addicted to Slack, and Slack is evil and you should never use it, because it gets you out of a certain mindset. So we're going to try to do a better job of having discussions in the open, doing a little bit more to support our external Slacks, and making sure that our regular forums for communication are being used as effectively as they could be.

So, the timeline for OpenShift 4: if you've seen the OpenShift 4 material, it was a mindset change, and very little of this should be a surprise. We did at least communicate these sorts of changes, but I think the implications for OKD are interesting; that's what the email and this talk are about. Early on there were a lot of lessons from Container Linux and Tectonic that we thought were relevant, and a lot of lessons from the then-current OKD 3.9, 3.10 and 3.11.
We were getting a lot of feedback: upgrades could be better, the install was complicated, there were a lot of moving pieces, and day 2 was harder than it had to be. So we drafted what I would call the quote-unquote rules, to get us to think about what is important, and we talked about some of these at Red Hat Summit last year. Again, hopefully not surprising, but I want to talk about the why, and how they manifested. The first rule was: actually go create something useful and relevant.

That's probably what all of us aspire to; it's rule zero for a reason, and we don't often think about it. We all want to create good things. The history of where we were: OKD and Kubernetes had both evolved, and in this process we got to thinking, okay, we want to create something good, but we also want to create something better than what we have today. And the "better" was: how do we learn from all of the mistakes we've made to make something really impressive?

That led to the idea of a distribution of Kubernetes. We've said Kubernetes is potentially as important as Linux in terms of providing a standard way to run applications across multiple servers. I certainly don't believe that Kubernetes will run all software, but it has a decent chance of being as impactful and as privileged as Linux, and that's a much less controversial statement than it would have been three or four years ago, when we were getting a lot of shade from folks about whether Kubernetes was going to work out: "maybe you should just use [insert our thing here]."

But just like with Linux, sustained open-source communities that provide long-term stability and commitment to moving these technologies forward are really important if Kubernetes is the core. In the early days, Kubernetes was all you needed. A lot of what I would call the distributions of Kubernetes that are out there are distributions of an installer, which is great: you have to have an installer and some commitment to update. But I'd say the thing that truly differentiates a Linux distribution from where we are in Kubernetes is that Linux distributions have tooling, patterns and processes that solve the problems that enable long-term support and scale of those platforms. For instance, tools like rpm, apt and yum were novel at one time. Before those, you could put all the bits together, but you didn't have a way for people in the community to come in, package their software, and get it into the chain. You needed tools like Koji and the Fedora build systems to do CI testing and pull all the pieces together.

So where we are is the very early stages of Kubernetes. One part of "create something good" was: if we don't think that what we have today is really the set of tools you need for a sustained Kubernetes distribution, what would those tools be, and how can we build them? The corollary to the first rule is that there are going to be gaps. Anytime there's a technology that we accidentally left out (which happens; sometimes it gets lost in the shuffle, or it's on the list but we cut it in order to get something stabilized and proven to work), there are trade-offs, and the transition from OpenShift 3 to 4 definitely involved some of those.
I regret as many of them as you do. So a call to everybody in the community: if you see something that's missing, say so; it may be that no one has noticed. As a community and as an open-source project, we need documentation of what changed and a good overview. The last year or so was very busy for a lot of people, and I think there are some things we just missed, because we're all human.

So what are the real rules? They came down to: use Kubernetes. When we started OKD, in the very early days, Kubernetes was good at a very limited set of things. The first couple of lines of OpenShift code were laid down, I believe, before v1beta1 even showed up in the Kubernetes API; I think the kubelets were still writing directly to etcd.

It was a very early time, and back then Kubernetes was good for a very small set of things. But we're better now: deployments got added in 1.3 and actually started working correctly in 1.6; in 1.7 and 1.8 we added stateful sets; volumes and persistent volumes mostly actually work these days; and we added extensibility. Over the evolution of Kubernetes it has gotten better and better at being software for running software. It has also gotten more complex, which is something else we'll talk about. But as a rule, Kubernetes is capable of being a better hosting environment. If I were going to identify what makes Kubernetes novel as an ecosystem, compared to previous open-source environments that run across multiple machines, I would say the Kubernetes focus has always been about running software, all the software. Sometimes you have to meet Kubernetes in the middle, but there's no arbitrary line.
Rule one was: use Kubernetes to run Kubernetes itself. Obviously operators are a big part of the story we've talked about so far, because operators are little bits of the Kubernetes mindset applied to running software on top of Kubernetes. Going up a meta-level, operators encode domain expertise into code in a way that works well with the declarative API: I want there to be a Postgres database; I want an object storage bucket; I want a security policy applied. It still takes a lot of work to write an operator, but we want to make it easier, because making operators easier to write makes it easier for people out there to build the next generation of tools and technologies. Things like Istio and Knative are big, complex projects, and there are a thousand other, smaller operators that can provide just as much value if we can make it easier.
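To make the declarative pattern concrete, here is a minimal sketch of what such a custom resource might look like. The `PostgresDatabase` kind, its API group, and its fields are hypothetical, invented purely for illustration; they are not any specific operator's real API.

```yaml
# Hypothetical custom resource: "I want a Postgres database to exist."
# A controller (operator) watching this resource would create and
# maintain the actual database to match the declared state.
apiVersion: example.com/v1alpha1
kind: PostgresDatabase
metadata:
  name: orders-db
spec:
  version: "11"           # desired Postgres major version
  storage: 20Gi           # requested persistent volume size
  replicas: 2             # primary plus one standby
  backups:
    schedule: "0 3 * * *" # nightly backup, cron syntax
```

The operator reconciles the cluster toward this declared state: if the database is missing it creates it, if the version drifts it upgrades it, and the user never scripts the individual steps.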
So this is selfish (make our own lives easier), but it's also idealistic: use operators ourselves, use Kubernetes ourselves when we build OpenShift, so that if we hit a problem that breaks it, we go fix it. There are a ton of examples of this in OpenShift 4. I have another talk I'm writing that I'd like to give at some point in the future, on all the lessons we learned while writing OpenShift 4, and one of them was: when you actually depend on Kubernetes to work, you have a lot fewer excuses about fixing these problems. A bunch of great stability and quality work went into Kubernetes as a result of this exercise, and I think there's a lot more to come. Just one of the things I've seen folks on the ingress team working on: service load balancers don't quite work right for graceful shutdown, which is kind of a big deal. For a long time there was a kind of user/developer separation between the people who use Kubernetes and the people who develop it, and I believe strongly in making everybody who develops Kubernetes care that it works for their own use cases as well as other people's.

Rule two, and this was just lessons learned, is: if an update isn't simple, people don't do it. We're in a weird state where every software project gets complex enough that everybody says, "oh, I don't know if I want to update it," and we start talking about things like: "I'll just create single-use clusters, and I'll never trust that a cluster has to stay around, because I'll create it and then delete it without ever updating it." Some people run single instances on their laptop, and you obviously can't do a rolling update of a single instance. So there are a lot of design points in this space. For the folks I talk to, their clusters don't just have a fixed lifetime, but there are a lot of good practices that come with not treating your clusters as long-lived, and we always wanted to strike that balance. So we said: okay, we're going to limit some of the choices around how you run the cluster and the configuration topologies, and maybe we'll expand those in the future. But the update model has to work every time, and it has to be totally predictable and totally repeatable.

A bunch of consequences came out of that. For instance, if updates always have to work, then people have to take API versioning really seriously, and Kubernetes has always done that, and OpenShift has always done that: we build something, we write it once, and we will support it forever, or at least until there's something better that we can migrate people to transparently. That's hard. It requires a lot of testing, and several things fell out of it.
I don't know about everybody else, but when I look at the state of software today, I'm more worried than I was five years ago. We have a lot more tools and a lot more things out there, and it's hard to trust the hardware, hard to trust the operating system, hard to trust the infrastructure, and hard to trust the extensions that people run. I think that's only going to get worse; we haven't seen any sign that it's going to get simpler, we're just going to keep building more and more complexity. So I really believe in the CoreOS mission: the only way out of this is to make sure that things can stay up to date. And to stay up to date you need to be able to trust updates, you need to be able to deliver updates, and everybody has to believe that the updates will work. In a community setting, that means a lot of automation.

Some of the things in OpenShift enable better sharing of information about the cluster, through things like telemetry, with the open-source communities. If I have a problem with my cluster, I personally want somebody else to be able to see that, so that someone else who's having the same problem can see I'm having it as well. There's a lot of manual back-and-forth when we report issues and fix bugs, and I'd like to do more to help the community share expertise about when problems are happening.
The third big thing in OpenShift 4, which in a sense is also an evolution of what CoreOS had done, concerns the operating system. Container Linux was the idea of a simple, auto-updating operating system geared towards fitting into a cloud-native infrastructure, which means it can be easily replaced. It's much less about configuring things on the machine and more about deciding what the configuration is at a fleet level. That got integrated into Tectonic: as a Kubernetes distribution, you could do updates and integrate them.

One of the discussion points that came out of the OpenShift and Tectonic group was: how can we go the next step? And the next step, in my opinion (and I think we've got some experience with it now, and it has a lot of advantages), is that a machine that runs workloads for the cluster is not separate from the cluster; it belongs to the cluster. It participates in the cluster. The Linux kernel you're running is one of the biggest factors in the success or failure of the workloads, because every workload inside a Linux container is talking to the Linux kernel, and the Linux kernel has a really strong API. We've talked about backwards-compatible APIs; Linux is a shining model of this. But there's a bunch of other things: the configuration on that node, what kind of disks are configured, the network configuration. What happens when somebody updates your overlay network driver or your overlay network daemon and that causes a disruption to the app? Thinking about the node, the OS, and the kernel as separate from the Kubernetes cluster misses the real opportunity, which is that this should function as one harmonious unit. So in OpenShift 4 the operating system payload is delivered as part of the update: if you update the control plane, you also update the machines.

That has some initial disadvantages; it means you're restarting machines more often. But it also means (and this is another lesson from Kubernetes) that if you do something all the time and it works the first time, you're pretty sure it's going to work the ten-thousandth time. Compare something you never do, like restoring backups.
You can do all the backups you want, but if you never test the restore, you probably don't have a very good backup system; you just have a very expensive tape drive. Doing things all the time is a key component of Kubernetes' success, and I think one of the things we've learned over the last year of experimenting with this approach of integrating the OS and the machines into the cluster is that you get a huge advantage by doing it.

In OpenShift 3.x, I'm pretty confident people had problems with node upgrades: sometimes they went great and sometimes they didn't. The next level down is what kinds of problems happened. Some people had problems where draining didn't quite work correctly, and some people had problems where the API server went down because the Ansible script we were running didn't take some particular workload into account. Rule one was that we're going to have more things running on this cluster, and if we have more things running on the cluster that have to keep working for the cluster to keep running, it's really important that if updates are happening all the time, the OS updates happen all the time too, and work every time. As a result of a lot of this work, I am so much more confident in our update process on nodes: a new update is available, the OS needs to get updated, the machine gets drained, and workloads get rescheduled onto other nodes while it updates.
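In OpenShift 4 this fleet-level, declarative view of node configuration is expressed through the machine-config API. The sketch below shows the rough shape of such an object; the sysctl value and file path are invented for illustration, so treat the details as an assumption rather than a recipe.

```yaml
# Sketch of a MachineConfig: declarative, fleet-level node configuration.
# The machine-config operator rolls it out by draining and rebooting
# nodes one at a time; the role label selects which pool it applies to.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example-sysctl
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - path: /etc/sysctl.d/99-example.conf
          mode: 420            # octal 0644
          filesystem: root
          contents:
            # data URL carrying the file body; hypothetical tuning value
            source: data:,vm.max_map_count%3D262144
```

Because the desired node state lives in the cluster, a replacement machine converges to the same configuration automatically; no node accumulates hand-applied tweaks.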
Instead of nodes being these very special things that you set up and keep updating, with tweaks and accumulated cruft, integrating the OS and making it work well for the cluster lets us say: the cluster knows what the nodes should be, and no machine is special. On clouds, you can throw away a VM and get a new one, and the machine looks the same. On metal, making workloads move off gracefully is really the same problem as what happens when you shut down a VM instance. So we're trying to line up these problems, even though they're subtly different, by integrating the OS: by having the OS be a step above where it is today with Fedora and CentOS and RHEL, able to update atomically and able to get its configuration from the cluster.

You get a really exciting opportunity to leverage both the in-place update model and the "throw that thing out and get a new one" model. Throwing the thing out and getting a new one works really well on clouds and VMs, and wiping it all down to the wire for an update and bringing it back up works really well on metal.

Of all the things we've done and seen value in, I think this is one of the hardest changes, because everybody loves their machines, loves to customize them, and loves to hack around on them. Once you get past that mindset, you treat nodes like you treat pods: they come and go, and if a problem happens we just get another one, or wipe it back down to a clean slate and bring it back up again.
A second consequence: if we're managing more, and updates have to be simple, and operators exist, then, as I talked about with API stability, the idea of an operator is kind of antithetical to "set up the configuration at the beginning and never change it." That's good, because one of the things people hate is configuration that has to be set at install time; there's always some flexibility trade-off. Some settings can't realistically be changed: if you're running on Amazon, you obviously can't change the fact that you're running on Amazon from inside the cluster. But you can change how many machines you have, and in the future we want to let you change the size of the master nodes just by changing a global config setting that says what the size of the masters should be; a rolling update of the masters happens, and you've got bigger instances.

Configuration like your auth settings should change transparently and automatically. You stand up a cluster, and a lot of people who've tried OCP 4 will see that there's an out-of-the-box default, and we encourage you to set up real auth afterwards. That really tackles both halves: day one just works, and day two lets you change, so you're not locked into choices made at install time, where you had to get all of these settings perfectly right or the cluster install failed.
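As an example of that day-2 change, OpenShift 4 models auth as a single cluster-scoped object; a rough sketch follows. The provider name and secret name here are illustrative assumptions, not values any cluster ships with.

```yaml
# Sketch of day-2 auth configuration: edit the cluster-scoped OAuth
# object and the authentication operator rolls the change out live,
# with no reinstall required.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: local-users          # illustrative provider name
      mappingMethod: claim
      type: HTPasswd
      htpasswd:
        fileData:
          name: htpass-secret    # hypothetical secret holding the htpasswd file
```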
The opinionated part, as I alluded to at the beginning, is going to hurt a little bit. We had to take some choices away, and I can absolutely guarantee you that some of the choices we took away are things people are going to want back. I think almost all of those choices and changes are possible, but they might require more work, and that work is doing the things that before we might have danced around by saying "we'll just add an Ansible variable for this." Well, 2,400 Ansible variables later, we said: maybe we should move away from all those Ansible variables. That means figuring out the config that really matters: which auth configurations you want, which credentials nodes should have to communicate with remote registries.
Hopefully, by the time we get OKD4 out, there's an API viewer: you can go look at the deployments and ingresses and persistent volumes, see all the API fields and what they mean, and do that exact same thing for the configuration of the cluster. So you can see: these are the config flags that I can change live on a cluster to change its config. Each of these pieces builds on the others. We know this area probably has the most change from a 3.x cluster, but I have to believe that by doing this we can make the overall experience better, and we still have room to go fix the problems we see due to omissions, the things we accidentally left out. So those were the quote-unquote rules.
B
They
were
technical
choices
based
on
the
feedback
that
we've
gotten
across
the
community
in
OK
de
NOC
P
in
kubernetes,
in
cryo,
in
15
other
communities
in
concert
with
partners
and
people
who
are
just
looking
to
integrate
everybody
who
wants
kubernetes
to
be
easier.
Those
are
kind
of
that.
Well,
here's
here's!
How
we
can
move
this
forward
so
for
the
ok
D
for
ingredients.
I,
think
this
is
the
question
where
I
don't
want
to
preach.
B
Ask
I've
got
some
diocese
they're,
just
my
biases.
They
don't
mean
that
this
is
the
right
thing
to
do,
and
I
think
I
want
to
have
the
discussion
with
with
all
of
you
with
folks
who
may
not
be
on
this
call,
but
are
watching
it
later.
The
people
who
read
the
email
do
we
all
agree
on
the
sets
of
things
that
will
move
out:
okay,
t44,
because
okay
t4
is
a
community
project.
B
It's
community
supported
it's
a
smaller
subset
of
people
than
the
people
who
care
very
deeply
about
kubernetes,
and
so
you
know
we
want
to
be
able
to
take
the
people
contributing
to
kubernetes
and
to
cryo
and
to
Linux
and
system
D
H
a
proxy
and
bring
those
together
in
a
coherent
way.
We
can
only
do
it
if
we
all
agree
and
we're
all
working
on
the
same
tools.
We've
got
some
patterns
that
we
can
benefit
from,
but
I
think
that.
B
B
The build tooling, the release tooling, everything we have today for building OCP 4, was always intended to be able to support OKD4, however it went. Obviously, over the last year there were various decision points that caused us to ask what we could and couldn't do, but every step along the way it has remained open; there's nothing that's private. I think there's some process that needs to be more public than it is today, but as a rule we've tried to keep everything in the open-source domain; it's just that putting it together requires some choices.

So I think one of the things we need is a philosophy. The OKD3 philosophy I might articulate as: an enterprise- and developer-friendly Kubernetes that is easy to install and configure. Our bar for "easy to install and configure" was pretty low in the early days of Kubernetes, and as Kubernetes got better, that bar went up. I'd like to keep that mission, but the security aspect, and the fact that we need a continuous process rather than a very iterative, waterfall-style process, led me to propose that we can build an OKD distribution that is truly a distribution: it has the tools that allow us to mix and match, and that allow people to support software long-term. That's operators and CI and process.
When something breaks, we go fix it and roll out an update. That's a big step, but I think it's something that would truly differentiate OKD from every other approach I've seen out there: if we can trust the updates to happen, it really gets to that original CoreOS mission of securing our systems by having a process that rewards us for being able to continuously deploy ecosystem components.

Operators are a big part of that, and we want everybody in the ecosystem to feel like they can quickly and easily automate the features that are important to them: anything you need to quote-unquote run Kubernetes, which today is kind of its kernel. But then you've got ingress controllers, OpenShift has the registries, there's logging and metrics, and people want databases as a service and cloud-native integrations. Each of these pieces needs something to manage it. I think operators are kind of the RPM-repo equivalent of a Linux distribution. If we can do a good job of making it easy to build and distribute operators and keep them continuously up to date, and you trust that and can reliably update, that's something we haven't seen before, a truly unique opportunity.

Making it easy to extend: this one's tricky, because everybody has a different opinion, and it's not impossible, but it is difficult. I've left this one open; what I've described is just my approach, and I'd love to have feedback. I've gotten some privately, and I know there have been discussions in other forums. Take some time and think: does this mission resonate with you? Do you believe in a different mission? If you do, let us know.
One of the tricky choices when integrating the OS was: okay, we need an OS. Originally (and this was a Container Linux evolution) there was some discussion about whether Red Hat CoreOS would be an open, upstream-based distro that was continually updated. Eventually, about halfway through the process, we decided there were challenges as well as advantages to that, and it ended up being based on RHEL. I think that's the biggest blocker I see between where we are now, with OCP 4 based on top of RHEL CoreOS, and where we need to be.

Fortunately, there's the Fedora CoreOS team. Ben Gilbert is helping to lead that; there's a community and a community tracker, and they're working very diligently to evolve the standard set of Container Linux tools, and to correct some of the gaps we have known about from Container Linux, into something that can be released. That's coming really soon.

One of my suggestions would be: we could go build something entirely novel, but being able to use Fedora CoreOS, continuously updating, benefiting from the Fedora kernel, getting automation around it, and pulling in all the pieces they worked on with Container Linux and Atomic, I think there's an opportunity there. The value of a Fedora CoreOS-to-OKD4-style distribution is that we can really line up those release cycles.
We can work in the open, and there's already a set of folks who believe really strongly in that mission. But again, like all the other points I'm bringing up today, this is a suggestion; I'm certainly open to options. How Fedora CoreOS evolves works just like any other Fedora community: you show up, you have opinions, and stuff gets done. That's how all open source works. So if you don't like some of the things Fedora CoreOS might be doing, get involved.

There'll be some follow-up discussion on the lists, at least if we do go down this path, to make sure it's a path that makes sense to everybody. One of the discussions would be that we can probably pull together a quick prototype fairly easily from the Fedora CoreOS preview releases they're working on, and show how this would work, to get some first steps going. I don't think it's terribly far from where we are, which is another thing that appeals about it.

It may be possible to get OKD working on Fedora as it is today, or CentOS as it is today, or Ubuntu as it is today, but it would probably lose what I consider one of the biggest advantages: the OS integrated with the cluster. We need a way to boot the cluster up and tell it what it should be, and you need to be able to update that OS atomically and then roll it back. Those are achievable with other technologies, so if there's somebody who's really passionate about that and wants to leave the door open, or make a case for it, please do. I think Fedora CoreOS is a pragmatic choice, but it's certainly not the only one, and I don't want to be the person who says no because that's not how we do things. So that discussion is something we should also have.
B
We
need
a
building
release
process
so
right
now
the
CI
systems
for
us
before
are
open-source
they've
always
been
intended
to
be
able
to
be
used
for
okd.
Do
you
open
a
PR
to
one
of
the
open
ship
repos,
there's
a
set
of
PRS
that
are
set
of
PR
jobs
that
run
those
are
standing
up
clusters?
They
happen
to
be
doing
it
on
top
of
bits
that
are
closer
to
they
happen
to
have
in
them
a
rel
core,
OS
payload,
but
the
actual
images
themselves
in
CI
are
built
from
the
upstream
code.
B
The
ones
that
don't
use
any
packages
are
just
using
the
ubi
base
images
and
that
ubi
base
image
is
the
same.
That's
available
for
free
use,
so
one
option
would
be.
We
can
continue
down
that
path.
Use
the
existing
release
tooling,
add
some
more
PR
and
release
jobs
on
top
of
the
VI.
If
there's
any
gaps
in
terms
of
dependent
rpms
and
we
can
solve
that
with
sometimes
pad
cigar
with
fedora
and
the
the
general
idea
would
be.
B
We
could
use
the
same
release
tooling
and
pipeline
that
we
have
which
runs
the
tests
and
stand
up
the
clusters
and
Red
Hat,
certainly
willing
to
absorb
some
of
the
CI
cost
if
there's
others
out
there
who
want
to
help
integrate.
Ci
I'd
love
to
start
that
discussion
and
just
like
kubernetes,
where
we
all
individuals
and
companies
contribute
to
running
CI
as
part
of
that
integrated
flow,
we
have
the
same
capabilities
in
OCP
and
I.
B
Don't
think,
there's
anything
that
would
prevent
us
from
having
a
broader
set
of
folks
contribute
and
integrate
with
our
CI
there's
other
options.
We
could
continue
this
and
toss
past
sig
work
and
do
like
kind
of
a
quote
unquote
downstream
rebuilds
from
what
the
CI
validates
today.
That
has
some
advantages
and
some
disadvantages.
We
can
reuse
the
simplest
infrastructure
or
we
can
put
introduced
in
fedora
as
well.
We
use
the
fedora
infrastructure
that
will
introduce
some
points
of
difference.
B
One
of
the
things
that
I've
tried
to
do
is
everything
that
we've
ever
done
and
OCP
has
always
been
with
the
mindset
that
we
want
as
little
difference
as
possible
from
the
okd
bits.
Where
the
point
of
differentiation
is
on
lifecycle
and
support,
not
on
code
and
I,
think
we
can
stay
pretty
close
to
that.
But
again,
that
is
a
point
of
discussion
that
I'd
love
to
hear
feedback
on
and
all
of
the
channels
that
we
have
and
finally,
like
we
need
volunteers,
we
need
people
to
help
out.
B
B
Go
through
the
discussions
around
what
we
can
do
to
get
a
roadmap
instead
of
work
items
have
the
discussions
about
some
of
these
points
of
integration
and
these
points
of
choice
that
we
can
make
as
a
community
and
then
really
ramp
that
up
so
that
you
know,
maybe
in
a
few
weeks,
maybe
in
a
few
months
we
can
get
to
a
point
where
we
can
see.
You
know
Katie
for
that
people
are
happy
with
and
we
can
keep
iterating
in
parallel.
B
So
I
don't
think
we're
that
far
in
a
short-term,
but
I
think
it
really
depends
on
what
where
people
want
to
see
the
project
go,
and
that
is
fully
open,
and
this
is
the
beginning
again
like
open
shift
for
the
mindset,
the
technologies,
everything
that
comes
from
core
last
everything
that's
happening
in
Fedora
everything
that's
happening
in
kubernetes,
these
all
reinforce
and
play
on
each
other.
I
have
some
really
strong
opinions.
I've
always
had
those
strong
opinions,
but
I
want
to
hear
everybody
else's
opinions.
So
please
get
involved.
B
B
Probably the easiest place in the near term is going to be the mailing lists, specifically the dev list at openshift.redhat.com, but all of these other channels are certainly valid. We'll answer any questions we can from the chat, and then hopefully have some discussions and build that community momentum on the lists. And then, if we need more of an iterative execution model, we can absolutely do that with additional process and tools. That's all I have. Diane, do we want to take questions?
A
And
Derek
Carr
has
been
sitting
and
answering
some
of
the
things
on
the
pending
list
in
the
chat
rather,
and
please
hang
on
everyone's
there
and
and
I,
don't
just
want
to
reiterate:
I
mean
I
would
really
love
it.
If
we
could
do
this
in
the
mailing
lists,
and
so
we
can
keep
track
of
the
Congress
and
have
commentary
on
it,
this
slack
channel
doesn't
doesn't
save
history.
Just
though
people
know
the
common
select
channel
I
mean
the
common
slack
channel
yeah
and.
B
A
So, Derek, you've been answering a lot of questions here; if you want to unmute yourself, maybe recap some of the PDB conversation.
C: There was some confusion or concern about what this means for the impact on the end-user applications running on the cluster. In the chat we tried to clarify that the upgrade process respects disruption budgets. If, as a cluster operator, your end users' applications are not using pod disruption budgets and are not running highly available, then of course those applications would be prone to disruption during rolling upgrades. I think what might not have been captured is that in the OKD4 story the cluster operator still has the power to choose when to initiate an upgrade. So if you need maintenance windows or things like that, you still have that power. The difference is that we're trying to eliminate the toil and burden and reliability skew that occur when you manage the OS separately from the control plane components, and instead get to a much more appliance-style view, where all these things are tested soup-to-nuts together. I think that's the process Clayton talked through.
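A PodDisruptionBudget is how an application opts into that protection during drains. Here is a minimal example; the name and the `app: frontend` label are illustrative.

```yaml
# Minimal PodDisruptionBudget: during node drains (including upgrades),
# the eviction API refuses to take the matching pods below minAvailable.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb
spec:
  minAvailable: 2        # keep at least two pods running at all times
  selector:
    matchLabels:
      app: frontend      # hypothetical app label
```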
B: Right. From the early days of Container Linux, the auto-update button was on by default. We want to get to that point for everybody, but obviously it takes a while to build that trust, so for OKD and for OCP we don't actually have anything today that will force auto-updates. One of the discussion points was: how do we introduce the simplest possible thing that gives everybody what they want?

One idea was a window in which we trigger updates, and outside that window we wouldn't. But obviously, if an embargoed CVE comes out and you need firmware updates, and those firmware updates are coming with the OS, you might not want to wait. It's kind of a tough one, because a lot of the big security announcements happen during daytime hours. So, along the lines of what Derek said: we want upgrades to not be disruptive.
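Concretely, in OpenShift 4 that admin-initiated choice lives in the cluster-scoped ClusterVersion object; a sketch follows, with the channel and version values invented for illustration.

```yaml
# Sketch of the ClusterVersion object: the admin picks an update channel
# and, when ready, sets desiredUpdate to trigger the upgrade. Nothing
# moves until the admin (or tooling acting for them) asks for it.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: stable-4.1      # illustrative update channel
  desiredUpdate:
    version: 4.1.2         # hypothetical target release
```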
A: [question from the chat, inaudible]

B: This one's come up a lot. OCP 4 turns on a telemetry subsystem by default. It sends a small amount of data: what version you're at, whether your update is failing, whether your operators are healthy, the alerts that are firing. This is kind of a baby step; Fedora has had similar systems. In OCP it's an opt-out mechanism.
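One way that opt-out has been expressed in OCP 4 is through the cluster monitoring configuration; the sketch below assumes the `telemeterClient` toggle in the `cluster-monitoring-config` ConfigMap, and the exact mechanism has evolved over releases, so check the current documentation.

```yaml
# Sketch of opting out of telemetry via the cluster monitoring config.
# Assumes the telemeterClient toggle exists in this release; verify
# against current docs before relying on it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    telemeterClient:
      enabled: false
```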
We're probably going to clarify some of that. But part of the long-term goal, I think, is this: these are big, complex systems, and the reality is they fail in big and complex ways. I personally (and Derek and I have talked about this a lot, and a ton of other people talk about it) think the point of the telemetry is that there aren't that many of us. There isn't a billion OpenShift clusters out there; there might be ten thousand, or a hundred thousand, so looking at ten thousand of them gives you a real statistical sample. The more we can do to see when people hit problems, and then just fix them, the better. That's part of what I meant about community support: having a way to opt in to telemetry for your test and dev clusters. The set of information that's collected is a whitelist of Prometheus metrics; it's not a lot of data, but it does say "I'm broken" or "I'm not broken." I think the community telemetry aspect has an opportunity to help all of us improve. How else will we catch it when an automated update breaks something? This is a trust thing: if you want to update your software all the time...
...that means you need an automated process. If the automated process ends when we throw the software over the wall from the open-source community to the people using it, then it's on each of you to catch problems yourselves. I don't know that everybody will be comfortable with community telemetry; I think that's a discussion we should have in the community, for sure. But I think the outcome can be a lot better than where we are today. I believe strongly in it.
D
Yeah
to
follow
up
on
that
thanks,
Clayton
I
think
some
interesting
points
we
could
consider
over
time
is
whether
people
can
add
the
OLM
and
opt
in
for
telemetry,
even
if
they're
on
a
gke
cluster
and
reports,
some
metrics
or
to
limit
some
platform
health
on
their
latest
core
kubernetes
features
for
the
upstream
kubernetes
community.
I
know
that's
kind
of
might
be
outside
of
our
scope.
Also
I
know
with
poor
OS.
They
had
a
kind
of
like
a
unstable,
a
feed
you
could
get
on
for
their
for
their
OS
and
I
know.
D
B
B
B: That's a great question. I'll go really deep and then broaden out. One of the challenges I think we see historically with open-source software is that the possibility space is much greater than the software we can all test. One of the things we have talked about was: could you run a single machine in your cluster that runs, say, the next test version of the kernel, with a subset of workloads on it that are either low-risk or demonstrative, and then tie that into telemetry, so that before that kernel comes out of testing you find the issue? It's a small thing, but it's the mindset of letting you make a small, less risky choice instead of one big one.

Some of it is like traffic splitting: you're rolling out a new version and you run 1% of the traffic through it in a read-only fashion, and you find bugs or performance regressions that way, before you say, okay, this component has been seeing the traffic without acting on it, and now it actually starts doing things, but only for 1%. Trying to look at how we can do that, I think, is an option.

It's a great question what Fedora CoreOS looks like there. I would say that if the goal is for people to trust updates, it has to be secure and stable enough that, while there's some risk, the benefit outweighs the risk. Right now the risk is that we all get in that spot...
...where we only update infrequently, and we can forget to update, and we have windows where the whole thing doesn't update, so sometimes stuff gets left behind. How can we provide a locked-down, appliance-style system that's safe enough and secure enough and continuous enough that, for most people in the community, it provides the right set of value? It's going to be a discussion that we have to have; I don't want to force it on anyone, for sure.
A
Wouldn't
close
this
video
and
the
slides
for
the
mailing
list
still
Twitter
and
link
it
into
okay,
video
and
posted
on
slack
as
well,
so
it
should
be
available
in
the
next
four
hours
or
so,
depending
on
how
fast
bluejeans
process
isn't
and
we'll
have
to
do
this
again,
I'd
like
to
see
sort
of
a
regular
cadence
and
this
and
to
bring
in
the
Fedora
core
OS
folks
as
well,
who
are
working
on
this
to
give
us
some
updates.
So
we
can
continue
this
conversation.
A
So
please
do
if
you
can
post
on
the
mailing
list
any
of
your
questions
we'll
try
and
follow
up
to
everything
that
people
have
asked
and
if
you
want
to
join
the
commons
slack
channel,
then
be
an
email
and
we'll
get
you
on.
Unfortunately,
life
is
a
manual
process,
so
I
have
to
add
you
individually,
so
Thanks.
B: That's it. If you don't like parts of it, that's just as important an input to this process as anything else. This is the holistic picture we have today, and I am absolutely okay if, as the discussion happens in the community, we decide OKD wants to be different. But if you haven't had a chance to see what you'd want it to be different from, please take that chance.
A
I'll
add
the
links
into
that
so
again,
thanks
Derrick
for
answering
the
questions
in
chat
and
Clayton
for
coming
to
the
table
and
sharing
the
philosophy
and
vision
and
opening
up
the
conversation,
so
I
look
forward
to
a
lot
more
conversations
and
feedback
from
everybody.
So
thank
you
all
again.
Thank
you.