From YouTube: OKD Working Group 2020 07 21 Full Meeting Recording
https://okd.io
A
Yeah, I would have to rejoin, but I could show a single picture. I'll be right back. Okay.
B
We'll give him a minute and I'll turn up my volume. So the other thing that I wanted to do is to get people to sign up, and I will create a little form for that, for the—
B
For
the
august,
17th
live
demoing
of
each
of
the
different
platforms,
okd4
running
on
that
and
set
it
up
so
I'll,
send
an
invitation
and
then
a
note
to
the
mailing
list
to
get
people
to
sign
up
and
volunteer
for
demos
demo
sessions.
Does
anyone
have
an
idea
about
how
long
each
of
those
sessions
should
be?
Should
they
be
a
half
an
hour
or
one
hour?
Does
it
take
a
full
hour
to
deploy
okd,
say
on
gke
or
and
talk
about
it.
D
One hour is generally the case. I mean, it takes — not twelve and a half — twelve minutes now for one node, a single node, but about 32 minutes for the three-node reduced setup. Okay, yeah, I'll actually test a bare metal mirrored install and time it.
D
That's what I've been doing in my lab, and it's not too terribly long, because it's pulling from a local registry — that does speed it up some, when it's not having to drag the images across the internet.
B
Yeah, so if I make them one-hour slots, we can just schedule them across the whole day and live stream — however long and however many we have — and then in the extra segment we can have Q&A on whatever, whether it's Azure or GKE or AWS or bare metal or whatever.
B
So
that
that's
my
goal
and
I'm
sticking
to
it
for
now
and
then
I'll
create
a
landing
page
for
that
sort
of
a
la
the
openshift
commons
gatherings
and
we
can
and
I'll
but
I'll
do
it
off
of
okd.io
as
opposed
to
off
of
commons
gap.
The
common
site
and
people
can
come
as
they
wish.
So
each
hour
would
have
to
have
like
the
first
five
minutes
might
have
to
be
what
is
okd
by
each
of
the
the
persons
just
a
little.
B
You
know
a
couple
of
minutes
about
what
it
is
and
then
go
into
the
demo
and
we'll
then
we'll
have
the
end
of
the
day,
and
this
is
what
diane
being
sneaky
we'll
have
you
know
six
or
seven
videos
of
each
of
these
topics
to
add
for
people
to
use
from
the
website.
So
I
am
my
my
motto:
is
renew
reuse,
recycle
content
everywhere
and
that's
really
what
I'm
going
for
so
vadim.
You
should
be
back
in
now.
Vadim-
and
I
see
christian
is
here
so
how
about?
A
There,
it
is,
you
might
know
that
if
you're
using
red,
hat's,
pull
secret,
ocp
and
okd
as
well
are
reporting
data
using
telemetry
back
to
our
servers
and
we
are
using
it
to
build
a
very
useful
ads.
So
here
is
the
stats
for
the
okidi
for
the
last
week.
A
For
instance,
yellow
is
beta
5
about
40,
active
clusters
right
now,
and
the
blue
and
green
are
better
six
and
better
four
respectively
and
some
of
something
like
five
rc
clusters
active
based
on
that
we
can
say
that
upgrades
are
not
very
popular,
which
is
a
bit
surprising,
but
I
guess
that's
what
we
should
improve
on
and
we've
got
a
lot
of
new
installs
which
are
persisting,
meaning
the
number
doesn't
drop,
so
people
just
don't
destroy
their
clusters,
which
is
pretty
good.
A
I
guess
we
have
no
clue
how
to
estimate
if
a
fake
ball
secret
has
been
used,
but
I'm
assuming
the
numbers
are
similar
on
the
issue
side.
I
don't
think
we
have
any
new
interesting
bugs.
I
think
the
only
one
is
that
you
have
to
create
workers
twice.
A
E
Ah, I don't think so. Honestly, I've been head down in the code for the past couple of days, ever since we got GA out, and yeah, not much more to say from my side. I'm definitely interested in feedback from folks that have installed it. And maybe one thing: we're in the middle of migrating the OCP installer to Ignition spec 3.
E
So
very
soon,
we'll
have
redchat
core
os
with
ignition
v2
that
supports
spec
3
and
then
the
installer
will
be
the
ocp
and
okd
installer
will
will
already
be
much
more
aligned
than
they
are
right.
Now
the
mco
we
have
successfully
merged
together
so
from
4.6
on
there
will
be
just
one
branch
avadim
and
I
are
currently
working
on
figuring
out
how
to
test
both
ocp
and
okd
from
the
same
branch
there,
but
yeah.
E
We
will
figure
that
out
and
right
now,
obviously
we're
in
okd
4.5
at
ga
and
just
wanted
to
say
I'm
very
happy
about
that.
Yeah
thanks
everybody
who,
thanks
for
team
thanks
diane
from
my
side,
great
work
and
especially
diane
for
for
keeping
us
on
edge
with
this
and
vadim
yeah.
For
for
just
doing
lots
and
lots
of
work,
there.
D
Quick question on the telemetry: does it skew the results if we create and destroy multiple clusters over a period of time, or is the telemetry counting actual active, live clusters?
A
This graph counts active live clusters. CI is doing the very same thing — creating a crazy amount of clusters meanwhile — so we can see some jitter. I think we could come up with a better graph: if a cluster has lived at least a day, we'll keep it in the stats. But that's something we should invest in on the OCP side as well.
B
All right, I flashed it up on the screen for a minute, but I just wanted to share as well — and I'll try sharing again now; I apologize, I need to show everybody that right off the bat — the survey that I sent out about adoption. It really was meant to just get us a baseline right now, so I can redo it in six months or three months or whatever cadence.
B
So
we
can
sort
of
watch
this
going
up
and
I'll
share
the
the
results
with
it
here
I
haven't
done
any
really
deep
things,
but
some
of
it's
pretty
obvious
and
I
think
a
lot
of
thank
you.
A
lot
of
the
responses
came
from
people
in
the
working
group,
so
which
is
natural,
but
there
are
a
few
outside
folks
as
well,
and
that's
pretty
pretty
a
lot
of
it
is
we're
just
it's
very
early
to
stuff,
and
what
I
was
really
interested
in
is
what
people
were
looking
for
in
terms
of.
B
You
know
what
we
can
do
as
a
working
group
to
help
them
and
maybe
developer
workshops,
operational
workshops
and,
as
always,
better
documentation
on
the
ocp.
On
the
openshift
side.
We
are
also
seeing
a
lot
of
asks
for
help
for
migrating
from
three
to
four.
So
that
didn't
surprise
me
at
all-
and
you
know
there
was,
you
know
some
basic
stuff.
B
The
lack
of
a
dual
stack
support
was
a
couple
of
issues
yeah,
but
it's
really
still
pretty
pretty
early
days
and
if
you
guys
saw
my
tweet
with
the
survey
in
it,
if
you
could
retweet
that
on
your
twitter
thing,
so
we
can
get
more
people
in
in
the
door.
But
it
is,
you
know,
I'm
not
surprised
other
than
the
fact
that
the
colors
don't
coordinate
with
the
words
here
with
the
graphics,
but
I
can
fix
that
too.
B
B
It's very interesting to try and figure out who is actually using it out there in the universe, so the surveys will just keep repeating — and if there are other questions I should be asking, please let me know. I based this on one that we sent out to some OpenShift folks, so I can compare OpenShift folks to OKD folks. So that's my bit for the day, but there's nothing here hugely shocking from my point of view.
B
The
thing
that
I
I
know
I
heard
from
joseph
post.
That
was
some
conversations
about
the
operators
that
were
available
only
for
ocp
opera
operators
and
I'm
wondering
yes,
if
we
wanna,
if
joseph
you
wanna
express
what
you
need
here,
and
maybe
we
can
figure
out
if
that
is
this
working
group
or
another
set
of
resources
that
we
need
to
attach
to
that
project.
C
Okay,
yeah,
we
are
a
colleague
of
mine,
was
using
a
few
operators
like
the
serverless
operator
without
knowing
that
it
is
only
available
with
a
subscription,
because
it's
very
easy
to
get
a
image,
pull
secret
yeah,
and
we
didn't
understand
that
it's
only
for
trials
and
yeah.
C
Now,
after
the
ga
of
okd,
we
tried
to
set
up
everything
clean
in
a
clean
environment
and
we're
surprised
that
some
of
the
operators,
the
serverless
and
see
istio
as
a
service
mesh
operator,
yeah
weren't
partially
available
for
okd,
and
this
was
surprising
for
us,
because
it's
not
so
clear
if
you
install
okay,
that
you
will
see
what
the
limits
are,
what
what
you're
allowed
to
to
use.
C
I
was
talking
with
vadim
about
the
service
mesh
operator.
There
is
a
substitute
called
maestra,
but
I
I
don't
know
how.
How
often
does
it
it
is
updated.
I
I
it's
a
little
bit
behind.
In
its
version
behind
of
c
service
mesh
operator
from
red
hat,
the
serverless
operator
is
not
available
in
a
community
version
and
I
think
it
would
add
lots
of
value
to
okd
if
they
were
available
yeah
because
they
are
based
on
open
source
projects.
C
So
there
is
a
little
understanding
from
our
side
why
they
are
yeah,
not
also
maintained
the
same
fashion
as
okd
with
images
that
are
the
same
as
the
openshift
ones,
which
is
great
yeah,
say
the
images
from
okd
and
openshift
are,
I
think,
almost
all
the
same,
and
it
would
be
great
to
have
a
similar
situation
with
the
most
important
operators.
D
So
one
thing
that
kind
of
threw
me
for
a
loop.
I
you
know
I
looked
at
what
the
operator
hub
presented
to
me.
It
was
there's
a
it's
kind
of
empty
there's,
not
a
lot
available
to
you
and
that's
actually
more
depressing
and
disheartening
than
you
than
I
than
I
imagined
it
would
be
because,
like
during
the
betas
and
the
nightlys,
and
I've
played
with
a
few
of
them
here
and
there
like
it,
looks
incredibly
full
and
incredibly
functional
like
you
could
do
so
much
and
now
you
can
basically
do
nothing.
E
We
won't,
we
won't
ever
be
able
to
make
you
happy
neil,
I
think
no,
but
kidding
aside
wow.
That's
definitely
gonna.
E
No,
that's
definitely
the
thing
we
we
will
look
at
now
that
jason,
so
one
thing
that
isn't
super
visible
to
the
outside
is
that
internally
at
rata,
that's
different
groups,
different
teams
working
on,
so
we
have
the
core
openshift,
which
is
okd,
which
doesn't
include
any
of
the
operators
that
are
on
operator
hub
available
either
for
free
as
community
variants
or
by
subscription.
E
So
now
that
we
have
the
base
working,
we
can
actually
approach
the
teams.
That
would
be.
You
know,
working
on
on
getting
that
to
work
on
okd
to
actually
make
that,
so
we
will
do
that
and
yes,
we
will
follow
up
on
that.
E
Definitely
so
I
think
the
the
duke
verge
operator
just
merged
the
pr
last
week
to
make
it
work
on
on
okd,
so
I'm
not
sure
whether
they'll
be
promoting
that
to
operator
hub
right
away
or
whether
it
may
already
be
there,
but
yeah
that
should
technically
technically
work
now
and
we'll
follow
up
to
make
the
keyboard
operate.
Anti-Serverless
maestra
operator,
well,
serverless
and
istio
operators
are
also
available
there
yeah
definitely.
I
agree.
That
is
a
very
good
use
case
and
we
should
deliver
on
that.
C
Yeah
is
it
possible
that
they
also
get
built
together
with
with
the
releases
of
okd,
so
they
are
no.
E
No,
it's
not.
We
have
completely
different
life
cycles
there,
which
is
actually
a
feature
because
they're
services
and
not
part
of
the
car,
so
the
life
cycles
are
completely
yeah
independent
of
each
other.
D
So,
like
one
thing
that
I
was
a
little
surprised
was-
and
I
see
it
right
now
when
I
look
on
operator
hub,
I
o
the
website
and
it's
there
but
like
when
I,
when
I
looked
in
inside,
of
okd,
like
the
rook
operator
for
doing
ceph
as
the
back
end
for
your
open,
okd
was
not
available
and
that
actually
kind
of
threw
me
for
a
loop,
because
I
kind
of
expected
that
to
be
there,
because
a
lot
of
the
documentation
leans
very
heavily
on
saying,
hey
you,
you
really
should
be
using
ceph
for
the
storage
and
I
could
do
no
ceph,
and
that
was
a
little
weird.
A
The
difference
here
is
that
operator
hub
lists
kubernetes
operators,
it's
considered
to
be
upstream
for
people
who
do
istio
and
maestro
operator,
and
they
test
on
pure
kubernetes,
and
they
expose
it
as
a
kubernetes
separator.
A
The
problem
is
that
some
of
these
operators
are
known
not
to
work
on
okiti
because
of
sdcs
because
of
other
issues
and
so
on.
This
is
why
they
are
hidden
from
the
community
side.
D
A
This
discrepancy
is
also
different
from
ocp,
where
we
package
and
we
can
prepare
a
custom
version.
So
in
the
end
we
had
a
chat
with
people
from
operator
hub
and
they
said
that
it's
mostly
a
problem
of
time
of
the
team.
They
are
unable
to
manage
three
different,
potentially
different
streams
of
their
operator,
and
we
are
working
with
them
how
to
introduce
community
who
would
support
kubernetes
versions,
who
would
support
okidi
and
so
on.
It's
a
very
tricky
problem
and
we're
just
trying
our
first
steps
with
cube,
weird
and
image
streams
on
fedora.
A
B
Yeah,
I'm
not
quite
sure
which
which
person
you
were
talking
to
and
I'm
glad
you
were
you've
already
talked
yeah
and
I
really
want
to
make
that
distinction
operator
hub.io
is,
is
kubernetes
generic
and
so,
and
a
lot
of
them
haven't
been
tested
and
so
and
there
there
are
thousands
more
out
there.
I
just
haven't
done
a
lot
of
outreach
to
populate
it
yet
because
some
of
the
there
isn't
a
lot
of
automation
behind
operatorhub.io.
B
To
be
quite
honest,
there
are
humans
and
and
testing
them,
and
you
know
there.
There
is
no
certification
process
there
at
all
and
really
what
operator
hub.io
is
is
basically
just
a
catalog
that
you
know
you
could
stand
up.
Anyone
could
stand
up
their
own
catalog
and
put
a
ui
around
it.
So
it's
a
pretty
simple
website.
D
B
So
yeah,
so
if
you
want
so,
you
know,
there's
a
so
the
operator
wish
list
for
okd.
Yes,
that's
a
really
good
thing,
maybe
rather.
E
So
that
would
be
helpful
for
us
to
prioritize
because
we
want
to
get
there
eventually,
so
we'll
just
have
to
keep
bugging
the
teams
and
maybe
put
some
of
our
own
work
hours
into
this,
but
there's
many
operators.
So
if
you
could,
please
add
all
the
operators
you
want
to
see
which
are
you
know
the
most
urgent
for
you
into
that
list.
We
that
that
would
be
helpful.
D
And Neil, you can get the Rook operator deployed using the YAML files — okay, yeah, the OpenShift ones — and I dropped a link to a copy of it that I made.
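For reference, deploying Rook straight from the project's manifests looks roughly like the sketch below; the repository layout and file names are assumptions based on the Rook 1.x tree, so verify them against the Rook documentation for your version.

```
# Sketch only — paths follow the Rook 1.x upstream layout.
git clone --depth 1 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
oc create -f common.yaml              # namespace, CRDs, RBAC
oc create -f operator-openshift.yaml  # the operator, with OpenShift-specific SCCs
oc create -f cluster.yaml             # a CephCluster spec sized for your nodes
```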
D
Okay
that'll
help
yeah
because
I'm
I'm
now
starting
to
look
at
what
it's
gonna
take
to
do
the
thing
to
like
replace
her:
oh
openshift,
origin,
three
cluster
with
okd4,
and
so
it's
gonna.
It's
gonna
be
interesting,
so
the
preliminary
preliminary
exploration
has
begun
for
doing
it
for
reals,
so
that'll
be
helpful
because
this
time
we
want
to
do
it
kind
of
right
rather
than
what
we
did
now,
and
I
don't
I'm
not
proud
of
what
we
have
right
now.
C
And
serverless
yes,
but
serverless,
I
understand
at
least
the
service
mesh
is
available
in
a
community
version.
It's
not
not
not
up
to
date,
I
think,
but
at
least
yeah
as
there
were
efforts
to
to
publish
it
to
the
community
catalogue
which,
which
is
great
but
yeah,
but
I
don't
know
how
often
it
it
be
updated
or
how
good
is
it
tested,
and
the
same
is
for
every
operator
in
the
operator
hub
for
sure.
But
this
would
be.
This
was
would
make
okd
similar
as
feature
rich
as
ocp.
C
I
think
which
would
be
great,
because
we
were
waiting
so
long
for
okd
to
have
a
service
mesh,
and
now
it
is
not
yeah.
You
know
it's
not.
Obviously.
Obviously
pre
was
supported.
D
So
you
know
defeat
from
the
jaws
of
victory,
and
all
that
yeah
is
the
is
the
developer
content
that
sits
behind
the
samples
operator
in
kind
of
the
same
boat?
No,
it's
it's
worse
off.
The
samples
operator
is
nobody.
Nobody
has
any
samples
to
provide
to
begin
with.
The
the
samples
operator
for
ocp
is
populated,
with
a
mixture
of
ubi
and
non-uvi
content
and
separating
all
that
stuff,
like
I've.
D
Looked
at
it
personally,
like
look
separating
all
that
stuff
is
complicated,
it's
probably
a
lot
easier
to
go
back
and
start
and
build
up
content
based
on
fedora
based
fedora
base
image
and
centos
based
image
and
start
putting
together
a
mixture
ourselves,
because
the
the
stuff
that
they
use
for
ocp
is
like
not
reusable
at
all,
like
that.
That's
really
the
samples
operator
framework
is
great.
D
The
samples
that
it
provides
are
not
usable
for
non-ocp
users,
so
so
that
that's
the
problem
with
it,
but
anything
ubi
based,
is
distributable
right,
careful,
careful,
careful
anything
uvi
based
as
long
as
it
doesn't
layer
on
top
in
unintentionally,
and
that's
what
makes
it
like
a
little
bit
of
a
trap,
because
if
you
build
something
using
ubi
images
on
top
of
a
rel
host,
your
rel
certificate,
your
rel
subscription
populates
in
and
activates
the
extra
content
automatically,
and
so
it
is.
D
Unless
you
explicitly
do
work
to
make
sure
you
don't
include
it,
you
you
leak
in
real
content
and
and
with
the
way
that
ubi
is
currently
made.
I
am
not
confident
that
none
of
those
samples
don't
have
any
non-uvi
content.
D
So
so
that's
why
it's
it's
much
easier
for
us
to
be:
let's,
let's
just
make
it
ourselves
in
a
with
the
system
that
literally
cannot
pull
from
rel.
A
B
Who is the point person that you're talking to, Christian, for the RDO stuff? If anybody.
B
Yeah, I think that's probably our next step: let's get that wish list together — for NVIDIA, if it's just the drivers — and then we can try and figure out who to coordinate with, and put names next to those items, whether Red Hatters or NVIDIA people or whomever it is, and move that forward. Because I think that's a significant piece of work on a lot of people's parts, and then there's the ongoing maintenance of those things as well — so you've got to get buy-in for them to not just do it once, but to do it continuously.
D
It
is
also
unlikely
that
we
will
be
able
to
make
the
nvidia
gpu
operator
work
like
just
from
a
practical
perspective.
D
Because
the
problem
is,
I
don't
know
how
you're
going
to
make
sure
you
match
with
the
running
kernel
as
things
move
forward.
D
And
if
uefi
is
activated,
they
won't,
they
won't
load.
That's
not!
Okay!
This
problem!
No,
but
it's
your
problem
with
the
gpu
operator
like
they.
They
don't
load.
So
it's
not
going
to
work.
It's
it
works
in
rel
because
there's
an
ugly,
very
hacky,
terrible
thing
that
they've
done
to
make
it
so
that
it
works
with
even
uefi
mode,
but
it
will
not
work
in
fedora
right
now.
D
I
I
don't
currently
have
answers
for
how
to
improve
that,
though
it
is
something
we're
tangentially
looking
at
in
the
fedora
workstation
working
group,
because
it's
causing
other
problems
like
hey
people
who
put
fedora
workstation
on
laptops
with
nvidia
gpus,
they
enable
the
driver-
and
it
doesn't
do
anything
so
so
that
it
there
there's
problems
to
solve
there.
D
I'm
just
I'm
just
not
giving
I'm
just
giving
this
warning
that
it
is
unlikely
that
we
will
have
the
nvidia
gpu
working
in
all
cases
like
in
azure,
for
example,
it's
just
not
going
to
work
because
of
because
of
that.
B
Beyond
the
list
that
we
have
here
now,
are
there
other
ones?
I
mean
someone
was
just
asking
for
what
what
did
we
filter
out
and
maybe
that
he
not
write
this
instant,
but
if
you
can
grab
a
list
of
what
got
filtered
out.
That
might
also
be
a
a
thing,
a
thing
to
add
in
here,
not
as
a
wish
list,
but
just
for
reference.
A
I
didn't
see
a
reason
why
those
should
be
filtered
out.
All
of
them
are
optional.
Some
of
them
might
not
work
in
your
setup,
but
that's
a
different
story,
but
for
us
this
list
is
helpful
because
we
can
start
contacting
teams
and
ask
them
to
implement.
You
know
to
revive
their
okay
support.
Basically,
if
we
get
some
of
them,
that's
great,
we
don't
commit
getting
them
all
of
them
by
I
don't
know
next
week
and
so
on.
That's
just
not
gonna
work,
that's
outside
of
our
reach.
B
So-
and
I
was
just
gonna-
ask
a
question:
the
the
service
mesh
one,
if
I
brought
in
say
kong
from
kuma,
I
think
or
kuma
from
kong.
Let
me
get
my
names
right
or
tetrade.
B
Excuse me — whom I could ask to see if they will put theirs, first of all, on operatorhub.io — and I haven't, and I should — and then to see if they'll get it tested on OKD. So that's that.
A
Another possibility: we have all probably been watching the Istio–Knative–Google conversation, so it might not be a bad backup plan to have those available as well.
C
I
think
this
gpu
thing
is
a
very
important
because
it's
for
machine
learning,
it's
yeah
best
practice
to
use
gpus
yeah.
B
So
one
of
the
things
just
yesterday,
I
did
a
an
ask
me
anything
session
with
the
open
data
hub
folks
and
I'm
trying
to
get
them
to
and
I'm
pretty
sure
they
already
have
tried
and
done
it
successfully.
We
just
haven't
demoed
it
running
open
data
hub
on
okd,
so
that
there's
a
pure
openstack
open
source
stack
for
open
data
hub
it,
which
is
just
a
reference
architect
here,
it's
not
for
ml
and
ai,
it's
not
a
product.
E
B
Yeah, so maybe offline, Christian, we could figure out what those demos are, get them staged, and broadcast them out to the universe. I'd like to be in on whatever those reference architectures are — the more content we can get, the better. But I'm thinking the Open Data Hub one will drive the GPU piece, if they build on it, because they rely so heavily on GPUs — so that might be a way to nudge the NVIDIA people.
A
Okay,
no,
I
don't
think
I've
ever
contacted
them.
I
don't
think
I
ever
contacted
that
team,
but
it
doesn't
require
any
fancy
stuff
on
the
hose.
So
I
didn't
see
any
reason
why
it
shouldn't
work.
D
That's
another
one
that
you
can
deploy
now
with
the
with
the
yaml
files.
If
you
go
straight
to
the
project,
even
though
it
may
not
show
up
in
the
operator
hub,
you
can,
you
can
get
or
you
can
go
upstream
and
get
eclipse
j7
and
deploy
it
via
the
operator.
D
Yeah
because,
like
just
like
with
with
seth,
that's
how
I've
been
deploying
it
is
just
via
the
yaml
file
straight
from
the
project.
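The Eclipse Che flow follows the same pattern — a hedged sketch only, since the exact layout of the che-operator repository is an assumption here:

```
# Sketch only — the deploy/ paths are assumptions; check the che-operator README.
git clone --depth 1 https://github.com/eclipse/che-operator.git
oc new-project eclipse-che
oc apply -f che-operator/deploy/crds/   # CheCluster CRD first
oc apply -f che-operator/deploy/        # RBAC and the operator Deployment
```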
D
It
it's
in
the
it's
in
the
documentation
for
my
lab.
I
did
drop
the
links
in
there.
Actually
I'll
go
ahead
and
bring
this
up.
I've
started
preparing
a
pull
request
to
see
if
you
guys
like
this
idea
for
our
okd
site,
to
add
a
section
for
recipes
just
little
short
snippets
of
how
do
I
install
eclipse
chay
in
my
okd
cluster
or
what
one
of
them
I've
written
up
is
is
deploying
seth
or
you
know,
adding
persistent
storage
to
the
image
registry.
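As a taste of what such a recipe could contain, here is a minimal sketch of the image-registry one — assuming the cluster has a default StorageClass; the resource names follow the OpenShift 4.x documentation:

```
# Recipe sketch: give the integrated image registry persistent storage.
# An empty "claim" lets the operator create the image-registry-storage PVC itself.
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"managementState":"Managed","storage":{"pvc":{"claim":""}}}}'
oc get pvc -n openshift-image-registry   # confirm the PVC was created and bound
```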
B
Yeah,
so
I
would
love
to
do
an
okd
cookbook
just
saying
I
did.
I
did
a
whole
bunch
of
them
when
I
was
at
active
state
for
python
and
other
languages
with
people.
I
think
that's
a
really
effective
way
to
get
get
recipes
out
there
and
get
people
get
examples.
So
that's
we
could
do
a
okay
d,
slash
cookbook
and
then
have
a
whole
bunch
of
recipes
out
there,
and
I
think
that's
a
known
thing
in
in
the
tech
world
to
do.
Cookbooks
like
that.
B
So
yeah
put
the
put
an
issue
in
actually
on
okd
dot,
io
on
that
site,
as
and
and
on
this,
and
then
we
can
just
maybe
just
do.
I
can
set
up
the
infrastructure
for
that
and
people
can
just
do
pull
requests
to
add
them.
As
joseph
knows,
I
just
basically
merge
stuff.
B
Anyone
gives
me
and
pray.
Merge
and
pray
is
what
I
do
so.
I'm
happy
charlotte
to
do
that
and
then
I
actually
think
that
would
be
a
good
e-book
to
share
like
a
little
to
if
everybody
brought
their
recipes
together.
That
would
be
a
really
great
way
to
do
that,
so
a
plus
for
that,
but
I'm
pretty
sure
there
was
someone
on
the
code
ready
team
that
was
looking
at
building
an
okd
code,
ready
thing
and
I'll
I'll
dig
up
the
name
from
an
email.
B
I
have
it
somewhere
christian
and
vedim,
and
we
can
figure
that
out
some
someone
was
working
on
it.
I
just
think
it
didn't
get
published
anywhere.
B
I'm going to add them and share my screen again. What else should we cover off here?
C
We
are
preparing
okd
for
some
of
our
production
clusters
and
we
found
out
that
we
had
problems
in
integrating
the
monitoring
in
our
environment,
because
we
have
a
central
monitoring
which
monitors
several
openshift
clusters,
and
I
was
talking
with
vadim
also
in
the
slack
channel.
But
I
would
like
to
talk
here
also
about
that
and
we
had
to
turn
off
the
monitoring
operator
for
that.
So
we
had
to
bring
our
own
monitoring
stack
on
okd,
because
the
operating
monitor
monitoring
operator
is
overwriting,
our
our
promises,
rules
and
dashboards
and
yeah.
C
I'm
just
asking
if
it's
possible
to
to
turn
off
modules
some
models,
you
don't
want
to
have
for
some
reason
during
the
installation
and
without
any
hacks,
because
in
other
distributions
colleagues
of
me
always
show
me.
Hey
here
is
a
button
suck
switch
off
and
you
don't
have
to
mess
around
with
anything
you
you
don't
like,
and
I
think
it
would
be
a
great
advantage
if
you
are
possible
to
do
so
on
your
own
risk
sure,
because
yeah,
you
are
responsible
for
everything,
but
it's
it's
possible
to
also
feed
the
ui
with
metrics.
C
A
This
is
a
very
tough
topic
and
when,
when
joseph
says,
adjust
simeo,
it
means
rip
it
out
entirely.
That's
the
biggest
problem
here,
because
okd
has
to
have
all
the
features
so
cp
has
one
of
the
features
of
ocp
is
constant
monitoring
and
it's
embedded
very
heavily
in
every
single
part
of
the
product.
You
would
other
other
operators
might
render
degraded
if
they
say
my
metrics.
A
A
—and you cannot disable it, because other operators require metrics as well. So we will start gently pushing ideas to the team to minimize CMO — at your own risk, similar to what etcd has: you can have a non-HA...
A
Prometheus
instance,
the
problem
is
that
your
cluster
won't
be
able
to
upgrade
because
you're
losing
knowledgea,
and
we
cannot
guarantee
that
it's
gonna
work
that
can
be
worked
around.
A
I
guess
we'll
see
the
way
it's
implemented,
but
there
are
options.
Another.
A
So
more
brutal
options
available
right
now
is
ripping
out
senior
out
of
manifests
in
the
release
image.
You
can
replace
it
with
dummy
rail.
I
don't
know
real
ubi
images
they
would
just
be.
There
have
no
manifests
to
be
always
say
I
did
my
best.
I
have
played
everything
we
had
and
would
move
on
again.
You
would
have
to
maintain
your
own
fork
for
that
which
is
not
really
complex,
but
that's
not
okay,
anymore.
A
Another
brutal
thing
is,
you
can
ignore
this
in
cvo
and
scale
down
the
monitoring
operator
to
zero.
You
won't
be
able
to
upgrade
because
you
have
overrides
in
studio,
so
nice
options
are
pretty
much
very
limited
right
now,
mostly
because
demio
is
very,
almost
every
core
operator
is
actually
core
and
very
critical.
We
cannot
disable
them.
They
were
carefully
picked
up,
but
we
definitely
will
work
on
minimizing
its
impact.
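Roughly, that second route looks like the sketch below — strictly at your own risk, and upgrades stay blocked while the override is set. The field names follow the ClusterVersion overrides API:

```
# Tell the CVO to stop reconciling the monitoring operator, then scale it to zero.
oc patch clusterversion version --type merge --patch '
spec:
  overrides:
  - kind: Deployment
    group: apps
    namespace: openshift-monitoring
    name: cluster-monitoring-operator
    unmanaged: true'
oc scale deployment/cluster-monitoring-operator \
  -n openshift-monitoring --replicas=0
```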
C
The problem is not the memory consumption, but that you can't deploy a second Prometheus operator: you can say which namespaces the second Prometheus should listen on, but you cannot say "please don't listen to these" — it's not exclusive, yeah, and that's—
E
Yeah, the broader issue here is really that that's a thing we don't support — not in OCP, and therefore also not in OKD, right? So what I would suggest — because it's definitely outside of what we can do right now and in the very short term — in order to raise awareness with the team that actually does that...
E
I
just
open
an
issue
on
the
on
the
github
repository
for
the
monitoring
operator,
asking
whether
it
would
be
possible
to
deactivate
that
specific
part
you
know
just
as
for
them
to
have
to
have
a
card
that
says
this
is
an
actual
use
case,
because,
right
now
we
we
don't
offer
that
option
where
to
deactivate.
C
I
interrupted
charles,
I
think.
D
No,
actually
that
you
guys
ended
up
where
I
was
going.
I
was
asking
more
about
your
specific
use
case
and
that's
that's
really
the
place
I
know
in
in
311
right
now,
both
on
the
origin
side
in
the
lab
and
in
the
production
side
in
the
data
center.
D
We
are
running
two
prometheus
instances.
Now
this
is
pre-operator
right.
We've
got
prometheus
that
came
with
the
cluster
and
is
monitoring
all
of
the
cluster
infrastructure,
and
we
followed
the
rules
on
it
and
didn't
muck
around
with
that
one.
But
we
did
deploy
our
own
set
of
prometheus
infrastructure
in
the
cluster.
That
is
monitoring
all
of
the
apps.
So we've got it watching the namespaces that we deploy our apps in, and it's working fine side by side with what came deployed with the cluster.
A
For
the
apps
we
have
user
workload
feature
which
basically
control,
which
spins
up
a
new
prometheus
controlled
by
simio.
The
problem
is
that
sending
metrics
back
to
a
different
monitoring
system.
I
guess
what
you
could
work
is
an
approach
used
by
telemetry.
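For reference, the user workload feature is toggled through CMO's config map — a sketch; in 4.5-era releases this was still a tech-preview flag, so treat the exact key as an assumption to check against your version's docs:

```
# Enable user-workload monitoring via the cluster-monitoring-config ConfigMap.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
EOF
```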
A
It
sends
a
part
of
metrics
back
to
different
prometheus
server,
leaving
cmo
fully
intact.
This
is
how
we
get
those
fancy
graphs.
Basically,
your
clusters
are
sending
a
part
of
critical
control,
plane
data
back
to
our
servers
using
remote
rights.
So,
instead
of
fully
removing
simio
and
replacing
it
with
your
solution,
you
could
send
the
very
same
metrics
to
a
different
prometheus
and
maintain
a
monitoring
system
based
on.
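In plain Prometheus configuration terms, the remote-write idea is a stanza like this — the endpoint URL and metric selection are placeholders:

```
# Ship a subset of series to a central Prometheus-compatible receiver.
cat >> prometheus.yml <<'EOF'
remote_write:
  - url: https://central-prometheus.example.com/api/v1/write
    write_relabel_configs:
      # keep only what the central system needs
      - source_labels: [__name__]
        regex: "up|cluster_version|kube_pod_status_.*"
        action: keep
EOF
```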
B
So — Vadim, this cluster monitoring thing you've brought up: that's definitely something being worked on by the OpenShift engineering team.
B
And if people noticed, in the chat I put a link to a form, which I also put on the community page. If you could sign up and tell me which sessions you'd like to do and what time zone you're in for the August 17th event, I will try and work out a schedule that fits your time zones.
B
I'm
pretty
sure
kubecon
is
running
on
eu
time,
so
it'll
be
early
for
me
on
the
west
coast,
but
please
just
do
fill
it
in
and
we'll
try
and
do
that.
And
if
multiple
people
talk
about
the
same
platform
like
five
people
want
to
do
it
on.
You
know:
aws,
we'll
we'll
get
you
all
together
and
you
can
chat
about
it
and
be
in
one
one
hour
and
one
person
could
be
the
driver,
so
we'll
figure
that
out
too
there
you
go
so
we're
almost
to
the
top
of
the
hour.
B
So Jamie has said that he is — Jamie, where is that production workload?
B
I gotta make that up first — I'm figuring out the t-shirt situation, but we'll figure it out. Probably a kegger at UMich.
C
If
you
are
talking
about
a
price,
we
are
shortly
before
getting
our
okay
d4
cluster,
in
production,
for
internal
usage,
for
different
teams,
and
that's
why
I'm
asking
about
this
gpu
thing,
because
I
think
it
will
come
in
the
midterm
and
but
yeah
we
are.
We
are
planning
to
getting
ga
in
the
next
next
very
few
weeks.
D
And
as
for
as
for
datto,
like
we're
we're
working
on
figuring
out
when
we're
going
to
do
our
deployment,
there's
some
underlying
unfortunate
architectural
things
that
we
need
to
fix
first,
but
we
are
starting
to.
We
are
starting
to
scope
and
plan
our
okd4
deployment
to
replace
our
open
shift
origin
deployment
so
that
that's
that's
a
common
at
some
point,
hopefully
soon
rather
than
later,
because
nobody
more
than
me
wants
us
to
switch
to
locating
four
already.
D
B
Still — it sounds like Jamie's in the lead here, and we may make him our showcase on August 17th.
B
So, Charles, I think one of the takeaways is that I'd like to have a conversation on the side with you about designing the cookbook recipe pages for okd.io, give you privs to do so, and start thinking about that, because I think that's a really great thing. And then please fill out the form that's in there. "Communist"? If I'm reading this right — "Fedora-head communication" — yeah, that's what we need: the anarchist version of OKD, the anarchist guide to OKD.
B
That's
next
all
right!
Well,
I
don't
know!
Maybe
jamie
deliver
the
child
first,
okay,
I
don't
know
make
sure
the
child
is
ga
yeah.
I
know
yeah
before
before
anything
else.
Otherwise,
there'll
be
some
other
problems
in
your
life.
So
let's,
let's
see
what
we
can
do
and
yeah.
So
if
you
can,
everybody
fill
out
the
form.
B
I
will
try
and
create
a
landing
page
for
the
august
17th
thing
that
we
can
all
use
and
schedule
people
on
and
you
can
see
what
your
time
slots
are
and
we
can
promote
it.
It's
a
bit
of
gorilla
marketing
during
kubecon,
so
we'll
have
to
use
our
stealthy
social
channels
and
everything
else
to
get
the
word
out
about
it.
B
But
and
there's
probably
70
other
things
that
are
happening
on
day
zero
at
kubecon
as
well.
But
we
can
rise
up
above
the
noise,
hopefully
and
at
least
capture
all
that
content
in
a
day-long
thing
and
where
you
probably
wear
your
t-shirts.
If
we
can
get
them
printed
and
shipped
in
time
and
maybe
set
up
a
store
and
sell
t-shirts
or
something
at
kucon
and
pop
t-shirts
and
popcorn.
C
I
have
a
question:
do
I
was
thinking
about
that
in
the
last
days?
I
would
very
appreciate
it
if
we
could
do
some
kind
of
hackathons
for
for
different
tasks,
which
it
will
improve.
Okd
such
like
yeah.
This
is
a
gpu
thing
to
get
that
working
here
and
propose
a
poc
or
a
blueprint.
How
to
do
that?
I
would
love
to
do
so,
because
I
think
we
have
some
knowledge
in
in
this
working
group
to
pick
out
several
things
which
are
too
hard
to
solve
some
alone
yeah.
B
Yeah, that's definitely the way forward for us, I think, and it might be something that we can cross-pollinate with the Operator Framework group and co-host. Once we get our list, and the point people for each of the things on our wish list identified, that might be a good basis for a hackathon co-sponsored by the CNCF — now that Operator Framework is in the CNCF — and OKD. I'd be happy to do that.
B
So,
let's
get
that
list
prioritized
figure
out
who
who's,
who
there
I'll
go
on
to
and
reach
out
to
the
folks
on
that
the
operator
framework
side
and
once
the
list
is
there
and
see.
If
I
can
figure
out,
you
know
how
we
could
do
that.
So
that's
that's!
Not
a
bad
thing
at
all.
So
I'll
add
that
into
the
list
of
possibilities
beyond
ga,
but
still
adoption
is
really
where
it's
at
right
now
and
more
feedback
and
adoption
when
we
have
the
operators
will
be
easier.
B
I
think
they
can
do
more
things
more
workloads
easier.
So
that's
a
key
piece
of
it
and
you
know
again
creating
content
and
updating,
doing
the
recipes
and
continuing
what
you
guys
have
been
doing
wonderfully
home
labs
content
on
live
streams,
openshift.com
medium.
All
that
stuff
is
really
it's
huge
and
we'll
just
keep
doing
the
outreach
and
getting
more
bodies
here
and
to
talk
on
this
call.
B
So
with
that
who
is
livelace
here,
this
profiling
apps
for
cpu,
ram
and
gpu?
A
Profiling
is
a
very
interesting
topic
if
we
focus
on
core
operators
that
would
be
extremely
helpful
to
the
ocp
project,
but
life
hacking
on
that.
We
would
need
to
grow
some
expertise
on
that.
A
Well,
that
can
be
done,
though.
We
just
need
more
time
to
contact
folks
from
the
core
team,
because
I
don't
really
know
where
to
start,
I'm
usually
just
bringing
out
weird
log
entries
to
the
team
and
they
fix
it.
But
that's.
B
Yeah
all
right,
so
that's
yeah,
good
good
topic
for
another
another
session
and
let's
keep
adding
to
this
wish
list
and
see
if
we
can't
make
it
all
happen,
probably
not
in
the
next
week,
but
maybe
two
weeks
from
now-
and
you
will
all
hear
from
me
if
you
fill
out
that
form-
and
I
will
send
the
form
to
the
mailing
list
as
well
to
sign
up
for
the
august
17th
and
then
I'll
start
a
thread
with
a
schedule
proposed
and
people
can
yay
or
nay
their
slots.
B
In that thread I'll figure out what time we actually have to start at — I think it's like 6 a.m. on the West Coast, so I love that — but we will figure it all out. All right, guys — okay, thank you very much, talk to you all soon. Thanks.