Description

Livestream. Zak Berrie, Red Hat Machine Learning Specialist, discusses and disambiguates the concepts of DevOps, MLOps and AIOps. In doing so, he attempts to find an elusive “Unified *Ops Field Theory.”
A: All right, everybody, welcome again to another OpenShift Commons briefing. If you're watching us on Facebook or YouTube or Twitch, we are going to have live Q&A at the end, as always, so ask your questions there and we'll aggregate them and throw them back at the speaker afterwards.

A: I'm actually really excited for this conversation today, because it's a conversation you hear bandied about the internet often, and everybody seems to have affixed the word "Ops" to the end of their things, whether it's DevSecOps or DevOps or MLOps or AIOps, and at Red Hat we've been doing a lot of work around GitOps too. Thank you, Walid! So there's an Ops for everything, including in the ML/AI space.

A: We've done a lot of talking in past Commons briefings around this, and I invited Zak Berrie here, who's the Machine Learning Specialist here at Red Hat, to help us have a conversation about these three areas. I'm sure we'll bandy about into other ones as well, and, as he put it so succinctly in the description, try and come to a unified field theory of Ops today in this conversation. So, Zak, I'm going to let you introduce yourself. Take it away, and then we'll be doing some Q&A afterwards.
B: Yeah, thanks, Dan. And you know, this is also very much an opinionated discussion, right? This is my view on the world, and I'm happy to take criticism, or to hear where you think I might be on the right track or on the wrong track. When it comes to discussing the meaning of terms and what those terms mean to us, it's always something that is worked out through usage.

B: Starting from these terms, I'm going to attempt to peer into the future and see what the near-term future might hold around the problem statement that we've come up with, and then I'm going to make an attempt to address a path forward and how that might be meaningful for us. So, my biography: I was a Linux admin on a DevOps team back in what we now call the dot-com era, '98 to '04 or something like that.
B: It was after I came to Red Hat and started doing a different kind of work that the term DevOps was coined. But looking back at what I did in those days of early web operations, it really was a DevOps model; we just didn't have a name for it. I spent a year and a half as a trainer here at Red Hat, and then I spent a decade as a solution architect, mostly covering strategic accounts out in the West commercial group.

B: Here in the U.S., I got up to the level of a principal solution architect, and then for the last two and a half years I've been working as a solution sales specialist here at Red Hat, in the same rough area. My particular role this year is around machine learning workloads and how those work across the various platforms that Red Hat has to offer, so not product-specific, more workload-specific.
B: So there have been some earlier transitions, right? We've moved largely from the use of proprietary software to open source software, at least a partial movement in that regard. A move from big iron to commodity hardware, from physical servers to virtual servers, and from kind of a "throw it over the fence" mentality towards a DevOps mentality.

B: You know, where software developers would write and test somewhat in isolation and then hand it over to an operations team to run it: "now it's your problem." I don't know what the proper term for that "throw it over the fence," pre-DevOps system is. I'm open to suggestions on what you might call that. I've heard people say ITIL, but ITIL is not correct.
B: ITIL can still work within the context of DevOps, so I'm interested to hear what you have to say on that. In terms of the actual development methodology, we tended to move from waterfall methodologies towards agile, and then there's been a move towards containers with orchestration. And I don't buy into the "virtualization leading to containers" thing. I think that's totally wrong.

B: They solve different challenges. But what I do think is worthwhile to consider is a transition from app and configuration automation towards containerization with orchestration. I spent a lot of time working with clients, maybe a decade ago, who were trying to use Puppet or Salt or CFEngine or something like that to manage their whole pipeline and manage how they delivered applications.
B
You
know
not
just
with
random
configuration
of
configuration
files
on
linux
machines,
but
and-
and
it
seemed
like
that
was
a
it
was
a
very,
very
difficult
way
to
solve
that
challenge
and
that
largely
the
ability
of
rather
than
thinking
of
the
the
configuration
of
the
os
as
a
as
a
stateful
thing.
That
needs
to
be.
You
know,
reconfigured,
and
you
need
to
keep
track
of
that
state
and
so
on.
B: Cloud can be defined in different ways. I tend to think of it as just the ability to have resources on demand when you need them, by calling an API or something like that. I'm also interested to hear, and maybe put this in the comments, what the retronym should be for the time before cloud computing. I haven't heard a good answer there.

B: A retronym being a term like "landline," right? Nobody called it a landline before there were cell phones; it was just a phone. So what's the retronym for the time before we had cloud resources? I'm interested to hear in the comments. There's also a revolution happening in data right now that could be a whole talk of its own, so I'm going to leave that somewhat out of scope.
B: Okay, so I'm going to frame up these transitions, because I believe that we are in the midst of a similar transition right now, and that is a transition about the implementation of data science, artificial intelligence and machine learning techniques into IT. Okay, so, big questions: what's next? Where are we heading from here? What can we learn from earlier transitions?
B: A thing that very much sticks in my mind from the earlier transition towards virtualization is that you would often have conversations with application teams or DBAs and so on and so forth, when an IT organization, an ops organization, was attempting to virtualize, where there was an immediate resistance: "no, don't virtualize my application." And there's a thing that I think can be learned from that.

B: The important lesson from that earlier transition was that, if you're going to make a major change, the benefits of that change need to be distributed widely. You can't have benefits that are applied narrowly. So that's what I mean by "what can we learn from earlier transitions?" What are the other lessons that are embedded in there? What have we learned about people and organizations that is important here?
The more time I spend in this industry, the more I find that my job is about trying to teach organizations and trying to change how organizations work, rather than having a particular technology focus. The technology needs to support the organizational change and the way that people think and believe. And how do we apply these lessons to our benefit? Okay, so those are the big questions that I'm trying to frame up here. So: the Red Queen. This, I think, is an important analogy, an important idea, and I'm going to touch on it a few different times here.
B: This comes from Lewis Carroll, from Through the Looking-Glass. There's a scene in that book where Alice is attempting to run away from the Red Queen, but she finds that the ground is moving underneath her, matching the speed at which she runs.

B: So the Queen says: "Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast."
B: This is a very interesting idea, and I think it's one that doesn't get enough attention within IT. I know it best from its application within evolutionary biology; there's an excellent book called The Red Queen by Matt Ridley. The basic idea is that when you have competition, any advantage, any effort that you put into moving forward, is only advancement in terms of how it gives you a relative benefit versus your competition.

B: It doesn't matter if you're moving 100 miles an hour if your competition is moving 100 miles an hour, and moving 100 miles an hour is not enough if your competition is going 120, right? So we're going to come back to this theme of the Red Queen, where moving forward is essential just to stay in the same place. Now we're going to get into a glossary and an arbitrary disambiguation of some terms.
B: Okay, so I think a lot of these questions are still up for debate, and again, this is my stake in the ground. So, first, the term DevOps. Looking around for definitions of DevOps, there are many, and they overlap and have many differences. But what I'm going to settle on for this discussion is one from IBM's Kevin Minich back in 2013.

B: The key elements are the ability to experiment, fail and learn as a small team; clearly defined external commitments and expectations of others; the ability to measure outcomes; and the ability to match responsibility and capability. And already, if you look at this, and if you are thinking anything about machine learning or data science in general, there are some items in here that should ring familiar: the ability to experiment, and measurement.
B: We already have some level of overlap to begin with. So, all right, where does agile fit into all of this? Let's digress into agile a little bit. Agile and DevOps are different concepts.

B: DevOps often relies on agile, so they're often conflated, but agile, I think, should be discussed here in itself, and these concepts should not be discussed as part of DevOps, because I think that's a little bit too much of a conflation of terms. And really, the whole point of this discussion is a pedantic discussion of terms, right? So let's discuss a couple of the ideas from agile.
B: These are three selected stanzas from the Agile Manifesto. "Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale." "Business people and developers must work together daily throughout the project" — okay, so tight integration between developers and the business people that you serve, your customers, whatever form they take. And "build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done."

B: This, I think, is also quite key. So, a little further digression into agile, as I've been exploring these concepts under the requirements of COVID-19.
B: I'm going to say here that we had significant existing organizational debt, in that we were not the most organized people going into the crisis. So what did we do? We fumbled around for a few weeks, and then the idea came to me at a certain point: hey, I spend a lot of my time talking about agile methodology.

B: What if we tried that here at home? What if we gave it a try? So we implemented a daily stand-up meeting every morning at 7:00 AM. We reviewed our daily backlog of tasks between the three of us, added new things, and prioritized what was on there. As an example, you can see a list over on the side here of the things that we would add to this list, like, you know, feed and walk the dog.
B: My daughter would do her online Mandarin language conversation course; water the garden; laundry. And we were also putting in just the stuff that I needed to get done for work on that day, so, you know, review activity in salesforce.com. My wife is a ghostwriter, so she needed to do a cold read of a manuscript, etc.

B: So what did we learn from that process? Well, we found that our backlog grew indefinitely, by which we were able to clearly document that we were in fact quite understaffed for the work that was in front of us. Our individual talents were important, in that, while we were trying to work as a team and look at the set of outcomes as a whole, we found that some individuals were much better suited to particular tasks.
B: So, for example, my daughter is much better with Mandarin than I am, and I'm much better with salesforce.com than she is. We also learned a bit about shifting left: we found that our lunch deadlines were often much better met if somebody shifted to doing the dishes in the morning, so the kitchen was clean when it was time to make lunch. And we learned about our constraints and about failures.

B: More than one of these agile days ended with somebody in tears, which I'm told by people in the industry is not an uncommon occurrence with agile.
B: So let's move on to a little bit of disambiguation of different terms related to data science. Data science is a broad term that has underneath it artificial intelligence, underneath that machine learning, and under that deep learning. I admit I'm a little bit sloppy when using these terms myself, but I think it's a good idea to just sort of baseline them a little bit. In this presentation I am going to focus mostly on machine learning. Okay, so, machine learning.
B: Let's simply look at how this would work within a business setting. We would set some sort of goal; we have some sort of objective that we would like to reach. You gather and prepare data, so that might be improving the quality of the data, reviewing the data to make sure there isn't known noise or errors within it. There's also a process of labeling the data.
B: The simplest, I think canonical, example here is that if you want to build an ML model that can identify pictures of cats and pictures of dogs in photos, you need to start out with a corpus of data that contains photos of cats and photos of dogs, and then you would go through ahead of time and label it: you put a little box around each dog and each cat, and you would label it "cat" or "dog."
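As a minimal sketch of what such a labeled corpus might look like (the file paths, field names and counting helper here are invented for illustration, not any particular labeling tool's format):

```python
# A sketch of labeled data for the cat/dog example: each record pairs a
# photo with human-drawn bounding boxes and class labels.
labeled_corpus = [
    {"image": "photos/0001.jpg",
     "annotations": [{"label": "cat", "box": (34, 50, 120, 140)},   # (x, y, w, h)
                     {"label": "dog", "box": (200, 40, 160, 180)}]},
    {"image": "photos/0002.jpg",
     "annotations": [{"label": "cat", "box": (10, 22, 90, 95)}]},
]

def label_counts(corpus):
    """Tally how many boxes exist per class -- a routine sanity check
    before training, since badly imbalanced labels hurt the model."""
    counts = {}
    for record in corpus:
        for annotation in record["annotations"]:
            counts[annotation["label"]] = counts.get(annotation["label"], 0) + 1
    return counts

print(label_counts(labeled_corpus))  # {'cat': 2, 'dog': 1}
```

Real labeling formats such as COCO or Pascal VOC carry the same information per object: an image reference, a box and a class name.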
B: Then you develop the model, which is both a development process and a computational process of turning that labeled data into a mathematical array that subsequent data can be compared against statistically in an IT setting.

B: You would then deploy that model as part of a development process and implement the way that your applications are going to interface with it, and then you would monitor and manage the model subsequently. That would provide feedback back into further model development, and one would hope that this process would also drive feedback back to the business, or back through a DevOps process, to the folks who are setting goals in the first place.
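The goal, data, model, deploy, monitor loop described above can be sketched end to end with a deliberately tiny stand-in for a real model (a one-feature nearest-centroid classifier; the feature, the data values and all function names are invented for illustration):

```python
from statistics import mean

# 1. Gather and prepare: toy labeled samples of (feature, label), where the
#    single feature might be, say, ear pointiness extracted from a photo.
training_data = [(0.9, "cat"), (0.8, "cat"), (0.2, "dog"), (0.3, "dog")]

# 2. Develop the model: reduce the labeled data to per-class centroids, the
#    "mathematical array" that subsequent data is compared against.
def train(data):
    by_label = {}
    for feature, label in data:
        by_label.setdefault(label, []).append(feature)
    return {label: mean(values) for label, values in by_label.items()}

# 3. Deploy: applications call predict() against the trained model.
def predict(model, feature):
    return min(model, key=lambda label: abs(model[label] - feature))

# 4. Monitor: track live accuracy, the number that feeds back into
#    further data gathering and model development.
def monitor(model, live_samples):
    hits = sum(predict(model, feature) == label for feature, label in live_samples)
    return hits / len(live_samples)

model = train(training_data)
accuracy = monitor(model, [(0.85, "cat"), (0.28, "dog")])
```

In practice the monitoring step's accuracy number is exactly the feedback the speaker describes: when it drifts, you go back to gathering data and retraining.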
B: Okay, so here's my shot at disambiguating MLOps and AIOps. DevOps we've already defined. MLOps means applying DevOps techniques to challenges that are well addressed by machine learning.

B: We are going to look for problems that are well addressed with machine learning, and we are going to try to attack those problems using a similar process as we would have used for application development under DevOps. And I think a reasonable analogy to think of here, when we're thinking about how organizations should look at how machine learning can be used internally, is this: within application development, you have a smart person plus an objective plus an algorithm, right?
B: They spend some time thinking of how the algorithm should work. Then you add a programming language, you write some code, and that plus some time gives you a microservice, gives you some useful software. So: smart person, plus objective, plus the algorithm, plus the coding language and coding environment, plus time, gives you some useful software. Software that can respond to input and give appropriate output.

B: In machine learning, a smart person plus an objective plus data plus time gives you a machine learning model. And in the end, that machine learning model is able to do similar things to what some sort of algorithmic solution might be able to do; it might even be able to address the exact same problem. So, how are they the same? You need a smart person, you need an objective, and you need time in both cases.
B: In the case of software development, you work from an algorithm, and in the case of machine learning, you use data to create a model. But I think, to a first approximation, you can think of these processes as very similar in terms of how they should fit into the scope of DevOps, or the scope of how you try to solve problems with IT, with software in general.
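The two paths can be contrasted in a toy example (the spam-filter problem and every name here are invented for illustration): in one case a person writes the rule; in the other, a crude "rule" is derived from labeled examples.

```python
# Path 1 -- software development: a smart person encodes the algorithm directly.
def is_spam_by_rule(subject):
    return "free money" in subject.lower()

# Path 2 -- machine learning flavor: the rule (here, a set of flag words)
# is derived from labeled data instead of being written by hand.
labeled = [("FREE MONEY now", True), ("free money inside", True),
           ("meeting notes", False), ("lunch friday", False)]

def learn_flag_words(examples):
    spam_words, ham_words = set(), set()
    for subject, is_spam in examples:
        (spam_words if is_spam else ham_words).update(subject.lower().split())
    return spam_words - ham_words  # words seen only in spam examples

def is_spam_by_model(flag_words, subject):
    return any(word in flag_words for word in subject.lower().split())

flag_words = learn_flag_words(labeled)
```

Both paths take a smart person, an objective and time; they differ only in whether the person supplies an algorithm or supplies data.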
B: Okay, so, my Venn diagram here. And one last concept here: AIOps. Here again, these are the two terms that I think are often conflated, and I don't think that there's any reason why these two terms necessarily couldn't mean the reverse; I think we just need two different terms for two different concepts. So this is Red Hat's stake in the ground on how they should work.

B: AIOps is applying machine learning techniques back onto DevOps and IT itself. So: can you use machine learning for log analysis? Can you use machine learning to look at the patterns of behavior within your incoming connections? Can you examine how your users interact with software using machine learning techniques? Okay, so that's what we're calling AIOps now.
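As one minimal sketch of the log-analysis idea: an AIOps pipeline often starts with a simple statistical baseline over metrics parsed from logs, before reaching for heavier machine learning. The traffic numbers and the threshold here are invented for illustration:

```python
from statistics import mean, stdev

# Requests-per-minute counts parsed from, say, web server logs.
baseline = [120, 118, 125, 119, 121, 117, 123, 122, 120, 118]

def is_anomalous(history, value, threshold=3.0):
    """Flag a data point more than `threshold` standard deviations from
    the historical mean -- a classic first step in AIOps-style log and
    metric analysis before moving to learned models."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > threshold * sigma

print(is_anomalous(baseline, 122))  # ordinary traffic
print(is_anomalous(baseline, 480))  # sudden spike worth alerting on
```

The machine learning versions of this replace the fixed z-score rule with models that learn seasonality and correlations across many metrics, but the feedback loop into operations is the same.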
B: So what you do is some MLOps, because it's just one of the techniques you have available to you, and then, on top of that, you also use AIOps to improve your own DevOps. Really, this can turn into a "turtles all the way down" sort of situation, but the place where they overlap is the famous pizza rule, right? No more than two pizzas per team.

B: Whoever is doing all this needs to go back to those key values of DevOps, where you have small teams that are allowed to work independently, to deliver based on feedback from their customers, and to rapidly iterate. Okay, so another little digression here: are the robots coming for our jobs? I'm going to say pretty emphatically no. Will IT workers be replaced by robots?
B: No. Will there be a difference in the rate of growth in IT? Will the rate of growth slow? Perhaps. Or will it accelerate? Yeah, perhaps. And artificial intelligence and machine learning may well be one of the factors that leads to a relative growth, or a relative reduction, in the rate of growth that we see in IT.

B: But overall, IT workers are not going to be replaced by machine learning models, and there are a few key reasons for this. Number one: AI is fantastic for some problems and it is absolutely terrible for some other problems. Playing chess: great. Organizing a chess tournament: probably not so great, right?
B
There
are
tasks
that
are
very,
very
easy
for
humans
to
do,
but
very,
very
difficult
or
perhaps
not
achieved
by
artificial
intelligence
at
this
point
right
or
or
only
done
badly
at
this
point
right,
so
you
know
the
classic
touring
test
is
just
you
know.
If
you
communicate
with
the
with
the
computer,
can
you
tell
that
it's
a
computer
and
not
a
human
right?
B
Whether
then
you
know
there
may
be
cases
where
the
touring
test
has
been
passed
in
some
context,
but
certainly
not
when
it
comes
to
picking
up
the
phone
right.
You
definitely
know
when
you're
on
the
phone
with
the
computer
at
this
point
right
and
many
things
related
to
what
it
actually
does
have
a
surprising
you
know
have
more
to
do
with
how
those
actual
human
interactions
work
than
we
might
think.
We
might
think
we
have
technology
jobs
when
in
reality,
many
of
us
have
actually
personal
interaction.
B
Jobs,
B: And the other thing to remember here: let's go back to that concept of the Red Queen. Consider a robot and a human working together. This is human intelligence that is supplemented by artificial intelligence; artificial intelligence that is used to do the things that human intelligence does really badly: being able to get answers out of large amounts of data, being able to make objective judgments based on questions that can be represented in data. Those are things that humans don't do very well, but that we can do very well if aided by machines. So the robot plus the human beats either the human or the robot acting independently of one another, and this is a Red Queen situation, right?
B: Everybody is going to be competing with each other; we're all going to be looking for differential advantage against one another. And so everyone will continue to employ humans, even if they only create marginal value, because in a competitive situation, in a Red Queen situation, marginal value makes a difference.

B: Even if humans only make computers 10% more effective, it will still be worthwhile to have the humans there, because that 10% edge will make you better than your competitors. So my answer is, in fact, quite emphatic:
B: No, robots are not coming for our jobs. Okay, peering into the future. There's that nice Jeff Bezos quote here, which I encourage you to go look up, about interfaces, about working with APIs.

B: So, looking into the future: where do machine learning tools and techniques lead IT? Or maybe I should broaden this out to data science and artificial intelligence as well.
B: So far, only challenges with a very high return on investment have been tackled using these techniques. For example, the internet ad market has been very well tackled because there was a massive return on investment available; that's why Google exists and why they make so much money. But think about that on aggregate, across industries, and not just across what we do particular to our individual industry.

B: Red Hat writes software, so there's a lot that can be done in terms of building and selling software for Red Hat, but there are also all sorts of things that could be done just to improve our internal operations and improve how we get things done as a company. Artificial intelligence and machine learning can improve your garden, if you want. So there's still a lot of room here. A major shift is happening to IT.
B: That's similar to the shift that happened when other industries adopted lean techniques from manufacturing. Lean is sometimes also known as the Toyota Way. If you're not familiar with lean, I highly encourage you to look into it, and I mention this because I happen to know the example of lean being applied to medical laboratories very well.

B: My dad happened to spend most of his career doing this kind of work, and they went very much from science conducted on a sort of cottage-industry basis, where you would have an individual laboratory tech, basically trained in science, conducting tests in a very ad hoc or perhaps even individual way, perhaps not even uniform with respect to the person who would sign up for the next shift. And that moved very much towards a manufacturing model, where the large testing facilities that exist all over the world, like the huge ones from, say, Quest, for example, are performing science in an industrial way that has been improved, and efficiencies have been drawn out of those systems, as though it were a manufacturing plant that produced cars. So that's a major change.
B: The same thing is going to happen to IT in terms of the implementation of artificial intelligence and machine learning techniques. It's going to be that big of a shift, and one of the questions that we have in front of us is: is our industry going to change willingly, in a productive way, and very rapidly?

B: Or is this something where we're going to change as companies go out of business because they can't compete, or change as people retire and as new people who don't know the old ways come into the industry? That's really the question we have in front of us. The knowledge of these techniques has become much more widespread, and, tying back to the earlier comment on the ROI for earlier challenges, so has the cost necessary.
B: The cost of the resources and tools needed to implement machine learning is falling dramatically, due to commodity hardware, open source software, and, of course, the pressure that the very large gaming industry put on the price of GPUs, which turned out to have this other ancillary use in accelerating machine learning workloads. So we can thank gamers for that.

B: The other concept that I really want to get across to my audience is that we are on the cusp of something great. Something amazing is happening right now, and using machine learning, the variety of problems that we'll be able to address with applications, and just with our work within IT, is going to increase dramatically.
B: There will be many of these sorts of problems that we would not have even considered tackling before that will now become just part of our normal operations, and that's a very exciting thing, in my opinion. It's going to be a great thing to see.

B: So, I have a scale. I want to talk this out in terms of the opportunity in front of us, using a scale that I devised, that came nearly completely out of my own head, and that I am convinced is mostly wrong. But there's a saying from data science that all models are wrong.
B: The question is whether or not they're useful. And this model that I've come up with here, I think, is probably mostly wrong in terms of many of its individual particulars, but I think it's still useful as a whole. So let's talk about that a little bit. We're going to call this Berrie's Incorrect Scale of AI Penetration on a Per-Industry Basis.

B: In my arbitrary scale here, I'm going to define Class A as the industries where AI is so prevalent that if you are not a heavy user of it, you simply cannot compete at all; you're in the process of being driven out of business. You may be relying on other competitive advantages for the time being, but unless the dynamics of the situation change, that's where it's heading.
B: These industries are heavily penetrated by these techniques, and much of the technology and the techniques that we are going to be working with were modernized and popularized there; they may even be open source projects that came out of these companies. So these are the industries that are really leading the way, and these are also industries that are generally not Red Hat customers as much; the huge players in these fields tend to do these things on their own, so I'm not going to focus on them that much.

B: Class B would be the case where competitors are lagging: data science or AI is prevalent in the industry, and so anyone in the industry who has not gotten on board with what is now the norm is currently facing a huge competitive disadvantage. Finance, insurance, energy exploration, sports: all of these industries have gotten a lot of press in terms of how much they've benefited from these techniques.
B: If you're in Class C right now, competitors are enjoying significant advantage insofar as these technologies have been adopted: information technology, utilities, healthcare, manufacturing, transportation, tourism, real estate, defense. It's not commonplace at this point to apply artificial intelligence techniques towards their endeavors, but those who are doing so are leaving their competitors behind, in the Red Queen view of this. And then we have Class D, and I apologize if you happen to be someone who works at an organization from this segment and you're going to tell me, "no, actually, we're doing this and we're great, and we should at least be in Class C."

B: I apologize if that's the case; good job. So: civil government, education, construction, agriculture. These are cases where these techniques are not used as prevalently, and large opportunities exist to advance against your competitors.
B: If you adopt these techniques, that is. So again, I don't want to say that my judgments or my guesses in any one of these particular areas are exact; we can reshuffle this a bunch of different ways and make it less wrong than it is right now. But I simply want to point out that there is a classification like this. There are distinctions. There are industries that are ahead of others.

B: There are segments of industries that are ahead of others, and especially if you are at one of those Class C or Class D organizations, in Class C or Class D markets, the benefits that are available to you with these techniques are massive, and the ROI is huge. And even if you are up in, say, Class B, the question should not be, "okay, great, we're doing what we need to be doing to compete within our industry."
B: The question should be: okay, since we have all of these great facilities for data science and machine learning, what other challenges can we apply them to? What are the lower-return-on-investment challenges where these techniques can start making sense? Those are the questions we need to be asking. So what are the risks here? The question for IT professionals, and I touched on this earlier, is this:
B
Will we integrate data science techniques into the discipline of IT in the light of what we have already learned from DevOps, or will data science end up standing aside as a separate priesthood? Will that throw-it-over-the-wall mentality persist? Will models be thrown over a wall to operations teams, much in the way that application code was once thrown over the wall to operations teams? And as data science practitioners proliferate,
B
will they find themselves estranged from IT in the way that IT is sadly often estranged from the business, or the customers, that it supports? That's what DevOps is meant to fix, but obviously DevOps (or another system that works as well as DevOps) isn't everywhere, so this does happen. Are we going to see the same mistakes repeated in a new, separate domain of technology, or are we not?
B
Lastly, here's the path forward, at least from Red Hat's point of view. What does Red Hat advise? If a client comes to me and says, "what do we do here?"
B
We talk about adopting open architectures, open processes, open cultures. The process and culture side is what I've mostly talked about here, so I'll leave those aside for now; they're the most important things, but for the rest of this presentation I'm going to focus more on the technology side.
B
Okay, if you're talking to Red Hat about this, what are we doing? We have the conceptual model of ML technology. Underneath that are the ML DevOps tools, things like TensorFlow, Jupyter notebooks, Python, Seldon, et cetera. These are the actual application platform that your data scientists will use to get their work done and to collaborate with other members of the DevOps team. Then there are the data services and pipelines.
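To make that platform layer concrete, here is a toy sketch of the kind of thin, predict-shaped wrapper a data scientist might hand to a serving layer such as Seldon. Everything here (the class name, the hard-coded scoring rule, the inputs) is invented for illustration; it is not Seldon's actual API, just the general shape of a model wrapper:

```python
# Illustrative model wrapper in the shape serving frameworks commonly expect:
# a class exposing predict() over something trained elsewhere. The "model"
# here is a stand-in scoring rule, purely for the sketch.

class SentimentModel:
    """Toy wrapper: scores texts in [0, 1] by counting positive words."""

    def __init__(self):
        # In practice this would load trained weights from storage; here we
        # hard-code a trivial rule as a placeholder.
        self.positive_words = {"great", "good", "excellent"}

    def predict(self, texts):
        """Return a score in [0, 1] for each input text."""
        scores = []
        for text in texts:
            words = text.lower().split()
            hits = sum(w in self.positive_words for w in words)
            scores.append(min(1.0, hits / max(len(words), 1) * 5))
        return scores

if __name__ == "__main__":
    model = SentimentModel()
    print(model.predict(["this talk was great", "no comment"]))
```

The point of the shape is the hand-off: the data scientist owns what is inside `predict`, while the platform owns packaging, scaling, and monitoring around it.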
B
So, are the people that are working within these teams able to get the resources they need when they need them? Are they empowered? That's really what that's about. GPU compute acceleration is very, very important, and Red Hat has put a lot into that. We at Red Hat believe that it's very important to remain flexible in terms of how the underlying infrastructure works; different workloads fit different environments
B
significantly better, and there are interesting problems around lock-in, or lack of flexibility, if you commit fully to any one of these types of infrastructure. So we recommend maintaining a hybrid approach to infrastructure.
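As a concrete note on the GPU point: on Kubernetes and OpenShift, a workload reaches a GPU by requesting the extended resource that the NVIDIA device plugin advertises, `nvidia.com/gpu`. Here is a minimal sketch that builds such a pod manifest as plain data; the pod name and container image are invented for the example, and a real deployment would go through your usual pipeline rather than a hand-built dict:

```python
# Sketch: build a Kubernetes/OpenShift pod manifest that requests GPUs via
# the extended resource name exposed by the NVIDIA device plugin.
import json

def gpu_pod_manifest(name, image, gpus=1):
    """Return a pod manifest (as a dict) requesting `gpus` NVIDIA GPUs."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [
                {
                    "name": "trainer",
                    "image": image,
                    # The scheduler will only place this pod on nodes
                    # advertising enough "nvidia.com/gpu" capacity.
                    "resources": {"limits": {"nvidia.com/gpu": str(gpus)}},
                }
            ],
        },
    }

if __name__ == "__main__":
    print(json.dumps(gpu_pod_manifest("train-job", "example.com/train:latest"), indent=2))
```

Because the manifest only names an abstract resource, the same spec runs unchanged on-premises or in a public cloud, which is one concrete payoff of the hybrid approach.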
B
So, software tools: the Red Hat Decision Manager and Process Automation tools make it easier to have a framework to work with the business, and perhaps even to offload some of the logic of how business processes work to business analysts, rather than having to see it get operationalized in code. Our runtimes tooling lets you build the right kinds of applications for agility in this sort of environment, for data pipelines and services.
B
Our integration tools are solid: Red Hat AMQ for queueing, including AMQ Streams, which is our Kafka distribution, and Fuse, which is our Apache Camel distribution. These things are coming together in a very interesting way with OpenShift at this point.
B
Basically, the whole stack above that is coming together into event-driven architectures: function-as-a-service kinds of architectures, similar to what you might have seen with, say, AWS Lambda or the badly named "serverless" architecture, but more widely applicable and more customizable for your needs than the one-size-fits-all you get out of existing function-as-a-service offerings. And then underneath, software-defined infrastructure: being able to connect to those GPUs, and having the container, storage, and virtualization platforms that you need underneath to support it.
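The event-driven, function-as-a-service shape described above can be sketched in miniature: handlers subscribe to event types, and a dispatcher fans each incoming event out to them. This in-process toy (all names and the event type are invented) only illustrates the pattern, not how Lambda or any serverless platform is actually implemented:

```python
# Minimal in-process sketch of an event-driven, function-as-a-service
# pattern: functions register for event types; a dispatcher fans each
# incoming event out to every subscribed handler.
from collections import defaultdict

_handlers = defaultdict(list)  # event type -> list of handler functions

def on(event_type):
    """Decorator: register the decorated function as a handler for event_type."""
    def register(fn):
        _handlers[event_type].append(fn)
        return fn
    return register

def dispatch(event_type, payload):
    """Invoke every handler subscribed to event_type; return their results."""
    return [fn(payload) for fn in _handlers[event_type]]

@on("model.scored")
def log_score(payload):
    return "scored %s -> %.2f" % (payload["id"], payload["score"])

@on("model.scored")
def alert_low_confidence(payload):
    return "ALERT" if payload["score"] < 0.5 else "ok"

if __name__ == "__main__":
    print(dispatch("model.scored", {"id": "req-1", "score": 0.42}))
```

In a platform-grade version the dispatcher is the managed piece, so adding a handler is all a team ships; that decoupling is what makes the architecture customizable rather than one-size-fits-all.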
B
Working in open source, in open communities, is key to all of this. The project that we use to bring together community collaboration is the Open Data Hub. It's our way to work with partners to create operators so that their tooling works seamlessly on top of OpenShift, and much of this, of course, also applies to Kubernetes.
B
This is an interesting space because there are many different ways to solve the same challenges, different approaches that are preferred by different teams, and with good reason. So Red Hat is trying to be more of a general friend to many partners than to necessarily pick winners in this space.
B
There are a few places where we have made commitments, for example, to Kubeflow, which is a way to drive machine learning workloads on top of Kubernetes or on top of OpenShift. And we have a very large spectrum of third parties that we're working with; it's frankly difficult to keep track of all the different ISVs, hardware partners, and major partners that you might not think of as being key for Red Hat. For example, we're doing great work with Microsoft right now around making data-driven applications available for users and running on top of OpenShift.
B
So if you have questions about this stuff, talk to your friendly neighborhood Red Hatter. There's a lot we can go into here, but I just wanted to give you a view of Red Hat's approach when it comes to technology.
B
All right, so we've got 10 minutes left. Diane, any questions in the chat?
A
There are a couple. One viewer just asked: any hardware acceleration with edge TPU, with Google Coral? Google Coral I had heard of before, so I'm not sure whether it's on your radar.
B
So TPUs, tensor processing units, are not supported yet within OpenShift. That's an area that I think is considered emerging right now. The work that we've been doing with NVIDIA to make scheduling of GPUs possible is going great; it's fully supported within Red Hat, and we're going to see better FPGA and TPU support going forward.
A
I think we just did something there. I think I just saw a reference architecture float out yesterday, the Dell reference architecture for GPUs; Diane Feddema worked on that, and we'll probably have her on for it. But I haven't heard much about TPUs yet, so I think that's still a work in progress, or some research that we probably need to do. You mentioned really early on in your talk, and I have to say, one of the outcomes from this talk should be a book reading list, because...
A
A Red Hat machine learning book: was that a book, or is that...?
B
Oh, so we have an ebook. You're right, I should reference that ebook; I'm not sure that I mention the Red Hat ebook in this deck, and I should. I have it on my machine here. What's the title again?
A
Yeah, if you can, not this instant, but just send me a link to that. Waleed was asking for that early on.
B
Yeah, hold on a second, it is...
A
And Chris is typing, it's...
B
The book's called "Top Considerations for Building a Production-Ready AI/ML Environment," and yeah, it's a free ebook that's available on our website; we'll throw the link in the notes. I think it does an excellent job of giving you a sort of corporate overview of what the challenge in front of us is,
B
perhaps less about the sort of higher-level concepts, and more about what the opportunity is and how you would apply this. So yeah, it very much informed my thinking around this.
A
Yeah, I think somebody finally found the link and threw it in the chat. That's great, thanks. Beverly, you had a question; do you want to unmute yourself and ask it?
C
Yeah, absolutely. Zach, that was a really great presentation, and...
B
Yeah, it certainly is possible. I mean, this is something that you could completely outsource, and I think a pattern that we're seeing in the field at this point is that a line of business that has a budget and has a business problem with a large ROI may go out and hire an SI, or they may hire their own data scientists, and they may just use AWS, or Azure, or
B
one of the public clouds, or Google, to perform data science work: perhaps get the insights they need, perhaps build some models. And then, in terms of how that will translate into how it fits into an overall application architecture,
B
it will turn into a sort of thing where the line of business as a whole could look at the people that are actually running the production applications and say, "hey, we made this thing, now go run it," or "this has a lot of value."
B
"You need to now make sure that it stays up 24/7 and gives the results we want," or "we need you to shoehorn this into your architecture," not as part of a coherent whole. And if you think about it in terms of how your feedback cycle is going to work: are you going to be able to coherently manage how that application works as a whole moving forward?
A
I also thought, in your diagram where you had AIOps, MLOps, and DevOps, the pizza diagram there (yep), I really liked the way that you said applying AIOps equals applying machine learning back onto DevOps and IT itself. And I see that even at Red Hat now, in the telemetry: looking at where people are going and applying that to the back end of OpenShift and all the different flavors of hosted OpenShift, so that we can see.
A
So it's the emergence of these two fields, spun out of DevOps and these DevOps practices, that I think is really what's empowering the use of ML and AI in a lot of organizations. And organizations that just use ML, or just try to train something on one set of data and don't incorporate it into the whole workflow, are missing out on a big chunk of the opportunity here. You know, when we see some of the work that...
A
A lot of, well, there have been a lot of Red Hatters volunteering on COVID tracking projects, and so we're seeing sort of one-off applications of using the Open Data Hub reference architecture.
A
But the opportunity is to tune that architecture and to apply MLOps to that Open Data Hub, or whichever architecture you're using, to really take your organization's use of ML or AI to the next level, whichever side of the fence you're on there: the admin IT side, or the side that's training data models and doing data science research.
B
Yeah, and I think this is something we're going to end up seeing in terms of OpenShift. As machine learning becomes a commonplace workload on top of OpenShift, part of every large environment or one of our common deployment types,
B
a thing that you see in OpenShift already is that the tools that are used within OpenShift to the advantage of people that are managing applications, things like Prometheus and the monitoring tools and so on, we take those tools and we turn around and apply them back to the platform itself.
B
Okay, so as it becomes commonplace to use these techniques for OpenShift workloads, we are also going to see a proliferation of using the techniques for management of OpenShift itself.
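To make "applying these techniques back to the platform" concrete, here is a deliberately tiny sketch: a rolling z-score detector over a metric series of the kind you might scrape from Prometheus. The window, threshold, and sample data are arbitrary assumptions for the illustration; a real AIOps pipeline would be far more involved:

```python
# Toy AIOps sketch: flag anomalous points in a metric series (e.g. request
# latency scraped from Prometheus) using a rolling mean/stdev z-score.
from statistics import mean, stdev

def anomalies(series, window=10, threshold=3.0):
    """Return indices whose value deviates more than `threshold` standard
    deviations from the rolling mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        # Skip flat history (sigma == 0) to avoid dividing by zero.
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

if __name__ == "__main__":
    latencies = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 500, 100]
    print(anomalies(latencies))  # flags the 500 ms spike at index 10
```

The same shape (fit a baseline on recent platform telemetry, alert on deviation) is the entry point for the much richer models that platform-management AIOps work builds on.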
B
I don't know how long it's going to take to get there, but I know that work is underway. Yeah.
A
We're doing it, yeah. I get to see little glimpses of it every once in a while, and it's pretty amazing. So I want to take this time, if you can go back to your final slide, maybe the landscape slide there, just to say thank you. A number of the folks that are on this landscape slide have done past OpenShift Commons briefings and been on before; we're having a whole lot of these.
B
There's a Converge IO talk starting right now.
A
Yeah, right after this one. So if you want to pop the link to that in, people can jump in there; that would be great. There's a ton of content going on, and there's lots of great work going on at Red Hat and elsewhere, and really, what we're trying to do with Commons is get you into that firehose and make it useful. And Zach, I have to say, I feel a little pity for your daughter or your son, who's getting
A
this agile training and upbringing, but I think it might be useful. And I wanted to really thank you for taking the time today, and Beverly and everybody else for coming with your questions and participating in the conversation. That's great; we'll definitely have you back to do some deep diving on this, and try to get some more of these Commons members and partners on board, especially the ones that have operators that already work with OpenShift.
A
We have a push in the coming month to get as many of the certified operators as we can to show off their stuff, to make this easier for everybody.
So thanks again, and we look forward to hearing from you again, and...
A
and for this, because I'm going to have to go out and read that Red Queen book and a couple of the others that you mentioned. So thanks for the reading list, and take care, everybody.