Description
What's the Deal with Managed Services and Model Delivery?
Guest Speaker: Audrey Reznik, Red Hat
June 21, 2021
OpenShift Commons Briefing
Link to Slides: https://bit.ly/3vQbI3h
Abstract: In this briefing, we learn what managed services are and how they are increasingly used in hybrid cloud platforms to enhance and help with the model delivery process. We finish the session by taking a quick look at what managed services are available in the new Red Hat OpenShift Data Science platform.
A: Well, hello and welcome, everybody, to another OpenShift Commons briefing. Today we're going to be talking with Audrey Reznik, who is a data scientist here at Red Hat, about managed services and data science. Her topic today is: what's the deal with managed services and model delivery? So it's going to be a bit of a technical overview.
B: All right, well, good day, folks. My name is Audrey, and it's really a pleasure to be able to speak to you today about managed services and model delivery. This is going to be a gentle introduction to what managed services are and how they can be used to gracefully deploy your model into a hybrid cloud.
B: The items that I'm going to talk about today are: exactly what are managed services? Who cares about them? Where do I find managed services? And what do managed services have to do with model delivery, specifically? So we'll go through a use case, and then the question that is really important: are these managed services easy to use? Because whenever somebody tells me about something new, that's usually the first thing I ask. It sounds good, but is it easy to use?
B: Managed services, in generic IT, are the practice of outsourcing the responsibility for maintaining, and anticipating the need for, a range of processes and functions, all in order to improve IT operations and cut expenses. So we're going to look at everyday examples of managed services that you would find in a normal IT organization. This will give us a really good baseline, not only in terms of what these services are; it will also get you thinking about what managed services could be available for data science.
B: The first thing that we have, and I think everybody's familiar with it, is the help desk. Then we go on to equipment installation and, along with that, hardware maintenance; moving services could also be placed under equipment installation. Then there's firewall and security: we need to be able to keep our organizations safe and secure. We don't want people breaking in.
B: We do that in part by keeping up to date on antivirus patches and various system updates. Monitoring is a big part of these services, because we want to see how our systems are performing. Are they performing well enough with the number of users that we have? If we add more users, are the systems being overwhelmed? And speaking of being overwhelmed, what about disaster recovery? What happens if our main facility or our shop is wiped out?
B: That's just a small sample of managed services, and there are so many more. Today, in data science, we have the added complexity of the cloud as part of the platform that we work on, and there are a number of services that come with that. The security, the data repos, the servers, the communication and sharing services: everything that we have is just a little bit more complex. So now we understand what generic IT managed services are.
B: Let's look at what I call managed services for data science, or the services that would make sense in data science, and how these services would help us deploy an AI/ML application or model into production. Whatever managed services we create, we have to allow the data scientists to focus on building their models while building their solutions. Speaking as a data scientist myself: I want to experiment with the latest bells and whistles. I don't want to deal with upgrades. I don't want to deal with supported versions.
B
I
don't
want
to
have
anything
to
do
with
compatibility
issues.
I
just
want
to
focus
on
my
solution.
However,
this
does
not
mean
that
a
data
scientist
is
able
to
walk
right
up
to
it
devops
hand
them
their
laptop,
where
they've
been
creating
an
aiml
model
in
isolation
and
say:
okay,
I'm
good.
I
need
this
model
to
go
into
production
tomorrow.
B: So in data science there are a few steps that data scientists are interested in when thinking about creating an AI/ML model to solve a particular problem, and I feel that these steps would make good managed services. That's what we're going to go through and look at. It all starts with data acquisition, so we're looking at extracting and transforming the data.
B
The
next
thing
you
want
to
do
is
to
be
able
to
run
experiments
and
create
the
models.
We
can
provide
a
notebook
environment
for
model
experiments
and
for
the
customers
that
would
like
to
access
the
curated
data
science
packages.
We
have
anaconda
commercial
edition
integrated
for
those
that
are
looking
to
take
advantage
of
things
like
auto
ml
and
then
make
use
of
things
like
ibm
watson,
studio
once
you've
coded
these
experiments
and
you've
determined
that
they
will
fit
the
model
that
you
have
and
it
looks
like
your
model
is
good
and
primed.
B
You
want
to
be
able
to
access
any
hardware
accelerators
to
speed
up
the
time
to
value
we
partnered
with
nvidia
to
provide
gpu
capability,
and
what
we
also
then
want
to
do
is
go
ahead
and
then
deploy
these
models
as
services.
Once
you
have
your
models
developed,
you
can
use
our
source
to
image
templates
that
we
have
or
use
openshift
pipelines
to
deploy
and
do
endpoint
for
testing.
You
can
also
use
selden
deploy
for
for
model
serving.
B
Then
we
want
to
look
at
monitoring
the
models
and
tracking
performance.
We
can
continue
to
use
things
like
seldom
deploy
or
watson,
machine
learning
and
watson
open
scale
for
any
of
the
model,
monitoring
and
performance
tracking
to
know
when
you
need
to
kind
of
retrain
your
model
and
redeploy
it,
and
when
you
look
at
this
overall
picture
that
I've
set
up
here
or
this
path,
keep
in
mind
that
for
any
it
ops,
that's
looking
at
this
or
devops.
B: This flexibility can really be a nightmare, because they want a reliable, stable, reproducible environment for their customers, which we hope that we can provide here. So now that we've defined this set of managed services, let's look at who, besides data scientists, would care about these services.
B
It's
not
only
the
data
scientists,
but
also
the
data
engineers
and
I.t
ops
that
care
about
these
services
and,
alongside
with
these
managed
services,
there
are
other
things
that
kind
of
really
fall
into
place
nicely.
You
want
to
have
an
ai
ml
model,
operational
life
cycle
and
that's
kind
of
what
I
sort
of
outlined
in
the
previous
slide.
B
The
other
thing
that
you
want
to
care
about
is
that
production-ready
platform
and
this
platform
has
to
be
something
that
itops
feels
really
good
about,
because
the
managed
services,
as
I
mentioned,
can
be
a
nightmare
for
it.
Ops
is
they
want
something:
that's
reliable,
that's
stable
and
reproducible
for
their
for
their
customers.
B: And, lastly, the ability to deploy, and the portability to move your application off the platform that you initially developed on, allows you not to be tied to a particular vendor. I personally feel that to be very innovative these days you need to be able to try a wide variety of technologies and services, and that means trying out a large number of vendors, so that you can create the best product that you can for your customers.
B: Now, with all of that said, there actually is a middle ground where we can make everybody happy; at least I feel that there is. So let's see if we can create a data science managed services platform that satisfies this middle ground, with all the items that we've talked about. We're going to start with the infrastructure, the hybrid cloud platform. We'll go for hybrid cloud, so that we have things on-prem, inside our own network.
B
Maybe
we'll
use
something
as
amazon
web
services,
as
our
public
cloud
portion
should
offer
very
a
very
consistent
experience
across
on
on
premises
in
the
public
cloud,
as
well
as
to
the
edge
locations,
and
all
of
that
has
to
be
efficiently
managed
by
it
operations.
B: We could have all these services, but what would be really fantastic is to have them all supported on a self-service, hybrid, multi-cloud platform. That's the platform that would really empower anybody, such as a data scientist, data engineer, or software developer, to be agile and collaborative through the whole process, without depending too much on IT operations for individual tasks. We don't want to fill out many tickets to say: I need access to this, or I need this type of service.
B
So
here's
kind
of
the
conceptual
architecture
for
this
ai
ml
model
services
so
we'll
go
into
kind
of
a
typical
project
lifecycle.
So
we
have
data
engineers
that
are
working
on
gathering
and
preparing
the
data
to
make
sure
it's
ready
for
the
data
scientists
to
develop
their
machine
learning
or
ai
models
and
a
managed
service
that
we
could
possibly
choose
to
use
is
starburst
and
starburst
is
a
fully
managed
service.
You
can
access
your
data
using
trino.
B: For Python, we may use pandas or NumPy; we might choose something else, such as TensorFlow, depending on the problem we're working on. But we want the data scientists, at the end of the day, to really be able to experiment with those packages. So again, whether it's TensorFlow, PyTorch, scikit-learn, or any others, the whole idea is to have these tools or services available so that data scientists can do the experimentation.
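The kind of quick experimentation loop described above might look like the following sketch, assuming scikit-learn is available in the notebook image; the synthetic data set and model choice are just illustrative.

```python
# Hypothetical notebook-style experiment: fit a small classifier on synthetic
# data and check held-out accuracy. Nothing here is specific to the platform;
# it only illustrates the "experiment with the packages" step.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Generate a small, reproducible toy data set.
X, y = make_classification(n_samples=200, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit one candidate model and score it on the held-out split.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

In practice a data scientist would iterate over several such candidates (swapping in PyTorch or TensorFlow models as needed) before settling on one to deploy.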
B: So again, this is part of the model lifecycle. We want to take our model, deploy it, and start some inferencing: making predictions based on that data, and seeing whether the problem that you're trying to solve is going to be solved by what you're experimenting on right now. There are managed and self-managed services, such as Seldon Deploy, which help us build a pipeline and actually deploy our model.
B
The
work
does
not
stop
there
when
the
model
is
deployed.
I
know
some
people
that
say:
okay,
I'm
done
you
have
to
continuously
monitor
and
manage
any
of
your
aiml
models
that
you
create
in
production.
Make
sure
that
they're
making
the
right
predictions
make
sure
that
there's
drift
not
happening
and
you're
not
going
to
be
doing
that
by
staring
at
a
monitor
and
looking
at
your
model
performance
through
some
simple
little
script
that
you
you've
written,
you
want
to
have
some
cited
services
that
will
give
you
alerts.
B
Tell
you
when
the
model
is
drifting
so
that
you
can
continuously
again
go
ahead
and
monitor
and
manage
your
your
model
in
production
to
make
sure
that
they're
making
those
right
predictions
and,
of
course,
when
you
do
find
something
that
is
drifting
or
something.
That
is
not
quite
right.
You
need
to
have
that
ability
to
retrain
those
models
as
as
needed,
so
keeping
that
in
mind
that
that's
kind
of
our
ideal
sort
of
managed
services
and
kind
of
the
platform
that
would
go
along
with
it.
B
Let's
actually
take
a
normal,
or,
I
should
say
an
actual
machine
learning
use
case
that
one
of
my
own
colleagues
is
working
on
and
see
if
this
kind
of
data
managed
services
and
model
delivery
platform
that
we've
kind
of
come
up
with,
would
actually
work
for
that.
B: There is a project being undertaken by one of my colleagues, Guillaume Moutier, for Metro London, that has to do with license plate detection. It all comes down to looking at cars, grabbing the license plate, and being able to monitor traffic movement, car registration, and any sort of licensing fees. For license plate detection, the machine learning model has to be able to detect the license plate on a vehicle.
B
Data
can
then
be
stored
or
read
or
analyzed
through
kafka
in
this
instance,
and
just
for
folks
that
don't
know
kafka
is
an
open
source
software
that
basically
provide
a
framework
so
that
you
can
store,
read
and
analyze
any
of
your
streaming
data.
So,
for
instance,
here,
if
we're
looking
at
some
of
that
streaming
data
and
we
found
a
license
plate
for
somebody
where
something
was
notably
important
about
that
car,
we
could
throw
an
amber
alert.
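The alerting logic sketched here can be illustrated with a few lines of Python; a plain generator stands in for the Kafka consumer (a real deployment would read from a topic with a client such as kafka-python), and the watch list and sample plates are invented.

```python
# Sketch of the "flag a plate from the stream" idea above. The stream source
# is a stand-in for a Kafka consumer; plates and watch list are hypothetical.
WATCH_LIST = {"VU69 YDE"}  # plates that should trigger an alert

def plate_stream():
    """Stand-in for a Kafka consumer yielding detected plate strings."""
    yield from ["AB12 CDE", "VU69 YDE", "ZZ99 ZZZ"]

def alerts(stream, watch_list):
    """Yield an alert message for every plate that appears on the watch list."""
    for plate in stream:
        if plate in watch_list:
            yield f"ALERT: {plate} spotted"

for message in alerts(plate_stream(), WATCH_LIST):
    print(message)  # ALERT: VU69 YDE spotted
```

Swapping the generator for a real consumer loop would leave the filtering function unchanged, which is the point of keeping the alert logic separate from the transport.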
B: Finally, we have to actually store that data, whether we use an object warehouse or we go back to a vehicle registration database. Storing that data then gives us the ability to do further analysis: can we look at that data and do some analysis on traffic movement, congestion, parking, and so on?
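As a sketch of that follow-on analysis, stored plate reads could be queried for traffic counts; the schema, table name, and sample rows below are invented for illustration, using SQLite in place of the real registration database.

```python
# Hypothetical traffic analysis over stored plate reads: count vehicles
# observed per hour as a crude proxy for congestion.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plate_reads (plate TEXT, seen_hour INTEGER)")
conn.executemany(
    "INSERT INTO plate_reads VALUES (?, ?)",
    [("AB12 CDE", 8), ("VU69 YDE", 8), ("ZZ99 ZZZ", 9)],
)

rows = conn.execute(
    "SELECT seen_hour, COUNT(*) FROM plate_reads "
    "GROUP BY seen_hour ORDER BY seen_hour"
).fetchall()
print(rows)  # [(8, 2), (9, 1)]
```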
B: The good news at this time is that the services a data scientist could use actually exist, as the Red Hat OpenShift Data Science platform. I'm going to use this platform, which we call Red Hat OpenShift Data Science, to show you how you can use the managed services that we've discussed to deploy an ML model.
B
Each
managed
service,
whether
it's
red
hat
or
a
partner
service
component,
basically
we'll
go
ahead
and
integrate
along
with
a
series
of
quick,
starts
and
tutorials.
So
that
way,
users
can
not
only
work
with
their
self-managed
services.
They
can
also
self-teach
or
understand
things
better
about
that
manage
services
so
that
they
can
get
started
working
with
any
of
the
components
or
services
and
once
users
have
enabled
components.
So
in
this
example,
in
the
far
screen
capture
in
the
back,
you
see
that
I've
enabled
jupiter
hub.
So
that's
a
component.
B
That's
going
to
be
available
for
my
use
again,
along
with
all
the
quick
starts
and
tutorials.
B: Those will always continue to be available for people to look at. But let's go back specifically to this JupyterHub managed service and launch it. I'm not going to walk through a live demo, because we know how those happen sometimes; it'll be a canned slide demo, so we'll be clicking through things on slides and seeing how this all comes together. So again, we're assuming that our data set for the licenses has already been curated.
B
Therefore
we
begin
by
using
jupiter
hub
so
that
we
can
experiment
with
the
data
and
just
to
note
here
just
because
we
use
a
jupiter
hub,
managed
service
at
this
point
in
time
it
doesn't
mean
that
we
can't
integrate
with
any
other
services,
that
is,
that
are
out
there
or
go
back.
We
certainly
can
we
have
the
freedom
to
do
that
and
you
have
the
freedom
and
the
ability
to
manage
and
use
as
many
services
as
you
like
realistically,
when
you're
developing
something
there
may
be
other
parts
of
the
system
that
you
haven't
thought
about.
B: If you don't know what a container is: you can think of it as a single entity or unit that combines your entire runtime environment, which includes your application, any of its dependencies and libraries (any of the Python libraries that you may be using), other binaries, and any of the configuration files needed to run your application. It's all bundled into one package, and by containerizing the application, the platform it runs on and its dependencies are abstracted away.
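A container image like the one described is typically defined in a Containerfile; this sketch is purely illustrative, and the base image, file names, and dependency list are invented for the example.

```dockerfile
# Hypothetical Containerfile bundling the runtime environment described above.
FROM registry.access.redhat.com/ubi9/python-311

# Application dependencies (the Python libraries the app uses)
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application code and configuration files
COPY prediction.py config.yaml ./

# Run the application
CMD ["python", "prediction.py"]
```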
B: You can then run it anywhere, on-premises or in a public cloud, like AWS. So again, containerization provides that clean separation of concerns, so that developers can focus on their application logic and dependencies, and the IT teams can focus on the deployment and management of that container, without bothering about application details such as a specific software version or an app's configuration.
B: You have the ability to add one or more GPUs, based on the type of data analysis that you're doing and, of course, on the ML code that you're working on. In this rendition we won't use GPUs, but remember, we can always go back and recreate our notebook image with different options if we choose. Users also have the ability to add environment variables that they need in their project.
B
So
this
is
an
example
of
adding
an
aws
s3
access,
key
id
environment
variables
to
access
an
s3
bucket,
so
access
your
data
in
aws
and
we're
going
to
then
once
we
finish,
adding
in
the
secret
access
key
environment
variable
and
its
value.
We
click
the
start
button
to
spawn
our
new
jupiter
notebook
image
and
that
can
take
a
bit
to
spin
up.
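Inside the spawned notebook, code can pick those injected credentials up from the environment; the sketch below simulates the injection, the key values are obviously fake, and the bucket name is invented (a real client such as boto3 reads these same variables automatically).

```python
# Sketch of reading the spawner-injected AWS credentials from the environment.
import os

# Simulate the values the notebook spawner injects; in the real notebook
# these variables are already present.
os.environ["AWS_ACCESS_KEY_ID"] = "EXAMPLEKEYID"
os.environ["AWS_SECRET_ACCESS_KEY"] = "example-secret"

access_key = os.environ["AWS_ACCESS_KEY_ID"]
secret_key = os.environ["AWS_SECRET_ACCESS_KEY"]
print(f"using access key {access_key} for bucket license-plate-data")
```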
B
This
means
without
having
to
install
and
maintain
anything
on
your
computer
and
without
disposing
a
lot
of
local
resources
like
cpu
and
ram.
You
can
go
ahead
and
still
conduct
your
data
science
work
in
this
stably
managed
environment.
So,
let's
go
ahead
and
populate
jupiter
lab
right
now,
with
our
current
license,
plate
get
repo.
B
So
what
we'll
go
ahead
to
do
is
go
up
into
the
main
menu,
we'll
choose,
get
and
we'll
choose
cloner
repository
and
then
we'll
enter
the
name
of
the
repository
and
press
the
clone
button
to
clone
the
license
plate
workshop
repository
note,
you
could
be
asked
for
your
get
credentials
so
you'd
enter
your
credentials
and
then
again
press
ok
to
continue,
and
what
you'll
then
see
is
that
actual
license
plate
workshop
repo
files
appear
under
the
the
name
pane
in
the
left
hand
side
of
of
the
actual
window.
B
We
can
then
go
ahead
and
open
up
any
of
the
the
notebooks
and
or
we
could
be
creating
notebooks
in
in
this
case,
I'm
showing
just
an
example
of
a
notebook
that
we
used
to
recognize
and
extract
the
license
plate
numbers
from
car
pictures,
and
we
installed
some
libraries
a
little
earlier
on
in
this
jupiter
notebook
that
weren't
part
of
the
container
image.
That's
also
something
important
to
realize
is
that
not
every
image
will
be
totally
perfect
for
everybody.
There
may
be
additional
items
that
you
can
install
and
that's
very
easy
to
do.
B
Earlier
on,
before
we
got
to
this
point,
we
learned
how
to
kind
of
create
the
code
that
would
be
able
to
extract
the
number
from
the
given
license
plate.
But
of
course
you
can't
use
a
notebook
like
this
in
a
production
environment.
I
do
know
people
that
have
tried
to
use
jupiter
notebooks
in
production.
It's
not
a
good
idea.
B
It's
not
a
good
idea.
Therefore,
we're
going
to
package
this
code
as
an
api
that
you
can
directly
query
from
another
application,
and
we
do
this
by
creating
a
flask
application.
A
few
explanations,
though
the
the
code
that
we
wrote
for
this
particular
problem.
That
guillaume
was
working
on
and
said
all
those
jupiter
notebooks
that
you
saw
previously
end
up
being
repackaged
as
a
single
python
file.
B
With
that
we
call
prediction.pi
and
basically
it's
just
code
that
was
in
all
the
cells
of
the
notebook
and
put
together
within
a
single
file
and
to
use
that
code
as
a
function
that
you
can
call
you
just
add
a
function
called
say
predict
that
takes
a
string
as
an
input
which
would
be
the
name
of
a
picture.
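The shape of that refactor might look like the sketch below; the plate-reading internals are stubbed out with a canned result (in the real prediction.py they would run the trained model), and the function and field names are illustrative, not the project's actual API.

```python
# Sketch of prediction.py: notebook cells collapsed into one module exposing
# a predict() function that a Flask route (or any caller) can invoke.

def _read_plate(image_name: str) -> str:
    """Stand-in for the notebook's detection + OCR cells (canned result)."""
    return "VU69 YDE"

def predict(image_name: str) -> dict:
    """Take a picture name and return the predicted plate as a JSON-able dict."""
    plate = _read_plate(image_name)
    return {"image": image_name, "plate": plate, "status": "ok"}

print(predict("car.jpg"))
```

A Flask route would then be a thin wrapper that parses the request, calls `predict`, and returns the dict as JSON.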
B
So
in
this
case
here
we're
just
going
ahead
and
launching
it
locally
and
we
could
go
ahead
and
then
test
our
flask
application
and
see
if
it's
working
and
it
looks
like
our
status
returned
that
it
was
okay.
So
now
that
the
application
that
we
verified,
that
it's
working
we're
ready
to
package
it
as
a
container
image
and
have
it
run
directly
on
openshift
as
a
service,
and
when
you
do
that,
you're
able
to
call
that
service
from
any
other
application.
B: We go ahead and import our license plate code from the Git repository to be built and deployed, and we select a number of options to create a deployment for this model. Most importantly, we want to create a route, that is, a URL through which we'll be able to access our application.
B: We can also run this app from a Jupyter notebook (who would have thought?). In this case we'll add an image, which I'm just calling car.jpg, a photo of a car with a license plate, and I'll also add that URL, or route, that I created in OpenShift. If I run the cell, I'll see the prediction. I screen-captured the car so that you could see what the license plate number was: the prediction came back with VU69 YDE, which is actually correct.
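The notebook cell that queries the route boils down to an HTTP call; in this sketch a local stub server stands in for the deployed service so the call can run end to end, and the URL, query parameter, and response fields are invented for illustration.

```python
# Sketch of calling the deployed prediction route from a notebook. A local
# stub server plays the role of the OpenShift service.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubService(BaseHTTPRequestHandler):
    """Minimal stand-in for the deployed prediction endpoint."""
    def do_GET(self):
        body = json.dumps({"plate": "VU69 YDE", "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubService)
threading.Thread(target=server.serve_forever, daemon=True).start()

route = f"http://127.0.0.1:{server.server_port}"  # stand-in for the route URL
with urllib.request.urlopen(f"{route}/?image=car.jpg") as resp:
    prediction = json.load(resp)
server.shutdown()
print(prediction["plate"])  # VU69 YDE
```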
B
So
now
that
we've
done
that,
let's
take
a
look
and
see
if
the
options
that
we
were
talking
about
in
the
managed
services
platform
that
we
originally
put
together
are
actually
there
and
they
are
again.
This
was
a
conceptual
architecture
for
managed
services
and
and
model
delivery
that
that
we
were
talking
about.
B
But
again,
this
is
actually
the
architecture
for
the
red
hat
openshift
data
science
platform
and,
as
we
discussed
earlier,
we
have
that
typical
aiml
model
or
workload
lifecycle
from
gathering
and
preparing
your
data.
Developing
your
model.
Integrating
your
models
and
app
development
and
doing
some
model
management
and
in
the
bottom,
the
the
gray
area
that
you
see
is
the
manage
cloud
platform.
That's
provided
either
by
red
hat,
openshift,
dedicated
or
red
hat
open
shift
service
on
aws,
initially
aws.
B
Right
now
is
the
public
cloud
for
launch
of
this
service,
we'll
be
looking
at
azure
in
the
future,
and
we
do
include
the
nvidia
gpu
support
and
then,
of
course,
in
the
red
hat
managed
cloud
services.
We
provide
our
core
red
hat
open
shift
data
science
offering
so
that's
going
to
have
jupiter,
tensorflow
high
torch
source
to
image
for
publishing
and
also
tie-ins
with
other
optional
add-on
cloud
services.
Things
like
open
shift
streams
for
apache
kafka
and
our
open
shift
api
management
service
B: For our optional launch partners, we include services such as Starburst for data access and prep, and of course I mentioned Anaconda for package distribution and repositories. We also have software partner offerings like IBM Watson Studio and Seldon Deploy.
B: So what did we learn today? Well, I hope you learned what managed services are, and that they are a big deal when it comes to deploying a model, because they make the process easier: for the data scientists to experiment, for the data engineers to curate the data, and, when the data scientists have built the model, for everyone to get it delivered.
A: Well, thank you for this. You've covered many of my favorite subjects, one of which is JupyterHub and Jupyter notebooks, and I love the tip about refraining from using Jupyter notebooks in production.
A
That
might
be
the
thing
that
I
need
to
be
reminded
about
the
most,
so
the
the
one
request
we
got
from
the
chat
was
if
we
could
get
a
hold
of
your
slides
to
share
them
with
with,
of
course-
and
I
let
people
know
that
I
I
would
make
sure
I
could
do
that
for
them.
But
I
I
just
wanted
to
thank
you.
A
This
has
really
been
interest,
a
very
interesting
approach
to
it
because,
most
of
the
times,
if
you're
a
data,
scientist
or
someone
who
dabbles
in
in
research,
you
end
up
trying
to
do
all
of
this
by
yourself
or
or
with
minimal
I.t
support.
So
having.
B: I've tried to do that before, in my previous lifetime, and it's not easy, and it's not fun. At the end of the day, when you're working on pipeline delivery, you're like: I'm a data scientist, I just want to work on my freaking code, why am I doing this? You don't have to, with managed services for data science.
A
Yeah,
so
I
think
this
is
like
a
huge
step
in
the
right
direction
and
that
you
know
there's
I'm
sure
there
are
other
managed
services
too,
but
it's
wonderful
to
see
it
all
working
on
openshift.
So
thank
you
for
for
the
tour
divorce
today
and
we'll
share
this
with
the
folks
that
are
out
there
in
the
universe,
looking
to
try
this
out
and
look
forward
to
having
you
as
new
features
and
functions
come
available
to
talk
us
through
those
as
well.
So
many
thanks
for
your
time
today.
B
Thank
you
for
having
me
on
board.
Remember
folks,
questions
bring
them
on
in.
You
can
always
look
me
up
on
linkedin
and
get
my
my
contact
through
there
and
I'd
be
happy
to
answer
questions.