From YouTube: OpenShift Coffee Break: MLOps with OpenShift
Description
Get your espresso ready for the next OpenShift Coffee Break as we talk about MLOps with OpenShift!
Our special guest is Max Murakami, Specialist Solution Architect at Red Hat, and we'll discuss how we can automate training and delivery of machine learning models using Open Data Hub and OpenShift GitOps.
Twitch: https://red.ht/twitch
A
Hey, welcome back to the OpenShift TV Coffee Break! Good morning everyone, welcome back here. My name is Natale Vinto, I'm product marketing manager for OpenShift here at Red Hat, and today I'm with my awesome colleague Max Murakami. Hello Max, how are you?
B
Good.
A
Our pleasure to have you back here, Max, because today we have another very interesting topic to talk about. Last time we talked about OpenShift Data Science with you; today the topic is getting even more interesting, in the sense that we're talking about how to automate data science as well. So the topic of today is actually MLOps, and I think, Max, you have a cool agenda to show us how to do MLOps on OpenShift and Kubernetes. Am I right?
B
Yeah, so we'll talk a little bit about AI projects in general and what kind of role MLOps plays there, and I brought a demo, so we can have a look at how we can apply this in real-life scenarios, basically.
A
That's awesome, that's fantastic, Max! And Max, do you want to introduce yourself again, for the people that don't know who you are and what you do here at Red Hat? And I like your t-shirt, your Apollo, that's great.
A
Awesome, awesome, thanks for interacting. So Max, you are a data scientist, basically; no, you are a data scientist in the Kubernetes world, if I can summarize and simplify your profile very much. Because usually here at the OpenShift TV Coffee Break we are all kind of DevOps developers, we're really excited to also have data scientists, to hear from the data scientist side of things and hear from you, yeah.
B
So in the past, before I joined the IT industry, I worked in research. I did some, you can say, data science and research, so I'm fairly familiar with how data scientists work and the kind of challenges they have. After that, when I joined the IT industry, I was getting more into software development and trying to really integrate and combine all those different aspects, and that was actually before.
A
Max, you say lots of cool words that are very popular, very trending: GitOps, DevOps, AI. So is MLOps kind of a container for all those subjects?
B
Yeah, so we'll have a close look at that in a couple of minutes. But if you wanted to make it short, it's basically learning from software development, in terms of DevOps and GitOps, and applying those principles and best practices, which have taken a few decades to develop, to the new domain of machine learning. And there are actually a lot of similarities, but with some particular pieces specific to machine learning.
B
When we talk about MLOps on a very high level, I would say it like this. So why don't we jump straight into the conversation? I've brought a couple of slides, not too many, because we actually want to see how this is working in a real application.
B
What I like talking about in the beginning, when I talk to organizations or people that are interested, is this kind of generic AI journey that we see many organizations going through. Even though they have a lot of different particular business use cases, on an abstract technical level there is a certain sequence of steps that organizations are faced with when they start this journey.
B
They have this data and they want to start using it, to tap into this growing amount of data to optimize their businesses and their decision making. That's when we start talking about topics like data infrastructure and data management, and the goal really is to gain a kind of unified business reporting here.
B
The next stage, at some point, is when the organization hires the first data scientists, and the aim is: okay, they've already started tapping into the data, and now they want even deeper insights into their own data. So then it becomes more about advanced statistical analysis, finding hidden patterns within the data that they can use to improve their decision making even further. And, as I said, some of the challenges for data scientists really are...
B
...if we talk about data scientists working with their very own environments on their own local machines, for example. So ideas like centralized data science platforms really become key topics when we discuss this. Then, moving on: after data scientists have successfully identified hidden patterns, they try to create machine learning models that capture those hidden statistical patterns within the data, and that they can then use to automate the interpretation of their own data, basically.
B
So if you think about massive amounts of text, and you have organizations where there are people who have to read those texts, tag them and try to classify them, that's a typical use case where machine learning models bring a whole lot of automation into the whole system and provide a massive efficiency boost.
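The text-tagging use case Max describes can be sketched with a deliberately simple stand-in: before a real ML model is trained, classification might start as keyword matching. This is purely a toy illustration; the tags and keywords below are invented, not anything from the show.

```python
# Toy text tagger illustrating the classification use case: route incoming
# texts to tags by keyword overlap. A real system would replace this rule
# set with a trained model; tags and keywords here are invented.

RULES = {
    "billing": {"invoice", "payment", "refund"},
    "outage": {"down", "unreachable", "error"},
}

def tag(text):
    words = set(text.lower().split())
    matched = [t for t, kws in RULES.items() if words & kws]
    return matched or ["unclassified"]

labels = tag("Payment failed with an error")
```

A trained model would replace the `RULES` lookup while keeping the same input/output shape, which is exactly the kind of swap the MLOps pipeline later automates.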
B
And lastly, I would say, for some organizations at least, the long-term goal is then to leverage those machine learning models in production: to integrate them, maybe, with applications that they already have, to interpret not only the data they already have, but maybe even, in real time, data that is flowing in, maybe from customers.
B
In terms of, you know, web applications, or even devices out there that are sending massive amounts of data that we need to compress in a meaningful way; that's where machine learning also plays a critical role. So I would say this is a more or less generic picture when we talk about AI projects with organizations, and what I want to highlight today is really this chain of value generation that data science provides in the whole AI project.
B
That means the new kind of business logic, if you will, that the data scientists are producing in their environments, maybe on their own laptops, at some point needs to get into, let's call it, production: somehow integrated into their own business. And if we take a step back, this already reminds us a lot of classical application development: developers on the one hand side, and what they're producing needs to be robustly, efficiently and reliably integrated into a larger system.
B
That's where we see a lot of organizations struggling, especially those who are new to the field of machine learning and just getting started. Typically they tend to focus exclusively on the very tiny piece in the center, which we call data science or machine learning, so the model code basically, and there's a whole bunch of concerns outside, surrounding this piece, that need to be addressed. A lot of questions like: how is the data collected and maintained for reuse?
B
How are meaningful features extracted from the data? How do we then monitor model performance and detect model drift? But there are also other concerns that are kind of independent of the machine learning itself, like serving infrastructure, process management and so forth. In the end, this is really the primary blocker for those organizations to make it to production: they tend to neglect these concerns and to focus exclusively on model development, that is, the experimentation phase.
B
But in the end, if they fail to address the end-to-end machine learning lifecycle, they accrue technical debt, which ultimately becomes a barrier to production. So most AI projects fail because those machine learning models stay in the experimentation phase and never make it to production, which means, for the organization, that the project is not returning their investment.
B
So: the data for the machine learning work, doing the feature engineering; then the data scientist comes into play, doing the model development, meaning model training and model validation, a very important point as well; and then finally trying to integrate that model into a productive system, into an application, and making sure that, even when it's running in production, the monitoring and validation are still going on. And if you take a step back, this actually reminds us a lot of the classical DevOps workflow, right?
B
So we have the development on the one hand side, and we try to quickly get through all the different steps to production, where we deliver the developed software, and we try to iterate quickly through this. And in fact, just like the DevOps circle or the DevOps workflow, in terms of MLOps...
B
...we also talk not about a purely straight-line workflow, but really also about iterations. So you can think about it: if we detect in production that there's model drift going on, that could mean that we might have to adapt the model training and then train a new model.
B
We might even have to adapt the model architecture and then again train that model, or it might even mean that we have to go back to the actual datasets and somehow improve the data cleaning or integrate additional data sources. Note that I'm not trying to talk about technology yet; on the one hand side, we now have a rough overview of the process, in terms of how MLOps works.
B
So then we can start talking about technologies that actually support this whole workflow in terms of MLOps, and what we see is that there's really a huge ecosystem of open source tooling out there that tries to complement the different steps of the MLOps cycle. I've just given a sample of some of the tools that are popular out there, that some of our customers and the organizations that we talk to like to use. So they are definitely really fitting pieces of technology.
B
If you think about such a kind of MLOps suite for the different parts of the MLOps workflow: I'm talking about the data infrastructure and data integration, to the data science platform, then to continuous integration and continuous deployment, as well as the model inferencing stage, and finally also monitoring. And especially if folks are familiar with OpenShift and the ecosystem around it, you will immediately see that some of those open source projects also play a big part in terms of OpenShift as well.
B
So we know that Argo CD, for example, is there, as it's already integrated with OpenShift in terms of OpenShift GitOps. That only highlights the fact that there's actually a large overlap between the world of classical application development and delivery and the world of machine learning and intelligent applications, where we can actually leverage a lot of existing platform capabilities out there.
B
So we have Open Data Hub on the one hand side, where it's really about integrating those different popular open source tools and making sure that they really work simply on top of OpenShift as well, and we then talk about an operator, which is the Open Data Hub operator. Everyone who has an OpenShift cluster has access to the Open Data Hub operator.
B
They can quickly install their particular suite of the different projects and then quickly get up and running with their own MLOps suite on OpenShift, basically. And then, based on the upstream project, Open Data Hub, we now have a commercial offering built on it, so basically the downstream, which is OpenShift Data Science. I talked about this a couple of weeks ago.
B
I'm not going to go into too much detail here, just a very high-level overview of what OpenShift Data Science is about. It's currently a managed service offering on AWS, available as an add-on to OpenShift Dedicated and Red Hat OpenShift Service on AWS, and we'll talk mainly about the core functionality in terms of the data science platform, which is a managed JupyterHub.
B
You can then leverage a fully supported stack from the ISV partners as well as Red Hat. So if you want to get started and just get your hands on it, there is the Red Hat Developer Sandbox, which is part of our developer portal. All you need is a Red Hat account, which you can get for free in the developer portal, and then you can just provision your own OpenShift Data Science environment, basically, have some interactive walkthroughs and tutorials, and see what this platform is about.
B
All right, then, about the scenario: maybe one particular scenario that could be realistic for some of the organizations out there. I've talked about the kind of typical MLOps workflow, which was very abstract, on a process level. So now, if you want an idea of how this could look for a given organization, then definitely we're talking about different people who play a key part in the whole end-to-end workflow. So we're talking again about data analysts who want to have a unified view.
B
Then we talk about data engineering, where it becomes important to have a way to process large amounts of data, to finally extract meaningful features out of this large amount of data. Then we have the data scientists who are working on those features; they are building their models. And then we have this CI/CD pipeline, which again looks very similar to classical application CI/CD pipelines.
B
And here we talk about reading data; training a model that has been defined before; validating the model; uploading the model into a centralized model store; and finally pushing the changes to a Git repository. So a Git repository that is used for, you know, infrastructure as code, holding basically all the Kubernetes manifests that Argo CD can then use, in terms of GitOps, to automatically...
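The CI stages just listed (read data, train, validate, and only then promote the model) can be sketched as plain Python functions. This is a hedged illustration of the gating idea, not the actual pipeline code from the demo; the stand-in "model", the tolerance value, and all names are assumptions.

```python
# Minimal sketch of the CI stages described above: train, validate, then
# only promote the model if validation passes. All names are illustrative.

def train(data):
    # Stand-in for real model training: "learn" the mean of the data.
    return {"mean": sum(data) / len(data)}

def validate(model, holdout, tolerance=1.0):
    # Gate: the learned statistic must be close to the holdout statistic.
    holdout_mean = sum(holdout) / len(holdout)
    return abs(model["mean"] - holdout_mean) <= tolerance

def run_pipeline(train_data, holdout):
    model = train(train_data)
    if not validate(model, holdout):
        return None   # model is not promoted to the store
    return model      # in the real pipeline: upload + update the manifest

model = run_pipeline([1.0, 2.0, 3.0], [1.5, 2.5])
```

The important part is the shape: upload and manifest update only happen behind the validation gate, which is what keeps a bad model from ever reaching the GitOps repository.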
B
...if you want, fully automatically deploy those Kubernetes resources and synchronize them in your development environment, test environment and maybe even production environment. Then we have the production environment on the right-hand side, and there you might even leverage some pre-built inference servers, where you can basically just inject the model that has previously been uploaded to the store.
B
And of course, given the large ecosystem out there, there are a lot of different components that you can switch out for one another, but that's basically for us to gain an overview of the...
B
...workflow, right: from the raw data on the one hand side to how the value of the data is then finally transported into an overall application.
A
If you send a question in the chat, we can talk about it, because here I see, as you described, many personas: the data analyst, the data engineer, the data scientist, the MLOps engineer. I'm wondering if there's anyone in the chat that sees himself or herself in those personas. And I'll also take the opportunity, Max, to say ciao, hello to Fabio, who is following in the chat, and ciao, hello to Chiro, who is also following. Chiro, I know, is a sysops and DevOps guy, so Chiro, where are you?
A
How do you see yourself? Do you see yourself in this model? I think, Max, to be honest, it's going to be more and more normal to have an AI/ML workflow in any business, right? So I'm guessing the MLOps engineer is a DevOps engineer who also understands these concepts, of course, but who can also handle this workflow. Is that correct, Max? Do you think that an MLOps engineer can be a DevOps engineer with those skills?
B
Yeah, definitely. So again, there are a lot of similarities, and MLOps draws on DevOps and GitOps, right? We made this parallel and this reference before, and in the same way you can say an MLOps engineer is more or less a DevOps engineer who is then working not only on the DevOps cycle, but also tries to apply all those principles to machine learning projects and AI projects like this.
A
Okay, cool, thanks for this clarification. And it's really cool, as you said, Max, that it looks very similar to the classical pipeline here. Maybe the new thing is, you know, the data analyst collecting and cleaning the data, and then there are a few steps before we get to the automation. Question, Max: do you think automation, and by automation I mean Kubernetes automation or any automation, can also help in the data analysis, data engineering and data science parts?
B
Yeah, that's a very good question. So definitely yes, of course: if there's a way to automate something, then it definitely makes sense to automate it.
B
I would say you face some different types of challenges in the data engineering domain that might be even trickier to automate, due to the fact that there's actually not so much similarity with what we see in classical software development. Especially when it comes to the whole data infrastructure, to dealing with data, that's not something we can easily compare to the classical application world.
B
That is also the reason why the whole data engineering ecosystem is a relatively young field. There's a number of open source projects out there that are relatively young, and we start talking about new concepts: we've been talking about data warehouses for some time, data lakes also, but there's a trend to go more in the direction of data meshes, as you call them, right?
B
So, in a similar way, trying to take that large monolith which we often see with organizations: they acquire more and more massive amounts of data, but now the use cases for the data, so maybe they have different data...
B
...science teams, and these become more and more distinct, which means that it might make sense at some point to think about how to distribute this and how to leverage modularity, basically, in terms of building data catalogs that have little coupling between each other, so that you can really enable your data scientist teams to leverage different parts of the data and different data pipelines without too much overlap. So again, a little bit referring back to what we've seen in the application domain.
B
Where we've been talking about monoliths for some time, and then of course we've seen the trend to talk more and more about microservices, where it's about breaking them down and decoupling them, such that the teams, especially when the software projects become larger and larger, still retain their agility. So we actually see similar concepts going on in the data infrastructure world, and that's where I see that automation will definitely play a huge part in the future as well.
A
That's fantastic. I think it's a really amazing field and, as you said, it's new; we're building this knowledge. It's very cool to hear these things from an expert like you, Max, so thanks for this. And if you have any questions, please send them in the chat. And now, I think, Max, we have more content, because I'm also interested to know more about Open Data Hub and all this software I see here, but I guess you cover that next in the slides.
B
Yeah, so all the different projects that you see here, starting with Kafka, Trino and Spark, all the way to Tekton and Argo. That's basically one way to apply Open Data Hub for a particular use case, for a particular project. So again, we're not only addressing a single use case here; what we're talking about is this suite, this toolset of different open source tools, which you can really apply to all the different domains coming up within your AI project.
B
Basically. So if we're now going to move on to the demo which I'm going to show you, then obviously the topic of today is going to be the automation of this whole delivery chain: starting with what the data scientist is doing, all the way to how this ends up, as efficiently and automatically as possible, in an integrated overall application.
B
On a high level, what we're going to see is: we start again with the data science piece, so JupyterLab, which we're going to use to trigger the execution of an Elyra pipeline. This Elyra pipeline is going to be executed using Tekton, which is part of Kubeflow Pipelines, also part of Open Data Hub.
B
We're then going to leverage Ceph as the model store, so Ceph, which is also available as OpenShift Data Foundation. In our particular instance we're going to leverage Seldon for an inference server, which encapsulates the model, and then finally Argo CD for automating the model deployment, the model rollout. There's a very rudimentary and simple notion of model versioning which we're also going to leverage here, so Ceph, or really the S3 bucket functionality of Ceph, being a very rudimentary form of model store.
B
If you will. Talking a little bit about Argo CD: I think the audience is very well familiar with Argo CD. Of course, it's great for continuous deployment based on the GitOps principles, so again synchronizing, on the one hand side, Kubernetes objects with the manifests in a reference Git repository. Now, in our particular case, we're going to talk about an inference server manifest in the Git repository, which contains a reference to a model that is persisted in the model store.
B
So if you have a look at the complete CI/CD picture, it looks like this. Starting with the CI part: we have a training workflow which, in the pipeline, is training the model, validating the model, uploading the model to the model store and then updating the model reference within the inference server manifest. Then Argo CD kicks in: it detects that there's a manifest update and, applying the GitOps principle, it applies the new manifest to OpenShift.
B
The operator, in this case the Seldon operator, will then deploy the new inference pod, which is going to include the updated model, so using the new model version which has been stored before in our S3 model store.
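The hand-off just described (CI rewrites the model reference in the inference-server manifest, then GitOps syncs it) can be sketched as a small helper. The manifest shape below loosely follows a SeldonDeployment with a `modelUri` field, but the exact field names, bucket and key layout are illustrative assumptions, not the demo's actual manifest.

```python
# Sketch of the CD hand-off: the CI pipeline rewrites the model reference
# in the inference-server manifest, and Argo CD later syncs the change.
# Field names and the s3 path layout are illustrative only.

def update_model_reference(manifest, new_version):
    """Return a copy of the manifest pointing at a new model version."""
    graph = {**manifest["spec"]["graph"]}
    base = graph["modelUri"].rsplit("/", 1)[0]   # strip old version suffix
    graph["modelUri"] = f"{base}/{new_version}"
    return {**manifest, "spec": {**manifest["spec"], "graph": graph}}

manifest = {
    "kind": "SeldonDeployment",
    "spec": {"graph": {"name": "anomaly-detection",
                       "modelUri": "s3://models/anomaly/model-0651"}},
}
new = update_model_reference(manifest, "model-0713")
```

Committing the returned manifest to the reference Git repository is the only "deploy" action the pipeline takes; rolling out the new pod is left entirely to Argo CD and the Seldon operator.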
B
That's an attempt to build a little bit more than a toy example, something that somewhat resembles what we see with some customers and organizations out there, especially from manufacturing. So think about a factory: we're talking about a factory site on the left-hand side and, on the other hand, a core data center, which could be on-prem or could even be something leveraging the public cloud.
B
We're simulating a case where, in the factory, there is a production line on which items are produced, and we have a number of sensors, so two sensors, which are detecting vibrations on this production line. We have a line data server which is gathering this vibration data and, on a regular basis, feeding it forward to our OpenShift cluster running in the factory, so to a message broker, and that particular vibration...
B
...data is then fed forward to an application consisting of a notification service, which forwards the vibration data to a dashboard. So think about a machine operator who is working in the factory and has to make sure that the equipment is not breaking and that it's maintained on a regular basis.
B
So if this service detects an anomaly, it feeds this signal to the dashboard service and the machine operator will be alerted. The machine operator will get a heads-up: watch out, there's something going on in the production line, please check it and make sure that it's maintained and nothing is breaking.
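As a toy stand-in for the anomaly detection service on this vibration stream: flag readings that deviate too far from the signal's mean. The real demo serves a trained model behind Seldon; this deviation-threshold rule and its cutoff are purely illustrative.

```python
# Toy anomaly detector for vibration readings: flag values that sit far
# from the mean, in units of standard deviation. The real service uses a
# trained model; this rule and the 2.5 cutoff are illustrative only.

def detect_anomalies(readings, threshold=2.5):
    mean = sum(readings) / len(readings)
    var = sum((r - mean) ** 2 for r in readings) / len(readings)
    std = var ** 0.5 or 1.0  # guard against a perfectly flat signal
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / std > threshold]

# A mostly flat vibration signal with one large spike at index 4.
alerts = detect_anomalies([1.0, 1.1, 0.9, 1.0, 9.0, 1.0, 1.1, 0.9, 1.0, 1.0])
```

In the demo architecture, the indices returned here would be the signal forwarded to the dashboard service to raise the operator alert.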
B
On the other hand side, that vibration data is also fed forward to a Kafka cluster, and via MirrorMaker the data is mirrored into the central data center and finally ends up in a data lake. So now we have the data scientist coming into play, who is of course interested in the data. He's working on his data science platform, a Jupyter notebook environment; actually, we're going to leverage OpenShift Data Science for this one, and he's now taking a look at all...
B
...the data and coming up with an improved model version, an improved model architecture. He defines a training workflow in the machine learning CI system and can then execute this workflow. Once this workflow is being executed as a pipeline, it can access the training data and run the whole pipeline which we've shown before, and at some point the machine learning model has been validated.
B
We'll then upload a new version of the machine learning model into the central model store, and remember, the Git repository is also updated with the new reference to the new model version for the anomaly detection service. So once the new reference is live, once it's included in the service manifest...
B
...it will then start a new version of that anomaly detection service, automatically downloading the latest version from the model store, to make sure that there's always the latest and improved version running, without really interrupting the whole system.
B
So that's the system that we'll have a look at in a minute, on a very high level, basically.
A
So there are two OpenShift clusters that are basically connected via Kafka mirroring, right? There's a Kafka MirrorMaker which streams the data from one cluster to the other. And sorry, if I understood correctly, the CI part, the CI/CD part, is in the core data center cluster, which will generate the new model, so update the model from the data.
A
The model is a container image, right? The representation of the model is inside the container image, and when there is a new container image, Argo CD will deploy the application automatically through the GitOps workflow. Is that correct?
B
The pre-built inference image is already there; it's compatible with the typical machine learning model formats, and the great thing is you don't have to maintain that image, you don't have to maintain the code of the service, and you even get some nice functionality out of the box if you include libraries for explainability, for example. So in the end, and we'll have a look at this in a minute, what we're really going to change is basically within the configuration of this particular service.
B
I'll try my best. So you should now see the dashboard. Again, that's the dashboard service that the machine operator has access to; he has this front end, and we see, in real time, at a regular interval, the latest vibration data coming in from the production line servers.
B
And every once in a while we might see those weird spikes, which might seem strange to us. The system is connected to the anomaly detection service, and this should now give us an alert, and that's what we're seeing right now: there's an anomaly going on, please have a look at your production line. Of course, the operator doesn't have to sit in front of the dashboard all the time; this alert could also be pushed to his own device.
A
And Max, in the spirit of the OpenShift TV Coffee Break and OpenShift TV, these are absolutely live demos. What you are seeing is just live, so that's what we do, right? We do the magic here with live demos, and Max is going to do that. Max, is this cluster also on the internet, so that we could share it with the audience if they want to click and see for themselves that, hey, those guys are really doing stuff live?
B
Yeah, that's a good point. So yes, it's a live demo, so I'm hoping that everything will run smoothly. Everything is really running here on an OpenShift cluster on AWS, so we're really abstracting the whole edge part over here. It's not with public access, though.
B
But I should actually have included a reference: the whole system that we talk about here is available as what we call a validated pattern, which is a new initiative where we're trying to maintain such showcases, with such scenarios, in public repositories, fully documented, so that you, as an OpenShift user, can quickly deploy this on your own OpenShift cluster.
B
And quickly have a look at how this application is working, just playing around with it and studying the integration of the different parts in detail. So if you look up "industrial edge validated pattern", as this particular one is the industrial edge scenario, then you will directly find it.
B
Yeah, so what I'm showing you today is based on the original industrial edge validated pattern; the whole machine learning CI/CD is not yet part of the public repository, but that's something we're working on, to really get this into the public as well.
B
Okay, then, let's have a look at the application on the OpenShift cluster. So again, let's think about this like a test environment, just a project where we're deploying all the pods that make up this application, for testing purposes, and on the one hand side we're talking about the machine...
B
Those particular signals will then be fed back to the line dashboard as well, but for our purposes we're now going to focus on the anomaly detection service. We can see that right now there's one pod running; the whole system is integrated, everything is up and running here, and we're going to try to update this automatically, using the latest model version, which we're going to create in...
B
...a minute or two. And importantly, the configuration of this particular inference server is mediated by a SeldonDeployment custom resource, so again referring back to Seldon Core here, which basically gives us the whole functionality of a pre-built inference server and, importantly for our purposes, also a reference to where the inference server can find the actual model it's encapsulating. So what happens is that when the pod starts up, it's going to find and download the particular model within this particular S3 bucket, which we have configured as well.
A
Max, what is the storage supported? This is object storage, but you also mentioned OpenShift Data Foundation. So what are the storage options that we can use with this Seldon deployment?
B
Another common approach is to use Google's version of object storage, and then there might be one or two additional ones, so that's definitely something you can find in the Seldon Core documentation.
B
Yes. Okay, so one thing we should maybe remember for what we're going to see next is this timestamp. So again, a rudimentary form of model versioning, where we encode the version as the timestamp of when this particular model instance was created. So this is 6:51.
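The timestamp-as-version scheme can be captured in a couple of lines. A minimal sketch, assuming a hypothetical bucket layout (not necessarily the pattern's actual convention):

```python
from datetime import datetime, timezone

def model_version(created_at: datetime) -> str:
    """Encode the model version as its creation timestamp, e.g. '20220309-0651'."""
    return created_at.strftime("%Y%m%d-%H%M")

def model_uri(bucket: str, model_name: str, created_at: datetime) -> str:
    """Build the object-storage URI the inference server downloads the model from."""
    return f"s3://{bucket}/{model_name}/model-{model_version(created_at)}"

created = datetime(2022, 3, 9, 6, 51, tzinfo=timezone.utc)
print(model_uri("ml-models", "anomaly-detection", created))
# → s3://ml-models/anomaly-detection/model-20220309-0651
```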
B
Okay, let's keep that in mind. We're going to have a look now at Argo CD. As you might already know, Argo CD, in the form of OpenShift GitOps, so already available with OpenShift, gives us the ability to synchronize a number of different applications, so to speak. So we have set up this particular application running in our test environment, called manuela-tests. We're going to jump into that, and we can see that there's really a large number of Kubernetes objects being tracked and automatically synchronized with respect to the reference Git repository. Everything is green, everything is in sync, which is great. And the particular object we're interested in is the anomaly detection Seldon deployment, so we can have a look at that.
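An Argo CD Application tracking such a manifest directory can be sketched like this. The repository URL, path, and names below are hypothetical placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: manuela-test
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/manuela-gitops.git
    targetRevision: main
    path: deployments/manuela-tst        # directory holding the SeldonDeployment manifest
  destination:
    server: https://kubernetes.default.svc
    namespace: manuela-tst
  syncPolicy:
    automated:
      prune: true      # delete cluster objects removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```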
B
This is exactly the one we have seen before, with exactly the same timestamp over here. So we're going to see in a few minutes that Argo CD will automatically update this particular resource once the manifest has been updated as well.
B
Okay, let's go down here. All right, so the last bit is the data science platform, again referring back to OpenShift Data Science. My environment has been shut down because of inactivity; no problem, we can just spawn it again. And I'm using a very cool new feature of OpenShift Data Science, which is the bring-your-own-notebook-image feature.
B
Yeah, so what is this about? In OpenShift Data Science, we as Red Hat provide a number of notebook images which we support and regularly update. Then there's a number of notebook images from our ISV partners, and now OpenShift Data Science users also have the ability to integrate their own custom notebooks, via bring-your-own-notebook-image. The way to do this is documented; you will find it in the OpenShift Data Science documentation.
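Under the hood, registering a custom notebook image typically comes down to creating an OpenShift ImageStream that the dashboard recognizes. The sketch below is illustrative only; the exact namespace, label, and image name are assumptions, and the OpenShift Data Science documentation is the authority on what is actually required:

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: my-team-notebook
  namespace: redhat-ods-applications
  labels:
    opendatahub.io/notebook-image: "true"   # assumed marker label for spawnable notebook images
spec:
  tags:
  - name: "1.0"
    from:
      kind: DockerImage
      name: quay.io/example/my-team-notebook:1.0   # your custom image
```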
B
One thing to note, though, since we're talking about bring-your-own-notebook images: the integration of those image streams is supported by Red Hat, but of course not the actual content of the notebook images, because that is something everyone builds and provides on their own. Okay. So, in our particular environment, we might inject some environment variables, in terms of credentials, and then we can provision our notebook. Again, in the background the platform is spawning up the pod for me, my very own pod where I have my Jupyter environment, making sure that the persistent volume is provisioned and connected as well, and that might already contain some of the previous data, some of the previous work that I have done before.
A
From a data scientist's point of view, what is the advantage, what is the plus of having OpenShift Data Science? As a data scientist, I could have a Jupyter notebook as a service, right? It's on-demand capacity. What do you see, also in this demo, that OpenShift Data Science provides out of the box to implement the topology, the architecture that you showed us?
B
Yeah, so definitely OpenShift Data Science is a great way for data scientists to very quickly get into their data science work, which is really what they care about. Thinking back to when I was working in experimentation and research, what I never really liked was dealing with infrastructure: setting up my own Jupyter environment, trying to make sure that all the dependencies are there and consistent with what my colleagues are using, managing the compute resources if I use my own machine, or, if I'm using a virtual machine, worrying about whether it might be offline or not. All of that is really abstracted away. We're really talking about a platform that gets you up and running fast, where you can just directly set up and provision your own environment, with notebook images that you have standardized on with your whole data science team.
B
So what we see here now: I've already cloned a Git repository where some of the data science work has been stored in the form of Jupyter notebooks. For those who might not be familiar with Jupyter notebooks, it's a very, very popular format for doing experimentation: a mixture of Markdown documentation and Python code blocks that you can execute and debug interactively, so it's really nice for documenting data science work as well.
B
So we have a number of Jupyter notebooks here in this particular repository: one for pre-processing, then one for feature extraction, and so on, basically capturing all the different steps of the machine learning lifecycle. And what we've done is set up this Elyra pipeline, where we're chaining together those different steps, those Jupyter notebooks. So we're starting with the notebook that we just saw, which is about pre-processing the raw data. Then, once that's finished, it directly executes the next one, which is about feature extraction, then the training step, then the verification step for the trained model, and finally, once the verification is successful, pushing the model into the model store as well as updating the Git repository.
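Elyra generates and submits this pipeline for you, but the chaining itself is conceptually just ordered notebook execution that stops at the first failure. A rough stand-alone illustration (the notebook names are hypothetical, and this is not how Elyra is actually implemented; a real runner would execute each notebook, for example with papermill):

```python
# Ordered steps of the ML lifecycle, each captured as a Jupyter notebook.
STEPS = [
    "01-preprocessing.ipynb",
    "02-feature-extraction.ipynb",
    "03-training.ipynb",
    "04-verification.ipynb",
    "05-push-model.ipynb",
]

def run_pipeline(steps, execute):
    """Run each notebook in order; stop at the first failing step."""
    completed = []
    for notebook in steps:
        if not execute(notebook):   # e.g. papermill.execute_notebook(notebook, ...)
            break
        completed.append(notebook)
    return completed

# Stub runner that always succeeds, just to show the control flow.
print(run_pipeline(STEPS, execute=lambda nb: True))
```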
B
So again, this reminds us a little bit of typical CI pipelines, and that's kind of very similar to what we want to achieve. That's what I would set up as a data scientist, because I manage and own all those different Jupyter notebooks. And then I can go ahead and run this pipeline, on a backend in terms of Tekton or in terms of Kubeflow Pipelines.
B
Yes, exactly. So Tekton, as part of Kubeflow Pipelines, is really specialized for those particular integrations with Elyra, for example, where it's mostly about running those Jupyter notebooks in sequence. So if we switch now to the Kubeflow dashboard, which gives us the runs, we can see this new run that I have just submitted, and we can directly step into it and see live how all those different steps are being executed.
B
We
can
directly
have
a
look
at
the
logs,
so
in
real
time
now
we're
going
to
see
all
the
different
yeah
parts
of
the
jupyter
notebook
being
executed
and
yeah.
Of
course
that's
great
for
debugging
for
for
knowing
really
what's
going
on
houses,
how
this
is
finally
being
executed
in
a
real
run
time
nice.
So
this
is
now
going
to
take
a
minute
or
so
stepping
through
all
those
different
parts
of
the
whole
data
science
workflow.
If
you
will.
A
Mac
max,
I
think
I
take
the
opportunity.
There
is
a
question
in
the
chat.
There's
a
marula
sedaya
that
is
asking
how
new
image
data
sets
are
processed
to
a
machine
learning
model.
B
For
the
image
data
sets
yeah
so
so
that
there
could
be
different
types
of
data
set
right.
So
the
kind
of
data
set
that
we're
talking
about
in
this
particular
demo.
It's
actually
not
about
images,
so
you
could
have
cameras
in
the
factory
right
where
it's
about.
You
know:
detecting
objects
on
the
images,
but
here
actually
we're
talking
about
about
vibration
data.
B
So
it's
really
very
simple,
a
time
series
of
yeah
of
scalars
that,
where
you
can
just
see
how
this
vibration
is
evolving
over
time,
that
you
can
just
plot
it
nicely
as
a
curve-
and
that's
that's
a
very
simple
approach
to
yeah
to
to
showcase
this
kind
of
normally
detection
how
this
could
look
like
in
terms
of
a
sensor
that
is
really
only
measuring
an
amplitude
of
of
the
vibration
of
a
particular
production
line,
but.
A
B
A
whole
lot
of
different
possible
use
cases
in
terms
of
image
processing
when
actually
we
have
camera
next
to
the
production
line,
actually
a
very
common
use
case
as
well.
But
again,
there
are
so
many
different
types
of
applications
where
you
can
really
apply
machine
learning,
ai
models,
but
in
the
end,
on
an
abstract
level,
you
can
you
can
really
leverage
those
envelopes
processes
the
same
way
in
the
end.
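The vibration use case can be illustrated with a deliberately simple detector: flag any reading that deviates too far from the recent history. This is a minimal sketch for intuition, not the model used in the demo:

```python
import statistics

def detect_anomalies(series, window=5, threshold=3.0):
    """Return indices whose value is more than `threshold` standard
    deviations away from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.fmean(history)
        spread = statistics.pstdev(history)
        if spread > 0 and abs(series[i] - mean) > threshold * spread:
            anomalies.append(i)
    return anomalies

# A steady vibration amplitude with one spike at index 8.
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 9.0, 1.0]
print(detect_anomalies(signal))
# → [8]
```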
A
Yeah
yeah,
I
think
that
was
that
was
the
question.
If
you
have
an
any
other
question,
please
let
us
know
in
the
chat
but
yeah.
I
think
we're
we're
going
into
the
flow
of
the
demomax.
We
have
some
minutes
left.
We
can
complete
the
demo
for
sure.
So,
let's
take
this
time
to
to
finish
the
flow
and
I
see
the
flow
is
going
pretty
well,
I
see
some
green
check,
so
this
is
a
cube
flow
pipeline
that
can
it's
conv.
It's
using
aider,
tecton
and
argo
cd.
B
Exactly
so
in
this
particular
instance,
it's
it's
based
on
tecton,
which
you
might
know
as
openshift
pipelines
as
well,
and
just
want
to
highlight
the
facts.
So
the
pipeline
has
run
through.
You
might
have
noticed
that
argo
cd
has
picked
up
this
change
of
the
manifest,
so
it
has
also
just
synced
the
manifest
and
right
now
here
in
our
openshift
cluster.
B
We
can
see
actually
there's
a
new
pot
version
of
the
anomaly
detection
service
that
which
is
now
up
and
running
and
via
rolling
deployment
once
this
has
once
this
is
really
running
and
can
receive
requests
now.
The
old
one
that
has
run
before
is
being
terminated
so
really
making
sure
that
there's
no
downtime
by
us
deploying
a
new
version
here.
So,
of
course,
what
you
know
in
any
case,
from
from
rolling
deployments
on
top
of
openshift
and,
lastly,
to
confirm
that
it's
really
the
new
version
of
the
model
that
is
running
so
again.
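The zero-downtime behaviour described here is standard Kubernetes rolling-update semantics, which a Deployment can make explicit. An illustrative fragment (Seldon Core generates the underlying Deployment, so you would not normally write this by hand):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep the old pod serving until the new one is Ready
      maxSurge: 1         # allow one extra pod while the new version starts up
```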
B
This
is
what
we
have
seen
before.
We
now
see,
there's
there's
a
new
version
of
that
yeah,
seldom
deployment
custom
resource.
Now
we
can
see
okay.
Now
we
have
a
different.
Oh.
B
B
It
managed
to
upload
the
model
with
this
particular
timestamp
in
our
model,
store
and
managed
to
get
this
reference
now
into
our
running
systems
in
our
anomaly
detection
service
as
well,
and
we've
seen
that
the
new
service
has
already
been
deployed-
and
this
is
now
we
are
running-
live
within
our
environment.
So
that's.
A
Pretty
cool
max,
we
have
seen
a
live
demo,
the
update
of
the
model
in
another.
You
know,
as
you
mentioned,
that
there's
you
could
also
put
the
model
in
the
container
or
you
have
your
infrared
inference,
server
that
can
download
the
model
from
a
packet
from
object,
storage,
like
we've,
seen
in
in
seldon
and
and
and
use
that
model
for
fro
after
the
ci
cd
loop
in
a
gitob's
way,
impressive.
A
Super
cool
max
and
I
was
just
to
wrap
up
everything
I
was
wondering
if
having
openshift
data
science,
let
you
helps
you
in
installing,
like
seldom
or
open
vino
or
other
jupiter
notebook
as
a
service.
A
So
if
you
want
to
implement
the
same
thing,
I
guess
having
openshift
data
science
can
really
bootstrap
everything
because
there's
cube
flow.
There
are
all
the
software
needed
right.
Otherwise,
you
have
to
install
it
manually
or
kind
of
maintaining
manually.
B
Yeah,
so
we
have
the
open
data
app
on
the
one
hand,
side
and
that's
basically
where
yeah
we
have
used
most
of
the
components
out
of
for
this
particular
demo
in
terms
of
cube
flow
in
terms
of
of
cell
and
core,
and
you
have
openshift
data
science.
On
the
other
hand,
side
where
you
can
already
see
that
we're
collaborating
with
with
the
company
selden,
so
you
can
actually
leverage
the
commercial
offering
the
fully
supported
version
of
the
seldon
inference
server
if
you
want.
A
So to test stuff, to try stuff, you can download Open Data Hub on top of OpenShift and try your stuff in the community version. If you want the enterprise version, you can use OpenShift Data Science, and if you want to try OpenShift Data Science for free, I put the link in the chat: you go to the OpenShift Data Science sandbox. Thanks, Max. So Max, this sandbox, is it just a preview, or does the sandbox hold all these features, or is it kind of limited in what you can try?
B
Yeah,
the
the
sandbox
gives
you
a
full
full-fledged
openshift
data
science
environment,
where
you
can
check
out
all
the
different
components.
There's
all
the
resources
that
give
you
insight
about
the
particular
applications
that
you
can
enable,
and
then
you
can
directly
provision
your
own
jupyter
notebook
with
yeah
yeah
out
of
the
box,
notebooks
that
we
provision
and
you
can
even
play
along
with
some
of
the
tutorials
and
and
walkthroughs
that
are
also
part
of
the
road
sandbox,
to
see
how
a
data
science
project
could
look
like
leveraging.
A
Thanks
max
that
was
really
really
cool,
really
impressive.
I
I
think
we've
seen
lots
of
stuff.
I
think
I
have
an
issue
with
my
camera,
but
you
know
what
we
are.
We
are
at
the
end
of
the
of
the
session,
so
it
doesn't
matter.
What
I
wanted
to
to
remind
all
of
you
is
that
we
have
our
appointment
here
at
openshifttv.
A
If
you
stay
with
us
and
hey,
I
fixed
my
camera.
If,
if
you
stay
with
us
today,
we
have
the
level
up.
No,
we
have
the
ask
an
admin
show
at
3,
00
p.m.
Our
time
set
time
and
we
will
come
back.
The
openshift
tv
coffee
break
will
come
back
next
wednesday.
Always
10
a.m.
A
Here,
doppelship
tv
with
another
session,
we'll
talk
about
camel
this
time
this
this
time
and
you
know
what
max
I
hope
you
become
our
special
recurring
guest
because
those
are
envelopes
talk
are
really
really
cool.
Today
the
demo
was
very
nice,
so
I'm
I'm
sure
you
and-
and-
and
I
will
be
here-
happy
if
you
can
come
back
with
some
cadence
like
each
month
or
when
you're
available.
It's
a
it's
really.
It's
really
a
pleasure,
so
thank
you,
everyone
for
having
joining
us
today.
A
We
will
come
back
next
wednesday,
wednesday.
We
have
all
also
with
the
other
awesome
co
host
and
for
now
there's
a
question
in
the
chat:
is
there
any
access
to
spider
editor?
Also,
I
don't
know.
B
So
not
out
of
the
box
in
terms
of
our
yeah
jupiter
notebooks
that
we
provide.
But
that's
definitely,
if
you
have
your
own
notebook
image
where
we
also
have
access
to
spyder.
A
Okay,
thanks
thanks
yeah,
we
had
this.
Also
the
the
last
minutes
question.
I
think
it
was
really
interesting
if
you
had
additional
questions
max,
if
you
like
to
you,
want
to
share
your
twitter,
handle
or
any
way
to
contact
you.
I
think
it's
cool.
If
you,
if
you
write
to
the
private
chat,
I
can
send
to
the
the
public
if
you
want
in
the
while,
I
will.
Let
you
remind
you
that
you
can
check
the
openshift
tv
schedule
and
to
see
what
is
happening
in
today
and
the
next
day.
A
Please
stay
tuned
with
us
and
here's
the
email
from
max.
If
you
want
to
send
an
email
to
max
to
ask
about
the
demo
about
any
anything
about
the
topic
we've
seen
today,
I
really
recommend
you
to
try
openshift
data
science
sandbox
for
free.
We
put
the
link
in
the
chat
and
hey
max.
That
was
really
great.
Thank
you
for
being
our
super
special
guest
today
and
I
hope
to
see
you
soon
here
in
the
in
the
show.
Thank
you,
everyone
for
joining
us
and
see
you
next
wednesday,
ciao.