From YouTube: OpenShift Machine Learning SIG, October 2018 (full description)
Spark Operators
Using Ceph for ML workloads on OpenShift
A: I'm just going to give you my bias on things: I am very much looking to find updates from everybody in the ML space around the operators. That's been my bent for the past little while, and that's why I've asked Jirka and Sharon and other people to talk about them today. So I'm going to, hang on, my phone is ringing off the hook; let me make sure it's not the next speaker. Hang on.

A: All right, good. So let's try and get started now; we've got about eight people besides myself.
A
If
you
could
all
just
add
your
names
in
here,
so
one
of
the
things
that
that
I've
been
doing
in
the
interim
I'm
also
feeding
a
lot
of
the
work
around
the
open
the
operator
framework.
So
that's,
then,
one
of
the
and
I'm
creating
a
community
listing
of
all
the
viable
that
I
can
find
community
operators,
and
so
this
past
couple
of
weeks,
I've
been
looking
at
all
of
the
spark
operators
and
the
chain
chain
errs
and
mx
things,
and
you
know
just
a
ton
of
things
that
I
don't
really
know
a
lot
about.
A
So
I
am
using
this
occasion
to
get
Jerry
to
give
us
an
update,
and
if
you
have
one
I
will
put
the
link
into
the
operator
framework
page
after
this
note
and
please
join
the
google
group
around
operator
frameworks
and
give
us
your
feedback
on
it,
and
even
if
you're,
not
using
it,
you
don't
have
to
be
using
the
operator
framework.
You
can
have
a
home
base
or
an
Ansel
based
operator.
We're
happy
with
that
too.
D: I just want to remind everybody that we're here as the machine learning community part of OpenShift. OpenShift Commons sits somewhat between the OpenShift community itself and enterprise users, so this is a good place for people to share the experiences they have with machine learning and how it runs on Kubernetes.
B: Hello, everyone. My name is Jirka Kremser; I work on the team behind radanalytics.io at Red Hat, and I'll be talking in the next 15 minutes about Spark operators, and also a little bit about operators in general. Here is a brief outline. I'm reusing this presentation from a different talk I've already done, but it's still relevant to our topic: I'll describe what the operator pattern is, then the pros and cons of using ConfigMaps versus custom resources, then I'll compare two existing Spark operators, and at the end I'll do a demo.
B: An operator for a system like Spark is a Kubernetes-native application, meaning it doesn't actually make any sense without Kubernetes, because it's calling the Kubernetes APIs. It's an event-based system: it reacts to various events, when resources are created, updated, or deleted. Before, I think, it was also called a controller.
B: Here is a very simple example, because in the machine learning special interest group people might not know about operators. First, the operator has to register itself: it has to tell the Kubernetes API server, "I'm here, and I'm listening for custom resources of type X." Kubernetes registers the request, and if, after some time, a new resource of that type appears in the Kubernetes API server, it notifies the operator, and now it's the responsibility of the operator to do something about it.
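The register-and-react loop just described can be sketched in a few lines. This is a minimal illustration, not the actual operator from the talk: the group, version, and plural names are placeholders, and the reconcile logic is kept as a pure function so the reaction to each event is easy to follow.

```python
# Sketch of the operator pattern: watch for custom resources of
# "type X" and react to add/delete events. Group/version/plural names
# below are illustrative placeholders, not a real CRD from the talk.

def reconcile(event_type: str, resource: dict) -> list:
    """Translate one watch event into the actions the operator takes."""
    name = resource["metadata"]["name"]
    if event_type == "ADDED":
        replicas = resource.get("spec", {}).get("replicas", 1)
        # Deploy "system X" with the replica count from the resource.
        return [("create_deployment", name, replicas)]
    if event_type == "DELETED":
        # Clean up everything that was created for this resource.
        return [("delete_deployment", name), ("delete_service", name)]
    return []  # MODIFIED events would diff desired vs. observed state


if __name__ == "__main__":
    # Wiring it to a real cluster needs the `kubernetes` package and a
    # kubeconfig; shown only as a sketch.
    from kubernetes import client, config, watch
    config.load_kube_config()
    api = client.CustomObjectsApi()
    for ev in watch.Watch().stream(api.list_cluster_custom_object,
                                   group="example.io", version="v1",
                                   plural="systemxs"):
        for action in reconcile(ev["type"], ev["object"]):
            print(action)
```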
B: So, for example, it starts doing something, and after some hopefully short time it will respond with some action. In our case that's deploying system X in N replicas, where the N could have been described in the resource representing system X; it's a very high-level description. And again, after some time someone could delete that resource in Kubernetes, and it's again the responsibility of the operator to clean up all the resources that were connected with system X: the pods, services, replication controllers, or whatever was previously created should now be cleaned up.
B: In other words, it's managing the lifecycle of system X in Kubernetes; that's what an operator is. For comparison with something you might already know: OpenShift templates are similar in a way, in that they are also a deployment mechanism; Helm charts are also a delivery mechanism; so are Kustomize and similar tools. But those are more tools that operate on the YAML files, each tool with a different strategy.
B: Compared to those, operators are more real-time, event-based systems that react on the fly to those events, and they are also part of the cluster itself, running agents that can do handy stuff. So what do I mean by the representation of X? Normally, these days, it's a custom resource. That's the way you can extend Kubernetes with your own resources: the first step is to create a custom resource definition, and then you're allowed to create resources of that object type.
B: So the custom resource definition is a type, and a custom resource is an instance of that type. But you can also use ConfigMaps for this use case; that was originally the approach in the first operator that I used, and it's a lightweight approach that works in an OpenShift environment because you don't need cluster-admin rights for it. So here is an example of a custom resource definition and a resource, specifically for the Spark operator.
B
As
you
can
see
in
under
respective
section,
you
can
have
something
like
number
of
replicas
for
workers
number
over
because
for
masters,
some
customs
are
configuration
that
can
override
the
defaults
of
spire
and
pretty
much
every
trade
information.
But
you
also
have
to
handle
it
in
your
do
bridge
and
do
something
with
it
and
here's
example
of
the
same,
but
using
config
mode
so
again,
which
couple
of
times
back
and
forth,
and
you
can
suppose
it's
pretty
much
the
same
thing.
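The two shapes being compared above can be sketched side by side. The field names and the label are illustrative, not the operator's actual schema: a custom resource carries a structured spec, while a ConfigMap is a flat map of string keys to string values, so nested configuration has to be folded into one multi-line value.

```python
# The same cluster description expressed both ways. Field and label
# names are hypothetical placeholders.

custom_resource = {
    "apiVersion": "example.io/v1",
    "kind": "SparkCluster",
    "metadata": {"name": "my-cluster"},
    "spec": {
        "worker": {"replicas": 3},
        "master": {"replicas": 1},
        "sparkConfiguration": {"spark.executor.memory": "1g"},
    },
}

def to_configmap(cr: dict) -> dict:
    """Fold the structured spec into flat ConfigMap data."""
    spec = cr["spec"]
    config_lines = "\n".join(f"{k} {v}"
                             for k, v in spec["sparkConfiguration"].items())
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": cr["metadata"]["name"],
                     # a label is what tells the operator to react
                     "labels": {"example.io/kind": "SparkCluster"}},
        "data": {
            "worker.replicas": str(spec["worker"]["replicas"]),
            "master.replicas": str(spec["master"]["replicas"]),
            # nested Spark config becomes one multi-line string value
            "config": config_lines,
        },
    }
```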
B: The only difference is that for ConfigMaps I use a config section with a multi-line value that contains the deep configuration, because by default ConfigMaps are a flat structure of only keys and values. All right, so, some pros and cons of the two options. For instance, for security, there is much finer-grained RBAC for custom resources, so you can specify which users or service accounts are capable of which tasks.
B: The API is also slightly nicer, because you can write `oc get`, or `kubectl get`, and then directly the name of the custom resource; in the case of ConfigMaps you have to list the ConfigMaps first. But that's just a minor, cosmetic difference. Let's talk about our operator. Just a couple of words about Spark: Spark is a unified analytics engine, mostly used these days for ETL tasks, but it also has libraries for machine learning, for graph processing, and for streaming, and there's a SQL module as well.
B
Part
operators
can
deploy
spark
clusters
and
also
intelligent
applications
that
itself
pounds
its
own
part
clusters.
These
are
different
to
two
basic
strategies
and
I'm
going
to
talk
about
two
different
operators,
from
which
the
first
one
is
from
TCP
Google
cloud
platform,
and
thus
the
second
chance
kit
deploys
the
despite
applications
that
are
capable
of
deploying
part
clusters
using
kubernetes
as
a
scheduling
mechanism,
so
taking
very
low-level.
B
It's
the
spark
submit
and
uses
this
kns
as
a
protocol
for
for
spark
master,
and
there
was
a
feature
introduced
inspired
to
the
free
the
understand
this
protocol-
and
you
are
also
you
can.
You
can
also
provide
for
custom
images
for
this
spark
submit
tasks,
and
it
should
create
a
boats
with
client
that
itself
spawns
the
driver
and
the
driver
spawns
and
executors.
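The submission path just described can be sketched as the argv that such an operator assembles. The flags are the standard spark-submit options for the Kubernetes scheduler; the API-server URL, image name, and jar path are placeholders.

```python
# Sketch of the spark-submit invocation assembled by the operator:
# the master URL uses the k8s:// protocol understood by Spark 2.3+,
# and a custom container image can be supplied.

def build_spark_submit(api_server: str, image: str, app_jar: str,
                       main_class: str, executors: int = 2) -> list:
    """Return the spark-submit argv for running against Kubernetes."""
    return [
        "spark-submit",
        "--master", f"k8s://{api_server}",   # Kubernetes as scheduler
        "--deploy-mode", "cluster",          # driver runs in a pod
        "--class", main_class,
        "--conf", f"spark.executor.instances={executors}",
        "--conf", f"spark.kubernetes.container.image={image}",
        app_jar,
    ]

argv = build_spark_submit(
    "https://openshift.example:8443",                      # placeholder
    "example/spark:2.3.0",                                 # placeholder
    "local:///opt/spark/examples/jars/spark-examples.jar",
    "org.apache.spark.examples.SparkPi")
```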
B
This
is
all
handled
by
the
spark
itself
implemented
in
hurricane
spark.
So
what
the
operator
does
its
creates
these
kind
of
applications
and
it's
meant
for
batch
processing,
but
the
GCP
operator
also
contains
something
called
scheduled
application
where
you
can
use
chrome
like
expressions
where
you
can
describe,
for
instance,
around
each
each
each
hour.
It
will
not
reach
mid
night
to
run
some
some
tasks,
for
instance
batch
processing
that
happens
in
the
bank
during
night.
It's
written
in
language
and
it's
the
GCP
operator
and
I've
also
created
one
operator.
B
That's
living
in
this
repository
relatives
dry
cargo
brighter
and
it's
it
does
both
its
deploys,
are
clusters
and
also
those
spark
applications
that
can
itself
deploy
spark
clusters.
So
what
was
the
difference
between
I
when
I
say
like
you
can
deploy
spark
clusters?
The
difference
is
that
this
is
not
the
life
cycle
of
the
cluster
is
not
bound
to
the
life
cycle
of
the
application.
There
is
basically
no
application,
you
can
create
spark
Buster's
and
then
you
can
create,
for
instance,
notebook
instance,
two
pigeon
or
book
that
can
connect
to
the
spark
cluster.
B: And the part that is responsible for spawning the Spark application is actually compatible with the first operator: I'm using the same names for the fields in the configuration, so ideally a user could use the same custom resources for both operators, though currently I'm supporting only a subset of those options. Right, so let's do a demo; this is, by the way, part of my attempt to create a transition from the older Spark tooling over to the operator.
B: Those pods with these labels should be deployed. Yes, so the cluster has been running for twenty seconds, and I can just start using it for my ML work or anything I want. For my usage model, once I've done my work I can also delete the cluster by deleting the custom resource.
B: And there's a different kind, which says: deploy this image, this file should be present on that image, run it as a Spark application, and run this main class. It's the SparkPi example that is present in each Spark distribution, a hello-world application. The task right now is using the different approach, with the operator again watching the pods, the running pods.
B
B
One
disagree,
I
think
always
perfect
language
for
doing
that,
because
it's
the
ticket,
I
band
versus
very
small
images,
but
I
would
argue
that
there's
also
a
different
topic
or
a
different
angle
to
see
that
it's
also
the
domain
expertise
like
if
the
system
X
is
written
in
language
Y,
maybe
maybe
nd,
to
preserve
the
knowledge
in
the
same
language.
So,
for
instance,
Park
is
written
in
Java
Scala.
A: Awesome. I also invited Darren, who is from Lightbend and is also interested in the Spark stuff; we'd had a conversation about what Lightbend was doing. I'm wondering, Sharon, if you have anything to add to this Spark conversation and can give us a little bit of insight into where Lightbend is going.
F: Yeah, sure. Thanks for the presentation, it was very nice. At Lightbend, what we are planning to do is integrate the GCP operator into our own offering, so I have a few questions regarding the Spark operator that you reviewed. If my understanding is correct, your operator is basically adding a layer of interaction on top of the GCP operator, because there's a step to create the Spark cluster, right?
F: Yeah, you can run any kind of Spark job with it. I am also preparing a presentation just about the GCP Spark operator, and I'll talk about it in a little more detail there. But yeah, I think what you did was pretty great. I actually saw your Spark operator a while ago; I didn't take a closer look at it then, but it looks interesting.
F: For example, for the GCP Spark operator, there are two kinds of metrics that are exported. The first kind is, as you said, the metrics that are already supported by Spark itself; those are exported on the Spark driver and the executor pods. And on top of that, there is a set of application-level metrics: for example, how many Spark jobs are currently running and how many jobs have already completed.
B: For metrics, if I recall correctly, we had support for Jolokia; that's something that exposes JMX metrics from a Java application as a REST service, and then Prometheus was able to scrape those metrics. I think there were two guys presenting this idea, and they had a setup with those images showing how to run Prometheus next to Spark with our Spark images. But, to be honest, I haven't tried it with the operator.
G: Hi, I'm speaking from a university, and I have a small question for you. You mentioned Jupyter very briefly. In our use case, what we want to do is spin up Jupyter environments on demand for our researchers; this is something we discussed with Matt, I think, when we met. We want to spawn a Spark environment at exactly the same time, directly bound to the Jupyter notebook.
G
Initially,
we
dis
approach
with
we
try
to
make
it
work
with
a
oh
now
that
there
are
operators
that
are
very
close
in
in
terms
of
how
it
should
work,
which,
which
way
do
you
think
this
would
go?
Is
Ocean
Co
going
to
be
dead
because
everything
goes
down,
two
operators
or,
but
what
would
be
the
path
to
take
from
that
from
now
I.
B: It should definitely work. There is actually a team that does the very same thing; they call it Open Data Hub. They use this operator: they create ConfigMaps for creating new Spark clusters, they have JupyterHub in the system, and they connect those two together. So it's definitely doable; that should work. I also had the idea to build that into the operator, but I didn't want to complicate the operator with that kind of additional stuff.
B
No,
because
I
like
the
idea,
it
should
thus
one
thing
well
rather
than
if
I
do
that,
like
which
notebook
should
be
the
right
one
zeppeli
in
Jupiter,
Paragon,
multiple
vendors
in
Qaida
Lord,
you
could
have
operator
also
for
Jupiter
and
those
operators
could
fit
in
droves,
open
operator
life
cycle,
my
management
tool.
That's
like
operate
metal
operator
operator
operators
where
you
can
describe
that
your
operator
requires
other
custom
resources.
A: We do have another speaker, a live guest speaker, lined up today. Kyle, do you want to try setting up there? He was going to talk about using Ceph as a data source for ML on OpenShift, and anything in that which might have a Spark operator component to it, but I'm not sure. So I'll let Kyle share his screen and walk us through that now.
E
So
one
of
the
things
I've
been
recently
working
on
is
kind
of
building
together,
like
almost
like
a
tutorial
for
for
experiencing
stuff
object,
storage
and
learning
how
you
can
use
stuff
object,
storage
with
with
some
of
the
tools
that
are
coming
out
of
wrap
up
lettuce.
So
if,
if
you're
interested,
you
can
do
this,
you
know
on
your
own
time
to
buy
just
kind
of
plumbing.
This
repo
I
have
here
really
kind
of
called
together
in
this
last
week.
So
it's
it's
improving.
E
It
runs
on
OpenShift
and
kind
of
the
idea
is
there's
kind
of
like
a
micro
staff
called
Seth,
Nano
and
I
have
kind
of
configuration
here.
That'll
create
a
set
of
credentials
and
use
open
ship
secrets
to
sort
of
those
credentials
and
then
create
kind
of
a
stateful
step
running
like
a
single
pod
step
cluster
effectively.
That's
just
you
know
just
described
here
during
the
bootstrap
process
of
that
that
stuff
Nano
staple
set.
E
It
will
use
the
the
credentials
from
the
secrets
to
kind
of
create
an
additional
set
of
users,
but
I
have
already
here
in
my
mini
shift
running
on
my
desktop
here,
a
set
cluster
consisting
in
one
pod
and
more
kind
of
robust
environment.
You
know,
there's
the
workaround,
you
know
a
set
operator
using
using
rook
and
and
you'd
probably
wanted
to
do
that
in
kind
of
a
real
environment.
You
wanted
to
be
running
a
more
production
grade
set
up
cluster
inside
of
openshift,
but
that's
still
very
much
pioneering
work.
E: But one of the new things that the radanalytics community is publishing is these OpenShift Spark builder images that you can create and build with: you provide a tarball, basically a link to a Spark tarball, and it'll create a custom OpenShift Spark image for you with that particular version of Spark. So I put together a Spark image that way.
E: So I did that earlier here and loaded it up. I have a notebook here; I passed things through environment variables, and I included the notebook in this particular repository. It's getting the RGW endpoint from this command here. The most straightforward option, if you're just working from within a notebook and you want to interact with the object store, is boto, kind of the AWS library of choice in the case of Python.
E: So if you want to be able to interact with an object store, it's really about as easy as installing boto and then creating a boto object to interact with the object store. This particular image is just the base notebook image, so it doesn't include boto3, but you can always install it; you can install conda packages from the notebook.
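The boto3 setup being demonstrated can be sketched as follows. The environment variable names here are illustrative; the point, as discussed in the talk, is that the Ceph RGW endpoint and the credentials come from the environment rather than being hard-coded in the notebook.

```python
# Sketch of the boto3 setup from the notebook: endpoint and credentials
# are read from environment variables (populated from OpenShift
# secrets). Variable names are assumptions, not the repo's exact ones.
import os

def s3_settings(env=os.environ) -> dict:
    """Collect the endpoint and credentials for talking to Ceph RGW."""
    return {
        "endpoint_url": env["S3_ENDPOINT_URL"],
        "aws_access_key_id": env["AWS_ACCESS_KEY_ID"],
        "aws_secret_access_key": env["AWS_SECRET_ACCESS_KEY"],
    }

def make_client(env=os.environ):
    # Requires the boto3 package (e.g. installable with conda from the
    # notebook). Ceph RGW speaks the S3 protocol, so the stock S3
    # client works once pointed at the RGW endpoint.
    import boto3
    return boto3.client("s3", **s3_settings(env))
```

With a client in hand, creating a bucket and writing an object works exactly as it would against Amazon S3.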
E: But if you do this a lot, you might want to build your own base notebook image. The secret key and the user key, the credentials for the ceph-nano cluster, and the endpoint are specified as I'm creating the boto object here, and those are being sourced from the environment. They make their way into the environment by being passed on the `oc new-app` command line, and the user key and secret key come from an OpenShift secret and are exported as environment variables inside the pod. It would be very bad hygiene to have your S3 secret key and user key statically coded into the notebook, so this is the current best approach that I've found, at least in conjunction with a Ceph object store.
E
This
object
now
and
we
can
use
it
to
create
a
bucket
in
the
set
panel
object
store.
So
we
have
the
you
know:
SEF
object,
store,
running
and
OpenShift
here
and
then
create
the
book
created
the
book
created
a
bucket
and
then
just
wrote
kind
of
a
dummy
object
into
that
bucket
and
and
then
listed
the
contents
of
that
one.
So
we
can
see
that
there
now,
that's
all
fine
and
well,
but
you
know,
if
you're
using
photo
just
within
the
confines
of
the
notebook.
E: That's obviously not a particularly scalable approach, depending on what you're doing. If you have to do some heavier lifting against the object store, that's where Spark comes in. From this notebook I can create a Spark context here; of course it's just doing Spark locally, within the pod that's running the notebook. This could instead be a cluster you provision with Oshinko or with a Spark operator, I suppose; I'm not particularly familiar with that yet, so I need to learn more about it. Then, with s3a, there are a number of things you need to set, similar to what you had to set for boto: again we're setting the endpoint and the credentials, but we're also telling it to use path-style access. By default the AWS Java SDK is going to try to use the CNAME notation for accessing the S3 API, which means you need a bunch of DNS plumbing set up that, in the case of the ceph-nano I'm running in OpenShift, we don't have. So instead of using bucket-dot-endpoint, slash, object-name, we're going to use endpoint, slash, bucket, slash, object; that's what path-style access does. And ceph-nano is not configured to do TLS, so we're setting SSL to false as well.
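The s3a settings just listed can be collected in one place. These are the stock Hadoop S3A property names; the endpoint and credential values are placeholders. In PySpark they would be applied to the Spark context's Hadoop configuration.

```python
# The s3a settings described above. Property names are the standard
# Hadoop S3A ones; endpoint and credentials are placeholders. With
# PySpark you would apply each pair via
# spark.sparkContext._jsc.hadoopConfiguration().set(k, v).

def s3a_conf(endpoint: str, access_key: str, secret_key: str) -> dict:
    """Hadoop configuration for talking to a Ceph RGW endpoint via s3a."""
    return {
        "fs.s3a.endpoint": endpoint,
        "fs.s3a.access.key": access_key,
        "fs.s3a.secret.key": secret_key,
        # endpoint/bucket/object instead of bucket.endpoint/object,
        # so no wildcard-DNS plumbing is needed:
        "fs.s3a.path.style.access": "true",
        # ceph-nano is not serving TLS in this setup:
        "fs.s3a.connection.ssl.enabled": "false",
    }
```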
E: They have some data that they made available in an Amazon S3 bucket, and I've shown how you can interact, from the same context, with data both in the public cloud and in the private cloud, the private object store. Here I'm saying that this bucket, the rad-analytics data, has a different endpoint than the default.
E: So you very much have the same operational modalities when using Ceph as an object store as you do with Amazon S3. If a developer is used to having that experience in the public cloud and you want to replicate it in a private environment, it's a relatively seamless experience. Here I'm going to do the same thing with another bucket of mine called bd-dist.
E
I
did
up
here
with
the
rad
analytics
here
to
that
bucket
to
the
pointing
to
Amazon
and
then
again
using
the
credentials
provider,
and
we
have
a
trip
report
attack,
separated
value
where
you
know
at
the
Red
Hat
we
have.
This
is
actually
from
Gerard's
data
hub
team
kind
of
the
trip
report.
It's
like
a
sanitized
version
of
the
reports
that
customers
provide
after
a
trip
and
one
of
the
things
they
do
with
it,
and
this
notebook
has
programmed
ion
is
doing
a
sentiment
analysis.
E: So it's reading the data out of the bucket in Amazon as CSV, and then turning right around and writing it into the ceph-nano running in our cluster; that's kind of an example of almost an ETL, except it's the same format. You can do something similar where you read a CSV file in from Amazon S3 and then write it out to your Ceph cluster in a different format, like Parquet, all in one neat little command there.
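The one-command ETL described above can be sketched as follows. The bucket and file names are placeholders; the pure helper keeps the source/target pairing visible without needing a running SparkSession.

```python
# Sketch of the ETL: read CSV from an S3 bucket in Amazon, write it
# back out as Parquet to the Ceph cluster. Names are placeholders.

def etl_paths(src_bucket: str, key: str, dst_bucket: str) -> tuple:
    """Map a CSV object in the source bucket to a Parquet target."""
    parquet_key = key.rsplit(".", 1)[0] + ".parquet"
    return (f"s3a://{src_bucket}/{key}", f"s3a://{dst_bucket}/{parquet_key}")

if __name__ == "__main__":
    # Requires pyspark plus the hadoop-aws jars (and the s3a settings
    # shown earlier) on the classpath.
    from pyspark.sql import SparkSession
    spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()
    src, dst = etl_paths("aws-source-bucket", "trip-reports.csv",
                         "ceph-bucket")
    spark.read.csv(src, header=True).write.parquet(dst)
```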
E: A lot of data folks who are analyzing data are familiar with SQL, so if they just want to use raw SQL and are less familiar with the Python methods for manipulating data, they can certainly do that. By registering the data frame under this table name, I can now just run Spark SQL against that table; this will filter out just the ticker data with the Red Hat symbol here, and then plot it with matplotlib. This is all from the radanalytics tutorial.
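The register-then-query step just described can be sketched like this. The table name, bucket, and ticker symbol are illustrative; with PySpark the data frame is registered with `createOrReplaceTempView` and the query run with `spark.sql`.

```python
# Sketch of running raw SQL against a registered data frame. The
# helper builds the query string; names are placeholders.

def ticker_query(table: str, symbol: str) -> str:
    """Raw SQL selecting one ticker symbol from the registered table."""
    return f"SELECT * FROM {table} WHERE symbol = '{symbol}'"

if __name__ == "__main__":
    # Requires pyspark and the s3a configuration shown earlier.
    from pyspark.sql import SparkSession
    spark = SparkSession.builder.getOrCreate()
    df = spark.read.csv("s3a://ceph-bucket/ticker.csv", header=True)
    df.createOrReplaceTempView("ticker")        # expose it to SQL
    spark.sql(ticker_query("ticker", "RHT")).show()
```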
E: So, from this Ceph bucket in ceph-nano, I can load the sample data set into a data frame; luckily I already installed those packages in the kernel, so it loads into the data frame. This is just separate data, and it's coming from the ceph-nano; it's stored in there versus Amazon S3. And you don't have to worry about copying gigabytes or moving data around, because it's in the object store.
E: You don't have to worry about re-attaching a PV to something: an object store is available to anything that has the appropriate credentials, and it's really a neat way of sharing data; you can have a shared data context across different applications, teams, or technologies, kind of anything.
E: I didn't write this part; this was the folks on the Data Hub team. But basically they train a machine learning model using this data, eventually get some charts, and then save it back into a bucket in the Ceph cluster. So they're showing the sentiment of these trip reports, based on whether the engagement was successful versus unsuccessful (these are all sanitized, made-up people's names), and then a breakdown based on the audience: customer, engineering, etc.
A: Thank you very much, Kyle, for this. Yeah, I think a bit of it went over my head as well, but that's to be expected sometimes when the ML stuff comes up; the Jupyter stuff I'm familiar with, but gluing it all up is another thing. Are there any other questions for Kyle on this? Anybody thinking of using it, or already using this?
A: Good, and it was Spark-related, so it did fit with the theme of today's meeting. Thank you. As always, I'm looking for other things that you guys want to talk about at the next meeting here. As well, there will be a contingent of people from Red Hat at the upcoming ODSC West in San Francisco; I think that is around October 31st through November 2nd, so one of the days might be Halloween.
A: Some of us will be there, not myself this time, but if you're there, maybe we could meet up informally. And there's going to be a meetup on the evening of November 2nd at ODSC with a few Red Hatters. They do this as part of ODSC; there's always an external one, so non-registered people can come to it too. So if you're in the Bay Area, I know there'll be a couple of folks from Red Hat and from the ML group speaking there.
D: No, nothing specific from me, just thank you again, Jirka and Kyle. I think the two of those were good examples of how you can actually access storage for doing your ML processing (thank you, Kyle) and of the tools that you can use for that processing (thank you, Jirka). Nothing else from me.