Description
Jirika Kresmer (Red Hat)
GCP Spark Operator
Radanalytics.io Spark Operator
JVM Operators
Kubernetes and Operators
Operator Framework
A: So, my name is Jirika Kresmer, I work for Red Hat on a team called radanalytics.io, and I'll be talking in the next 15 minutes about our Spark operators and also a little bit about operators in general. A brief outline: I'm reusing slides from a different presentation I've already done, but they are still relevant to our topic. I'll first describe what the operator pattern is, then the pros and cons of choosing config maps or custom resources, and then I will compare two existing Spark operators.
A: Before I do that: I'm using the notation "operator of X", where X means the system the operator is managing. An operator is a pattern that extends the Kubernetes core features: Kubernetes is kind of mature these days, and operators are a way to extend it for your custom use case.
A: Mostly it's for deploying custom systems like Spark or others. An operator is a Kubernetes-native application, meaning it doesn't actually make any sense without Kubernetes, because it calls the Kubernetes APIs it has embraced. It reacts to various events, when resources are created, updated or deleted; before, I think, these were also called controllers, you may remember the name.
A: Here is a very simple example, because people in the machine learning special interest group might not know about Kubernetes. First, the operator has to register itself: it has to tell the Kubernetes API server, "I'm here, I'm listening for custom resources of type X." Kubernetes acknowledges and registers the request. Then, after some time, if a new resource of that type appears in the Kubernetes API server, it notifies the operator.
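In practice, "registering for custom resources of type X" usually means creating a CustomResourceDefinition. A minimal sketch (the group and names below are illustrative, not taken from the talk):

```yaml
# Hypothetical CRD: tells the API server about a new resource type "SparkCluster".
apiVersion: apiextensions.k8s.io/v1beta1   # v1beta1 was current around the Spark 2.3 era
kind: CustomResourceDefinition
metadata:
  name: sparkclusters.radanalytics.io
spec:
  group: radanalytics.io
  version: v1
  scope: Namespaced
  names:
    kind: SparkCluster
    plural: sparkclusters
    singular: sparkcluster
```

Once the CRD exists, the operator watches the API server for instances of that kind.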
A: Now it's the responsibility of the operator to do something about it. So, for example, it starts working, and after some, hopefully short, time it will respond with some action. In our case that means deploying system X with N replicas, where N could have been described in the resource representing system X, a very high-level description. And again, after some time, someone could have deleted that resource in Kubernetes, and it's again the responsibility of the operator to clean up all the resources that were connected with the system.
A: System X may have created pods, services, replication controllers, whatever it previously created; it should now clean those up. In other words, the operator is managing the lifecycle of system X in Kubernetes.
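The create/update/delete handling just described can be sketched as a toy reconcile loop. This is purely in-memory and hypothetical (the event shape and field names are made up for illustration); a real operator would watch the Kubernetes API server instead:

```python
# Toy sketch of the operator control loop for "system X".
# Events mimic the Kubernetes watch event types ADDED / MODIFIED / DELETED.

def reconcile(event, state):
    """React to a lifecycle event for a custom resource of system X."""
    kind, resource = event["type"], event["resource"]
    name = resource["name"]
    if kind in ("ADDED", "MODIFIED"):
        # Deploy (or rescale) system X to the requested number of replicas.
        state[name] = {"replicas": resource["spec"]["replicas"]}
    elif kind == "DELETED":
        # Clean up everything the operator previously created for this resource.
        state.pop(name, None)
    return state

state = {}
events = [
    {"type": "ADDED",    "resource": {"name": "x1", "spec": {"replicas": 3}}},
    {"type": "MODIFIED", "resource": {"name": "x1", "spec": {"replicas": 5}}},
    {"type": "DELETED",  "resource": {"name": "x1", "spec": {}}},
]
for e in events:
    state = reconcile(e, state)
```

After the DELETED event the simulated cluster state is empty again, which is exactly the lifecycle-management point made above.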
A: Let's move on to a comparison with something you may already know about. For instance, OpenShift templates are similar in a way.
A: That's also a deployment mechanism; Helm charts are also a deployment mechanism, and there's Kustomize as well. But these are more like tools that operate on those YAMLs, each tool with a different strategy for those tasks. In comparison, operators are more real-time systems that react on the fly to those events, and on top of that, they are also part of the cluster itself, like living agents that can do handy stuff. So what do I mean by the representation of system X? Normally, these days, it's a custom resource; that's the way you can extend Kubernetes with your own resources.
A: You first have to create a custom resource definition, and then you are allowed to create resources of that type. So the custom resource definition is a type, and a custom resource is an instance of that type. But you can also use config maps for this use case. It's not originally my idea; I think the first operator where I saw it was a Kafka operator. It's a lightweight approach, and it can work in an OpenShift environment where you don't have cluster-admin rights, because you don't need cluster-admin rights for this task.
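To illustrate the type/instance analogy: if the CRD is the type, an instance of it could look roughly like this (a hypothetical shape, not the exact schema of either operator discussed here):

```yaml
# A custom resource: an *instance* of the SparkCluster type defined by a CRD.
apiVersion: radanalytics.io/v1
kind: SparkCluster
metadata:
  name: my-cluster
spec:
  workers: 2
```

The config-map alternative carries the same intent in a plain ConfigMap that the operator recognizes by convention, so no CRD (and no cluster-admin rights) is needed to introduce it.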
A: Alright, so some pros and cons; I mentioned a ton, but for instance, for security at least, there's much finer-grained RBAC, so you can specify which users can access what. And the API is also slightly nicer, because you can write kubectl or oc get and then directly the name of the custom resource, while in the case of config maps you have to put "configmap" before X. But that's just a minor difference.
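The API-ergonomics point looks like this (resource kind and object names are made up for illustration):

```shell
# With a CRD, the resource kind is first-class:
oc get sparkcluster my-cluster

# With the config-map-based approach, you go through the generic kind:
oc get configmap my-cluster
```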
A: Let's talk about our operator, and just a couple of words about Spark.
A: Spark operators can deploy Spark clusters, and also Spark applications that themselves spawn their own Spark clusters. These are the two basic strategies, and I'm going to talk about two different operators. The first one is from GCP, the Google Cloud Platform; it deploys Spark applications that are capable of deploying Spark clusters using Kubernetes as the scheduling mechanism. So, thinking very low-level, it's spark-submit, and it uses k8s:// as the protocol for the Spark master, a feature introduced in Spark 2.3.
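Under the hood this is an ordinary spark-submit against a k8s:// master, roughly as documented for Spark 2.3 (the API-server URL and container image below are placeholders):

```shell
bin/spark-submit \
  --master k8s://https://<api-server-host>:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<spark-image> \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
```

The driver pod is then scheduled by Kubernetes, and the driver in turn requests executor pods.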
A: This is all handled by Spark itself, implemented in vanilla Spark, so what the operator does is create these kinds of applications, and it's meant for batch processing. But the GCP operator also contains something called scheduled applications, where you can use cron-like expressions to describe, for instance, "run each hour" or "at noon" or "at midnight", to run some tasks.
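A cron-scheduled run in the GCP operator is expressed as a ScheduledSparkApplication; a trimmed sketch (field values are illustrative, and the API version may differ by release):

```yaml
apiVersion: sparkoperator.k8s.io/v1beta1
kind: ScheduledSparkApplication
metadata:
  name: nightly-batch
spec:
  schedule: "0 0 * * *"        # cron expression: run every midnight
  concurrencyPolicy: Forbid    # don't start a run while the previous one is active
  template:                    # an ordinary SparkApplication spec
    type: Scala
    mode: cluster
    mainClass: org.apache.spark.examples.SparkPi
    mainApplicationFile: local:///opt/spark/examples/jars/spark-examples.jar
```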
A: For instance, you know the kind of batch processing that happens in a bank during the night. That operator is written in Golang; that's the GCP operator. And I've also created one operator that lives in the radanalytics.io repository, the spark-operator, and it does both: it deploys Spark clusters and also those Spark applications that can themselves deploy Spark clusters. So what is the difference when I say you can deploy Spark clusters? The difference is that the lifecycle of the cluster is not bound to the lifecycle of the application.
A: There is basically no application; you can create Spark clusters and then you can create, for instance, a notebook instance, a Jupyter notebook, that can connect to the Spark cluster. It doesn't necessarily use this new feature in Spark that can use Kubernetes as a scheduling mechanism; that's the difference. This one is written in Java, and it's Kubernetes-native, I mean it doesn't use any OpenShift-specific API.
A: And the part that is responsible for spawning the Spark applications is actually compatible with the first operator. I'm using the same names for the fields in the configuration, heavily inspired by their work, and ideally a user could use the same custom resources for both operators; currently I'm supporting only a subset of those options. Right, so let's do a demo. This is, by the way, my attempt to create a transition from the Spark slides over to the demo.
A: I'll run it and watch for the pods. That's it, it's created in the container, and within two or three seconds the operator is there, watching config maps by default. I can create a new config map with a given label and, if I do that, it should start deploying the cluster. This one looks like this: it's a config map, and this is the label that should be on it, and once it's applied, the operator starts spawning pods for the cluster.
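The config map from the demo would look roughly like this (the label key and the fields in the embedded config are illustrative; the operator's README documents the exact schema):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-spark-cluster
  labels:
    radanalytics.io/kind: SparkCluster   # the label the operator watches for
data:
  config: |
    worker:
      instances: 2
```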
A: And here is a different kind of object: it says, deploy this image, on that image this file should be present, run it as a Spark application, and run this main class from the Spark examples. SparkPi is present in each Spark distribution; it's like my application for hello-world purposes. So what it does right now, using the other approach: again watching the pods, it spawns the running pods.
A: That runs the application and you get the results; I think it should run for a minute or two. So let me finish right here. And I think there is a different point worth mentioning, because these days everyone assumes that Go is the right language for writing operators. I won't disagree; I think Go is a perfect language for doing that, because it's statically typed and it produces very small images.
A: But I would argue that there's also a different angle to see this: the domain expertise. If system X is written in language Y, it may make sense to preserve the knowledge in the same language. So, for instance, Spark is written in Java and Scala, and it may be good to use the same language for writing the operator for that system, because you may then find experts on system X who will also contribute to the operator.
B: Awesome. I also invited Chaoran Yu, who is from Lightbend and who was also interested in the Spark stuff, and we had a conversation about what Lightbend was doing. I'm wondering, Chaoran, if you have anything to add to this Spark conversation and can give us a little bit of insight into where Lightbend is going.
C: Yeah, sure, thanks for the presentation, it was very nice. So at Lightbend, what we are planning to do is integrate the GCP Spark operator into our product offering. I have a few questions regarding the Spark operator that you've used. If my understanding is correct, your operator is basically adding a layer of interaction on top of the GCP operator, because there's a step to create the Spark cluster directly?
C: On the GCP operator, I wouldn't say that it's used only for batch applications, because you can also use it for long-running streaming applications. It's basically just spark-submit with a bunch of arguments, so you can run any kind of Spark job with that operator. I am also preparing a presentation just on the GCP Spark operator where I'm going to go into a little more detail, but yeah, I think what you said was pretty accurate.
C: For example, for the metrics, there are two kinds of metrics that are exported. The first kind is, as you said, the metrics that are already supported by Spark itself; those metrics are exported on the Spark driver and executor pods. And on top of that, there is a set of application-level metrics, for example how many SparkApplications are currently running and how many jobs have already completed.
A: For metrics, if I recall correctly, we had support for Jolokia; that's something that exposes JMX metrics from a Java application as a REST service, and then Prometheus was able to scrape those metrics. I think there were two guys presenting this idea at a Spark conference; they had a setup with those images, like how to set up Prometheus next to Spark with our Spark images. But, to be honest, I haven't tried it with the operator.
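For context, Jolokia is attached as a JVM agent and then serves JMX metrics over HTTP. A hedged sketch of how that might look for a Spark driver (agent path, port, and host below are placeholders, not the setup used in the talk):

```shell
# Attach the Jolokia JVM agent to the Spark driver:
spark-submit \
  --conf "spark.driver.extraJavaOptions=-javaagent:/opt/jolokia-jvm-agent.jar=port=8778,host=0.0.0.0" \
  ...

# JMX beans then become readable over REST, e.g.:
curl http://<driver-host>:8778/jolokia/list
```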
E: I am speaking from another university, and I have a small question for you. You mentioned Jupyter very briefly, and in our use case, what we want to do is, you know, we are spinning up Jupyter environments on demand for our researchers, and this is something that we discussed with Matt, I think. We want to spawn a Spark environment at exactly the same time, directly bound to the Jupyter notebook.
E: Initially we tried this approach, we tried to make it work, but now that there are operators that are very close in terms of how this should work: which way do you think this would go? Is Oshinko going to be dead, because everything goes down to operators? What would be the path to take from now?
A: I like this, okay. It should definitely work. There is actually a team that does the very same thing: they use this operator and they create config maps for creating new Spark clusters, and they've got JupyterHub in the system, and they connect those two together. So it's definitely doable; it should work. I did have the idea to also handle notebooks in the operator, but I didn't want to complicate the operator with other, less relevant stuff.
A: Not because I don't like the idea; it should work, that's one thing. But rather, if I do that, which notebook should be the right one? For Jupyter there are multiple notebook vendors. It would be ideal if you could have an operator also for Jupyter, and those operators could fit into the Operator Lifecycle Manager, the management tool. That's, in a way, a meta-operator, an operator of operators, where you can describe that your operator requires other custom resources. You can kind of imagine that the Spark operator would require other operators for handling the Jupyter notebooks, or you could create a higher-level one that would require those two.