From YouTube: Entire OpenShift Commons Copenhagen: Operator Framework, Machine Learning Talks and more
Description
Includes Brandon Philips' update on the Operator Framework, all the ML lightning talks, the AMA panel and the Hitchhiker's Guide to KubeCon EU, recorded live in Copenhagen in 2018 at KubeCon + CloudNativeCon.
A
B
We're really grateful tonight for all of you coming. We know there are multiple things happening at the same time tonight, as well as other lightning talks. I really wanted to give a space for the machine learning communities to come together, the Kubeflow folks and the OpenShift ML working group, and so I'm really thrilled to see this many faces in the room to talk about something that's near and dear to my heart, or rather two things: OpenShift and machine learning.
B
He's got a dinner to go to shortly, so I'm going to let him get started and kick us off, and then we'll have everybody who is giving a lightning talk come up. After that we're going to try to do an ask-me-anything panel, which means that while they're talking and doing their lightning talks, we really want you to think about what kinds of questions you'd like to hear answered, and we're going to try to answer them tonight. So without any further ado, I give you the newest Red Hatter, Brandon Philips.
C
All right, thank you very much for the warm introduction. I'm going to play around with windows here for a second. I have pretty much the easiest job in the world, which is to entertain a bunch of people who have fresh beers and food in front of them, so this should go very, very smoothly. So: CoreOS was acquired by Red Hat about three and a half months ago, and what I wanted to do was just walk through some of the things that we're working on.
C
It's pretty easy for me to talk through some of this and give you a couple of live demos, because a lot of it is things that were inside of the Tectonic product that we are going to be bringing to OpenShift over time. So really this is not a lot of brand new announcements, but just familiarizing folks inside of the OpenShift community with some of the things that we had been doing inside of CoreOS and inside of Tectonic.
C
So the first thing, if you're not familiar with this: this is the Tectonic console, which is an administrative console on top of Kubernetes. One of the things that we spent a lot of time doing at CoreOS was rethinking the way that enterprise software is delivered, and ensuring that when people get enterprise software, it has a lot of the capabilities of a cloud service. Now, when we think about a cloud service, there are essentially two pieces. There's the hosting.
C
That is a very traditional business where you stick a server in a rack, give it an IP and sell it to somebody. And then there's what we eventually termed automated operations, which is the idea that it's not just the server and the IP, but also services on top: databases, load balancers, etc. Those services are unique because the operations are automated, the upgrades are automated and the monitoring is automated, so there's a lot that you get out of that by default.
C
And so we wanted to make sure that when we delivered software to people, which started with the operating system and eventually with Kubernetes, you could also automate those operations, because as a software company we're not also going to sell you a server. So that's where automating operations ended up, and where it will begin again inside of OpenShift. We have this one-click update inside of Tectonic where (it gets a little recursive) we're actually hosting all the components of Kubernetes on top of Kubernetes, and don't worry, we do it in a careful way.
B
C
One part is that I'm able to come in and actually edit the pod and upgrade the scheduler over time, and that's how we power these automated operations. So you can upgrade from Kubernetes 1.5, or Tectonic 1.5, to 1.6, to 1.7, to 1.7.1 and onward, all with a single click, and you actually get live telemetry back on how those upgrades are going. You can do everything that you do normally, like drill down into the individual pods, see how much memory and CPU they're using, and get monitoring and metrics data back.
C
And so these are the sorts of things that will start to pour into OpenShift; this automated operations work was part of the announcement during the acquisition. So that's some color around what we mean by automated operations. The other thing, the namesake of the company, was Core OS the operating system, which we eventually renamed to Container Linux, with some success; it's always challenging to rename a product. But the automation and the automated operations don't just go down to the Kubernetes layer.
C
They go all the way down to the foundation of the actual operating system, and so this is a brief demo (if you keep looking down here, it's looping). What we had done inside of the operating system is that Kubernetes is actually in control of the exact version of software that's running on each node, and that status and information gets pushed back up to the Kubernetes control plane.
C
Reboots are controlled across the cluster in case of security updates, and you end up with a system where, when we release a version of Tectonic, you get not just Kubernetes at a set version. You get the operating system at a set version, you get Docker at a set version, and this entire stack of software is controlled together, all through the Kubernetes API. So you can control, monitor and view what's actually happening in real time using kubectl.
C
So those are two big things that we plan to bring to OpenShift. The other thing is that we've open-sourced a few of the secret-sauce pieces of Tectonic, around monitoring, and they're now available on the OpenShift GitHub. We ended up building what we call the Prometheus Operator, and then a bunch of technology around monitoring inside of Tectonic, so that you get immediate insight not just across the application but also, as you saw in the previous demo, into how the Kubernetes control plane itself is running.
C
So you can dig in and debug issues over time, whether they're host-level issues, pod-level issues, or issues with individual components like the services of the Kubernetes control plane. All right, so that's a preview of a few things that we've started to do that are OpenShift-specific. The other thing is that we announced today something we call the Operator Framework, and I'm going to run through and give a quick overview of what that looks like and what we're trying to do here.
C
So this is actually my keynote two days from now, so you're my practice audience. You didn't know you were in a beta, but welcome. There's some joke in here about being acquired; I'll skip past that one. So, operators: we introduced the idea of operators two years ago. We introduced an operator for a database.
C
That's etcd, and an operator for a monitoring system, Prometheus. The idea with operators is that they are these Kube-native applications that run in pods and are managed via Kube APIs. By running in pods, I mean you deploy the operator on your cluster and it's just a normal Kubernetes deployment. Managed with Kubernetes APIs means that you deploy a resource that's a brand new type of Kubernetes resource. It's not a deployment, it's not a pod, it's not a stateful set: it's an etcd cluster.
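The core of the pattern described here is a reconcile loop: the operator compares the desired state declared in the custom resource against the actual state of the application and computes the actions needed to close the gap. A minimal sketch in plain Python, where the dicts stand in for a hypothetical EtcdCluster resource and its observed status (a real operator would instead watch the Kubernetes API):

```python
# Toy reconcile function: compare the desired spec against observed state
# and return the actions an operator would take. Purely illustrative.
def reconcile(desired, actual):
    actions = []
    # Scale up or down toward the declared cluster size.
    if actual["members"] < desired["size"]:
        actions += ["add-member"] * (desired["size"] - actual["members"])
    elif actual["members"] > desired["size"]:
        actions += ["remove-member"] * (actual["members"] - desired["size"])
    # Roll the version forward if it does not match the spec.
    if actual["version"] != desired["version"]:
        actions.append("upgrade-to-" + desired["version"])
    return actions

# A hypothetical EtcdCluster asking for 3 members at version 3.2:
print(reconcile({"size": 3, "version": "3.2"},
                {"members": 1, "version": "3.1"}))
# ['add-member', 'add-member', 'upgrade-to-3.2']
```

An operator runs this comparison continuously, which is why one-click upgrades like the Tectonic demo above reduce to editing the desired version in a single resource.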
C
By analogy, what we're trying to do with operators is something that's impossible to do on the public cloud, which is: I have my application, whatever it is (it might be some cool open source project like Cassandra, or it might be something like an SAP integration that's specific to my organization), and I want to make that available on the public cloud so people can deploy copies of that application. You can't do that on the public cloud; Amazon or Azure or whoever is not going to let you just introduce a new service.
C
You get more consumption of it, which is exactly how the clouds grow so quickly, and we're hoping that by taking that successful part of the cloud model and bringing it to Kubernetes, we can grow the overall base of Kubernetes software. So our goals here are to bring more operators into the ecosystem and get them in use by more people.
C
So the Operator Framework is a toolkit where we're making it easier for people to build these Kube-native apps, like we've done with etcd and with Prometheus, and to make them manageable across lots of different Kubernetes clusters, of course including OpenShift. You can check it out at github.com/operator-framework. It has two components. One is an SDK, which is a bunch of tools for doing the hard parts of building one of these operators: tracking related Kube resources, test scaffolding, vendoring the correct libraries. And it looks like this.
C
One of the Google engineers jokingly called a similar project that he was working on "Kube on Rails". You create a new version of an operator using the operator-sdk command line tool, describe it, and then scaffolding gets created for you. Phillip Wittrock has been working on a similar project, and we're looking to bring them together in a SIG inside of Kubernetes, which is being proposed. The other piece is the operator lifecycle management part. So you have these operators, but it's a little cumbersome.
C
So you can go in and say: these are the versions that are available to me. Make them available to specific namespaces, so that the cluster admin has control over what people are deploying as their monitoring tool or their database. Track those instances across namespaces, so that people like the folks at Ticketmaster are able to figure out how many instances exist. And then, of course, apply updates in case there's some problem in the piece of software, say the monitoring stack has a security issue. So it looks like this: we have these manifests.
C
We put them in a catalog, and then you're able to deploy them across namespaces. The OLM, the Operator Lifecycle Management, is really solving this: how do I deliver my app onto Kubernetes across the hybrid cloud? You can do this with things built with the Operator SDK, but you can also do it with Helm charts or the Kubernetes built-in types. There are docs on the repo if you're interested.
C
So, a quick recap: it's open source, it's up here. Star the repos, because that's how open source software wins, lots of GitHub stars. The next steps here: we want to make more operators, more easily, and bring more users to them. And the why: we want to make Kubernetes the dominant API for cloud-native applications moving forward.
C
We believe at Red Hat, and I believe as somebody who's been in this ecosystem for the last five years, that this is our opportunity to make an actual compute, network and storage infrastructure that can run anywhere: somebody's laptop, somebody's data center, somebody's public cloud. If you want to find any of us, we've been working on this; these are the faces. Kelly is right there in particular. I don't know where Rob and Jimmy are; I think they're around somewhere. And that's all I've got. Thank you very much for your attention.
B
All right, thanks! Thank you. All right, so thank you very much for that; that was a real sneak preview. I only saw one little graphic error, where there's a draft watermark up there, so I think you'll get that right for the keynote tomorrow. So now I'm just going to bring up the folks who are all giving lightning talks, and I'm going to have them sit in the order in which they're going to be on.
B
That's coming over from the CoreOS world. Many years ago, many moons ago, I was at university taking classes in machine learning and AI, way, way back, and there were just no compute resources then; it was very theoretical. So imagine my surprise, many years later, to come back and be working on the platform, OpenShift on Kubernetes, that is now delivering the resources to use some of the tools that I only dreamed about when I was at university and had to beg and borrow for compute resources.
B
So what we've done in the open source communities is two things. On the OpenShift Commons side, we've created a machine learning SIG for people deploying machine learning frameworks, using machine learning, doing big data, doing anything that touches on OpenShift, because one of our goals is to make OpenShift one of the best places to run your machine learning workloads. And really, what we focus on in the machine learning SIG is the use cases.
B
We want to hear from you what frameworks, what tools, what services, what things you want to do. And then there's another community, which many of you here in the room are part of: the Kubeflow community. That's another whole thing, and that is our last lightning talk; David Aronchick from Google has also graciously been co-chairing the OpenShift on ML special interest group.
B
So it's been a really nice collaboration going back and forth between these two groups, and we've done some wonderful work getting Kubeflow running on OpenShift on Google's compute cloud, with lots of really good cross-pollination. So that's what I'm going to try to do tonight: give everybody five minutes of fame, and give you the opportunity to ask questions of all of them. You know, I could give them a couple of softballs, but I really don't want to do that in the AMA.
B
I really would like you to try to think about what the questions are. So I'm going to get started; I'm going to get Carol up here, and we're going to do this, and we're going to try to keep it to five minutes each. I know that's hard for all of us. Here's your slide deck, and I'm going to go to the first one: view, present mode. Carol Willing has been an amazing participant in these conversations, and she's...
B
D
We are primarily a nonprofit research organization, funded by grant providers like the Moore Foundation, Sloan and Helmsley. As such, we are interested in advancing science, usability, reproducibility and collaboration in both science and data science, with a real emphasis on how we get humans through this cycle: you have an idea, you have some data, you try to figure out whether you can do what you think you can do, and how to iterate on it.
D
I think that lends itself very well to machine learning, because you're doing prediction, you're doing recommendations, you're doing classifications. When you start your models, you don't always know exactly where you're going to land in the end, and I think that with the flexibility Jupyter brings to that, it really helps you as a business come up with new project and product ideas based on the research your machine learning folks are doing.
D
It has really helped us with helping our users deploy JupyterHub and Jupyter notebooks at scale, so thank you for your efforts there. I guess I just want to say that we've barely scratched the surface of what can be done, both at scale and with machine learning tools, and I'm really excited to see the things that are going to come forward with Kubeflow, using Jupyter to interact with the humans in the loop, and to see what you all collaborate on and share. Thank you.
B
Because that was under five minutes, right on time. That was great. So next up we have Clive Cox from Seldon. Did I say it right? Seldon. I can say it; I just didn't spell it right a couple of times. They are new OpenShift Commons members, so we'll get you hooked up. If you don't know what OpenShift Commons is, it is the open source community that's built up around OpenShift, and there's a gentleman in the room (is Mike here? Mike, he's the tallest person here).
B
E
B
E
We're based in Barclays' tech hub in London; it's an accelerator with 20 to 30 companies in it. We run a TensorFlow London workshop every month, so if you're in London it would be great to have you there; join in and we'll have talks about TensorFlow. As a company we work on machine learning deployment on Kubernetes, and we also do consulting in the FinTech area, doing machine learning on various things like FX and equity prediction. So that's where we stand as a company; now to exactly what we do in terms of our product.
E
We view the machine learning pipeline as these steps, from training data ingestion and analysis to validation of your model. Seldon Core, which is our open source project and what I'm going to talk about today, focuses purely on machine learning deployment. So after you've done the training and you've got what you want, you want to deploy your predictor at scale, monitor it, do analysis, and do rolling updates to your machine learning in production. We're also part of the Kubeflow ecosystem.
E
So you can choose Seldon Core to deploy your models on Kubeflow as one of the options: you can choose TensorFlow Serving, or you can choose Seldon Core. So how does it all fit, how does it all work? Once you've got your Kubernetes cluster, you can install it via our Helm charts or ksonnet; we've got our own ksonnet packages.
E
The registry was one part of Kubeflow, and then the next step is to package up your machine learning runtime. For that we use s2i, and that's what I'm going to explain today: you take the source code of your machine learning prediction component and package it as an image, and we can then manage that container, which is going to serve predictions in your graph. The final part is to actually create your runtime graph, which just says how your components are going to fit together.
E
Basically, what we're trying to do is allow machine learning data scientists to use any toolkit: Spark, TensorFlow, scikit-learn. What we want is for them to use whatever toolkits they're using now, while we just manage the runtime prediction for their models. For that, they need to do two things: dockerize their runtime component, and expose it using our REST or gRPC APIs. They can do that themselves.
E
We want to make it really easy for them to do that, so we're using Red Hat's open source source-to-image (s2i) tool. Suppose you're new to source-to-image: there are two parts to this. You have your code that you want to package up (here we've got a prediction component in Python), and then we have a set of builder images that we provide.
E
We provide Python, R and Java builder images that allow you to package up your source code into an image, as we provide all the dependencies. Then we provide the scripts: in this case an assemble script, to say how your source code is going to be packaged up with our dependencies, and a run script for how it's going to be run. These are the scripts required by s2i, and once you've got those there, you can use the s2i tool, and that will package it up; it does all the work.
E
This is just a quick example using s2i. It's going to do a build on the current directory (it could be from GitHub), it's going to use our Python builder image, and it's going to output this Python classifier. So the first thing they need to do is have their runtime component.
E
So here's one for a standard classifier in Python. Once they've done that, they can supply a set of requirements for what packages they need, scikit-learn, SciPy and so on, and those will be included in the image. Then they just need to provide a set of environment settings for how we're going to package that image. One is what the class is going to be called, which they always specify so we can find it when we package it.
E
Another is how you want to expose the API: REST or gRPC are the two APIs we handle right now. And another is what this component is: is it a model? We also have other types that relate to A/B tests or ensembles and different forms of things like that. Once you've done that, you can provide the environment settings as part of the command line, or as part of the source code.
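To make this concrete, here is a minimal sketch of the kind of Python prediction component being described: a class whose name matches the declared class-name setting and which exposes a predict method. The class name and the toy decision rule are invented for illustration, and the predict signature is an assumption based on Seldon's Python wrapper conventions; a real component would load a trained model instead.

```python
# Hypothetical prediction component of the shape the Python builder
# images expect: a named class with a predict method. The decision
# rule below is a stand-in for a real trained model.
class MyClassifier:
    def __init__(self):
        # A real component would load model weights here,
        # e.g. unpickle a scikit-learn estimator.
        self.threshold = 2.5

    def predict(self, X, feature_names=None):
        # Return one score per input row: 1.0 if the feature sum
        # exceeds the threshold, else 0.0.
        return [[1.0 if sum(row) > self.threshold else 0.0] for row in X]

clf = MyClassifier()
print(clf.predict([[1.0, 2.0], [0.1, 0.2]]))  # [[1.0], [0.0]]
```

The builder image wraps a class like this in the REST or gRPC server, so the data scientist never writes serving code themselves.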
E
Once you've got that, you just run the single s2i command line, and that will build your runtime image and package it, and then we can deploy it onto your cluster. So really, what we're trying to do is make it really easy for people to take their runtime components, package them up, describe the graph of what they want to deploy, and put it out there on Kubernetes.
E
Then we deploy it, managed by our operator, and you can go into the virtuous loop of updating your components: doing A/B tests, canary rollouts and all the other things you need to do in machine learning in production to actually keep that machine learning component updated and running. So, just the final slide, a few call-outs: there are two source-to-image deep dives and intros on Thursday and Friday, and I'm going into more depth on Seldon Core, which is the stuff that I work on, on Friday.
B
Right, thank you very much. I love this, because that was the shout-out I was going to make in between talks: those two source-to-image talks. We talk a lot in OpenShift about source-to-image; it's a tool that we use to help build images and create them, and it's wonderful, but there are a hundred, maybe a hundred and twenty, other types of tools and approaches to creating your images and your containers.
B
G
B
I did tell you that, didn't I? All right, so please join us for those source-to-image sessions. If you were wondering what Red Hat was doing in the machine learning business: these two folks come from the radanalytics group and have been doing some great work, and they're going to tell you all about it, somehow, in five minutes. Go for it. I don't know if this...
G
Yeah, okay, so we're going to talk about Spark, and I'll talk briefly about machine learning. We're both practitioners: we're using machine learning on Kubernetes right now, and we're using the s2i tools that we talked about earlier. We are part of this radanalytics.io team, which is creating the tooling to make it really easy to run these machine learning algorithms and include them in your pipeline on OpenShift. So this is a really simple overview of the software stack, with OpenShift and then our radanalytics tooling.
G
On top of that there's Apache Spark, which Zack will talk about next, and then your application, which could be something like an online retail site. I have an application for running all of our performance tests, and I've added an intelligent portion to it, because I've added a machine learning component which improves the user experience and does some prediction for me. So Zack will tell us a little bit about Spark now and what it does.
H
So, Apache Spark: we built an analytics platform on top of OpenShift, and Apache Spark is the core engine for our analytics. It comes with different APIs: you can use machine learning, streaming or graph processing, as well as Spark SQL. It comes with lots of language bindings, so if you want to do your stuff in Python, Scala or Java, there are s2i builder images that you can utilize.
G
Primarily I've used it myself for the algorithms listed at the bottom: clustering, things like random forests, and regression. Some examples of how you might take a regular application that's just doing your transactions on the web and turn it into something that's using one of these machine learning algorithms: for instance, the Airbnb site uses alternating least squares to give you recommendations about places you might like to stay. Say you go to the site and a place you'd normally like to stay is already booked.
G
They will use alternating least squares to give you a bunch of other recommendations about where you might want to stay instead. You can do clustering, where you might want to cluster all your customers and tailor their experience on your website based on which of these clusters they fall into. I personally have used random forests to help me with my performance monitoring: I'm able to pick the top ten configuration parameters that I've set in my experiment and see which ones are most influential on the overall performance of the code that I'm running.
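The customer-clustering idea above would be done with Spark MLlib's KMeans at scale; as a toy illustration of the mechanics, here is a tiny stdlib-only k-means on 2-D points. The data and cluster count are made up for the example:

```python
import random

# Minimal k-means: assign each point to its nearest center, then move
# each center to the mean of its assigned points, and repeat.
def kmeans(points, k, iters=10, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: (p[0] - centers[c][0]) ** 2
                            + (p[1] - centers[c][1]) ** 2,
            )
            groups[nearest].append(p)
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

# Two well-separated "customer segments":
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(sorted(kmeans(pts, 2)))  # one center near (0.33, 0.33), one near (10.33, 10.33)
```

Spark's version distributes the same assign-and-update loop across a cluster; the API difference is mostly in how the data is loaded and partitioned.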
G
So this just gives you some examples, a small subset of all the ML algorithms we have available in Spark. And this is the good news: I've done all this performance testing, and so far the overhead has been less than 10% running in Kubernetes instead of just on bare metal. So Zack will talk to you a little bit about how easy it is to use this. You don't really have to be a data scientist to do this work; the API is that easy, pretty much.
H
So there's lots of tooling around that. When you're designing models and whatnot, you know, there are data scientists involved; we do have data scientists on staff that work on the algorithms. But then, once you train the model, you deploy the model, and then you can do things like predictions and solve different problems with your data. So I think it's very interesting.
G
B
I
Oh yes, and I think that's posted online, right? Absolutely, yes. So I'm Daniel Whitenack and I work at a company called Pachyderm. You'll hear a little bit more about Pachyderm here in a second, so I'll leave that off for now. Also, since all of you are machine learning people and also at KubeCon, I imagine that practicality is something you value: I'm just launching this Practical AI podcast with Chris Benson, who's a chief scientist at Honeywell. It's being produced by the Changelog, and we're going to have an episode all about Kubeflow soon, so keep an eye on that. So the ML use case that I really work on with Pachyderm is creating platforms, for large companies or small companies, that allow them to do scalable, language-agnostic, versioned data pipelining and data management.
I
We also take this data management piece seriously. There are a lot of frameworks out there for processing and running machine learning algorithms, but the one that we work on at Pachyderm, which is also called Pachyderm, is a kind of unified view of both data processing and data management. As I mentioned, we have a bunch of production deploys of Pachyderm. Pachyderm itself is an open source project; there's a company around it, but the core is open source, and we're working with a bunch of different companies.
I
We have pipelines in production running TensorFlow and PyTorch and a bunch of other interesting stuff, including bioinformatics things and things I don't know about. We work with people running up to around 1,500-node clusters, doing a bunch of image processing and other work like that.
I
Okay, so just a quick talk advertisement: I'm going to be talking about compliant data management and machine learning on Kubernetes on Thursday, so make a note of that. I know most of that title is really exciting for everybody, and then when I add the word "compliant" everybody no longer attends my talk, or gets sad, or gets scared or something, but I think we're going to have a lot of fun with it.
I
There's going to be a live demo, and again, we're going to be talking about actually putting pipelines into production that can be sustained over time, in the face of increasing regulation, especially in the EU. So, to give you a little taste of that, which Clive set up great for me:
I
Pachyderm can do that orchestration and data management piece, and then we can hand off the trained model artifact at the end to something like Seldon for serving, all while keeping everything extremely rigorously tracked and versioned all along the way, from code to data to Docker images to the actual artifacts that are deployed for serving. So that's me.
E
B
So there are a lot of you in the back; don't be afraid to come up and fill in any empty seats if you can find one, and make sure you have a beer in your hand while you're doing it. You're the stand-in? Yes, you're the stand-in: Lachlan Evenson from Microsoft gave a wonderful talk a little while ago, and he couldn't come. So we are...
B
J
Right, hello everyone. My name is William Buchwalter; I'm a senior software engineer at Microsoft in the AI and Research group. Just to give you a bit of context: I'm not going to talk about Azure much, mostly just about this space in general. I've been working in the Kubernetes-slash-ML space for the past 18 months; I've actually been committing to Kubeflow since last July (it wasn't called Kubeflow back then), plus a bunch of other stuff. So I just want to talk a little bit about...
J
Why are we interested in Kubernetes for machine learning in the first place? Kubernetes was developed with microservices in mind, not GPU workloads or anything like that. So why does it make sense to use Kubernetes? Obviously, the strongest point for Kubernetes is the community. This community is just amazing, and so large that if you're a company wanting to do machine learning training, for example, and you want to deploy a new training strategy, say something like population-based training, which is actually kind of complicated to do...
J
...you have a good chance of finding an open source implementation already working for you on Kubernetes. Obviously that's the strongest argument, but it's also because Kubernetes, I think, is really well designed and has clean APIs. That means even if you don't find what you want and you need to start from scratch...
J
...it's actually much easier on Kubernetes than it was just a few years ago. For example, I worked on population-based training, which comes from DeepMind originally, with a large customer, and implementing that on Kubernetes took just a few days. It's actually really easy, because the APIs are really nice. And obviously scaling is important; Kubernetes can scale pretty far. For example, we have a nice case study with OpenAI.
J
A few months ago, I think in January, OpenAI released this blog post called "Scaling Kubernetes to 2,500 Nodes", with all the details. And you know, it wasn't easy; they had a lot of issues with etcd, the network and so on, but ultimately they managed to run at that scale with a very small team of engineers, I think two, maybe three people. And a single job in that case can go up to 10,000 cores.
J
So that's pretty big, and this was definitely harder a year or two ago; with every single release of Kubernetes and etcd, it's becoming easier and easier to go even further than that. So I'm very excited to see where this is going. That's my other slide, I guess: we have two offerings for Kubernetes on Azure.
J
We have AKS, which is fully managed Kubernetes (you don't have to do anything yourself), and then, on the other side of the spectrum, we have acs-engine, which is open source, where you can really do whatever you want with it. acs-engine has supported GPUs for quite a while, and AKS now also has GPU support.
J
So I'm not going to talk about everything here, but I'll talk about two things that I think are going to be interesting. It's a bit far-fetched, but the first one is Virtual Kubelet. If you haven't heard about it, that's a project that is basically an open source implementation of the kubelet that you can then back with something like Azure Container Instances or AWS Fargate. For example, someone just made a pull request to add a provider for Azure Batch.
J
So Azure Batch lets you run basically GPU jobs, and you might wonder why you would want to do that instead of just using GPUs in Kubernetes. The reason is that you can scale up very fast, in a matter of seconds, with Azure Batch, and so, for example, it would be very nice for use cases where you have bursty, very short-running jobs and you want to keep control of the cost. Another one which I'm excited about, but it's very early, is Metaparticle.
J
So if you were at KubeCon last year in Austin, you might have seen the keynote by Brendan Burns, where he basically made this point that Kubernetes is becoming the standard runtime of the cloud, right, and since it's the runtime, we also need a standard library to go with it, so you can deploy to Kubernetes directly from your code, without having to go through Dockerfiles and Kubernetes templates. And so, I mean, I'm playing with this idea of tailoring Metaparticle to work specifically for machine learning.
J
So, for example, you could define a decorator in Python on top of your function to say: okay, I want to train this function using that many agents in parallel, etc., and when you do `python my_script.py`, it's actually going to build everything and deploy everything on the cloud for you, for example using the Kubeflow CRDs. So, obviously extremely experimental, but I'm just showing a few slides that I think are interesting, and that's it for me. Thank you.
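To make the decorator idea above concrete, here is a hypothetical sketch of what such an API could look like. This is not the actual Metaparticle API: a real implementation would containerize the function and submit it to a cluster (for example as a Kubeflow TFJob custom resource), where this toy just fans the function out over local threads so the shape of the API is visible.

```python
from concurrent.futures import ThreadPoolExecutor
import functools

def train(parallelism=4):
    """Hypothetical Metaparticle-style decorator (illustrative only)."""
    def wrap(fn):
        @functools.wraps(fn)
        def run(*args, **kwargs):
            # Stand-in for "build the image and submit N replicas":
            # run the function once per worker, in parallel threads.
            with ThreadPoolExecutor(max_workers=parallelism) as pool:
                futures = [pool.submit(fn, worker_id, *args, **kwargs)
                           for worker_id in range(parallelism)]
                return [f.result() for f in futures]
        return run
    return wrap

@train(parallelism=3)
def fit(worker_id, epochs):
    # Stand-in for a real training loop; each worker would train a
    # replica of the model and report its result.
    return {"worker": worker_id, "epochs": epochs}

results = fit(epochs=5)
print(results)
```

In the real system, running `python my_script.py` would be the single command that builds and deploys everything; here it just collects three worker results.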
B
A
Brandon has the easiest job; I have the hardest one, because after this it's, you know, evening activities. So, you know, this is gonna be hard. But in fact, the funny part is, I actually have the easiest talk in the world, because I'm David Aronchick. I helped found the Kubeflow project, but I basically do nothing. All these people are doing the stuff that makes Kubeflow great; we're just kind of wiring it together.
A
You know, everyone hears about ML: it's changing the world, it's changing the dynamics, eating everything. But the problem is that most of the world is like this: there's magical AI goodness on one side, and everyone else is on the other side, and in between there's just lots of pain. And the biggest reason that there is this split between these two, you know, the opportunities to go out and get all this great stuff, and where people are today, is because people have been writing these incredibly bespoke solutions for ML.
A
And then you go to your training rig and it's something completely different, and then you go to your cloud and it's something completely different again, and you're hit over and over and over again with the various, you know, re-setup and differences between those environments. And then, finally, scalability: you know, I mentioned already scaling via nodes. That is one type of scalability. There are all these other scales: how do you scale the number of experiments that you run? How do you scale your teams?
A
How do you scale your data? All these various things are components that are involved in scalability as well. So, you know, containers and Kubernetes are pretty good at solving this, but the problem is that you end up having to become an expert in a whole bunch of things as it stands today, which is not great. So that's why we introduced Kubeflow.
A
How can we make this overall system much easier for you? And our mission here, and I say it over and over again: make it easy for everyone to develop, deploy and manage portable, distributed ML on Kubernetes. That is not us, as part of the Kubeflow project, writing all this stuff. This is packaging and helping other projects make their services available in a standards-based way, so that you can swap them in and out, so that you can scale them, so that you can move them from place to place, you know, around that portability component.
A
The way to think about it is: that bottom section becomes all Kubernetes, that's the abstraction layer there, and then the section over on the other side becomes Kubeflow, and you're able to stamp out that Kubeflow in every location that you have today. And in the box, and, you know, on Friday, don't tell anyone, but we'll be announcing that we've cut our 0.1 release, which we're very proud of. Thank you. But specifically in the box today we have Jupyter, we have TensorFlow, we have Argo for workflows, we have Seldon Core in the box. Daniel
A
is here, working very hard on a Pachyderm proposal that we're very excited about. We have reverse proxy via Ambassador, and we'll be talking about all sorts of things we have from out of that overall section up there. Basically, these components already have an option in the box, but you can use many more, and we really are just getting started. This is a very small subset of the people who are helping out today, and we're really excited.
A
You know, I happen to be from Kubernetes, from, I don't know, day 10, and it really feels like that again. You know, when we first got Kubernetes up and running, there were so many container solutions, so many orchestration solutions. Everyone was just looking for something to rally around, and that's what Kubernetes provided. Kubeflow feels very, very similar.
A
B
So, thank you. All right, so this is the Q&A part of the evening, and I have a lot of questions that I could ask these guys, but I'm hoping a few of you have questions as well. Are there any questions yet out there? I mean, you have beers in your hands, so I know you're happy. Well, I've got one, ah, after three beers one went up over here. Let me scramble down. Yeah, that would be great: Kubernetes.
K
J
All right, so what I was saying was basically that today, if you want to run your application in any cloud, it's the easiest way that you can find to do that, and with AWS, that now has EKS, which is going to be public pretty soon, I think, right? It's pretty much the only solution you have that you can deploy in one click on all the major cloud providers, and if you write your applications to work with the Kubernetes API, then basically you don't have to care any more about which provider is beneath that, right?
J
B
One of the things that JupyterHub is wonderful for is allowing us to share those notebooks and other things, but as we get more complicated in our things, how are we thinking about that? Will we not just be able to share things that we've trained, but be able to tweak them and then share them again? Is there an approach that we're working on with Kubeflow for that? Maybe David.
A
You know, the reproducibility problem. Who's heard that there's a reproducibility problem in ML? All right, so those that have, raise your hands. If you're getting into ML, you will. It's that, you know, there's this fundamental issue right now where it's not just, you know, something complicated like, well, I need to understand exactly what this model did.
A
There are already great ways to share what the text of the model is, and with folks like Daniel and Pachyderm on the case, I think there's a great way to share versioned data today, right now. And this is really what Kubeflow is trying to solve: how do you describe, in code, the exact deployment that you used to run this? What libraries were involved? What versions were involved? How do you containerize it? And so, again, in Kubeflow we're not trying to reinvent this: there's already Docker, there's already the OCI, there's already Kubernetes.
A
Those are ways you can describe the underlying infrastructure. What we need to do is describe what happens above that, in order to enable you to run the kernel that uses the data. And so, you know, my hope, and you will hear me say this over and over and over again, is that at some point in the future, cross my fingers, if Kubeflow succeeds, every research paper in the world ends with three…
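One small, concrete piece of the "describe the deployment in code" idea above is capturing exactly which libraries and versions a run used. This is a minimal, stdlib-only sketch of that idea, not a Kubeflow feature: it snapshots the interpreter and installed packages into a manifest you could commit next to a paper or notebook.

```python
import json
import platform
import sys
from importlib import metadata

def environment_manifest():
    # Record the interpreter and every installed distribution, so a
    # result can later be rerun against the same software stack.
    packages = sorted(
        (dist.metadata["Name"], dist.version)
        for dist in metadata.distributions()
        if dist.metadata["Name"]
    )
    return {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "packages": dict(packages),
    }

manifest = environment_manifest()
print(json.dumps(manifest, indent=2)[:200])
```

A container image plus a manifest like this covers the "what libraries, what versions" half of the question; the data-versioning half is what tools like Pachyderm address.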
A
B
So, Daniel, you used the compliance word, which is part of the reproducibility as well, and, you know, that puts fear and loathing in some of our hearts. But, from someone who came out of audit in IT a long time ago, it's one of the key pieces. And so, how are you enabling compliance in your efforts there? Because that's part of the reproducibility piece. Yeah.
I
That's a great question. So, I mean, how we're really thinking about it, which doesn't cover all aspects of it: there are certainly anonymization and privacy things that we don't tackle, and there are great companies, like Immuta, tackling those things. But what we're really trying to tackle is the question of, for a particular result.
I
So that's how we stitch all of those things together. And just to kind of follow up on the previous question too, I think the way that we're doing some of these things, and I think this goes for everybody, I don't want to speak for everybody, but I think what we are building, and we were just talking about this in the happy hour previous to this happy hour, a lot.
I
One of the goals is: we know people will want to do a lot of different things in a lot of different ways. So part of the goal that we're trying to reach is to show people one way to do the thing, but give them flexibility. So if you're, you know, if you're running your Pachyderm pipeline and you want to switch out, you know, TensorFlow for PyTorch, that's totally cool; it's just another container. We treat it in the same unified way.
D
I was just going to say, you know, I have a background in econometrics, and data can tell you many things. How the data is used is actually really important, because that's what really impacts humans. And think about it like: you wind up with some disease that is being diagnosed, and, you know, the healthcare algorithm runs through its machine learning thing. Yeah, you get funding for care. No, you don't.
D
You know, to be able to go back and audit how the decision was made, whether that was the humane decision to have been made, is really important, and, you know, that's just one example; there are many examples. So I think, you know, reproducibility is not just a technical need. It is a societal need as well when it comes to machine learning.
E
One of the big challenges we have, working with banks and putting machine learning into production, is really to get an understanding of the complex models. They're very wary of putting deep learning into production when they don't understand, you know, what the range of the outputs is for the different inputs, especially if it's going to be responses that are going to go back to a human being.
E
How can they explain the responses back to the human beings that are going to be affected by those actual predictions? So understanding how to explain the machine learning predictions in a high-level way, for which obviously there's a lot of work and research being done, like LIME a few years ago and newer techniques, is very key to giving the confidence, in the actual FinTech world, to actually get these more complex machine learning models into production, and it's certainly a challenge.
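For readers who haven't seen it, the core trick in LIME-style explanation is simple enough to sketch: sample perturbations around the input you want explained, query the black-box model on them, and fit a small, proximity-weighted linear model whose coefficients say which features drove the prediction locally. A minimal illustration follows, assuming NumPy; this is a toy, not the actual `lime` package API.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model: locally, feature 0 matters a lot
    # and feature 1 barely at all.
    return 3.0 * X[:, 0] + 0.1 * X[:, 1] + np.sin(X[:, 0])

def explain_locally(model, x, n_samples=500, width=0.5):
    # 1. Perturb the instance we want explained.
    X = x + rng.normal(scale=width, size=(n_samples, x.size))
    y = model(X)
    # 2. Weight samples by proximity to the original instance.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / width ** 2)
    # 3. Fit a weighted linear surrogate; its coefficients are the
    #    local explanation.
    A = np.hstack([X - x, np.ones((n_samples, 1))])  # centred + intercept
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return coef[:-1]

importance = explain_locally(black_box, np.array([1.0, 2.0]))
print(importance)  # feature 0 dominates feature 1
```

The surrogate is only valid near the queried point, which is exactly the "local explanation" the panelists describe for justifying an individual prediction to a human.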
G
I was just going to say that in my last job I worked in climate modeling, where bit-for-bit reproducibility is key. So if someone writes a scientific paper, they have to be able to go back and rerun it and get bit-for-bit results, and I've been frustrated by this idea of having, like, the "latest" version of some image or something. So I'm really looking forward to being able to have complete reproducibility, and I'd be a lot more comfortable with that.
B
So this is, you touched on the performance piece, and we've touched on the reliability pieces of this thing. One of the things, like, a lot of us have used, how many of you have used Jupyter notebooks? All right. And a lot of us have gotten to use some of the different frameworks, at least on our laptops or whatever size cluster we're given access to, but one of the biggest issues is getting the GPU and CPU resources to scale.
B
These things, to work this stuff at scale. And I know that you've done a lot of work over at Microsoft, and the OpenAI project did a wonderful thing, but how realistic is it that we're going to be able to take and get those resources and make these notebook things that we're doing actually truly scale?
J
I don't know exactly, but I wrote Jupyter notebooks for, well, everyone in the room, right, and I couldn't use Kubernetes, and I was actually very sad, because something like Kubeflow for this is actually exactly perfect, right. And so I'm not sure if your question was about scaling the number of different JupyterHubs that you spool up, or assigning everything to a single instance?
J
The last one, yeah. So now, for that, I don't really have a good sense to give you, but for sure, with Kubernetes and with virtualization, and with stuff like PVCs and persistent storage, there are solutions where you could gradually increase the number of GPUs or CPUs you assign to a single pod. Obviously it's going to be limited by the VM size at the end of the day.
E
H
The way we do it is: we have Spark instrumented with a Java agent that runs alongside the Spark cluster, and when you have a job running, then I inject the Java agent. This is using the JMX exporter, if you're familiar with it, so the driver gives you particular metrics about your job. Tomorrow, actually, we have a presentation where we talk about doing some, you know, slight modifications to code and then finding more than a 10 percent improvement, right, then a 76 percent improvement, and, yeah, like, I mean, definitely the observability.
H
B
J
A
My hope is that we're able to pass around standards on this, maybe using Kubeflow or whatever it might be, but develop a set of standards and say: hey, if you run this model in this way, you know, you're gonna do much better. The second thing, you know, unfortunately it's no one up here, but I think that there is a transformation that's occurring right now, which is: ML frameworks are notoriously finicky to very subtle changes in your underlying hardware, drivers and so on.
G
A
The thought that you have to also understand, you know, what the frame buffer looks like on this particular version of this particular GPU is crazy, and so I think that the ML frameworks are doing a lot of work to get better around this. I do highly recommend going and watching the TensorFlow Dev Summit; there was some really cool stuff in particular.
A
L
Oh hey, how are you doing? So, it's obvious for me, and I have, hey, yeah, sorry, I've had two glasses of wine here. So it seems that, like, the open source movement is really driving the innovation behind this, but, and I have a very limited knowledge around this, but what about the models themselves? And you mentioned you download a model. Are we seeing kind of like the same innovation happening around the models as well? So you train something, and that is also shared among the community of the AI and ML movement?
I
So I will, so that was my funny answer, but I will mention a couple of really interesting things that are going on. So, excuse me, so definitely everybody's publishing their models; they talk about them in papers. If you're lucky, then there are pre-trained versions of these models on the Internet. They're in all sorts of different formats: it might be a protobuf in TensorFlow, and a Caffe model in Caffe.
I
So there are a lot of different models, and there's not even standardization around that, and, as David mentioned, a lot of times you don't see the same performance that's quoted when you run it. There are a couple of really interesting things going on. There's the ONNX project, O-N-N-X, which is attempting to kind of provide a standardization for a neural network exchange format, so that, you know, you could take a TensorFlow model and run it in other frameworks, and kind of have that exchange and export.
D
I think you get like a combination of things. I think there are a lot of things that are very open: research being done at Microsoft, being done at Google. There are excellent websites that go along with the actual research that's being done, and, you know, certainly in the academic world that is also happening. One of the interesting things is this in some ways parallels electronics back 20 years ago, because, you know, as a business, you might not always want to disclose and be completely transparent on what your model contains.
D
So in the old days you'd get a circuit board and you'd reverse-engineer it. The same thing is going to happen with models. Especially when you see a successful model, people are gonna try and reverse-engineer it, and so I think there is sharing going around, but it is not like a one-place-fits-all at this point.
B
A
Just echoing a couple of the points that were made: first, you know, it's crazy how specific the data is as well. Like, you could have something that is a perfect model for baseball game tickets, and it doesn't work because you're selling movie tickets now, or you do baseball game tickets in Seattle and it doesn't work in Miami, because the traffic is different. I mean, it's just crazy how absolutely specific it is, and so, for those that haven't gotten into this, you know, you think:
A
as a coder, you know, when I first looked at this I was like: okay, all the code is there, this is fairly straightforward. In fact, that code doesn't even represent half, it doesn't even represent, you know, one tenth of the overall information. What happens is that code basically determines the weights, and the combination of those two together, your graph plus your weights, makes all the difference here, and so that's what's extremely hard to make portable.
A
Even if you package that entire thing up, there are pre-trained models out there, and there are some very good ones. Tensor2Tensor is a great generalized one, and there are lots of other things out there, but it is crazy how non-portable these things are, and again, I hope that changes. One thing I want to highlight, which is what Carol said, that is absolutely correct:
A
there is a real thought that we are going to lock down our models, and they're gonna be, you know, because it's trained on my private data and I know more about this than anyone, my recommendation engine is gonna be the best. The number you want is 500. There's an academic paper out there right now that says: if you give me 500 arbitrary queries against your model, I can get 90% accuracy on what the underlying weights are. 500. Models are very, very quickly going to be completely indefensible, but it doesn't matter.
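The extraction attack described above is easy to demonstrate on a small model. In this toy sketch (assuming NumPy), the "victim" is a linear model served as a black box; 500 random queries and a least-squares fit recover its private weights almost exactly. Real attacks on neural networks are more involved, but the idea is the same.

```python
import numpy as np

rng = np.random.default_rng(42)

# The victim's private weights, known only to the model owner.
secret_w = rng.normal(size=20)

def query(x):
    # The only thing an attacker can do: send an input, get a score.
    return x @ secret_w

# 500 arbitrary queries, as in the number quoted above.
X = rng.normal(size=(500, 20))
y = np.array([query(x) for x in X])

# Recover the weights from input/output pairs alone.
stolen_w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(stolen_w, secret_w))  # True
```

This is why "my weights are private" is such a weak moat once the model is queryable.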
G
Just to add on to this: yep, it points out the importance of understanding your data. It's true in the scientific community, and I think it's gonna be true in the business community. A big part of this is truly understanding the data, or you're just going to be making all the wrong conclusions about what's really going on. So, no, it's not easy. It's not an easy thing, and people are going to have to understand their data.
F
So my question is around adoption of the frameworks and technologies underneath, rather than the models and all that good stuff. How close are we? How much growth has there been on the commercial side for the frameworks, around things like radanalytics and stuff like that, and not just with customers but also with the community? How much are they growing as well?
I
Like I mentioned, you know, large clusters, small clusters, we've got lots of different companies like that. Well, it's just not a company: the Department of Defense is adopting this for some of their work. So we are encouraged, and, I mean, we have more work than we know what to do with, so that's good.
D
I mean, as far as Jupyter goes, there are over two million notebooks on GitHub alone, and, you know, you can do, like, the trending machine learning stuff and really get a sense of what's being used in machine learning today. Jupyter has been adopted by things like CERN, OpenDreamKit, you know, the Large Synoptic Survey Telescope, LIGO when we did the gravitational waves. You know, in some ways it's a de facto standard for, you know, interactive, collaborative computing and the computational ideas, as I said, which isn't to preclude other front-ends. Because, let's face it, you want a front-end
D
that's gonna best match your use case, and that won't be Jupyter in all cases. We hope in most cases, but not necessarily in all cases. So I think we're in very early days, but I can say, you know, from what I've seen, in terms of JupyterHub, having Kubernetes and the ability to have a Helm chart that lets you do, like, a more declarative deployment, and list the things that are in that deployment, was a huge step forward.
L
A follow-up question, so, and again I might be understanding things wrong. There are models that, from a perspective, are about, for instance, saving lives: cancer diagnosis, self-driving cars avoiding collisions, lots of different things. Isn't there a potential, kind of like, we're in a regulated market, where there would be a great benefit of actually sharing these? So there's kind of like an ethics side
L
to this, you know, and not going into the social aspects of knowledge about us as people, but kind of like where there are actually, for humankind, kind of life-or-death possibilities of this. And are we seeing that growth of also sharing, like with the exploring of the universe, where we share what we learn about these things? Is that something that we're seeing picking up?
D
You know, deployments, and I think what you're saying is: yeah, there's a moral imperative to, you know, share the things that make sense to be shared and to, you know, work together, and that's where I think the open source piece is really important. Because, you know, we can recommend what would be best for a large-scale national health system or something like that, and then, you know, another country could potentially use it. But, you know.
A
But, like I said, the real problem today is that that's a very specific use case. So let's take that and go commercial. Let's say I'm on eBay and I want to, you know, automatically categorize imagery that was uploaded, so that it's easy to search for, right? That's a fairly common problem, but a generalized solution may not be good for my domain, right? You know, it can identify a coat; it doesn't know the difference between a raincoat and a trench coat and a leather coat and so on and so forth.
A
It can only identify "coat". Some of the research that's going on right now, which is really exciting, is around what's called transfer learning, and the way to think about it is: we're gonna train 80% of the way there, or 90% of the way there, and then you take your domain-specific data and apply it to that model and do the final training, and that little bit requires, you know, one one-thousandth the amount of data, which is the biggest problem. Another stat for you.
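Transfer learning as described above can be illustrated without a deep learning framework. In this toy NumPy sketch the "pretrained" feature extractor is frozen, and only a small linear head is fit on the new domain's handful of labelled points; all the names and numbers here are illustrative, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def pretrained_features(X):
    # Stand-in for a frozen, pretrained backbone (e.g. an image model
    # trained on millions of examples): a fixed nonlinear projection,
    # seeded so it is identical on every call.
    W = 0.5 * np.random.default_rng(0).normal(size=(X.shape[1], 8))
    return np.tanh(X @ W)

# A tiny amount of domain-specific data: 50 points, not millions.
X_small = rng.normal(size=(50, 4))
y_small = (X_small[:, 0] + X_small[:, 1] > 0).astype(float)

# "Final training": fit only a linear head on the frozen features.
F = np.hstack([pretrained_features(X_small), np.ones((50, 1))])
head, *_ = np.linalg.lstsq(F, y_small, rcond=None)

# Evaluate on fresh data from the same domain.
X_test = rng.normal(size=(200, 4))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(float)
F_test = np.hstack([pretrained_features(X_test), np.ones((200, 1))])
pred = (F_test @ head > 0.5).astype(float)
print((pred == y_test).mean())  # accuracy well above chance
```

The point of the sketch is the shape of the workflow: the expensive part (the backbone) is reused, and only the cheap head needs the scarce domain data.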
A
A rule of thumb is: you can get acceptable performance out of a model with about 5,000 data points; to get better than human-level performance, it takes about 10 million. So the gap is that big, and so, if you can drop the amount of data by one one-thousandth, that's great. But, like Carol said, I think there are deep, deep moral and ethical issues here, and we need to really, really understand them, because, and I highly recommend this book, it's fantastic:
A
it's called "Weapons of Math Destruction", and it is, I think, a great, at least initial, framework around what rules we need in this new algorithm world: how do you make things transparent? How do you constantly evolve your model? Things like that. Because I'll tell you what: if, you know, if ML was available in 1965 and I built a model around, you know, real estate loans, I'd still be biasing against women and people of color today, right? So how does my model evolve?
B
Really, almost everybody up here has a talk, at, almost, I said almost, right, almost everybody here has a talk, or will, and everybody up here will be here all week long at KubeCon, and we'll make sure, when we post the videos, we give you links to reach out and meet with and talk to all of these folks. There's been, you know, a lot of different things talked about here; all of them have done, or will do, an OpenShift Commons
B
briefing. There will be, at the beginning of April, post Red Hat Summit, when I can breathe again, or the beginning of May or June, or whenever, June 6th, I think, is the next OpenShift machine learning SIG. There will be a couple of Kubeflow ones; actually, you guys would be a little bit more regular than we are. So there are lots of opportunities to connect with these folks, and you should. I think we were talking about doing a doc sprint on Kubeflow on OpenShift with the JupyterHub folks here.
B
All right, then. I mentioned the machine learning on OpenShift SIG; David mentioned the Kubeflow community. You can find them on GitHub. You can find the machine learning one on commons.openshift.org. We're gonna let the panelists go, many thanks, grab a beer, I paid for them, they're free, if anybody, some of these tables are a little full. And now I'm just gonna do a really quick talk, and I'm also going to give a shout-out before I
B
even do this, to the new team from CoreOS, who actually took my raw notes and turned them into a post while I was on a beautiful holiday last week in Norway. I finally got to take a vacation, and the Norwegian tax group is here. Thank you very much, for, you know, I paid a lot of taxes in Norway, but it was worth every penny. I had a great time. We rented a caravan.
B
We went all over Norway and we saw some of the most amazing things, and if I could, I'd be doing that all over Denmark this week too, but I'm gonna be here at KubeCon. So I'm just going to give you a few quick tips, and this whole blog post is on blog.openshift.com, so do not try and write down all of these things.
B
But, from my perspective, one of the wonderful things about CoreOS joining Red Hat was all of these new colleagues, lots of people of all kinds, from Josh Berkus, who's a community manager, to Diane Feddema, who was on the panel. Brandon's got a couple of talks; there are talks every single day. So I'll let you read the blog post, and while you're reading all that, I'll tell you about a really cool thing.
B
If you sneak away from KubeCon for a day here, go to the site, Google "six forgotten giants of Copenhagen", and if you ever were a geocacher, like me: this artist, Thomas Dambo, has built, from wooden pallets, a whole bunch of beautiful figures, hidden all over the greater Copenhagen area. Some of them are underneath the bridges; there's a little map you can go and explore with. So I really tell you, yeah, stay at KubeCon,
B
go to all the sessions we're going to tell you about, but do take an opportunity to go off and see a little bit of this. This is one of the wonderful, magical things about being in Scandinavia, besides the Norse mythology in Norway, the trolls, and here in Denmark, and the Swedes. Thursday, again, I do not know how I'm going to get to all of these sessions and connect with everybody, but there's a whole lot of stuff. Maciej is here, giving a talk on writing Kube controllers. There's just, you know.
B
We mentioned a couple of times, and I'm going to keep shouting out, the source-to-image conversations that we're trying to have, to really get deep into the use cases around creating images and containers and the workflows around them. So that's really good. But then, Thursday night, there's this thing at the Tivoli Gardens: there's a roller coaster here, and a garden, and a beautiful place, and there'll be buses in the evening, and beer again. So please make sure you do leave this beautiful Bella Center and go off and do that. So, on Friday,
B
if you're not already tired from everything that you did on Thursday, and, you know, you're not seasick from the roller coaster, there are still even more talks, and again, we have Elsie Phillips doing a great talk. She has done some amazing things with the National Institute of Standards and Technology, and just, really, you know, I'm looking forward to that talk.
B
That will make everybody happy. And you should all, by now, have at least had one beer, or two glasses of wine, or three, maybe, now you've had three, okay, so you're fine, I don't have to worry about you. But we will have a big booth at Red Hat, as we always do here, because we really want to support the Kubernetes community. All of the people that are talking at talks, at some point or other you can meet them in the Red Hat booth. I really highly encourage you: Michael, the tall man there, if you haven't joined OpenShift Commons yet, and you want to get in the Slack channel and get into some of these conversations, meet the tall man and he will sign you up and get you enrolled in the OpenShift community. We don't spam; we just send out announcements about when briefings are coming, when the next ML SIG is. Go to commons.openshift.org; you will find the ML SIG and you can sign up for that.
B
Some of you get to live in Norway, and stay here and in this time zone, and you're really, really lucky, but the rest of us will be jet-lagged and dreaming about it on the flights home. So, really, thank you very much for your time tonight, for coming to KubeCon, for coming to this event this evening. I really hope you will get involved in both the Kubeflow community and the OpenShift on ML communities, and stay deeply involved in the Kubernetes community, because it's been a wonderful adventure these past five years.