From YouTube: OpenShift and Tensorflow
Description
As the worlds of TensorFlow (an open source software library for dataflow programming across a range of tasks) and Kubernetes (an open source orchestration framework for containerized applications) come together, there is a need for a consistent development workflow, model management, and streamlined, scaled execution.
In this session, Red Hat and Google will walk through the tools, processes, and (perhaps the most important) business examples. You should expect to get a good idea of how to run these technologies together.
Learn more: https://agenda.summit.redhat.com/
A: Hi everyone, I'm David Aronchick. I'm a product manager at Google and one of the co-founders of the Kubeflow project. This is Matt in the front row; you'll be seeing him in a minute. We are here to talk about open source, Kubeflow, and TensorFlow running on top of OpenShift, a really new paradigm in the world of machine learning. A lot of people are kind of curious about it.
A: You know, anytime you see TensorFlow, anytime you see machine learning, you think, "I've got to find out what this is." People approach the problem without really knowing how it impacts their business, and I think a lot of the reason is this: everyone has a different idea about what machine learning is. So we're going to spend a few minutes here just getting everyone up to speed. First, you start with a question, and not a hard one. For example, you might say: how much is my house worth?
A: That is a very easy question to ask. From that, you then say: okay, what data do I have? I know what answer I want to get; I want to know how much my house is worth. What data do I have? Maybe I'm going to use square footage. It's a pretty good metric; larger houses generally go for more. And then you start to collect data around it. For example, you've got a point. One point is usually not enough; usually you need more points.
A: Anyone want to try and draw the line here? You're kind of cheating, because you know there's the corner there, right? So then you get more points, and then you get even more points, and then you draw a line, and you're like, "Aha, I have an answer," because now I can answer my question. I say my house is 2,100 square feet, therefore $339,000. Not in San Francisco, of course, but most places. Congrats, you're a machine learning expert; certificates are on the way out. But things can get complicated.
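The square-footage example above can be sketched in a few lines. This is a minimal illustration, not anything from the talk: the data points are made up, and `fit_line` is an ordinary least-squares fit written out by hand.

```python
# Ordinary least squares on made-up (square footage, price) points.

def fit_line(points):
    """Return slope and intercept of the least-squares line."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in points) / \
            sum((x - mean_x) ** 2 for x, _ in points)
    return slope, mean_y - slope * mean_x

# Hypothetical data: (square feet, sale price in dollars)
data = [(1000, 160000), (1500, 245000), (2000, 320000),
        (2500, 405000), (3000, 480000)]
slope, intercept = fit_line(data)
print(round(slope * 2100 + intercept))  # estimate for a 2,100 sq ft house -> 338000
```

With this toy data the fit comes out to roughly $160 per square foot, which is the whole "draw a line, read off the answer" idea in code.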
A: You could have nonlinear groupings; you could have multiple dimensions. We only had one dimension there; maybe you have many, many dimensions. Things could change over time, which is obviously very complicated. And what I think this all goes to is that machine learning, while at its core it is something very simple, taking a bunch of data and making a regression out of it or using some other standard technique, is much more than that.
A: At the end of the day, what we believe is that machine learning is a way to get at the answer to your business problem without explicitly knowing how to create the solution, and generally speaking, it's great; people love it. At Google, this is what machine learning did for us. You take the best data center engineers in the world, who have dedicated their lives to using renewables and the latest technology around data center energy usage, trying to figure out how to make the most of every watt that goes in.
A: In data center terms, this is called power usage effectiveness: for the amount of power that goes in, how many cycles are you able to get out? Before machine learning, they did everything they possibly could to figure out what was going on and tune it to the nth degree. They attached machine learning to it, and they got a 40% reduction in overall usage. This is literally hundreds of millions of dollars a year, and it's better than anything that they had done or even thought of before.
A: At first, they wanted to understand what the hell was going on, which is often impossible, because nobody really knows exactly what machine learning is doing. But they were able to tease it apart, and it took them in new directions that they hadn't even thought of. And again, that goes back to this point: machine learning is a way to get at the solution, lowering energy cost, without having to think about exactly what the right way to do it is.
A: You roll out every single library, every single new model; you have to retrain and redo everything, and this is not ideal. This is kind of like where software was in, I don't know, 1965, before people had invented C and common languages and common ways to share code. And the question is: haven't we heard this before? We actually heard this really recently. Before I started on Kubeflow, I was working on Kubernetes, and Kubernetes came along in 2014, and people were doing the exact same thing.
A: What this enabled was true extensibility. At KubeCon about six months ago, Chen Goldberg, engineering director for the Kubernetes project at Google, came on stage and talked about the extensibility of Kubernetes, and this is a really key point. What Kubernetes provided was this common set of APIs, but it also provided common extension points for people to build on top of it, so that you could use all the native elements of Kubernetes and trust that they were there.
A: As long as you were running on a Kubernetes-conformant cluster, you could be assured that there would be DNS, that there would be a way to mount storage, that there would be all these various things. And this is extremely important, because it enabled what are called cloud native apps. These are apps that are designed to run in these distributed systems; they're loosely coupled together, and they allow you to swap in and out all the various components that are involved in rolling out a modern application.
A: So I would propose that we need cloud native ML. We need ML that understands that it doesn't have to go to that level of depth, that is able to use a standard set of APIs and then build all the elements on top of it. What this looks like for us really breaks down to three key components: composability, portability, and scalability. For composability, what I mean is the following: lots of people out there will focus on the building of your model, and that's great; obviously, that's very, very important.
A: But if you go and talk to a data scientist, building the model is a very small component of what you have to do. It's all the things around it that can be very, very painful: ingesting your data, transforming it, splitting it, running through an initial experimentation, finally getting to building your model and validating it, then training it at scale, and finally rolling it out to production. That's what a real solution is. If you want to get the answer to how much your house is worth, you have to do all of these things.
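The steps just listed (ingest, transform, split, train, validate) can be sketched as a chain of tiny stand-in functions. Everything here is illustrative: real pipelines would swap Spark, TensorFlow, and so on into each stage, and the "model" below is just an average price per square foot.

```python
# A toy end-to-end pipeline: each function is one stage of the
# workflow described above, with trivial stand-in logic.

def ingest():
    # pretend these rows came from a database or object store
    return [(1000, 160000), (2000, 320000), (3000, 480000), (1500, 245000)]

def transform(rows):
    # e.g. normalize, drop bad rows; here we just sort by square footage
    return sorted(rows)

def split(rows, holdout=1):
    # last `holdout` rows become the validation set
    return rows[:-holdout], rows[-holdout:]

def train(train_rows):
    # trivial "model": average price per square foot
    rate = sum(price / sqft for sqft, price in train_rows) / len(train_rows)
    return lambda sqft: rate * sqft

def validate(model, val_rows):
    # mean absolute error on the held-out rows
    return sum(abs(model(sq) - p) for sq, p in val_rows) / len(val_rows)

rows = transform(ingest())
train_rows, val_rows = split(rows)
model = train(train_rows)
error = validate(model, val_rows)
```

The point of the composability argument is exactly that every one of these stages is a swappable component, not that any one implementation is special.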
A: There's no easy shortcut to just say, "Well, I just want to focus on the model." Now the problem is that rolling out all these services is critical, but every data scientist is also going to have a slightly different take on it. They say: well, I don't want to use Hadoop, I want to use Spark; or I don't want to use TensorFlow, I want to use PyTorch; and so on and so forth.
A: These are all components that a data scientist is going to be familiar with, and we want to help enable them. So that's part of composability. The second is portability. Setting up all those things is often an enormous job, but that's barely even half of the problem. Then you have to think about all the things underneath it: runtimes, drivers, OSes. These are all components that you need to think about as well, what version you're running on and how you get your drivers installed.
A: Whether or not one version of the hardware works or doesn't: these are additional components that you need to worry about. So you could spend a whole bunch of time setting that up, and then you have to replicate the exact same effort when you get to training, because your cluster may be very different, or when you get to the cloud, because your cluster may be different again. And then anytime something changes, say NVIDIA releases a new driver, you have to go back and update all those deployments again.
A: It's very complicated, and again, not ideal. Folks are really looking for portability: a way to describe your overall framework so that it can be stamped out everywhere. And then finally, scalability. Obviously Kubernetes scales to thousands of nodes, and that's great; you can spin up very, very large deployments of VMs, now a single click away in your cloud. But that's not really the only thing when it comes to scalability.
A: Obviously, you then have to think about accelerators: whether you're using GPUs, or custom cloud accelerators like Google's TPUs or the FPGAs that Microsoft just announced; a variety of different CPUs, single core or multi core; different disks. And then there's also the human-being aspect of scalability. How do you scale humans? They have different skill sets, from intern up to the 20-year researcher, data scientists versus engineers.
A: How do you scale your teams: literally grow a single team, or add additional teams, potentially geo-located? And finally, more experiments. Even if everything else stays the same, maybe you just want to run five, ten, a hundred different experiments simultaneously to see what works best. Those are all different elements of scale. But at the end of the day, this comes back to those three core components for what we mean by cloud native ML, and to the way we decided to help make progress against this.
A: The idea behind Kubeflow is that we want to make ML easy for everyone to develop, deploy, and manage: portable, distributed ML on Kubernetes. What that looks like is the following. First, we introduce that bottom layer, or wait, excuse me, we don't even introduce it; we just rely on the fact that it exists. Kubernetes and OpenShift run anywhere.
A: They run on any cloud, they run on premises, they run on your laptop; they're already there, and they provide a great layer of abstraction with a core set of key APIs, so you can rely on things working properly. Second, you describe your deployment using Kubeflow and Kubeflow's native packaging. We're not introducing anything new; these are all open source projects. But we rely on the fact that you can containerize everything and describe it in code, and then you're able to just stamp it out.
A: Every place that Kubernetes exists, Kubeflow runs fine. And specifically, as of last Friday, we are introducing Kubeflow 0.1. The project initially kicked off at KubeCon Austin about six months ago, and we're now at the 0.1 release. In the box right now, we have a Jupyter notebook, we have distributed multi-architecture training, we support different model serving frameworks, and we have ksonnet packaging such that you can build it and roll it out yourself, with any customization you'd like. And I wanted to really stress that.
A: Even though TensorFlow is in the box right now, we really do want to support every single model framework that's out there. We already have PyTorch and Caffe checked in as operators, and they're just going through the final process to get rolled out. We want to support every different data transform too.
B: Great, thanks Dave. So hopefully everybody's awake and has their bingo cards out. This is the live demo part, so get ready for the carnage. This is Matt; my fingers are crossed on a new network here. Before we get to the actual demo, I want to give you a little bit of an idea of what you're going to watch me build here, and it's really exciting.
B: Isn't it amazing when you go and build your first programming application? It's hello world, pretty simple, but when you actually get it done it's amazing. This is essentially the hello world for machine learning: handwritten character recognition using the MNIST data set. And if you're really quick with your phone at the end, you'll actually see an endpoint that I exposed live, which you can copy and play around with yourself, at least until I turn it off.
B: There are basically four steps we have to go through for building an application using machine learning. The first Dave talked about already: Jupyter notebooks, so that you can process your data, construct the model, tweak it, tune it, do everything locally, and get a good feeling for whether things are going to work or not. Once you have that, you take the code, you stick it in a container, and you run your training experiments on it. You do hyperparameter tuning to figure out the best configuration for the model you can produce, which will then get pushed into your application. Once that's built up, you can actually do the serving. This is where you take the trained model, you deploy it, you put an API in front of it, and then you can connect the business logic of your application to it, which is the fourth step. So now, for real, to the demo.
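The hyperparameter tuning mentioned in step two amounts to running the same training job under many configurations and keeping the best one. A toy grid search sketch; `train_and_evaluate` here is a made-up stand-in for a real training run, not anything Kubeflow provides.

```python
import itertools

def train_and_evaluate(learning_rate, batch_size):
    # stand-in for a real training run: returns a fake validation loss
    # that happens to be minimized at learning_rate=0.01, batch_size=64
    return (learning_rate - 0.01) ** 2 + (batch_size - 64) ** 2 / 1e6

# the grid of configurations to try
configs = [
    {"learning_rate": lr, "batch_size": bs}
    for lr, bs in itertools.product([0.001, 0.01, 0.1], [32, 64, 128])
]

# run every experiment and keep the configuration with the lowest loss
best = min(configs, key=lambda c: train_and_evaluate(**c))
```

In practice each `train_and_evaluate` call is a separate containerized training job, which is exactly why being able to stamp out many identical jobs on a cluster matters.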
B: Hopefully folks can see this, even in the back. When I was putting this together, I realized that it has some of the great things that you like in a demo: I think 11 different concepts that you're going to get to smash into your head. But you came to this session, so you're all really smart; it shouldn't be too much of a problem. The first few are obviously OpenShift and Kubernetes, which you're probably very familiar with already, and public and private clouds are the second two, so that's four right there. The private cloud is going to be my laptop.
B: The public cloud is a cluster running in Google Cloud. On the left here, we actually have OpenShift running on my laptop; on the right, we have Kubernetes running on Google Cloud. So what I'm going to do is start off by just taking Kubeflow, downloading it, and deploying it on both of our clouds.
B: You don't have to worry about nitty-gritty things like where the extra commas are or whatever; you can really just think about the environment that you're working in and push to it. So, starting this up: once I've got the project created here, I want to tell it about both of the clouds that I have. So I'll do that: I'm going to create an environment saying there's a cloud (that's the thing on the right), and an environment that's just going to be the local connection I have here.
B: So there we are: we've got three environments, counting the one default, for fun. After that, I can just basically install, download Kubeflow. This is all you have to do: it's going to register a connection to GitHub, where we store all of the packaging objects, and pull down the core pieces, which include things like TensorFlow, and also the operators for doing your training, the TFJob, as well as serving. So that's all there now.
B: So that's great. The next step is to generate an example of the core components that we'll push out to both of our clouds, and that's done. This is going a little fast; hopefully it's not too fast. Once that's done, we can really just push that configuration to our Kubernetes deployments. This one's pushing over to OpenShift; you see all the objects popping up there. And this is really key: we can do the exact same thing on the other side.
B: So this is basically the magical endpoint to connect to all things Jupyter, which lots of data scientists in the room, or those of you who work with data scientists, are really familiar with; and more and more developers are getting familiar with it too, in addition to the other IDEs that they use. So I'll just expose an endpoint to this, so I can actually connect to the JupyterHub, and boom, I've got access and can log in. Say I want to create my own Jupyter notebook.
B: Instead, I'm just going to give you an example of the actual TensorFlow Jupyter notebook that we put together. Some really smart data scientists at Red Hat put this together for me, and this is all the TensorFlow code that's needed to do the training, to build up a model to do the character recognition. From here, once we actually have this code, we want to have some way of doing repeated experiments, repeated training of this. So we really just take it from here and stick it into a regular Python file.
B: We take that file and add a little bit of boilerplate around it. Then, once the model is actually trained up, we have to save it somewhere, so we take it and stick it into Google Cloud Storage. And to do that, basically, I have to build a container where we'll store that training code. So this will kick off, and it does all the expected things.
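The "little bit of boilerplate" around the notebook code typically just turns hard-coded values into flags, so the same file can write its model to a local path on a laptop or to a `gs://` bucket in the cloud. A hedged sketch: `train_model` is a hypothetical stand-in for the notebook's training logic, and the flag names are illustrative, not Kubeflow's.

```python
import argparse

def train_model():
    # placeholder for the notebook's actual training code
    return {"weights": [0.1, 0.2, 0.3]}

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--export-dir", default="/tmp/model",
                        help="local path or gs:// bucket for the trained model")
    parser.add_argument("--train-steps", type=int, default=1000)
    args = parser.parse_args(argv)
    model = train_model()
    # a real job would serialize `model` under args.export_dir here
    return args.export_dir, model

# the same file now runs unchanged on a laptop or against cloud storage:
local_dir, _ = main(["--export-dir", "/tmp/mnist-model"])
cloud_dir, _ = main(["--export-dir", "gs://some-bucket/mnist-model"])
```

This is the same parameter-over-code-change point the speakers return to in the recap.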
B: So we actually push this to the Google Container Registry, to make sure that we have access to it both on my laptop and in the cloud, because we're going to do the training in both environments. Great. I need to make sure I know what the training version is, because when you're doing these experiments, you're actually doing tens, hundreds, potentially thousands of them, and you want to be able to track them, immutably labeled across time.
B: The next thing is to actually say what image is going to be run as part of this training job; I'll just use that image that I had before. And because it's most helpful if I can find things later, I'm going to give it a name as well. So now I have an instance of an experiment that I want to run.
B: Then, to actually run that experiment, all I do is say: okay, I want to take this training job and deploy it on my local cluster, like that. If we go back over here to OpenShift, we can see that this CRD has been started up; the TFJob operator that we had picked it up and started a pod, and here it is, actually training the model.
B: One of the things that's really key here is that we can do that both locally and remotely, with the exact same code and the exact same configuration. We just basically run the exact same commands, with one tweak: I'm going to point it at the Google cloud instead. Here we see the same thing: the training job showed up, the logs are running (it shows them in reverse order, but the same basic idea is going on there). Okay, so now we've got the code built and the model trained. We have to do two other things.
B: We have to, one, start up the application that you're all going to try to get onto with your phones, and then we have to actually serve the model to the application. Our application is stored, just like everything else, in a container; it's a web UI with some Python and Flask code in there and whatnot. I'm not going to build that container, just in the interest of time; it's already deployed.
B: So what I'll show you instead is actually just deploying it. I'm going to push this new configuration, doing a deploy of the service off to the Google cloud, and this is going to get set up with a load balancer that you can actually access here in a second. So here's the web UI up and running; looks like the load balancer hasn't started up yet, but that's okay.
B: Like the TFJob before, this is TF Serving. We're going to call this one mnist-serve, which is going to give us an endpoint that the application can talk to; we're going to tell it where we stored the model in Google Cloud Storage, and then we're just going to deploy it over here on the Google cloud. If we go back, we can see all of a sudden it's spinning up right here. And with that, the application has started up.
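Once the model is behind an HTTP endpoint like this, the application side is just a POST with pixel data. A sketch of what such a client might look like; the URL and the payload shape are assumptions for illustration, not the exact API the demo exposes.

```python
import json
import urllib.request

def build_predict_request(pixels, url="http://example.com/mnist-serve/predict"):
    """Build (but do not send) a JSON predict request for a served model.

    `pixels` is a flat list of 784 grayscale values for a 28x28 digit.
    """
    body = json.dumps({"instances": [pixels]}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})

req = build_predict_request([0.0] * 784)
# urllib.request.urlopen(req) would actually send it to the serving endpoint
```

The application's business logic (the fourth step in Matt's list) lives entirely on this side of the API, decoupled from how the model was trained or where it is served.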
B: It's melting my laptop. Quick recap: what you saw here was essentially the ability to build the hello world for machine learning applications. You saw how you can uniformly deploy everything across Kubeflow, how you can use Jupyter basically right out of the box, how you can train models so you can build models, and then how you can serve them and connect them to your application.
A: So yeah, for those that haven't done a lot of data science: when I first saw those capabilities, I was just blown away. The amount of pip installs and dependencies and various things that you normally have to do just to set that up is crazy. But we didn't use any bespoke solutions; you saw us using absolutely standard packages, stored in registries, solutions that were designed to work together.
A: Then all you do is describe: I want this component, this component, this component; they download, they run, and they're well tested. You also didn't see anything cloud-specific. You saw a deployment to a laptop and a deployment to the cloud without using any specific cloud-native solutions or anything like that. You just saw us use the components that were already there, because Kubernetes solves this problem for you.
A: Where we did have to make configuration changes, for example where you store your model (on your laptop you would store it locally, but when you move to Google Cloud you might use a Google Cloud Storage bucket), you were able to do that via a parameter, which is the appropriate way. You don't want to have to go in and change code just to make a small configuration change like that. All that kind of flexibility is built into the system.
A: I'd like to take a minute here just to say what we are on the verge of: data science can save lives. There's this great story out of Sweden, where they were finding that women and stay-at-home parents were being injured far more after early snowfalls. They went through and looked at all the data they had, from hospital records to street records and accidents and things like that, and they discovered this was entirely a factor of the order in which they were plowing.
A: They had ended up plowing streets before sidewalks. After they looked at that and revamped it, realizing they could reorder it because cars can handle a light snow whereas human beings can't, they were dramatically able to reduce the overall incidence of injuries. Really, really powerful stuff, and that's what we have the opportunity to do right now around data science. We have the ability to help unlock all of these capabilities for everyone.
A: For the experts in these other fields, we can do this without having to ask them to be experts in hardware and drivers and specific models and things like that. We're working toward a place where we can provide these great layering technologies and let everyone focus on the things that they do great. We are just getting started. You can see a small set of the people who are working on this right now: we have over 70 committers from 17 different companies, and over 700 commits since we launched six months ago.
A: We are really, really excited, but we are still very, very early. It's a 0.1; I would not recommend using it in production yet, even though there are many people doing it. That's up to them, mostly the people on the left there who understand the codebase very, very well. And I should say that the components are not new: these are all things that have been battle tested. TensorFlow, Jupyter, TensorFlow Serving, Seldon Core, Ambassador: these are well-understood open-source projects with very big communities.
A: We are really working on the packaging and on wiring them together, and that's where we think there's still some work to do. You can see some of the things that we'll be working on next for the 0.2 release, which we are targeting for this summer: it will have improved setup, improved Kubernetes integration for native Kubernetes things like autoscaling and accelerators, and lots more packages and frameworks. But most of all, we want to hear from you. What are the things that you'd like to see? Please come join us.
A: Please contribute. We have Slack and GitHub and open design and everything like that. If there's a proposal or something that you'd like to see as a next step for us, we'd love to hear it. And specifically, I do want to call out the machine learning SIG in the OpenShift community; they've been wonderful collaborators.
A: Obviously we're here on stage talking together, but we meet every two weeks, and it is extremely important for us to get out there and listen, to see what you'd like to see next. This is a great place to talk as well, but take down all these URLs and emails and Twitter handles, and let us know what we can do next.