From YouTube: DockerCon 2018 SF - Cool Hacks - June 14, 2018
Our first act: Christopher Heistand from the Johns Hopkins Applied Physics Laboratory is helping save the world. No, seriously. DART, the Double Asteroid Redirection Test, is exactly what it sounds like, and we don't have to kill Bruce Willis to stop an asteroid from hitting the Earth. He brings in Docker to shorten development cycles and increase quality through testing, in a domain where the hardware is very costly and rare. Please welcome Christopher to the stage.
Alright, I'm going to talk to you about automated hardware testing using Docker, for space, which is a really long way of saying we're using Docker to help build a spacecraft. The spacecraft we're building is called DART, the Double Asteroid Redirection Test. It's a mission out of the Planetary Defense Coordination Office at NASA. The Planetary Defense Coordination Office is responsible for tracking near-Earth asteroids and characterizing them: seeing how hazardous they are and how big they are, and figuring out how to deflect them, to basically save Earth
if one were ever to come toward us. DART is the first of its kind, the first tech demonstration to actually hit a representative asteroid. They picked an asteroid that is potentially hazardous, 160 meters across; it would do a lot of damage. They said: we want to hit this, we want to measure this, we want to figure out how well we can do this. And they charged APL with being the principal investigator for the mission and with actually building the spacecraft.
So, like every good mission, we have a three-step plan. Step one: build the spacecraft. You can see the DRACO imager, our telescope, running through most of the center of the spacecraft. Our high-gain antenna is a really sweet RLSA antenna, nicely packaged and very flat. Our roll-out solar panels unroll just like yoga mats, and we have a NEXT-C ion thruster; we actually have an ion thruster on a spacecraft to go hit an asteroid. Step two:
we've got to hit the target, and that ion thruster is actually really important, because we're a rideshare. Basically, our plan is to get any ride we can into space, whether it's to low Earth orbit or geostationary transfer orbit; as long as they get us up there, we're able to escape Earth's gravity using that ion engine.
A couple of months later we fly by another asteroid to calibrate, do our testing, make sure everything checks out, and do a dress rehearsal. Then, about a month out, we're able to see the target asteroid system. At first it's just a pixel or a couple of pixels, but eventually we get close enough that we can see both of the asteroids: a big one and a small one.
We have a proposed CubeSat that's going to pop out and basically trail us home, and then we're going to smack into that asteroid at 6 kilometers per second. That is 17 times the speed of a speeding bullet, into something about the size of a football field. And, as one of my team members pointed out, it is four hundred thousand times the speed of a snail.
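A quick back-of-envelope check of those comparisons; the bullet and snail speeds below are assumed typical values, not numbers from the talk:

```shell
# DART impact speed vs. an assumed ~350 m/s bullet and ~13 mm/s garden snail
impact_m_s=6000     # 6 km/s
bullet_m_s=350      # assumption: typical handgun bullet
snail_mm_s=13       # assumption: typical garden snail

echo "vs bullet: $(( impact_m_s / bullet_m_s ))x"        # ~17x
echo "vs snail:  $(( impact_m_s * 1000 / snail_mm_s ))x" # ~461,000x, i.e. roughly 400,000x
```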
Step three: save the world, by making sure we can actually save the world. We basically need to keep it in range: we have to do all of our impact assessment from the ground. We have that selfie sat coming in, so we're able to see basically the last couple of moments. But the reason this binary asteroid system was chosen is specifically that it's close enough to Earth for us to see how much we perturb the orbit of the second asteroid going around the first.
Pretty sweet. So why are we here at DockerCon? Because space is really hard, and our software team wanted to make it a little bit easier on us. There are tons of reasons why space is hard. There are extreme distances: New Horizons went out past Pluto, and it's been out there for over ten years. We have one shot at making this work. You have super low bandwidth: people complain about Gigabit Ethernet, and we're talking about 100 kilobits or less. Vacuum is going to mess up all of your materials.
You have power constraints. Thermal is going to make your electronics go wild. There are tons and tons of physics problems that constrain the space environment, and all of these drive your cost and your reliability. Radiation will cause your memory to upset, and the higher the density of your memory, the more problems you have. So we have 16 megabytes of memory, no virtual paging, and a 32-bit processor that runs at 100 megahertz, and we have tons of process and so much testing. It is absolutely crazy.
So what did we want to solve as a software team? Basically, hardware scarcity. These systems cost a lot of money: the platforms we were developing on were three hundred thousand dollars or more. Think about it: if the laptop you used to develop your code cost three hundred thousand dollars, not every developer would get one. You end up with a constraint of five systems and thirty developers, and so you end up time-sharing all of those assets.
So what's the holy grail? We wanted hardware emulation. We wanted to develop on our laptops, test on the real hardware, and then deliver to the spacecraft teams; a little bit different delivery process than most people are used to with Docker. So what enabled that for DART? NASA's core Flight Executive: it has hardware and operating system abstraction layers, which allow us to run on PC Linux, on RTEMS, on VxWorks, or on insert-your-operating-system-here, as long as you support it.
Basically, we build C code and it's able to run wherever we want; that's our abstraction layer. We use Bamboo for our CI/CD. We moved to a network-based architecture, so we are using SpaceWire now instead of toggling specific lines, and if you squint really hard that looks like UDP, which is really nice to abstract away. And we use COSMOS as our ground system.
COSMOS is an open-source ground system that really allows us to deploy easy test cases, since it's built in Ruby and it's open source. And then, of course, we decided to containerize all of it and make it run in parallel. So what does our dev setup look like? We have four repos: flight software, which is the stuff on the single-board computer; testbed software, which is everything that emulates the rest of the spacecraft; COSMOS, which is our ground system; and our Docker environment.
That last one is just the environment: how we build our containers and all of that jazz. Four containers come out of that: flight software, testbed software, COSMOS, and our VNC container, which is the cool hack. We volume in all of our source code at runtime. A lot of people put their source code in the container so they can push it down the pipeline;
we push binaries instead, and we're able to build, stop, and debug without having to rebuild a Docker container every single time we change some code. As for our network setup: our flight software container is connected to our testbed software container, speaking UDP, which emulates the SpaceWire setup. Then you have testbed to COSMOS,
which talks over a TCP link that acts like the radio, and COSMOS to VNC, which is actually our window into the world. Because we have COSMOS in a Docker container, we needed a GUI, and VNC was our answer for that. And all of that is brought up with our Docker Compose file.
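A schematic Compose file in the spirit of what's described above; the service names, images, and ports are illustrative assumptions, not the project's actual file:

```yaml
version: "3"
services:
  flight-software:        # cFE build; speaks UDP "SpaceWire" to the testbed
    image: dart/flight-software
    volumes:
      - ./flight-software:/src   # source volumed in at runtime; binaries built in place
  testbed-software:       # emulates the rest of the spacecraft
    image: dart/testbed-software
    volumes:
      - ./testbed-software:/src
  cosmos:                 # Ruby/Qt ground system; TCP "radio" link to the testbed
    image: dart/cosmos
  display:                # X11 + VNC server; COSMOS forwards its GUI here
    image: dart/vnc-display
    ports:
      - "5900:5900"       # a VNC client on the host connects here
```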
So what does that VNC window look like? We grabbed an idea from the community; we were basically looking for a GUI solution. We originally started with X11 forwarding to our desktop and ran into security problems, so we ended up with what I think is a very clever solution: an X11 server and a VNC server inside the container. You forward to that container, VNC into it, and you're able to get a GUI from any outside container that you want.
In this instance we have COSMOS, which is Ruby- and Qt-based, going to the VNC container, or rather the X11 server, X-forwarding there and then on out, and all of it shares an Xauth key. So your security is wrapped up between the two containers, and you're only exposing them to each other.
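The trick, roughly; this is a sketch assuming Xvfb and x11vnc, since the talk doesn't name the exact tools:

```shell
# Inside the display container: a virtual X server, plus a VNC server attached to it
Xvfb :1 -screen 0 1920x1080x24 &
x11vnc -display :1 -rfbport 5900 -forever &

# The COSMOS container points its GUI at that display and shares the Xauth key
export DISPLAY=display:1
export XAUTHORITY=/shared/.Xauthority   # volume shared only between the two containers
```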
What we're going to show you, as soon as I log in and if the demo gods bless me, is exactly that: our dev environment. What's on the screen right now is our Compose file, which brings up four services. We have flight software, and you can see that we volume the code in here. We have our display, which is our VNC, and you can change your DPI,
scale things up, scale things down. You have COSMOS, which is our GUI application in Ruby, and then we have our testbed software. All four of those come up when we run that docker-compose; we wrote shell scripts to make bringing things up a one-liner. Basically, on the left it's booting our flight software and our testbed software, and on the right we're opening up that VNC client.
This is a little bit of a commanding-and-telemetry walkthrough. Basically, we have telemetry packets coming down from one of our applications, and we're able to look at and inspect a packet. On the right you can see what time it thinks it is; you can also see how many commands it has received and how many it has executed. So let's actually send a no-op: we're not really doing anything, just sending a command to make sure it's alive.
If we send this, what you see on the left is several bytes going up, from the COSMOS container up to the testbed, through the radio, to the flight software. And on the right, the received count just went up to one, which means we have a full round trip to the spacecraft, all on a laptop.
We didn't want to just stop there. Now that we have a dev environment, why not make it a continuous-integration environment as well? And what we've managed to do is make it headless. What we just did was a checkout of our spacecraft software; now I want to be able to do that with no display attached.
On the right we're testing essentially the same thing, and on the left we're running it with a different application so that we can parallelize. Both of them are running very similar scripts, both sending commands up to the flight software and back, but in different containers, in completely different setups.
He gets it. Alright. Machine learning has been in the news a lot, and few companies have had as much impact on machine learning and AI as Google. Our next project, Kubeflow, is a machine learning toolkit from Google for Kubernetes, and it's designed to cover the whole lifecycle of machine learning applications on top of Kubernetes. It has three goals: composability, portability, and scalability. Our next presenters, David Aronchick and Michelle Casbon from Google, will show you how easy it is to build and deploy a machine learning application on your Kubernetes cluster.
"Borg is watching. Good luck." Sage words for those in the know: Kubernetes was originally based on Borg, which is particularly relevant here. Thank you so much for having us today; I couldn't be more excited to be here to talk about Docker EE and all the work that we've done with Kubernetes specifically.
Containers and orchestration really have started to reach this next level of applications, and that's what we are here to talk about. What is the next level? It's all about cloud-native ML, really using containers and orchestration to help your business. This is me six months ago, with a much smaller group; this one is much larger, obviously. That was when we first introduced Kubeflow, and the idea behind Kubeflow remains the same today: how do we make it easy for everyone to develop, deploy, and manage portable, distributed ML on Kubernetes?
This is such a fundamental problem because, more than anything, people want to start using ML. They see the value of it, but they don't know what to do or how to get started. We've had some great momentum since we started: over 800 commits, 70 community contributors, and 17 different companies using it, including a whole bunch of really big names.
These are people who had their own stacks, basically doing their own bespoke solutions, and they were having a lot of challenges, because it meant they had to take care of everything from the ground up. By building a community of people who shared the same vision, making portable distributed ML easy to use, we were able to come together and develop a framework that lets you customize your overall deployment. And when we say customize, when we say composability, this is what we mean today.
When you hear about people building ML, it's really focused on which framework you're using: TensorFlow or Caffe, MXNet, CNTK, scikit-learn. It doesn't matter; everyone is focused on the model. But the reality is that it's not just about the model. It's about everything else around the model: data ingestion, transformation, exploration, hyperparameter tuning, rolling out to production, monitoring, logging.
This is from the RightScale report this year: 81 percent of enterprises are currently multi-cloud, and not just one extra cloud; the average multi-cloud enterprise is using five. And this is where the things Steve and Scott were talking about are so important. How do you build an overall framework that is portable, that uses an orchestration system that really works anywhere? Because by working anywhere, by being portable, I mean this becomes your standard framework.
That represents just a small portion of it. When you start to expand and actually try to get this out to production, you end up having to repeat and rebuild your system for every environment you roll out to. Now, you might be saying this doesn't matter to you, but I will prove you wrong right now. How many of you have a laptop? Raise your hand.
Congratulations, you're multi-cloud. And the reason you're multi-cloud is this. Don't listen to me; listen to this smart guy, Joe Beda, co-founder of the Kubernetes project. He has this tweet, and I love it: "The way I think about it, every difference between dev, staging and prod will eventually result in an outage." Did you catch that word? Let me highlight it for you. Let me highlight it even more: dev. Development is an environment.
Development is a cloud. If you drift from dev to staging to prod, you are going to have a bad time, and that's exactly what we're trying to solve through things like Kubeflow and Docker Enterprise Edition. By that I mean: give yourself a common orchestration platform that works anywhere. Docker and Kubernetes run in all of these environments: they run in dev, they run in staging, they run in production.
They run across all your various clouds. Then you use Kubeflow as your single deployment framework for your ML stack and you stamp it out everywhere, and the same Kubeflow deployment runs the same way: the one that runs on your laptop runs on your cloud, no matter what cloud it might be. In the box today we have all the core elements for getting up and running with a machine learning framework. I do want to stress that this is a 0.1 release.
Please do not use it in production. Though, if you are using it in production, come talk to me and I'll help. We do have all these components, the standard toolbox people are using: Jupyter notebooks, distributed training, and model serving. And best of all, we use the open templating framework ksonnet, which allows you to customize it yourself and really separate deployment from code from configuration. With that, you've heard me say a lot of good things, but the proof is in the pudding. Let me hand it over to Michelle.
All right, thanks, David. We are really excited to show off a lot of the hard work that the Kubeflow community has put in. Much of what we're going to go through today will be in a terminal window, which can be a bit disorienting, so I'll give you an idea of what we'll show before we show it. We'll start out on Docker for Desktop.
We will show you an empty cluster, install Kubeflow, and run our training job, distributed only across the CPUs in this laptop. Once we make sure there are no syntax errors, it'll be time to move to the cloud: we'll move to Google Cloud Platform, where we'll run Kubernetes on Docker Enterprise Edition with Kubeflow already installed, and kick off training right away. And then we'll show you a really special treat: training on Tensor Processing Units.
Okay, so I prepared a recording for today, and these are all commands that I issued here on site at DockerCon, against a live running cluster. But these are machine learning workflows, and I wanted to spare you the experience of tailing endless log files and watching neural nets converge, so I'll talk you through all the commands. I want to start from the UI perspective: why are we building machine learning models? What is it that we gain from this?
If we look at a standard, simple web app, this is something that takes a restaurant review and predicts a sentiment for it. It's just a really standard Flask app. Let's go ahead and look at the index file. We have a very simple function that powers that prediction code, and it's pretty naive: it takes a list of negative words, and if it finds any of them anywhere in the text, it marks the review as negative. If not, it assumes it's positive. Nothing too sophisticated.
If we had to build this from scratch without machine learning, this might be where we'd start. Our example here, "the Momofuku ramen was awful," seems straightforward enough that even a naive app should get it right. But alas, the words in our example happen to not be in the list.
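The rule described above can be sketched in a few lines; this is a shell rendition for illustration only (the demo's real implementation is a Python/Flask function, and the word list here is an assumption):

```shell
# Naive sentiment: negative iff any listed word appears in the review
predict_sentiment() {
  review=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  for word in bad terrible horrible disgusting; do
    case " $review " in
      *" $word "*) echo negative; return ;;
    esac
  done
  echo positive
}

predict_sentiment "The service was terrible"      # negative
predict_sentiment "The Momofuku ramen was awful"  # positive: "awful" is not in the list
```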
We can do better; let's throw some machine learning at it. We'll start from Docker for Desktop, connected locally to an empty cluster. The first step is to install Kubeflow.
Now, what I did is set up some of the config, which is essentially downloading it; it's a git pull from Kubeflow's open source, just github.com/kubeflow. Then we're ready to install the default installation, and what this command is doing is generating the manifests and applying them onto our local cluster. What that looks like is this: these are the default tools that come with the installation.
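For reference, a Kubeflow 0.1-era install looked roughly like this; these are ksonnet commands, and the registry path and component names are assumptions from that release's docs, not shown in the talk:

```shell
ks init kubeflow-demo && cd kubeflow-demo
ks registry add kubeflow github.com/kubeflow/kubeflow/tree/v0.1.0/kubeflow
ks pkg install kubeflow/core
ks generate kubeflow-core kubeflow-core
ks apply default -c kubeflow-core   # generates the manifests and applies them to the cluster
```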
So we kick off our job, and because we have Kubeflow up and running, it distributed the job for us. We want to peek inside that master pod to see what's going on, and we're filtering for the TensorFlow output, because the log files can sometimes be a little bit messy; we're really just looking for our machine learning code. Alright.
So this is good. It means that locally we were able to take our TensorFlow model and run it; it compiles, and clearly it runs. So it's time to move to the cloud. We're going to Docker EE, which is running on GCP, and we're changing our context in kubectl: we're switching to a cluster in the cloud. I pre-installed Kubeflow, so we already have it running there.
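Switching clusters is just a kubectl context change; the context names here are assumptions, and `kubectl config get-contexts` lists yours:

```shell
kubectl config use-context docker-for-desktop   # the local Docker for Desktop cluster
kubectl config use-context docker-ee-gcp        # the Docker EE cluster on GCP
```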
You'll see it looks just like it did on our desktop. We're taking that exact same code, the CPU version, and kicking it off here; we're just applying it to a different environment. We generated the manifests and applied them to the cluster. It looks very similar; we'll wait for it to kick off, since it takes a bit to pull down the images and get started. But again we want to look in that master pod and peek inside to see what's going on.
Okay, this all looks pretty familiar. It's the same thing, but a little bit faster, because now it's spread out across different nodes. But it's still not quite fast enough; I think we could use a little more power. So we take the exact same code that we ran locally and across CPUs, and we're going to run it on TPUs and see what that looks like. We generate our manifests, apply them to the cluster, and here we have our TPU pod. We want to peek inside that master and see what we can find.
It looks a little bit different: it's scrolling through quite a bit faster. You'll notice this global-steps-per-second metric; that one looks pretty interesting, so let's dive into it. What we want to do is compare what we're seeing with TPUs against what we saw with CPUs. We're seeing about 11 to 12 steps per second on TPUs, and because this is the exact same code, it's a good comparison.
It's about a hundred-x speedup, a 100x difference. Now that we have our models trained, it's time to serve them and look at them in the UI, and we can instantiate both components with a single command. Here is what that looks like: the manifests get generated, the pods get created, and you'll see we have two new containers in our cluster.
We have a serving container and a UI container, and in order to look at the UI, we're going to forward a port from that cluster in the cloud to our local environment. This maps port 8080 on my laptop to that cluster.
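That mapping is a standard kubectl port-forward; the service name and target port here are assumptions for illustration:

```shell
kubectl port-forward svc/sentiment-ui 8080:80 &
# then browse to http://localhost:8080
```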
If we type that into the browser, we get the app we saw before, the naive version of it.
Now let's point it at the model we just trained. Because we built this container, this route is now talking to the serving container; they're both running in the same cluster. And what do you think is going to happen? Do you think we'll get better results this time? Do you think the machines will do better? All right: much more accurate.
Thanks so much, Michelle. So what did you just see there? That was crazy. I did something locally on my laptop and it worked great: I was able to debug, find any syntax errors, and make sure it worked. Then, with no changes whatsoever, I took that and moved it to the cloud. Well, Michelle did; I just watched. Michelle moved it to the cloud and got a hundred times more speed from the exact same code. That's the power of Docker EE: having it on your desktop, having it in the cloud, and wiring it up to Google Cloud Tensor Processing Units.
What you didn't see were bespoke solutions. That was all open source; nobody wrote any custom code there. Those are all open-source technologies we're using. You didn't see us use any cloud-specific, non-portable tech; I didn't have to convert the thing I ran on my laptop into something geared for production. We were able to move with the exact same configuration, and it worked. And finally, you didn't see any forking of the Kubernetes APIs; we used Kubernetes' native extensibility
to do all the things you saw there. And we really are just getting started. Here's a small list of who's helping, and we have a whole bunch of stuff coming. You still saw a lot of log spew and other things we could make a lot easier, and we're really excited to get going on that, to make it much easier and to reach data scientists everywhere. That's what we mean when we say Kubeflow is open.
Thank you, David and Michelle. That is super cool. Our third and final cool hack actually repeats a theme we've had over the last three Cool Hacks keynotes, serverless, but it adds something new to the discussion: portability. Gloo, by Idit Levine, gives you the portability and choice of a serverless framework, from cloud services like AWS Lambda to running on one of the several containerized, self-hosted serverless frameworks, and it does it all while running on Docker Enterprise Edition. Please help us welcome Idit to the stage.
So, let's see what I got in the cookie break. High five, everybody! I would love to share with you our vision at Solo, so let's start. This is the way we see the ecosystem today. Most enterprises are running monolithic applications. They probably use something like Ansible or Puppet or Chef to deploy and configure them.
They probably use some APM solution to get metrics, and probably some logging system like Splunk to gather all the logs. But there is also this great microservices story, and everybody wants to move there; customers actually want to move, and they probably want to use Docker EE. But what do they need to get this migration right? The first thing is probably different tooling, like Prometheus for scraping metrics, because the system now runs at a much bigger scale.
You need something like Prometheus, and probably something like OpenTracing, because now your logging is totally distributed and you need distributed transaction tracing. But they also look at the serverless craze, and they really want to use that as well, and people already use it because it's really, really good, with a great financial story:
you pay only for what is running. You'd probably go to the public cloud rather than run on-prem, and you're going to use the provider's solutions, for instance AWS Lambda in my slide, and therefore you'll have to use their tooling, like CloudWatch and X-Ray, and you'll probably use an event-driven architecture. So, as a customer, what are my options? I want to move, so let's see what my options are.
The first one: just don't change; it's too hard. I mean, it's working, and if it's working, don't fix it, right? The problem with that is that a competitor will come along and take the market, and I can't afford that. So let's see my second option.
Maybe what I should do is just bring in a new team, and this team is going to build all the greenfield applications; they will build all the new stuff. The problem with that is, first, I still have the monolithic application, which is my core business, and second, I just created a horrible organizational problem, where the folks on the monolith don't want the other team to succeed. So again, not very useful.
Then there are the enterprises that try the big-bang refactoring: we're going to do one great refactor of everything in the monolithic application into microservices and serverless.
Well, good luck with that. It will take between a year and two years, and most likely, even if you manage to finish and succeed, you didn't ship any new features that whole time, right? That's exactly the problem, especially if your competitor keeps adding features. So what is the solution? Because obviously there is one; you know about Twitter and about Lyft, and they did it, right?
The solution is basically to glue them together somehow: take the monolithic application and extend it, extend it with new features as microservices and serverless, glue all of that together, and then start migrating the monolithic application piece by piece, slowly, very slowly.
Okay, so the question is: what is the smallest unit of compute that can actually help with that? The way we see it, it's basically a function. If you think about it, every monolithic application or microservice application has an exposed API, which means we can treat them as functions, and if I manage to somehow take all my infrastructure and cut it into this little unit of compute,
I can do something like that, right? I believe we can call it a composite application; in the end, it's one application built from different types of architecture, and that's the right solution. So let me show you. We created Gloo exactly to solve this use case. Let me show you a little bit of how it works and what it means.
Okay, so this is a monolithic application, and I'm pretty sure all of you know it: the classic Spring PetClinic application. It's not written by us; it's a real monolithic application, it's Java, and it's working. I can actually even show you the code. This is the code, a real monolithic application, and we love it, okay?
So it's working, and it's great; you can see it's running on Docker EE. But here's the problem: I have this page, and it's working, but I have a new engineer and I really, really want him to add a new column, location, here. Let's see what that means for him. The first thing he needs to do is understand how the application works. The second thing is to actually go and add the functionality.
Then he needs to test it, regression-test that he didn't break anything, and then redeploy the monolithic application. Not fun. So what did we do? We just wrote a microservice in Go. As you can see, it's a very simple microservice that goes directly to the database and basically brings back all the fields; very, very simple. I deployed it on Docker EE, and now let's go to Gloo and see how it can help us.
What you'll discover is that Gloo has already discovered quite a lot of the stuff, most of what's running on Docker EE. So the only thing I need to do is come here, and, as you can see, I have one route defined that goes only to my monolithic application. Now we define a new route, and I say: look, every time someone goes to vets.html,
what I want you to do is go and run this Go microservice that I wrote. Very simple; I added the route. So now everything goes to the monolithic application, but there is one path that goes to the microservice. Let's check that it's working: this is Java, this is Java, and this is the Go microservice. Okay, that's cool. Now I'm already running a composite application, but look what happened here.
Actually, when I click on contacts, it's just not working. So again, I could go to my team and ask them to fix it, or I can do something else, just for fun: add a Lambda there. We go again, and I add an upstream for AWS: I click here, choose AWS, and the function I want to run is in us-east-1, so we just find us-east-1 here, there you go, and I add it. So here's what happened right now:
it found all the functions running in that region, which, by the way, I can see here. Now let's go back to the routes and just add a route. Here's what I'm doing; it's very simple: every time someone hits /contacts.html, I want to go to AWS and run the function contact-form, version 3; as you can see, we also support versioning. But here's the thing: the Lambda actually returns JSON, and I wanted to show it in the browser,
F
So I need to transform it to HTML. So we wrote a transformation filter for the endpoint; I will explain the technology in more detail later. And I will add the route. So what do we have? Everything is going to the monolithic application; anyone going to the vet store page, vet.html, goes to a microservice written in Go; and every time that someone goes to contact.html, we go and spin up a Node.js Lambda in AWS. So let's see, that's actually working: this is Java, this is Java, and this is the microservice.
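The full routing table at this point in the demo could be sketched like this. Again, a hypothetical sketch: the upstream names, function reference, and transformation fields are assumptions for illustration, not the exact configuration shown on stage.

```yaml
# Everything to the monolith, /vet.html to the Go microservice,
# and /contact.html to a Lambda, with a filter that turns the
# function's JSON response into HTML for the browser.
virtualHost:
  name: petstore
  routes:
    - matcher:
        exact: /contact.html
      destination:
        upstream: aws-us-east-1         # AWS upstream added in the demo
        function: contact-form:3        # function name and version
      transformation:
        responseTemplate: contact-html  # JSON -> HTML transformation filter
    - matcher:
        exact: /vet.html
      destination:
        upstream: vet-go-service
    - matcher:
        prefix: /
      destination:
        upstream: monolith
```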
F
Alright then, so let's understand what Gloo actually just did, right. So basically it's really simple: we abstract the network. We abstract the network, so now we have a centralized place to configure it all. So we can configure security, we have observability, right, we can see everything, and we can control the traffic. But for us specifically, because of the migration use case, it was really, really important that the unit you route on would be very small.
F
That's why we actually extended Envoy to route on the function level. So everything that you're getting today, everything that you know, all the regular ones: canary deployments, security for your services, and caching, you will now get on the function level. So assume that I have a microservice with four API calls: I can give each of them a different security rotation.
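Per-function canarying, as described above, could be expressed as a weighted route between two versions of the same function. The field names here are hypothetical, a sketch of the idea rather than the real schema:

```yaml
# Send 90% of /contact.html traffic to version 2 of the function
# and 10% to version 3: a canary at the function level rather
# than the service level.
- matcher:
    exact: /contact.html
  destinations:
    - upstream: aws-us-east-1
      function: contact-form:2
      weight: 90
    - upstream: aws-us-east-1
      function: contact-form:3
      weight: 10
```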
We also needed a data plane, so we used Envoy, right.
F
It's the best proxy that exists out there, in my opinion, and it's specifically good for this use case; that's what I explained in the beginning. So we did that, but we didn't stop there. We extended it, right, because it was very important that it would work for us for the function migration. And the last thing is that we also needed a control plane, and we're not going to talk about it too much.
F
But it's all open source; it's called Gloo, so go check it out. It's a really pluggable and extensible architecture, so we can actually really, really integrate with your environment. So we discover all the regular monolithic applications, we're discovering the services that are running on your node, and then, once we're finding the services, we go on top of them and check if you have either Swagger, gRPC, or GraphQL, and then we discover the structure of your application.
F
We also can take any serverless function and discover it, on-prem but also from the provider, so again, full integration, and we're using etcd for storage and so on. OK, so that's what we did, right; it's kind of simple. Now you have this hybrid app store, and you can have a combination: some of it monolithic, some of it in microservices, and some of it in Lambdas. So now there's a bit of a new problem, right. OK, so we did that, that's great; now how do we fix the bug, right?
F
This is huge; this is like three different distributed applications. How are we going to do that? And that's actually not a problem only in hybrid; this is actually a problem in microservices too, because if you think about it, how do people actually debug microservices today? They don't, right? They're troubleshooting: they're looking for logs, and if it's in production it's even worse; they need to wait for the logs from OpenTracing to come in, like ten minutes. So for that, we built
F
Squash, and I want to show you how it works.
Okay. So this is a very simple microservice that we wrote, basically a calculator, an overcomplicated calculator, but for the use case it's good enough. So we have a formatter here, right; I mean, I basically can give any two numbers that I want, it doesn't really matter, and then I can either add or subtract. But what you can see: it's not working, okay. So what am I doing as a developer? Again, logs, right? But what we did is a little bit different.
F
So let's look at the application itself; I mean, at the microservices that actually make up this calculator. So this is a very simple Go microservice; you can see it here in Visual Studio Code, and it's basically doing one thing: it's serving you the HTML, and then it's taking the two parameters and sending them to another microservice, written in Java. So let's look at it. This is the other one.
F
This one is a very, very simple Java application. What it's doing: it's getting the two parameters and the value of the operation, add or subtract, and it just does that for you. Okay,
so here's what we're going to do right now. We created Squash. When I go to the command line and I write squash, you will see that I have some options: I can debug a container.
F
So let's choose this one. When I'm clicking on this one, you will see in a second what happens, and again, that's running on Docker EE in AWS: I get the list of all the pods that I can see, right. So now let's grab the first one, because service1 is this service, and then Squash basically informs me: look, you have one container on this pod.
F
This is IntelliJ, by the way, just to show that we're capable of doing any of them. So we're coming here, and I'm doing squash container; now we'll choose service2, because that's the one I will debug. I will debug this container and I will attach a Java debugger. Well, let's see what happens. That's it, I'm attached! So now the only thing that I really need to do is to go back to my application and try to run it again. What will happen? I'm running and debugging it in production. So let's see, I can see:
F
oh, the numbers, and they're right: 99 and 11. It's working well. I can click and do everything that I'd do with a regular debugger, and it actually is a regular debugger; we just did the piping for it. And now, when I do Next, what will happen? It will jump to the other one, because I put a breakpoint there. So now it's running there, right, and we can see that it's actually working: I can get the formatter, op1 equals 99, that's good.
F
So I just change it at runtime: coming here, and I'm changing it to false, and what you can see is that it just changed the value at runtime. When I click Next, it runs to the other one, jumps to the other one; when I click Next and I go to my application, you will see that it fixed the problem. So now I can push it; now I know that it's working. So this is what Squash is; again, it's open source, go check it out.
F
What we discovered is that there's actually quite a lot to do. I mean, look: we need to create a resolver for every field, and we need to do validation and caching and logging and metrics and security and aggregation. And suddenly we realized that we could be doing all of this with Gloo. So if Gloo is giving us the strength of Envoy, what we can do now is basically just leverage this amazing technology.
F
We created a caching filter already for that, so we have all of this, and we can do the resolvers: the resolver is always going to go through Gloo. And now I can actually create a GraphQL server that is codeless: zero code, only configuration. So now, not only are we actually gluing all your applications, we're also gluing all the data of these applications, and that's really, really powerful. And it's actually not only this; there is something else that is very interesting in the way GraphQL is working.
F
The way the query works is that you have a function that is invoked, and the validated return value of this function goes to the next field, which is kind of a flow. So if you think about it, not only is it actually gluing your application and gluing your data, it actually also gives you the ability to do a flow, like a functional pipeline, like Lambda step functions, for all the environments: monolithic, microservices and serverless.
F
Okay, so we should be quick, because we're totally out of time. So basically, this is the landing page for the Gloo GraphQL story; we're calling it Qloo. When you actually open it, you can go to a playground, and you can seriously just play with it, right: if I'm putting in some value that is not working, I will get an error, I will get a validation. I can go and actually query my environment and all the data in my environment for free; I can get the parameters.
F
I can change it to a different one and have it bring me whatever I want, and it's all going to work. So now you actually own the data, okay.
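A playground query like the one being demoed might look like this. The fields are hypothetical; they depend entirely on the schema generated from the discovered services.

```graphql
# Hypothetical query stitching data from two discovered services:
# the pet list from the monolith, the contact details from the
# Lambda-backed resolver.
{
  pets(limit: 2) {
    name
    status
  }
  contact {
    email
  }
}
```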
So let's just go quickly; we're totally out of time, okay. So the code: this is what we open sourced today. So go try it, it's awesome; it's really, really scalable, and the code is beautiful, it's written in Go, so just go and look at it. Go to our solo repository and try that. Yes, go do that.
F
C
B
All right, but DockerCon is not over yet; we still have a lot going on. Tomorrow we'll be doing the most requested sessions, and you can find them here in the agenda. So just show up; you don't need to book anything, we'll scan you in. You can see the sessions up here and the names of the rooms they'll be in. So even if the track doesn't match up with the room name, that's where they'll be. We'll also be re-running the Migrating .NET and Java Apps to Docker workshops, and we'll