From YouTube: Dan Callahan - My Python's a little Rust-y - PyCon 2015
Description
"Speaker: Dan Callahan
Rust is a new systems programming language from Mozilla that combines strong compile-time correctness guarantees with fast performance... and it plays nice with ctypes! Come learn how you can call Rust functions from Python code and finally say goodbye to hacking C!
Slides can be found at: https://speakerdeck.com/pycon2015 and https://github.com/PyCon/2015-slides"
As a greenfield experiment in what an ideal host for containers would look like, we can see things today that will become best practices in the next few years. For those of you from the Bay Area, green is going to be started on Monday, so be ready for that when you get back from PyCon. What are we trying to solve here? Why are people excited about containers?
A lot of worker processes on it, in some cases. If you're on, like, Amazon EC2, you can outsource this, pay money to start up an RDS or something, but for weekend projects, fine: go bind your database, say only run this on machine ID X, and if it reboots, no big deal, right? It will be out for a couple of minutes.
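A minimal sketch of what that kind of pinning can look like as a fleet unit file; the unit name, image, and machine ID below are placeholder assumptions, not taken from the talk:

    [Unit]
    Description=Postgres, pinned to one known machine

    [Service]
    ExecStart=/usr/bin/docker run --rm --name postgres postgres

    [X-Fleet]
    # Hypothetical machine ID: fleet will only schedule this unit there,
    # so a reboot means a few minutes of downtime rather than rescheduling.
    MachineID=2f4f80ea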
Things like Postgres are not cluster aware. So if you need Postgres, you still need to babysit it; there's great room for innovation there. But at the end of this, it's like, we've done it, right? We have a platform that's self-updating and self-organizing and self-healing, and we did it by using an operating system that automatically updates, using atomic updates. We put them in isolated containers.
audience: I wonder if you could say what the experience is like running CoreOS in a Mac development environment. Does it suck?

It kind of sucks. So one of the premises is that you're going to share your kernel between all your containers, and if I'm using a Mac and trying to run things on Linux, that won't work. So you still need a virtualization layer. Not ideal.
It's totally fine for testing. There is a Vagrant CoreOS recipe that makes it dead simple to spin up three of these things and play with them, but you're still booting a VM to then go run other VMs, because you can't do this natively on your desktop. If you're running -- if you're running Linux, you're way ahead of the game.
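If that recipe is the coreos-vagrant repository (an assumption; the talk doesn't name it), the three-machine setup is a single setting in its config.rb:

    # config.rb in a coreos-vagrant checkout (assumed recipe):
    # how many CoreOS VMs Vagrant should boot.
    $num_instances = 3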
It is what it is. We're deploying Linux; you have to deal with it somewhere.

audience: And giving it an enormous amount of RAM. You give the Vagrant container an enormous amount of RAM and walk away.
CoreOS doesn't take a huge amount of RAM, and it boots really quick, but you have to be running the containers on Linux. Obviously there are others; Microsoft just announced a containerization solution for Windows earlier this week, but containers aren't portable across operating systems.
But you can't take that container, like you could with a VM, and run it on Windows.
audience: Hey, Dan, great talk. Quick question. If you have a service that depends on having something on disk persistently, and you want to be able to start this service on any sort of arbitrary node, how are you going to create a file system which, say, is shared across those instances, so that this service will have access?
audience: Seems like Kubernetes is doing things that fleet is doing. Is there a difference? Are you moving towards Kubernetes, or what's the --

So, one point of clarification: I'm not those guys. I work for Mozilla; I'm just a hobbyist in this space. But I did talk to Kelsey Hightower, who works for CoreOS, before I gave this talk, and the sense I got from him was that fleet is an excellent basic scheduler, and if you need Kubernetes, you know it. You get 5% better allocation of your resources,
you save millions of dollars in the power bill: that's when you move to Kubernetes. But fleet itself is a great primitive, so you could use that to bootstrap Kubernetes on the cluster. Kubernetes has momentum; Google has tons of engineers on it all the time, but it seems to be geared towards larger-scale deployment.
audience: So you can see -- not you, but in general -- fleet and Kubernetes existing side by side, basically?

Absolutely.
audience: This is a brief follow-up on an earlier question.
Have you experimented with Amazon's Elastic Block Store as a means of persistence with a database server like Postgres or anything else? Do you feel that's ready for production yet, or is it more of an "it might work if you try it" sort of thing?

It's more of a -- it should work within the constraints of EBS, but the things I'm doing, I'm working on hobby projects. This stuff, everything I showed you, will scale to hundreds or thousands of servers; I'm caring more about, like, three, because I don't want to deal with that.
One email once a month where it's, hey, there is a hardware problem on your server, we've rebooted -- man, my stuff was offline. So I'm trying to fix that at a low level, and I find it interesting. This sort of technology does scale down to that pretty well, but I'm putting my stateful stuff in RDS on Amazon.
audience: Okay, thanks.

audience: What you've been providing is really interesting, and I would like to know how this fits
-- with a configuration management tool such as Salt or Puppet, where some companies might have invested in manifests or states. How do you see the transition? Because, like, the transition between something which is declarative, where you have all your configuration, to such a volatile thing?
Right. So the CoreOS platform itself, your system, is mounted read-only.
You can't really change the configuration. I mean, you can obviously add things to /etc and that, but in a certain way it obviates the need for tools like Chef, Puppet, SaltStack, Ansible, because you're not trying to configure the machines. You're still trying to build a container, but the tools that exist for that, the Dockerfiles,
don't have the configuration convergence, things you usually get with those sorts of tools. So you might get rid of them, which is kind of terrible, but getting rid of stuff is also kind of good.
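To make that concrete, a minimal sketch of the bake-it-into-the-image style; the base image, paths, and entry point are illustrative assumptions, not from the talk:

    # Configuration is copied in at build time, so there is no live
    # machine for Chef/Puppet/Salt/Ansible to converge afterwards.
    FROM python:2.7
    COPY app/ /srv/app/
    COPY config/app.conf /etc/app.conf
    CMD ["python", "/srv/app/main.py"]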
audience: So if you have, like, a thousand application servers and, you know, then five etcd nodes, it doesn't seem like a big deal. But if your scale is three app servers and you need a cluster of three to five etcd things, it doesn't seem to make sense. So if you're in production, but with a low number, does that -- is that how they use it?
Something -- you have to take care to inform all the other nodes that this one is no longer something they should be looking for, because if something else shows up, then you have a really weird system in the part that does consensus, right? Because it's like: I have four things; I thought I had three things; I think it's weird. But if you're just standing up a couple of long-running VMs to a fixed number, that configuration works great.
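A sketch of that bookkeeping with etcdctl's member commands; the member ID shown is hypothetical:

    # Tell the cluster a dead member is gone for good, so the expected
    # membership shrinks instead of waiting for that node to return.
    etcdctl member list
    etcdctl member remove 91bc3c398fb3c146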
Traditionally, you don't want to turn on all that many etcd nodes; it starts degrading after nine. I think in the Google Chubby paper, they had five machines handling hundreds of concurrent clients. So something like that really does scale enormously; something like that is fine if you're used to a traditional, long-running VM.
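For intuition about those cluster sizes, the majority-quorum arithmetic is easy to check; a minimal sketch in Python (the degrading-past-nine figure above is the speaker's observation, not derived here):

    # A write needs a majority of members, so a cluster of n members
    # tolerates n - (n // 2 + 1) failures. Note that 4 tolerates no
    # more than 3 does, which is why odd cluster sizes are the norm.
    def quorum(n):
        return n // 2 + 1

    def tolerated_failures(n):
        return n - quorum(n)

    for n in (1, 3, 4, 5, 9):
        print(n, "members: quorum", quorum(n), "- tolerates", tolerated_failures(n))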
Time for one more?
audience: So you made a comment, and I've heard it a number of times, that this automatically updated and saved you from a number of security problems, but it didn't.

Yes, I think I know where you're going, and this is great. Thank you.
audience: Because your userland is where libssl lived, and none of the libssls on your cluster got updated when everything rebooted.
Spot on. So this would help you with things like kernel-level vulnerabilities, but you still have to maintain your userland, and so what you want to do is make sure that you're rebuilding your Docker images regularly, redeploying them, and things like that, once fixes come out.
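A sketch of that rebuild-and-redeploy step; the image name is a placeholder:

    # Rebuild without the layer cache so updated base-image packages
    # (say, a patched libssl) actually land, then push the new image.
    docker build --no-cache -t example/app:latest .
    docker push example/app:latest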
The nice thing is, this gets you away from having to manage both the host system and the application system with a lot of care, right?