Keynote: Kubernetes Serverless Present and Serverless Future – Brendan Burns, Distinguished Engineer & Co-Founder of Kubernetes, Microsoft
For more info click here: https://sched.co/FzDs
All right, thank you so much. This is titled "Where are my nodes?" It's something we wonder every day. And the first thing I wanted to do is to just say thank you. It's been five years, more or less to the day, since this original idea occurred to myself and Joe and Craig in Seattle. How many people are going to join us in Seattle? Yeah?
All right, it's my hometown, so it has a special place in my heart. But to think that five years later we'd be here at the first KubeCon in China, all of you coming here to talk about Kubernetes and, more importantly, to base your businesses or your projects on top of this software that was really just an idea, is incredible, and I just wanted to say thank you.
And so, when I started out, I always used to ask a question: how many of you have heard of containers? I don't think I need to ask that question anymore. And even now, I used to ask how many of you have heard of Kubernetes, and I think I don't have to ask that question anymore either, and that's really great.
So what I'd like to ask instead is: how many people out there are using Kubernetes in production today? All right. How about on-prem versus in the cloud? A little bit of both, all right, cool. How many people are running Kubernetes in China? A couple of people out there, awesome. Well, thank you so much, everybody who's doing this, and I hope in a year, or in nine months when we come back to Shanghai, we'll see more hands, from the things that you've learned today.
Well, as I said, I'm going to talk about serverless Kubernetes. I'm going to talk about some ideas about the future and where I think we may be headed. I wanted to start by talking about where we came from, and what the role of the Kubernetes API and the Kubernetes project was intended to be, really, from the beginning. We had this notion that we were going to be transforming cloud APIs from being focused on physical things and virtualization of physical things (virtual machines, virtual networks, virtual whatever else you want to add after the "virtual") to more developer-oriented, application-oriented primitives. And so the Kubernetes API is really intended to be an application-oriented API. But the truth of the matter is that the machines were still there, and as we built the API, we actually did a lot of thinking about the machines.
But I think that in doing so we didn't really hide the machines; we just sort of made them someone else's problem. And that's, I guess, a step forward, because it allows operations to separate, and to focus on the machines or to focus on the application. But in reality, I think that what we really want to be thinking about is how we move towards a world where the machines are gone, where, from a Kubernetes user's perspective, you really don't ever think about the machines.
They really are not a piece of the things that you think about. And so hopefully, for the rest of the talk, I'll give you an idea of what that looks like, how we might get there, and what some of the challenges and open questions are. Because the truth is that, although the API was built to give you this application-oriented perspective, this API that was focused on your containers and your distributed systems and not on your nodes, the nodes are still very much there in Kubernetes as it stands today.
"Serverless" was, if anything, one of the first buzzword-compliant words that I heard. These words come to mean more than the word itself means, I guess, at some level, and I think it's important, when these words become these kind of amorphous, vaguely understood things, for us to take a step back and actually say what we mean when we say this word. "Container" was like that too: people would say "container" and it meant all these different things. So what does "serverless" really mean? I think when we're thinking about serverless, it's important to think about what exactly we're talking about.
Are we talking about a programming model? Are we talking about how I take my code and get it into the cloud? (I think, in some ways, because it was the birthplace of serverless, functions-as-a-service has become very, very closely tied to the idea of serverless.) Or are we talking about taking containers and running them in a world where there are no machines, where the server is gone, where there truly is no server? I think distinguishing between those two very different use cases is an important first step in how we think about how we bring serverless, and the concepts behind serverless, to Kubernetes. And in fact, we can have a serverless Kubernetes API today, right? When you use a managed, cloud-provided Kubernetes, the Kubernetes API is effectively serverless. You don't see the master nodes that are responsible for the API server.
You sometimes don't even see the processes themselves that are supplying the API server. You're starting to consume the Kubernetes API as a serverless offering, and that's great, because it means that you don't have to deal with management. You don't have to deal with the complexities of upgrading. You don't have to have an SRE team that is responsible for keeping that cluster alive. There's a whole bunch of simplifications that allow you to focus on the application that you're building on top of that Kubernetes API. But serverless has a bigger meaning than that.
It could mean a serverless database: I can hide the applications themselves, the implementation of them, especially with storage, behind the same kind of managed interface. And likewise, when I'm thinking about programming models, I can layer programming models on top of Kubernetes, right? So I think this is why it's so important to distinguish between serverless where the machines are hidden, serverless where you don't have any operations, and serverless as a programming model.
That is about all I'm going to say in terms of programming models, because, while it's interesting to explore how we build these levels of abstraction on top of the Kubernetes API, it actually doesn't require very much from Kubernetes itself. They take the API as is, they build up from it, they provide new programming models, and developers, hopefully, are happy. So I think the more interesting question is: how do we deal with serverless infrastructure? Serverless infrastructure looks like this: in a more traditional model, you have a container, and it's running on a machine. Maybe it's a virtual machine, maybe it's a physical machine, but it's running on a machine, and you are aware of the machine that it's running on. In most cases, you're managing it.
Sometimes someone else is managing it, but you're very much aware that there's a machine there. As opposed to the notion that you can have a container, and it is the only thing you think about in the cloud. You say: hey, cloud, I'd like you to run this container. That's the extent of my infrastructure; I don't see any other infrastructure in the cloud, except for the containers that I choose to run. And so I think this notion of serverless infrastructure, serverless container infrastructure, is something that's becoming increasingly popular.
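To make that concrete, in Kubernetes terms the container really can be the whole declaration. A minimal sketch (the name, image, and resource numbers here are illustrative, not from the talk):

```yaml
# A pod that declares only what the application needs: an image,
# a port, and resource requests. No machine appears anywhere in it.
apiVersion: v1
kind: Pod
metadata:
  name: hello-web          # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25      # illustrative image
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
```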
We see all sorts of different versions of it popping up in various public clouds. And so, if this is the building block, if, instead of having virtual machines, we have serverless containers as a building block, how do we make that go and work with this Kubernetes API that has been built with nodes underneath? I think
the first thing to consider is that Kubernetes actually supplies a wide variety of different tools to different people, and in particular, I want to draw a distinction between orchestration and scheduling. I think most people, when they're thinking about Kubernetes, are actually consuming the orchestration APIs. They're consuming an API like a scheduled Job or a CronJob that runs a container on a particular schedule. They're consuming things like service load balancers that, you know, create cloud load balancers and bring traffic down to their containers.
They're consuming things like ReplicaSets that replicate containers out to a prescribed number, or that are even autoscaled in response to load. They're thinking about something like a Deployment that is busy moving from my green container image to my blue container image, or from v1 of my software to v2 of my software. These are all orchestration APIs, and what's important to note about all of these APIs is that they are basically divorced from nodes. There is really nothing in any of these APIs that asks users to think about nodes.
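As a sketch of what those orchestration objects look like, here is a Deployment mid-rollout and a CronJob; the names and images are illustrative, and notice that neither spec mentions a node anywhere:

```yaml
# A Deployment: desired state is expressed as replicas and images.
# Rolling from v1 to v2 means editing the image; no nodes appear.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                       # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example.com/my-app:v2 # moving from v1 to v2
---
# A CronJob: runs a container on a schedule (daily at midnight).
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report               # hypothetical name
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: example.com/report:v1
```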
I think, as we move towards this serverless infrastructure, it becomes very clear that the scheduling part is the part where we have to do some thought, because the orchestration APIs, I believe, are the place where people are interested; that's the use case that the end user is interested in. Scheduling happens to be an implementation of those APIs, but in a world of serverless infrastructure, scheduling may not actually be something that Kubernetes is responsible for. And so, when you think about nodeless Kubernetes, the nodes disappear.
The scheduler, in many ways, disappears, and you're left with the orchestration APIs and serverless infrastructure to run your containers. So what we're doing is marrying this up with the serverless control plane that's already in the cloud. All of the clouds have already figured out how to run a serverless Kubernetes API, and so, as a community, hopefully we can get to a place where the same can be achieved once the cloud has taken those machines over. And this is especially important when we think about portability, when we think about hybrid cloud, where a user may be running physical machines on-premise. It's very hard to take physical machines and make them serverless; it's only something that can be present in the cloud. And so when we think about Kubernetes and how we bridge these gaps, it has to be a conformant Kubernetes.
It has to be a Kubernetes where an application runs exactly the same whether it's running on physical infrastructure, where the nodes are very much there, or it's running in the cloud, where it's truly serverless, and the experience has to be maintained. Likewise, things like service load balancers need to function correctly, things like logs need to function correctly; the entirety of the Kubernetes experience needs to stay consistent. So how do we think about doing this? (Oh, sorry, first time through this deck.) I think what is interesting, however, is that those same orchestration APIs are extremely useful for other things that might not traditionally have been considered Kubernetes nodes. In particular, a Deployment knows how to do a rolling update from one version of software to another, with health checking and pausing and other sorts of safety guardrails. Well, it turns out that when you're deploying things to IoT devices, that process is very, very useful. So how can we take that orchestration API, which people know how to use and which is beneficial to them in a world of running reliable services, and bridge it to IoT, right?
Clearly, the IoT device is never going to be a node in the Kubernetes cluster, in any real sense of what Kubernetes thinks about when it thinks about nodes. So when we start thinking about nodeless Kubernetes, we start opening the door to these kinds of use cases, and that's an awesome thing.
Finally, it actually allows you to bridge the gap to other kinds of nodes. The kubelet itself is very tightly bound to Linux, and now to Windows as well. If you wanted to bring it to FreeBSD, if you wanted to bring it to another kind of operating system, you could actually, you know, use that as effectively a virtual node in your Kubernetes cluster as well, because you don't have to deal with the operating system details, the machine management details, of what the cluster looks like.
So hopefully, at this point, I've painted you a very beautiful picture and we can all go out and execute into this serverless future. Well, unfortunately, it's not quite that way, right? There's a lot of open questions that we need to answer and, as importantly, there's a lot of open questions that we need to answer together as a community. To give you an idea of the flavor of this, I mentioned earlier about scheduling: where does the boundary of scheduling lie in a world of serverless infrastructure?
Clearly, when there are machines there, we have a scheduler. It's responsible for things like affinity; it's responsible for placing, you know, two containers on the same machine, or spreading two containers across two different machines. In a world where the machine isn't there, what is the job of the scheduler? Do we want to just pass constraints down into the serverless infrastructure and have the serverless infrastructure take care of those constraints? Do we want the serverless infrastructure to expose some details about how it's implemented? We've seen the same concerns happen in the cloud, where clouds have had to surface some details about, well, perhaps what rack or what power domain a particular virtual machine is running on, so that Kubernetes itself can actually understand the failure domains that are present in the cloud. I think this is an important area that we're going to have to continue to consider as we go forward.
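For reference, this is the kind of constraint a scheduler handles today, and that a serverless backend would somehow have to honor. A sketch using pod anti-affinity to spread replicas across failure domains (the names and labels are illustrative):

```yaml
# Anti-affinity: never co-locate two pods labeled app=my-app in the
# same zone. With real nodes the scheduler enforces this; with
# serverless infrastructure, something below the API would have to.
apiVersion: v1
kind: Pod
metadata:
  name: spread-me                  # hypothetical name
  labels:
    app: my-app
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: my-app
        topologyKey: topology.kubernetes.io/zone
  containers:
  - name: my-app
    image: example.com/my-app:v1   # illustrative image
```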
Similarly with networking and a lot of the service infrastructure: how do you do DNS? How do you do all the iptables work? How do you do all of the things that make Services and proxies work inside of Kubernetes? These are going to be important areas that we're also going to need to explore. And the good news is we're starting to explore these ideas, right? We're starting to explore these ideas in the context of the Virtual Kubelet project. It's an open-source project up on GitHub; it's been around for about a year. It's worth noting this is not the only answer; there are other options for building a nodeless Kubernetes.
This is one implementation that has allowed us to start experimenting with the ideas, to see where the problems are. It's Apache 2 licensed, it's community-led, and we have providers now for Azure Container Instances, Alibaba Cloud Elastic Container Instances, Hyper.sh, Amazon Fargate, VMware, and others. It's a very vibrant community starting to think about how we build nodeless Kubernetes.
So if you're interested, I invite you to come along and join in. I want to give you a little bit of an illustration of how it works. We have serverless Kubernetes API servers, for example in the Azure Kubernetes Service or in any other public cloud Kubernetes service. We have the serverless infrastructure, and we have the Virtual Kubelet in between; it's just a process. The very first thing that it does is register a virtual node with the API server. It says: hey, here's a pretend node. It doesn't actually exist anywhere, but Kubernetes, you need to think about nodes, so here's a node. Eventually, I think, we'll get to a world where the node actually isn't in the Kubernetes API, or is optional in the Kubernetes API, but we're not there yet, and so for practical purposes we need to inject that virtual node into the API server.
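The "pretend node" is an ordinary Node object in the API. A simplified sketch of what such a registration might look like (field values vary by provider; the taint shown follows the convention the Virtual Kubelet project uses, and the capacity numbers are illustrative):

```yaml
# A virtual node: it exists only in the API server. The taint keeps
# ordinary pods off it unless they explicitly tolerate it.
apiVersion: v1
kind: Node
metadata:
  name: virtual-kubelet
  labels:
    type: virtual-kubelet
spec:
  taints:
  - key: virtual-kubelet.io/provider
    value: azure                # provider-specific value
    effect: NoSchedule
status:
  capacity:                     # advertised, not physical
    cpu: "1000"
    memory: 4Ti
    pods: "1000"
```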
At that point, the user comes along and says: hey, API server, I'd like to schedule a container. And then the container is assigned to that virtual node. The Virtual Kubelet sees that that assignment has happened, that the container has been scheduled, and the container is placed onto the serverless infrastructure. So the Virtual Kubelet at this point is really playing the role of mediator between the serverless container infrastructure and the Kubernetes API server.
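To land a pod on the virtual node, the pod selects it and tolerates the taint that the Virtual Kubelet places on its node. A sketch (the label and taint values follow the project's convention and may vary by provider; the pod name and image are illustrative):

```yaml
# A pod directed at the virtual node: the nodeSelector picks the
# virtual-kubelet node, and the toleration allows scheduling there.
apiVersion: v1
kind: Pod
metadata:
  name: serverless-pod        # hypothetical name
spec:
  nodeSelector:
    type: virtual-kubelet
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
    effect: NoSchedule
  containers:
  - name: web
    image: nginx:1.25         # illustrative image
```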
You can try this out today. It's a really fantastic way to get a sense for what serverless Kubernetes infrastructure, what nodeless Kubernetes infrastructure, might look like. You can create a Kubernetes service cluster.
The Azure Kubernetes Service is available in 15 regions around the world, and you can see the various green checkmarks in the various geographies, so we can deliver a worldwide solution for you, Kubernetes anywhere. But you'll notice there's one region that is absent, and so that's why I'm excited today to say that we're actually in 16 regions around the world: the Azure Kubernetes Service is now available in China, in the Azure China region. It's in a private preview.
But if you're interested in a managed Kubernetes experience, an experience where the operations are handled for you, you can now use that via Azure in the Azure China cloud. So please go check that out, and you can, you know, even go use the Virtual Kubelet on it. Come see us at the booth if you want to talk more; there's "Learn AKS" if you want to learn more about the Azure Kubernetes Service. Here's a bunch of the open-source projects that we work on. Thank you.