Keynote: Kubernetes Project Update - Janet Kuo, Software Engineer, Google
https://sched.co/MReA
So hi everyone, I'm Janet Kuo. I'm a software engineer at Google and co-chair of this conference, and I'm also a project maintainer of Kubernetes. So today it's very nice to be here to give you an update on Kubernetes.
So the other day I was browsing through my social media, and some of my friends were sharing their memories. Memories is a feature on social media that surfaces your life events, your big moments, to reflect on.
So one of my friends shared that she had her first job three years ago, and another friend moved to a new city last year. So I started wondering: what if Kubernetes were a person? What would Kubernetes' memories look like? So today, in celebration of Kubernetes' five-year anniversary, let's have a look back at Kubernetes.

Sixteen years ago, in 2003, Google created Borg. Most of you should know that Borg is the predecessor of Kubernetes. Borg is Google's cluster manager; it supports high-availability applications and achieves high utilization.
Today, Borg is still Google's primary cluster management system because of its scale and robustness. Then, in 2006, the Borg developers introduced process containers, because managing bare processes couldn't scale. Process containers are what we know today as cgroups, and cgroups became the foundation of container technology. Then, in 2008, Linux Containers (LXC) adopted the container terminology.
Linux containers are what Docker was built upon. Then, ten years ago, in 2009, Google created the Omega project with a desire to improve the Borg ecosystem, and a lot of Omega primitives eventually made their way into Kubernetes. For example, Omega's scheduling unit eventually became the Pod in Kubernetes, and some other examples are forgiveness and disruptions, and taints and tolerations.
Now, let's fast forward to six years ago. In 2013, Docker was open sourced by dotCloud. Docker has done a great job of making it super easy to run, configure, and share containers on a single host, and the next step was to make it just as easy to configure and coordinate containers across multiple hosts.
So we needed a great cluster management system, something like Borg. With that in mind, by the end of 2013, Project 7 was proposed inside Google: to create an open-source cluster management system, a container orchestrator, and to help lay the foundation of the future cloud. You may wonder about the name of Project 7. It actually comes from a Star Trek character, Seven of Nine, who is a very friendly Borg.
Later that year we had our first European KubeCon, in London, and then we started getting industry adoption, and we saw more people running Kubernetes in production at scale. Two years ago, in 2017, the custom resource definition API, or as many people call it, CRD, was introduced. CRD is a feature that allows you to define your own Kubernetes-style API, so it's now a very popular way for people to extend, customize, and build things on top of Kubernetes. Some other important Kubernetes primitives also arrived, such as RBAC and the workloads APIs.
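As a rough illustration of defining your own Kubernetes-style API with CRD (the `CronTab` resource below is a hypothetical example in the style of the Kubernetes documentation, not something from the talk), a minimal CustomResourceDefinition manifest from that era, when CRD was still served from the beta API group, might look like this:

```yaml
# Hypothetical CRD sketch: teaches the API server a new resource type.
# apiextensions.k8s.io/v1beta1 was the CRD group/version in the 1.14 era.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  versions:
    - name: v1
      served: true    # this version is served by the API server
      storage: true   # and used as the storage version in etcd
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
      - ct
```

Once applied with `kubectl apply -f`, the API server serves the new `crontabs` resource under the `stable.example.com/v1` group, and `CronTab` objects can be created, listed, and watched like any built-in resource.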
And the total number of commits and contributions has risen by four times since Kubernetes joined the CNCF. Many global organizations are using Kubernetes in production at massive scale. Kubernetes adoption spans multiple industries, including e-commerce, retail, gaming, IoT, AI, banking, social media, and a lot more. From this Kubernetes look back, you know that Kubernetes wasn't an overnight success.
It's an over-a-decade success. And here we are, at one of the biggest open-source developer conferences ever, celebrating Kubernetes' five-year anniversary. At this age, Kubernetes is getting very stable and mature. The latest Kubernetes release, 1.14, has more enhancements moving to stable than ever: for example, out of 31 enhancements, 10 are moving to stable.
One focus is around extensibility. Some of the Kubernetes extensibility features are not stable yet. For example, CRD is still beta, and compared to the built-in Kubernetes APIs there are still some missing pieces, for example versioning and validation. So I expect the Kubernetes extensibility features to continue to evolve and eventually mature.
The second focus is around scalability. Kubernetes scalability is not a single number; it's not just about running 5,000 nodes in a cluster. It's a combination of the number of nodes, the number of pods, namespaces, secrets, and services, and many, many things running in your cluster. It's actually a multi-dimensional problem, and sometimes the dimensions are not independent, and when you fix one bottleneck you usually just expose the next one.
One example is the node status. It's updated by the kubelet every 10 seconds, and every time the status is updated, the full Node object is stored in etcd, even though the node status hasn't changed at all. For a 5,000-node cluster, that means 300 to 600 megabytes per minute written to etcd, and this often overloads etcd. To solve this, we introduced a new API called node lease: the kubelet refreshes its lease with the same frequency, but as a lighter-weight signal for node healthiness.
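The node lease mechanism described above can be sketched as follows; the field values are illustrative assumptions, but the shape follows the `Lease` type in the `coordination.k8s.io` API group that the NodeLease feature introduced, with one small `Lease` object per node in the `kube-node-lease` namespace:

```yaml
# Illustrative sketch of a node's Lease object. Instead of rewriting the
# full Node object every 10 seconds, the kubelet only bumps renewTime here.
apiVersion: coordination.k8s.io/v1beta1
kind: Lease
metadata:
  name: node-1                # matches the Node's name (assumed node name)
  namespace: kube-node-lease  # dedicated namespace for node leases
spec:
  holderIdentity: node-1
  leaseDurationSeconds: 40
  renewTime: "2019-05-21T10:00:00.000000Z"
```

Because each heartbeat now touches only this tiny object rather than the full Node object, the etcd write volume from node heartbeats drops dramatically.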
Our last focus is reliability, for example fixing cascading failures holistically. When a user reports that nodes are being killed and the whole cluster is down, this is known as a cascading failure. An issue we saw in the past is that when a bad pod is crash-looping, it crashes very, very fast, and its logs will eventually fill up the local disks on the node and eventually cause the node to fail, even once the bad pod is killed.
So how do we solve this? Currently, we work around it by not making the workload controllers think that they have to recreate those pods. Instead, we just put those bad pods in a waiting state and let them sit on the nodes, so that the workload controller will not try to recreate them; but this hides a useful signal.
We need to keep improving Kubernetes, and we need to do it together as a community, whether it's direct contributions to Kubernetes, or new frameworks built on top of Kubernetes or in the ecosystem, or upstreaming your internal tools or fixes for everyone to use, or just sharing some real-world success or failure stories that everyone can learn from. From this morning's keynotes, you know how important it is to contribute to Kubernetes, so I hope everyone can contribute. Thank you.