From YouTube: Nana goes to KubeCon
And we are live! First of all, welcome everyone, and thank you for joining me. Let's see how many people will actually join the live stream. A quick disclaimer right away, to get it out of the way: this is an official live stream of the CNCF and, as such, is subject to the CNCF code of conduct. So please do not add anything to the chat, or any questions, that would be in violation of that code of conduct. Basically, just be respectful of each other, your fellow participants, presenters and so on.
So, as I said, thank you to all of those who joined the live stream. I think this is going to be a really interesting session. We have a lot of interesting topics to discuss around my favorite subject, which is Kubernetes, and I'm also very much looking forward to your questions. So let's start the presentation right away. Awesome.
Some of you probably already know me from my YouTube channel, TechWorld with Nana, where I talk about Kubernetes and many other DevOps tools. For those who don't know me yet: I am Nana, I'm a DevOps engineer, and I dedicate a lot of my time to creating free content as well as online courses and educational programs for those who want to get started in DevOps and learn the different tools in the DevOps space, either because they're thinking about changing to a DevOps career or because they have to learn these tools for their work.
I will also be very happy to connect with you on any social media platform, and I think we have the link to all my social media profiles in the chat, so you can find it there. As I said, one of the most fascinating tools for me personally, one that I have focused on as an online educator for the last few years and also as a practitioner, is Kubernetes. And KubeCon is just around the corner.
So you will get a nice overview of the whole Kubernetes landscape: all the tools in the Kubernetes world, all the concepts, and everything else that is happening in this space. I also hope this can better prepare you for KubeCon, because most of the talks and technologies you will see on the schedule will belong to one of the categories that we're going to go through today. Also, share in the chat whether you are actually going to KubeCon and what your expectations are, or what your experiences were at previous KubeCon conferences. And with that overview, let's jump right in and start with the common truths that everybody already knows.
We know that Kubernetes is gaining popularity at a crazy speed. I have seen this with the companies that I work with and support, and with the people who follow my channel; through their feedback I see that it is being adopted by more and more companies and projects. Logically enough, we also see this whole huge ecosystem evolving around it, with lots of startups and new projects implementing solutions for different problems and different
things in the Kubernetes world. And, as you know, Kubernetes is not the easiest tool to learn, maintain or set up, and it is really complex to understand. As I said, it's difficult to set up, but it's also difficult to maintain for the teams who actually have to run and manage Kubernetes. But what I think causes most of the complexity is not Kubernetes itself, which is complex enough and its own topic.
What adds complexity to that is the integrations with all the other tools and technologies, and we're going to talk throughout the live stream about the things we need to integrate Kubernetes with. Also, because Kubernetes is a platform that applications run on, there are lots of all sorts of applications involved, from databases to web applications to monitoring and logging services, security services, etc.
There are basically lots of things happening within Kubernetes, plus we have whole lifecycles around Kubernetes, like the CI/CD pipeline that deploys applications to Kubernetes, and the CI/CD pipeline for configuring Kubernetes itself and updating the platform, maybe even the infrastructure underneath. As a result, you see all these technologies evolving around it, including a list of technologies you have probably never heard of before. And I know that for a lot of people this can be overwhelming to keep up to date with, because people usually ask me:
do I have to learn all these tools? Where do I start? How much do I actually need to learn? Where do I stop? And so on. So I thought I'd make this a little bit easier: first, try to understand what the issues related to Kubernetes are and what the main concepts are, and only after that look at which technologies are out there that actually solve those issues and relate to those concepts.
So when you schedule an application in Kubernetes, whether it's a database, a logging application or a web application, under the hood it will run as a container. So in Kubernetes you need a container runtime technology underneath. The first one was Docker; in fact, Kubernetes was originally created with Docker support integrated into the code itself.
That was logical, because Docker was the technology that made containers popular and mainstream in the first place. But over time, in parallel to Docker, more container technologies emerged that were more lightweight than Docker; containerd and CRI-O are some examples. The reason was that Docker itself is more than just a container runtime: it is used to build images, it has its own user interface and command-line interface, and more features keep getting added to make it a self-sufficient technology.
But you don't need any of that extra stuff in Kubernetes; you only need the container runtime. So preference was given to the more lightweight runtimes like containerd and CRI-O, which are just container runtimes. This change happened across the major cloud platforms: AWS, Google Cloud, Azure and so on, which provide managed Kubernetes services, actually use containerd as the container runtime, and CRI-O is, for example, used by OpenShift.
But I have to note here that when you work with Kubernetes, you don't actually need much knowledge of the container runtime itself, because even in a self-managed Kubernetes cluster, once you install the cluster, install the container runtime and set everything up, you don't need to work with the runtime anymore. It just runs and does its job
quietly in the background; you just work with Kubernetes. Another note on Docker itself: even though it is not the first choice anymore as a container runtime for Kubernetes, it remains a super popular tool because, as I said, it is used to build the images that will then run in Kubernetes as containers, no matter which container runtime the cluster uses. So again, if you see any talks at KubeCon that are about container runtimes, this is basically what they are about.
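Even though you rarely touch the runtime directly, Kubernetes does expose it through the RuntimeClass API, which lets a pod opt into a specific runtime handler. As a minimal sketch (assuming the node is actually configured with a handler called "kata"; the names here are just examples):

```yaml
# RuntimeClass (node.k8s.io/v1): the handler must match the runtime
# configuration on your nodes; "kata" is an assumed example handler.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: kata
---
# A Pod can then request that runtime explicitly:
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  runtimeClassName: sandboxed
  containers:
    - name: app
      image: nginx
```

If no runtimeClassName is set, pods simply use the node's default runtime, which is why most users never notice which runtime is underneath.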
All right, let's move on to another topic which is very interesting for me personally. As I said, Kubernetes was created as a container orchestration tool for the use case of running hundreds or maybe thousands of containers. The question now is: in which use cases do we actually have so many containers? One of the perfect use cases for this is the microservices architecture, that is, microservices applications.
This was itself a shift that happened in the application architecture world: instead of one big monolith application, we have the same application divided, or cut, into smaller, cleaner and more manageable pieces that can run in isolation. These are microservices, and these microservices often have other services that they talk to.
So we end up with loads of containers that need to be managed, and we deploy all of these on Kubernetes. And since in the microservices architecture these small apps get deployed as their own isolated containers, the challenge becomes: how do these microservices talk to each other in this Kubernetes environment? And not only that, but how do they talk to each other efficiently, with no single point of failure, securely, and with high scalability?
So even if you have thousands of services, and you scale to 10,000 microservices, all the traffic between them shouldn't slow down the whole network and break everything. All these challenges are what service mesh technologies actually address, and, not surprisingly, there are many different service mesh implementations out there. The most popular ones currently are Istio, Linkerd and HashiCorp's Consul, which you can all deploy and use in Kubernetes.
Again, these are technologies that were developed in this ecosystem, and they all have really good integrations with Kubernetes, so you can easily deploy and use them there, even though they are all tools that can be deployed and managed independently of Kubernetes as well. They solve the same challenges in different ways. And I also get this question very often: when there are 50 technologies that basically do the same thing, the usual question is, which one do you learn, which one do you use for your project? Again, as I said, these tools solve the same challenges, but in different ways.
So, depending on what approach you want to use and which features are more important for you, you have to decide at the technology level which one you want to test and try out for your application. But personally, I think you should just start with one of the tools, learn all the concepts, and then you can test another tool and compare how they work.
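To give a concrete flavor of how tightly a mesh can integrate with Kubernetes, this is roughly what enabling Istio's automatic sidecar injection looks like: you just label a namespace, and every pod scheduled there gets the proxy injected (the namespace name here is only an example):

```yaml
# Label a namespace so Istio automatically injects its sidecar proxy
# into every pod created in it; "demo" is an example namespace name.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled
```

Linkerd and Consul have comparable annotation- or label-based injection mechanisms; the concepts carry over even though the exact keys differ.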
All right, now let's continue from the microservices architecture and talk about persisting data, because whenever you are deploying applications, whether a monolith or a microservices application, you need some kind of data persistence. Usually you will have a database connected to your application, or multiple databases that persist different types of data. This could be an in-memory database, a caching database to make your application faster using cache, a persistent SQL database, or a NoSQL database.
So basically Kubernetes tells you: you know what, I don't care where you store your data physically, but I will give you an interface so that you can connect your physical storage, wherever and however it's configured, to the applications running inside Kubernetes, using Kubernetes components like PersistentVolume, StorageClass, PersistentVolumeClaim and so on.
As an administrator, you decide what kind of storage you want to use to persist the data, and that could again depend on what data you are persisting and for which database. This could be storage from the cloud platforms themselves, like Elastic Block Store from AWS, Google Cloud storage, Azure cloud storage and so on; it could be a cloud-native storage tool like Ceph or GlusterFS, which are scalable, distributed storage systems for when you want to scale your data persistence; and you can also have on-premise storage: it could be an NFS server, or a very simplistic file system storage.
So you physically go and configure the storage where the data from the Kubernetes applications will end up, and you actually have a pretty wide choice of where that storage can physically be, which gives you the flexibility to select whatever storage backend you want to persist your data with. And once the storage is configured, by you or whoever is responsible for that in your company, you can make it available in Kubernetes simply by creating and using a Kubernetes component called a PersistentVolume.
It's a YAML definition where the storage backend address and other data is configured. So in the PersistentVolume you basically say: I want storage from Elastic Block Store, which is available at this address, or from NFS storage available at this address, and this is how much storage I need. And once Kubernetes has the storage information through the PV, the PersistentVolume, you can then attach or link that physical storage to a workload in Kubernetes, again using Kubernetes-native configuration.
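As a minimal sketch of that chain, here is a PersistentVolume pointing at an NFS backend plus the claim an application pod would reference; the server address, path, and sizes are made-up example values:

```yaml
# PersistentVolume backed by NFS; server and path are assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5
    path: /exports/data
---
# PersistentVolumeClaim that a pod mounts via spec.volumes;
# Kubernetes binds it to a matching PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```

With a StorageClass and a provisioner (for example on a cloud platform), the PV half can even be created dynamically, so you only write the claim.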
So, as I mentioned, the challenge and the main takeaway here is learning about the different types of storage that Kubernetes supports, and which storage type is appropriate for what type of application data. I believe the official Kubernetes documentation, in the persistent volumes section, actually has a list of all the supported storage backends, where you can check which ones you can use. And there is also another very important thing
I want to mention here: in practice, even though Kubernetes theoretically supports all of this and everything is great, until now (and this could change later) many people decide to run and manage their databases outside Kubernetes, especially in production environments. This is because they want to avoid the headaches of setting up data persistence in Kubernetes and chaining the whole thing from the actual storage backend all the way to the application.
So they just decide: you know what, let's leave our MySQL database outside the cluster, and the applications inside Kubernetes just connect to this outside database. But I believe this will change in the near future, and there are actually lots of trends pointing in that direction, because it is going to become easier to use storage, and people are going to get more confident running stateful applications in Kubernetes and using storage and data persistence inside the cluster. So I think we're going to see that shift very soon as well.
Right, so moving on. At this point we have our modern microservices applications running in Kubernetes and the data persistence configured, but what about deploying applications into a Kubernetes cluster? Because when you have, let's say, 1,000 microservices for your project (and this is actually a very moderate number for lots of projects), all these microservices could be updated by different teams, or a handful of teams; it doesn't matter.
Obviously you don't want to deploy these microservices to the cluster individually and manually, or using some scripts. You want the whole process to be automated with a CI/CD pipeline for your applications that watches for your commits, then tests and builds the application, produces a new container image version, and automatically deploys it to Kubernetes.
So the question now is: how do you automate updating application versions in Kubernetes, or deploying new application versions to the cluster? That's where integrating Kubernetes with different CI/CD tools comes into the picture. We of course have the traditional tools like Jenkins, but also more recent ones like GitLab CI/CD, CircleCI, etc. And, as I mentioned, these tools all have integrations with Kubernetes, although some of them have better ones than others.
If you have already tried to set this up, you know that Jenkins does have an integration with Kubernetes, but since it was developed way before Kubernetes, it tries to kind of fit into this whole cloud-native world and really struggles. But some newer CI/CD tools are being developed that are purpose-built for Kubernetes: CI/CD tools that work directly with Kubernetes and are super easy to integrate, and so on.
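To make the "commit → build → deploy" flow concrete, here is a rough sketch of what such a pipeline could look like in GitLab CI; the registry, image and deployment names are all made-up assumptions, and it presumes the runner already has Docker plus kubectl credentials for the cluster:

```yaml
# .gitlab-ci.yml -- illustrative sketch only; registry.example.com and
# the "myapp" names are assumptions, not a real project.
stages: [build, deploy]

build-image:
  stage: build
  script:
    # Build and push a new image version tagged with the commit SHA.
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  script:
    # Roll the Deployment over to the freshly built image.
    - kubectl set image deployment/myapp myapp=registry.example.com/myapp:$CI_COMMIT_SHORT_SHA
```

Purpose-built Kubernetes tools replace that last kubectl step with their own controllers, but the overall shape of the pipeline is the same.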
So we see a lot of changes and a lot of new tools evolving in that area as well. And as a continuation of the topic: we have CI/CD pipelines that automatically deploy code and application changes to the cluster, but we also have pipelines that automatically apply platform changes or infrastructure changes to Kubernetes. Traditionally, we know that when managing and configuring servers, or the platforms where applications run (and by platform I mean Kubernetes in this case),
administrators would use scripts or do all these configuration, management and maintenance tasks basically manually. But if you have infrastructure where you run and manage thousands, or tens of thousands, of servers (because with Kubernetes it's so simple to scale), then obviously managing all of this manually or with scripts is not feasible. And we saw that infrastructure became more complex.
We are running much more complex applications on much more complex infrastructure; it became more virtual, cloud platforms became very commonly used and popular, and therefore we now need somewhat more sophisticated, higher-level tools that help us manage all of this, instead of just doing the work manually.
Another very important change in parallel to that, or maybe because of it, is the culture of collaboration between engineers itself. You don't just have a couple of engineers who do their own thing on their laptops and basically just remotely configure servers; you have this whole culture of team collaboration where the system engineers need to talk to the developers, security engineers and DevOps engineers, and everyone has to work together, and so on. And as a result,
they don't only collaborate on the infrastructure changes, but on the application changes as well. The best way to address this was writing code for infrastructure provisioning and configuration, which emerged into a term called Infrastructure as Code, which you have probably all heard of. That later extended to configuration as code, policy as code, X as code, whatever as code, and it was part of the team collaboration
culture switch, as well as simply a solution to infrastructure becoming so complex that you need a tool to manage 10,000 servers, and not just 10 of them. So now, instead of isolated manual changes, we have team collaboration and more transparency into what changes are actually done to the infrastructure. Again, for clarity: if you're using a cloud platform, infrastructure would be, for example, the AWS infrastructure; and then, on top of the infrastructure,
you also have the platform, like Kubernetes, that runs on the infrastructure. So you will have more transparency into what changes you are making to the infrastructure as well as the platform, and you get that by using version-controlled code to express all these changes. And as a logical continuation, because you now have code that represents your infrastructure, platform or configuration changes, we have the same kind of CI/CD pipeline for applying those changes, just like we apply application
changes using these kinds of pipelines. That process got the name GitOps. GitOps is the concept of using a version control system like Git for making any changes to infrastructure, platforms and so on, and automatically testing and applying the changes using a CI/CD pipeline
once the code is committed to the repository. And again, there are many tools that implement the GitOps concept, especially in the Kubernetes world, such as Argo CD, Flux CD and so on, and many of them are purpose-built for Kubernetes; they actually evolved specifically for this use case. Now, GitOps is a process that cannot be implemented
for any project within a day, because it requires engineers to get familiar with the tools and the concepts. I wouldn't say it's a difficult or complex concept, but it's just so different from what we have been working with that teams sometimes need time to adjust to this new way of working; so again, this goes back to the cultural shift and how teams collaborate.
So this is probably one of the challenges of implementing GitOps in projects, and that means: if you and your team, for example, plan such a transformation and want to move in the direction of GitOps, then familiarizing yourself with one of the GitOps tools will be a really good way to get started.
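As a small taste of what that looks like in practice, this is roughly an Argo CD Application manifest that tells the GitOps controller which Git repository to watch and where to sync it; the repository URL, path and namespace are example assumptions:

```yaml
# Argo CD Application: "desired state lives in Git, Argo CD keeps the
# cluster in sync with it". Repo URL and names are example values.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config
    targetRevision: main
    path: k8s            # directory of Kubernetes manifests in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated: {}        # apply changes automatically on every commit
```

The interesting inversion compared to a classic pipeline is that nothing pushes into the cluster; the controller inside the cluster pulls from Git.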
Awesome, moving on again. Now we have all our microservices applications running in the cluster, plus any third-party applications that your microservices apps are using; these could be databases, authentication services, messaging, whatever.
And finally, you have CI/CD tools that are connected to Kubernetes and do automatic configuration updates as well as application updates. So there is a lot going on inside Kubernetes and around it. And of course, the cluster itself is running on some kind of infrastructure, like a cloud platform or an on-premise data center. So you have the infrastructure level, the platform (Kubernetes) on top of it, applications inside, and then integrations with external applications, and this actually means lots of potential security issues.
Every new tool, new application, new integration with Kubernetes actually increases the attack surface; now you have a new place that could potentially become a security issue, and you need to manage security issues on multiple levels. As I said, there is the infrastructure lying underneath.
Basically, you have opened up some security vulnerabilities, so you can imagine that this creates quite a challenge for teams to secure the cluster and the applications inside. And again, not surprisingly, there are loads of tools that address different aspects of security in the Kubernetes world. I actually went through the list as well, and I was surprised to see that a lot, really a lot, of talks at KubeCon are about security in Kubernetes on different levels, whether it's integrations, the cluster itself or cross-cluster, and there are lots of tools listed there.
So this will be an interesting topic for those who are running really critical applications in their clusters and want to be confident that everything is secure and running according to security best practices. In addition to all the security concepts, we also have a new term called DevSecOps, which basically adds security solutions within the automated DevOps processes. We saw that security became more of a foreground issue, instead of just being in the background and the responsibility of the security engineers. Suddenly it became:
okay, now we have to actually integrate the security concepts and security solutions throughout our automated processes, in the whole chain, because we need to secure each and every step of the application lifecycle, from when it gets into the cluster down to the infrastructure underneath, and so on. And just like GitOps, DevSecOps is also in some ways a cultural shift in how teams of engineers work together and share the responsibility for addressing security issues, instead of saying this is just for the security engineers or the system
administrators to take care of. All right, now, Kubernetes clusters can get very complex; again, thousands of applications inside. So the question is: how do I monitor and observe things, to make it transparent to me, or whoever is responsible for Kubernetes, what state my cluster is in? So we need some observability in our cluster, but not only to identify issues when they happen, like an application not being accessible, crashing, or being under heavy load.
Or the Kubernetes cluster is running out of resources. We then have to identify what caused these issues, and we can do that very easily using monitoring tools. But that's not really the goal or purpose of monitoring tools; rather, it is to prevent these issues from happening in the first place. So we monitor the values and then alert the right people in the team to take action when there is a chance that some issue will arise. This is a very effective way of managing and maintaining your cluster, especially in a production environment, because you can utilize all these tools to avoid unexpected things happening in your cluster, and to avoid your users being the first ones to find out that your cluster has issues, instead of the team that is responsible for Kubernetes. And that's exactly where monitoring tools like Prometheus, Nagios, Datadog, etc. come in.
I personally have used Prometheus with Grafana in lots of my projects, and it really takes a lot of the maintenance work and effort off the Kubernetes administrators, because it basically automates all this issue detection for you. So again, if you're running large clusters, you may want to consider using one of these monitoring tools to make your own life easier in managing Kubernetes.
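As a small sketch of the "alert before it breaks" idea, this is roughly what a Prometheus alerting rule looks like when deployed via the Prometheus Operator's PrometheusRule CRD; the 80% threshold, names and namespace are illustrative assumptions:

```yaml
# PrometheusRule (Prometheus Operator CRD): fire a warning when node
# memory usage stays above 80% -- before the cluster actually runs out.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cluster-capacity
  namespace: monitoring
spec:
  groups:
    - name: capacity
      rules:
        - alert: NodeMemoryPressure
          # node_exporter metrics: fraction of memory in use per node
          expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) > 0.8
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Node memory usage above 80% for 10 minutes"
```

Alertmanager would then route this to the right people, which is exactly the "alert the team before users notice" workflow described above.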
When I am administering Kubernetes, I want to observe my applications as well. So, let's say with monitoring tools I find out that the cluster is running out of resources: 70% of the CPU and RAM of my whole cluster is used up and there are more applications to deploy, so I know to take action and add more resources. But now let's say everything is running, and an error still occurs somewhere.
You want to be able to see what actually happened inside the applications, in the chain that resulted in that error. And this will be especially critical when you have microservices that are all tied and connected to each other, and you basically have to retrace the request to see where the error happened, or what caused it and how it propagated throughout the application.
Now, managing the logs of one application is obviously simple: you just find the application, check the logs, see the issues in the logs, and that's it; you know the reason. But if you have thousands of microservices, you can't just manually check the logs of each application to find the issues and see if any application has problems.
I want to see all the logs, from all the applications, that happened within this time frame and have this request message or header inside; then you can see the logs across the applications and what the trace basically looks like. And for solving this challenge of managing the logs of thousands of applications in Kubernetes, there are log collector tools, some of the more popular ones being Fluentd, Fluent Bit, Logstash and so on.
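To make the log-collector idea concrete, here is a sketch of the ConfigMap a Fluent Bit DaemonSet would typically mount: tail every container log file on the node and ship it to one central store. The Elasticsearch host and index are assumptions, not part of any real setup:

```yaml
# Fluent Bit configuration for a node-level log collector; the
# Elasticsearch host/port/index below are example assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
data:
  fluent-bit.conf: |
    [INPUT]
        Name   tail
        Path   /var/log/containers/*.log
        Tag    kube.*
    [OUTPUT]
        Name   es
        Match  kube.*
        Host   elasticsearch
        Port   9200
        Index  kubernetes-logs
```

Once all logs land in one place, the "show me every log line in this time window with this request header" query becomes a simple search instead of a manual hunt across thousands of pods.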
All right, now that we've talked about what runs inside the cluster, let's see where the Kubernetes cluster itself will run. I quickly mentioned the infrastructure underneath Kubernetes, and AWS is one example, but you actually have lots of options. And again, as with any other tooling, this is also evolving; lots of new technologies and lots of cloud platforms are offering new services for this.
Now, broadly speaking, you have two options. The first one is a self-managed Kubernetes cluster, where you get several virtual machines and basically install all the Kubernetes binaries on them. You make each virtual machine into a control plane node or a worker node and join it to the cluster, and this way you can add any number of nodes to the cluster; you can extend it or shrink it based on what your application requires.
So you can flexibly scale the cluster size up or down according to how many resources your applications need. But the disadvantage of self-managed Kubernetes clusters, of spinning up virtual machines (whether in your own data centers or in the cloud) and installing and deploying everything on them, is that you have to manage the cluster yourself, including the main controlling parts of the cluster, the master or control plane nodes, where you have to install the binaries.
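For the self-managed route, the usual bootstrapping tool is kubeadm, and a minimal cluster configuration sketch could look like this (the version, load-balancer endpoint and pod CIDR are example assumptions; you would pass this file to `kubeadm init --config`):

```yaml
# kubeadm ClusterConfiguration sketch for a self-managed control plane;
# endpoint, version and pod subnet are example values, not a real setup.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
# Load balancer in front of the control plane nodes, needed for HA.
controlPlaneEndpoint: "lb.example.com:6443"
networking:
  podSubnet: 10.244.0.0/16
```

Additional control plane and worker nodes then join with `kubeadm join`, which is exactly the "make each VM a control plane or worker node" step described above.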
So you will have to take care of replicating all the master processes, and basically you will need someone on the team who knows how to administer a Kubernetes cluster. And as I said at the beginning, the challenge with Kubernetes is not just setting up the cluster; you have to maintain it the whole time. You have to make sure to upgrade it,
you have to make sure to renew the certificates, and if things break and don't work, you obviously have to fix them, and so on. And in many cases, you and your team just want to get started with Kubernetes as fast as possible. You have an application (it could be a startup) and you just want to deploy it and make it accessible for users, without the overhead and effort of going through the whole process of learning Kubernetes,
setting up Kubernetes, and all these headaches. So basically, you don't want to handle the cluster administration process yourself, and there are many projects like this, as I said. In that case, a great alternative, and this is going to be option number two, is to use a managed Kubernetes service. Lots of cloud providers, especially the major ones, actually offer managed Kubernetes services, like EKS, the Elastic Kubernetes Service, from AWS.
So when you create a LoadBalancer service, for example, in Kubernetes on AWS, AWS will automatically provision an Elastic Load Balancer, which is an AWS-native service, with all the right configurations for you. So you don't have to do that manually, which you would have to do if you have a self-managed cluster. Another example would be:
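For reference, the Kubernetes side of that load-balancer example is just a standard Service manifest; the app name and ports here are examples. On a managed service like EKS, creating this is what triggers the cloud provider to provision its native load balancer:

```yaml
# Service of type LoadBalancer; on a managed cloud cluster this makes
# the provider create an external load balancer automatically.
# The "web" name and ports are example values.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80        # port exposed by the load balancer
      targetPort: 8080 # port the application pods listen on
```

On a self-managed cluster the same manifest does nothing useful by itself; you would need to provide your own load-balancer implementation, which is exactly the manual effort described above.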
if you want to encrypt your Kubernetes Secrets, for example, you can use one of the AWS services to do that much more easily than trying to deploy something yourself and piece everything together to automatically encrypt the secrets every time they get created. So basically, you get all these extra benefits of using the cloud platform's own services as natural integrations, in addition to your Kubernetes.
Of course, a downside of that is: if you want to migrate your Kubernetes cluster to another platform, that will be difficult, because you're kind of tied in to that platform. But again, you have to decide on a project-specific basis whether that's something you will want to do in the future, or whether you're fine saying: you know what, I want to automate as much as possible and make my life as easy as possible, and basically let AWS, or any other platform, manage all of this.
And I think we're coming to one of the final topics, which is also one of the big trends. I actually saw lots of talks at KubeCon around the topic of multi-tenancy, which is a very interesting, very hot topic today. So let's see why: what is multi-tenancy in the first place, and why did it become so in demand?
Suppose you or your team of administrators set up a production-grade cluster, with all the security best practices configured and the control plane processes replicated. Again, we're talking about a self-managed cluster, where you kind of do everything yourself.
A
Or in some cases this could also be a managed service, where you have also configured monitoring and observability, and you have logging set up. So basically you have this super well-configured, well-tested Kubernetes cluster set up. Now, all of that is a lot of administration and maintenance effort, right? So in that case, it makes sense that you may want to use that same cluster for multiple projects in your company.
A
So you don't want to be setting up and managing a separate cluster for each project, because imagine what kind of effort that would be for the Kubernetes administrator team, or for each project team, to have to basically manage their own Kubernetes cluster per project. And this actually sounds logical to many people. So instead, you want to host multiple independent projects that have nothing to do with each other, individual projects within the same company.
A
You want to host them in the same cluster, this production-grade, super well-set-up cluster. So you have multiple tenants, multiple projects, in the same cluster, and that's where the term multi-tenancy comes from. But of course the challenge here is: how do you isolate the independent projects from each other in the same cluster? Because, as I said, these could be projects that have nothing to do with each other, and it could also be very important to actually isolate them, right, for security reasons.
A
How do you make sure the cluster resources are distributed fairly among all the projects' workloads, so one project doesn't basically use up all the resources of the cluster? And, of course, how do you make sure that any security issues or application issues that happen in one project, say one project team basically introduced a security vulnerability and just messed up the whole thing, don't actually affect the applications of all the other projects? So how do you configure and guarantee this isolation between the projects in the same cluster?
A
So a namespace, you can think of it as a virtual cluster, or multiple virtual clusters, that share the same control plane processes. So they're managed by the same processes, but they are isolated from each other; they're virtual clusters within the same physical cluster. And once you have these namespaces, you go a step further, and on the namespace level you configure more isolation, because a namespace on its own is only a logical separation; you need network and security isolation as well, right?
A
You want to make sure that applications cannot access each other via their fully qualified domain names, right? Or you want to make sure that resource quotas or resource limitations are set for each project. So on the namespace level, again using Kubernetes' own resources, no external applications or services needed, you can then use Kubernetes policies, and there are different types of policies in Kubernetes.
A
One of them is resource quotas. You also have security policies, you have network policies, and so on. And you can use these policies to restrict what containers are allowed to do, you can restrict what traffic from and to the applications is allowed, and using the resource quotas you can constrain the resource usage of workloads within each namespace.
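A hedged example of such a resource quota for one tenant (the namespace name and the numbers are hypothetical and would be tuned per project):

```yaml
# Caps the total resource usage of all workloads in one
# tenant's namespace, so one project can't starve the others.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-a-quota
  namespace: project-a        # hypothetical tenant namespace
spec:
  hard:
    requests.cpu: "4"         # total CPU all pods together may request
    requests.memory: 8Gi
    limits.cpu: "8"           # total CPU limit across the namespace
    limits.memory: 16Gi
    pods: "20"                # max number of pods in the namespace
```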
A
Basically, how much they can do within that namespace. And using the security policies, you can say containers are not allowed to run as root, or pods are not allowed to escalate permissions, and so on. So you can configure all of that on the namespace level.
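As a sketch of those pod-level security settings, no root and no privilege escalation (pod name, namespace, and image are hypothetical placeholders):

```yaml
# Pod restricted per the rules mentioned above: the workload
# must run as a non-root user and cannot escalate privileges.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-app
  namespace: project-a                   # hypothetical tenant namespace
spec:
  securityContext:
    runAsNonRoot: true                   # containers may not run as root
    runAsUser: 1000                      # run as an unprivileged user
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false  # block permission escalation
        capabilities:
          drop: ["ALL"]                  # drop all Linux capabilities
```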
A
So, as a result, you'll have all these namespaces; each tenant will get its own namespace, and they will share the control plane resources, but they will have their isolated environments, with their own resource usage, network, and security isolation for the workloads. So that's basically how multi-tenancy will work. For many projects, though, what Kubernetes itself offers is not enough isolation.
A
So there are many new technologies that actually go a step further to really make sure the isolation is there between these tenants. So we have the concept of virtual clusters, which is implemented by different technologies, one of them being Loft, for example. Or we also have concepts like multi-cluster, multi-tenancy within multi-cluster, and so on, right?
A
We have multiple Kubernetes clusters basically connected to each other. And again, if you go through the list of the talks at KubeCon, you will see a bunch of topics about these concepts, which again shows how important and how popular this topic is. All right, so let me check the questions again.
A
I see one that says: what would be your favorite talks, which ones are you looking forward to attending? I haven't decided; for me it's really difficult, because I found most of the talks very interesting. I personally like talks that are more concept-oriented, so they're not specific to the technology itself.
A
I want to know, okay, what are the concepts, and how are different technologies trying to solve them? Or, if I have knowledge of a concept already, then I would check out, say, a new tool that's out there. But I haven't actually decided exactly which talks I'm going to attend.
A
Multi-cluster topics are definitely interesting, so I'm going to be checking some of them out. And security is something that I personally do not have much expertise in, and that could also be something to look into, because, as I said, I was really surprised to see how many security topics there are on the schedule. And I also recognize this is a very important thing, because you have so many things happening there, and you have to make sure that you don't compromise security.
A
Yes, that is absolutely true. I see a comment or message: we can also restrict apps in one namespace from accessing a service in another namespace using network policies. That is absolutely true.
A
You can configure network policies. So by default in Kubernetes, the network will be allow-all, but using network policies you can actually limit which applications can talk to which other applications in another namespace, or which ones they can receive traffic from. And an important note on that: network policy support depends on the network plugin that you're using inside Kubernetes. So on a self-managed Kubernetes cluster, it depends on which network plugin you install; Flannel, for example, doesn't support network policies, I believe, but most others should.
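As a hedged sketch of the kind of policy just described (the namespace name is hypothetical), this denies all ingress to a tenant's pods and then allows traffic only from within the same namespace; it only takes effect with a network plugin that supports NetworkPolicy:

```yaml
# Restricts ingress so that only pods in the same namespace
# can reach this tenant's workloads; cross-namespace traffic
# is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: project-a          # hypothetical tenant namespace
spec:
  podSelector: {}               # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}       # only pods from this same namespace
```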
A
Yes, another question: control plane services are not tenant-aware, for example the API server, DNS, and the scheduler; what would be the best practices to prevent a bad tenant abusing the shared resources? That is actually a very good question. I believe it's really difficult to achieve with Kubernetes' own resources, because you may even have, for example, another namespace where the monitoring is set up, right?
A
For example, you have a logging namespace or a monitoring namespace, where the monitoring or logging applications are running, and basically resources from other namespaces can have access to that. So how do you make sure that they don't mess something up there? Again, one thing would be restricting access to any other namespaces: so basically giving each team their own users that only give them access to their namespace, and then restricting that access for the applications within the namespace.
A
But, as I said, it is actually difficult to 100% avoid that, and that's why there's, for example, the virtual clusters concept that I mentioned; I believe Loft is one of them. So basically their approach is to have a virtual cluster, where you don't only have a namespace with workloads, but you have the control plane processes also in that virtual cluster, right? So you have the complete cluster, with the control plane and the worker processes, as a virtual cluster.
A
So basically you can't break out of that, and you have much better isolation than with a namespace, because they don't just use a namespace to create them in the background. So I believe for that you would actually have to look into some of the additional tools out there that try to solve exactly these problems.
A
Flannel is a very, very common one. I have used the Weave Net network plugin, which has worked pretty well for me. But again, if you are using a managed Kubernetes service, obviously you don't have control over that, because that's managed; it is already installed and managed for you by the cloud platform. So it could be interesting to check out: the network plugin application is also running as a pod.
A
So if you basically print out the pods in the kube-system namespace, or in all the namespaces, depending on the network plugin, you will actually see the pods of that plugin running, and you will see which network plugin they use. And usually they will be deployed as DaemonSets, so each node will have its own network plugin pod.
A
So you can check it like this.
A
Any other questions? I would actually be very interested to know which talks or which concepts you find the most interesting or most relevant for you, for your current projects.
A
All right, well then, let's actually wrap this up. This is actually the end of our topics.
A
I really hope I was able to give you some new, interesting insights into the Kubernetes world and its whole ecosystem, which is, again, very huge and evolving super fast. And even though this talk was a super high-level overview, I can imagine that it's still a lot to consider, still a lot of concepts, and lots of things we're probably missing out of it. So you may want to actually dive in a bit more into each of these areas, right? Like what is GitOps, or DevSecOps, or service mesh, etc. And many of the topics that we went through today I have actually already covered on my YouTube channel.
A
So if you're interested in any of them, you can check the channel and find these videos there. And basically, with that, this is the information of where and when KubeCon is, and you also have the links there. And with that, I want to say thank you to all of you for hanging out with me today, and for all your questions and nice comments. And that's the end.