From YouTube: Kubernetes Office Hours 20210317 (EU Edition)
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. Office hours are a regularly scheduled meeting where people can bring topics to discuss with the greater community. They are great for answering questions, getting feedback on how you’re using Kubernetes, or to just passively learn by following along.
For more info: https://k8s.dev/events/office-hours
C
I'll go first. I'm Rachel Leekin, I'm a Kubernetes field engineer at VMware.

D
I'm a senior software engineer slash architect at Spectro Cloud, focusing on Kubernetes, DevOps, and engineering in general.

E
Hey everybody, I'll go next. I'm Mario Loria! I am working remotely for a company called Carta — if you have stock options, you may have heard of our platform. I'm doing kind of co-located or embedded SRE work, working very closely with developers and also other teams to optimize the developer experience, of course around Kubernetes. So I'm looking forward to some great questions today.
G
Hello, I'll go next. My name is Chris, I'm a customer engineer with the Google Cloud Canadian public sector team, where I focus mostly on Kubernetes, DevOps, containers — all that kind of fun stuff. Before this I was a Kubernetes administrator, and I'm also a CKA and CKAD.

H
Hi, I'm Puja, I'm VP Product at Giant Swarm, mainly working on extending Kubernetes and a lot of Cluster API recently. Also a CNCF ambassador and a CKA. Who wants to go next?
I
Yeah, hi everyone, my name is Archie. I'm a Cloud Native Ambassador from Canada, organizing local meetups around Kubernetes and cloud native across six cities — if you're from Canada you should definitely check it out, or even, you know, we have virtual events if you want to join us. I also happen to work at Google Cloud as a hybrid cloud specialist, so I've been in the space for a while, and I'm happy to see all the familiar faces on the call, learning and sharing what we know around the Kubernetes space.
B
This is also a judgment-free zone. Everyone has to start from somewhere, so please help out your buddy — let's provide a supportive environment in the channel and in general. We'll do our best to answer your questions. The panel does not have access to your cluster, so unfortunately we cannot live-debug it; however, we will do our best to get you some links, help, and advice to send you on your way.
B
Normally
we
do
provide
shirts.
However,
the
cncf
store
is
replenishing
this
inventory
at
the
moment,
and
so
we
will
give
you
a
shout
out
and
of
course,
our
undying
devotion.
Panelists,
please
bring
your
experience
and
pro
tips
to
any
of
the
questions
that
we
get
and
feel
free.
To
add
those
to
your
answers
audience
you
can
help
us
by
pasting
in
urls
to
official
documentation,
blogs
and
anything
else.
That
may
be
relevant.
B
So we have — let's see. We have a question here that says: hello, we're having a hard time running Kubernetes jobs that use a 30 GB image. The image takes around 50 minutes to be pulled on any newly created node. Reducing the image size is not an option, and the pull time is bound by the decompression, not the actual download, so running a local registry won't actually help.

B
We use autoscaling to allocate resources on demand, but this is irritating, because running a job that takes five minutes will take almost an hour if the node is newly created. Also, keeping a node up with a pre-pulled image waiting for jobs is very expensive, because the node relies on GPUs.
I
If I can take it — this is probably my first event, so I probably shouldn't be talking right away, but it really depends on the use case. What I've seen from the industry: companies like YouTube, or whoever is doing video processing, are working with large files, so one of the techniques they use is to split the file into many chunks, work with the smaller chunks, and then combine them together.

I
So it requires extra orchestration, I guess — chunking, decompressing, and putting them back together, maybe — but it definitely speeds things up, because you're not working with one large file; you're splitting it into chunks and, you know, using maybe Kubernetes jobs, or maybe some other serverless functions, to process them and put them back together. I don't know if your workload is similar, but this is just one of the ideas for how you can deal with that.
F
Also,
there's
a
product
called
keep
fledged.
It
may
be
able
to
help
keep
images
on
your
nodes
in
a
particular
context.
However,
if
you're
spinning
up
nodes
in
an
auto
scale
context,
she
may
potentially
fall
back
on
a
daemon
set
and
save
your
docker
image
to
like
a
persistent
volume
and
using
a
knit
container
on
that
node.
So
when
it
comes
up,
the
daemon
said
it
get
deployed.
Anita
container
will
get
initialized
and
load
the
image
on
that
particular
node.
That
might
be
an
option
to
reduce
the
network
latency
from
going
across
the
internet.
H
Yeah,
most
probably
what
chansey
mentioned
also
is
kind
of.
Can
you
extract
the
data
out
of
the
image
into
a
volume
and
then
be
able
to
to
catch
that
volume
or
or
have
it
persisted
over
over
notes,
not
sure
what
it
means
that
the
pull
time
is
bound
by
decompression,
but
that
might
be
related.
I
mean,
if
you,
if
you
have
it
in,
if
you
have
the
data
in
the
volume,
you
wouldn't
need
the
decompression
step
that
you
need
with
like
a
docker
pole
or
something.
E
Easily
yeah
trust
me.
I
have
tried
that
it
is
not
sustainable
and
people
around
you
will
start
to
hate
you.
So
that's
that's
just
how
that
works.
I
do
have
to
say
I
have
to
give
a
shout
out
to
cube
weekly
and
I
feel
like
we
might
have
talked
about
this
in
the
last
session
a
little
bit
as
well.
E
Q
weekly
is
a
weekly
newsletter
and
it
actually
is
diverse
enough
that
I
feel
like
I'm
getting
inputs
from
all
the
different
facets
of
the
community
and
what's
going
on
in
the
kubernetes
world
and
then
there's
also
cube
list,
which
is
also
very
strong.
I
think
they
have
a
podcast
now
as
well.
E
The
kubernetes
podcast
from
google,
so
those
are
some
of
the
sources
that
I
more
passively
while
I'm
working
out
while
I'm
driving,
I
can
tap
into
what's
going
on
without
having
to
stop
and
read
our
medium
article
or
something
like
that.
But
there
are
many
many
places
beyond
that
to
to
get
information
so.
D
What
I'm
doing
personally
is,
for
example,
I
just
keep
the
backlog.
It
sounds
super
stupid,
but,
like
I
just
like
I
have
I've
subscribed
to
so
many
weekly
email
things.
I
just
like
copied
the
interesting
titles
like
copy
the
links
into
my
reminders,
app
and
then
once
a
week
I
sit
down
for
an
hour
and
skim
through
it
again.
Oh
this
is
interesting.
I
read
it
and
then
the
other
ones
I'm
most
likely,
I'm
gonna
just
like
get
off,
I'm
also
not
like.
For
me.
G
Yeah,
I
save
everything
to
pocket
and
normally
I'd
spend
like
half
an
hour,
40
minutes
a
night
just
reading
before
bed,
just
to
kind
of
get
caught
up.
Unfortunately,
that
backlog
grows
exponentially
so
but
do
do
the
best
they
can
to
kind
of
keep
keep
an
eyes
on
things
or
ear.
On
things.
C
Yeah
I
actually
schedule
time
so
I
schedule
in
the
morning
twice
a
week
or
during
my
work
hours
just
to
catch
up
and
then
I
also
do
like
one
night
in
the
evenings
after
work
to
catch
up,
so
I
actually
just
scheduled
them,
and
I
said
okay,
if
wednesday's
my
night
to
do
it,
that's
my
night
to
do
it.
I
don't
try
to
say
oh
I'll
catch
up
on
a
saturday
or
sunday
or
something
like
that.
It
works
most
of
the
time.
H
Up
with
kubernetes
then
schedule
it
in
your
work
hours.
That
is
like
really,
and
it's
it's
hard
sometimes
to
like.
Not
everyone
is
like
daring
enough,
but
if
it's
your
job,
that's
part
of
your
job,
if
you're
a
writer,
you
have
to
read
four
times
as
much
so
if
you
have
to
code,
you
might
also
need
need
to
read
quite
a
lot
until
you
get
like
you.
Don't
want
to
do
anything
outside
of
the
again
have
to
redo
it,
because
I
don't
know
next
week
you
read
something
that
would
make
it
better
and.
F
Yes,
I
also
monitor
a
lot
of
the
sigs
and
the
kubernetes
slack
just
so
that
I
kind
of
keep
up
with
the
ones
that
I
have
an
interest
in
in
that
particular
context,
cube
weekly
the
kubernetes
podcast
tgik
from
vmware.
A
number
of
sources
that
I
use
to
kind
of
like
keep
abreast
of
the
technologies
that
are
kind
of
used
today.
B
And Saiyam's channel — there's so much good content out there. I think the common thread across all the answers here is that there is an infinite amount of knowledge and resources out there: let other people condense it down and filter it, and then take advantage of that. Time is a commodity that we need to take care of, along with our families.
I
I don't know if I heard this one, but I would definitely follow all of you on Twitter, because you share all the latest news very quickly there — you're always tweeting. So I follow you and I'm always up to date on what's happening.
B
All
right,
perfect,
okay,
so
here's
a
question
I
found
on
the
discus
forums,
and
I
I
like
this
question.
First,
it
was
unanswered
and
I
think
it's
because
it's
quite
subjective
and
I'm
curious
to
see
what
everyone's
thoughts
here
are,
as
are
yeah
english.
I'm
scottish.
My
wording
is
not
great.
So
sorry,
so
this
question
really
is
to
one
of
the
things
this
is.
B
The
problem
with
this
there's
some
thoughts
is
that
there's
no
way
to
get
all
of
the
pods
across
several
name
spaces
and
a
single
command
for
each
app
that
they
have
in
mind.
They
need
to
type
out
the
particular
name.
Space
using
cube
control,
config
set
context,
current
namespace,
it's
a
lot
of
typing,
so
I
kind
of
see
there
are
two
questions
here.
One
is
how
do
you
all
structure,
your
namespaces?
What
do
you
use
them
for?
How
do
you
segment
your
applications
or
your
services?
D
Line
think
I
can
answer
this
one.
So
there's
two
things
you
can
well
that
person
can
do
so.
There's
utility
functions
like
cube,
and
s
and
cube
ctx,
which
help
you
to
like
at
least
save
like
a
bit
of
writing.
Then
there's
kind
k9s,
which
makes
it
a
bit
more
interactive
and
faster,
which
also
gives
you
like
shortcuts
with
your
number
keys.
And
then,
if
you
just
want
to
use
pure
qptl,
you
can
do
uk
get
parts
or
cube
cdi.
Get
parts
dash
dash
all
dash,
namespaces
or
dash
capital.
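Spelled out, the helpers and commands mentioned above look like the following. These are CLI fragments that assume a configured cluster; the namespace and context names are illustrative, and kubens/kubectx are separate installs from the kubectx project:

```shell
# List pods in every namespace (-A is shorthand for --all-namespaces)
kubectl get pods --all-namespaces
kubectl get pods -A

# Persist a default namespace for the current context, so you can
# drop the repeated -n flag afterwards
kubectl config set-context --current --namespace=my-team

# With the kubectx/kubens helpers installed, switching is even shorter:
kubens my-team       # change the active namespace
kubectx my-cluster   # change the active context
```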
G
My question back to the user would be: if you have a bunch of pods spread over a bunch of different namespaces, what are you trying to accomplish? What's the purpose of having it like that? But on that, if they do need a little bit more control — say you do want each app to have some kind of sandbox namespace within a namespace group —
G
I think it went GA, Archie.
I
Yeah,
well,
it's
not
part
of
like
of
the
kubernetes
trunk
but
they're
the
project
in
the
kubernetes
repository,
and
I
think
it's
0,
17
or
18.
I
don't
remember
now,
but
it's
it's
going
pretty
pretty
good.
I
think
right
now
is.
I
think
the
only
thing
we're
missing
is
community
consumption,
so,
like
people
should
try
it
and
have
a
look
and
definitely
give
you
more
option.
If
you
have
more
complex
structure
in
the
company,
definitely
can
replace
your
projects
that
doesn't
exist
on
kubernetes.
F
One
of
the
things
that
I
use
namespaces
to
separate
our
projects,
project
a
we'll,
have
a
namespace
and
have
workloads
in
it
now
to
address
the
specific
question.
One
option
you
could
potentially
use
is
if
your
applications
has
like
databases
and
stuff
associated
with
it,
and
you
start
applying.
F
Labels
like
based
on
their
role
like
a
role
database
which
you
could
potentially
do,
is
to
keep
control,
get
parts,
hyphen
capital,
a
minus
l
and
the
label
and
be
able
to
pull
pods
across
name
spaces
that
have
like
purpose
or
role
database
or
something
of
that
sort.
There
so
start
looking
at
labels
to
possibly
solve
some
of
those
query
problems.
E
So
I
want
to
mention
one
of
the
previous
places
I
worked
at.
We
actually
just
put
everything
in
the
default
namespace
and
I
actually
don't
think
there's
inherently
anything
wrong
with
that,
especially
when
you're
starting
out
and
you're
small,
I
think,
depending
on
your
needs,
depending
on
security
policy.
Things
like
that,
adding
that
separation
can
add
another
layer
of
complexity
at
a
client
level.
E
So
when
you
say
have
a
developer
who's
trying
to
develop
on
something
and
they
have
to
remember
to
put
minus
n
whatever
the
name
space
is
that's
something
else
that
you
have
to
teach
them
and
make
sure
that
is
it
kind
of
built
into
the
process,
so
there's
absolutely
nothing
wrong
with
just
using
the
default
name,
space
at
least
starting
out
as
well.
There
there
isn't.
E
This
is
kind
of
goes
back
to
like
how
many
processes
you
should
run
a
single
container
right,
there's
nothing
wrong
with
putting
multiple
processes
running
in
a
single
container.
It's
kind
of
what
you
want
to
do
and
at
the
scale
that
you're
at
which
everyone
is
different.
So.
E
Yeah, and even think of non-prod environments versus production, right? Maybe for non-prod environments we don't need to care about namespaces that much — we don't need that separation. It's a testing ground, and that's what it's meant to be; adding that extra layer of complexity might not be necessary. Or it might be: you might need policy, you might be running other pieces where a namespace actually matters and you're doing RBAC at that level. It really depends on your organization.
B
Okay, I'll add a couple of my own thoughts on top of that. I think everything that's been said is pretty spot on. I take advantage of the default namespace as much as possible until I have a requirement to move away from it. Usually that's to do with security, or with identifying boundaries for policies and other things — and quotas are a big one as well.
B
So the second part of that question was: how can we make it easier to work with multiple namespaces? Are any of you using any kubectl plugins, any extra binaries?
G
Cubens,
I
think,
was
brought
up.
That's
a
big
one
for
me.
I
I
think
one
of
the
emerging
patterns
now
is
to
using
githubs.
So
at
least
you
know,
you
can
define
your
namespaces
one
in
your
github
and
then
you
know
using
flux
or
some
other
tools
like
argo
cd
can
helps
you
to
you
know
not
repeat
yourself,
and
as
soon
as
your
cluster
is
up,
your
namespaces
are
ready
to
go.
H
As Kelsey said last year, kubectl is the new SSH — at some point you shouldn't kubectl into your cluster, especially not into prod. Do it when you're starting out and trying things, but at some point really look into having automated deployment, with no one manually doing anything in the cluster.
F
When our developers need to see what their workload looks like in their particular namespace and interact with it, the logging and the ability to shell into their pod for debugging make it a very good tool.
C
I
agree,
I
didn't
want
to
say
it
because
I
am
vmware,
so
I
don't
want
to
push
too
much
vmware
but
yeah.
That's
my
choice
as
well.
H
Yeah, I think Pierre mentioned k9s — that's also a good one. I think the new version is subscription-based by now, but it's definitely something people are raving about; internally at our company I've heard a lot of people speak very well of it. There's another one from Mirantis — what's that one — Lens, I think. Yeah, Lens. It's also extendable: similar to Octant, it has plugin functionality, which is really cool.

H
If you have something like CRDs — I remember, for example, Aqua Security had an extension for Octant to support their security stuff — that's really what makes these dashboards powerful: being able to extend them for your use cases.
A
All
right
awesome
wait:
yogi's
mentioning
vs
code
plus
a
kubernetes
plug-in
yogi.
If
you
got
a
link
to
that,
I'd
love
to
put
that
in
the
show
notes.
F
I think it's three revisions — three versions — behind. You know, if it's 1.21, then 1.20, 1.19, 1.18, I think it might be.
A
I'll
look
I'll
look
for
okay,
so
for
cube
admin,
skipping
minor
versions
when
upgrading
is
unsupported
for
sure,
but
they
don't
even
go
as
far
back
what?
What
version
are
they
coming
from.
H
The
the
problem
with
skipping
versions
is,
you
might
miss
a
migration
step
that
isn't
a
version
because
of
those
applications
that
johnson
mentioned,
you
will
sometimes
miss
a
conversion
between
an
api
version.
G
My brain would go to just booting up a new cluster — if you have the resources or you're on a cloud provider — and then migrating that way, instead of going through all of that pain. You're going to hit a bunch of API deprecations along the way, so it might just be lower cost to spin something up, test it, and go that way, instead of overnight surprises.
H
Yeah, we do the same. If someone wants to skip versions, we still do the upgrades in between, just because it's more secure and tested. But yeah, I think with seven versions you might be better off with a new cluster, because so many things can break — I mean, even the CNI; and you'll be moving from kube-dns to CoreDNS — like, actual deep components will change as well.
H
We recently had a customer event in the evening talking about this, and basically everyone was using Velero in some form or other. But you might also still need something to switch over your DNS if there's live traffic on it — some plan to switch over. And pipelines, if you have some running against it — get all your developers over too.
F
Yeah, one thing I know: moving from 1.13, before you get to 1.16 you need to upgrade your API versions for StatefulSets and Deployments, otherwise things just quit working. And I think before you get to 1.22 you need to have your Ingress updated. After that, I think you're pretty much in a good position.
A
Yeah, because my sysadmin brain tells me there's no way you can just take a backup on 1.13, go to 1.20, and hit apply. There's just no way that works. I would double-check those steps with the kubeadm folks — I dropped a link to their Slack channel in the chat — and there's Velero as well; I would check that too. I think you could save yourself a lot of pain just by checking.
G
Yeah, and just another thought: it might depend on what provider you're using to provision your Kubernetes — whether it's kubeadm, Kubespray, or something else. They may have a different path to do those upgrades that could help, so it might be worth checking in on.
B
All
right,
let's
leave
that
one
there
and
move
on
to
our
next
question.
So
this
is
another
question
from
the
discus
forums.
The
question
is,
I
created
a
podcast.
A
Wait
wait,
wait!
Sorry,
I
accidentally
skipped
and
deep
who's
patiently
been
waiting
and
I
inserted
here.
B
On the Google question — scroll up, go up two questions in the doc.
A
Yeah,
I'm
gonna
just
yeah
go
for
it.
Sorry
all
right
and
deep
asks
and
deep.
I
miss
your
questions.
I'm
sorry!
If,
if
I
missed
other
ones,
please
just
repost
them
I'll
try
to
pay
attention.
It
says
I'm
at
google
at
a
big
client
and
I'm
constantly
getting
the
question.
How
will
gke
scale
at
their
limits?
What's
the
bottleneck,
fcd
the
control
plane,
scheduler
googler,
do
you
have
any
insight
on
this?
I
know
there
was
a
blog
post
on
hitting
y'all
hit
just
some.
E
It scaled quite well — 15k nodes, yeah. I would say maybe link that article, and hopefully it gets into that. I know OpenAI has gone quite high as well; they have that article that just came out recently where they refreshed what they're doing, and they have a pretty big deployment of Kubernetes. So maybe, looking at those pieces of information, you can see where the bottlenecks start to be, I'm sure.
I
Yeah
I
mean
there
are
probably
many
things
that
you
know
you
may
need
to
consider
if
you
want
to
run
15k
nodes
clusters
and
obviously
there
are
some
technology
behind.
For
example,
right
now
you
have
an
option
to
run
celium
networking
that
you
know
helps
you
to
kind
of
breaks,
some
of
the
limitation
of
the
traditional
networking
on
kubernetes,
but
obviously,
if
you
get
into
that
scale,
most
probably
you
know
you
will
be
also
getting
getting
help
from
google
support.
I
You
know
helping
you
to
scale
your
cluster,
some
of
the
tips
as
well.
For
for
people
who
want
to
go
with
the
large
clusters.
You
know
it's
probably
you
know
some.
You
know
you,
as
you
know,
g
key
running
the
control
plane
behind
the
scenes.
So
this
is
something
not
visible
for
the
end
user,
so
some
of
the
recommendations
is
maybe
if
you
want
to
run
500
nodes
cluster,
don't
start
with
the
one
node,
because
your
your
control
plane
behind
the
scene
will
scale
based
on
that.
I
So,
if
you're
running
with
the
like,
if
you
want
to
go
with
the
large
larger
control
plane
from
the
beginning,
try
to
right
away
set
up
like
a
larger
number
of
nodes,
potentially
and
then
maybe
scale
down,
but
at
least
your
control
plane
already
aware
that
you're
going
to
be
running
some
some
kind
of
a
large
notes.
Obviously
there
are,
there
are
some
things
that
you
don't
have
a
control
of,
because
it's
running
behind
the
scene
on
the
google
set
but
yeah
the
sky's
the
limit.
I
I
guess
right
now,
because
I
haven't
heard
anybody
running
more
than
50
keynotes,
except
maybe
twitter
and
a
few
other
companies.
B
All
right,
thank
you.
Okay,
jumping
back
over
to
this
question,
the
person
has
asked
I
create
a
no,
I
create
a
pod,
and
then
I
shut
down
the
node,
which
is
running
the
pod.
So
this
is
an
ungrateful
shutdown.
I
assume,
because
the
cubecontrol
getpods
still
shows
that
pod
as
running.
Why
does
this
happen?
E
Yeah
but
the
well,
I
think
what
they're
getting
is,
there's
actually
a
timeout.
Before
that
happens,
I
forget
what
it
is.
It
used
to
be
almost
five
minutes
in
some
cases
before
you,
the
pod
would
actually
be
realized
to
need
to
be
scheduled
elsewhere.
So,
during
that
time
frame,
the
deployment
would
actually
be
under
the
desired
state.
I
know
120
just
released
an
alpha
graceful,
node
shutdown.
I'm
gonna
put
the
link
here
into
chat.
I
think
this
covers
that
case.
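The alpha feature mentioned above is configured on the kubelet. A sketch of the relevant kubelet configuration, with field names as of the 1.20 alpha (durations are illustrative — check the current docs before relying on them):

```yaml
# KubeletConfiguration fragment. GracefulNodeShutdown is alpha in 1.20,
# so its feature gate must be enabled explicitly.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true
shutdownGracePeriod: 30s              # total time the node delays shutdown
shutdownGracePeriodCriticalPods: 10s  # portion of that reserved for critical pods
```

With this enabled, the kubelet hooks into the system shutdown and terminates pods gracefully instead of leaving them listed as Running after the node disappears.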
E
I
know
this
is
definitely
a
thing
when
you're
doing
auto
scaling
groups
and
trying
to
scale
up
skill
in
on
something
like
eks
as
well.
There
can
be
periods
of
time
where
you
hop
in
your
cluster,
your
k9s
and
you're.
Looking
at
your
deployments
and
they're,
not
all
up
to
par
with
what
you'd
expect.
A
Haroon needs a little help after the recommendation we gave him, if we want to go back to that one.
B
That's right. So, another question from the Discuss forums: my Kubernetes cluster is deploying a pod on a node which is not in my cluster. All right — a warning: FailedScheduling, default-scheduler, node not found in cache. Okay, let me try to summarize: it sounds like they're expecting the pod to be scheduled somewhere based on scheduling constraints, and that isn't exactly happening. Does someone want to explain how scheduling works in Kubernetes and why this could happen — anyone feeling brave?
B
All
right
do
we
have
anyone
to
understand
the
scheduling
stuff
or
should
I
just
try
and
guess
my
ways
to
do
this?
One,
I'm
happy
to
guess
I
don't
mind
being
wrong,
so
I
believe,
based
on
my
knowledge,
of
how
the
scheduler
works
is
that
it's
always
it's
always
a
best.
B
Guess,
like
you
could
say
schedule
here,
and
I
think
there
are
certain
conditions
for
that.
I
can't
remember
the
exact
name
of
the
variable,
but
like
required
at
scheduling
but
optional.
B
I
can't
remember
someone
will
have
to
help
me
out
there,
but
the
scheduler
will
add
a
label
to
the
pods
suggesting
or
trying
to
encourage
it
to
go
somewhere,
but
that
node
is
unable
to
take
it
out
well
and
can
be
rescheduled
later
on.
So
it's
not
always
guaranteed
anyone
get
any
extra
flavor
on
that.
H
Maybe
it's
a
bit
similar
to
the
one
before
where
you
have
shut
down
a
note
or
a
note
got
shut
down,
but
the
api
server
has
not
registered
it
being
away
yet
and
once
the
scheduler
gets
to
scheduling
the
part
there,
it
realized
hey,
that's
it's
not
there,
even
because
it's
not
answering
because
I
think
what
you
said
is
right
is
the
scheduler
basically
says
hey,
please
like
schedule
this
to
this
node
and
then
the
cubelet
of
that
node
on
this
next
round
on
the
next
round.
A
Haroon was asking — so, first of all, everyone who's telling them to make their image smaller: they can't do that. This is the image they're stuck with, so that's not going to help them. However, mounting things as volumes, like a lot of you recommended, could be a solution. Yes — but what should they put in the volumes, and where should they mount them? So can you maybe talk through volume usage here?
F
It would be to create a volume — well, create a volume in the manifest for, say, my DaemonSet that's pointing to a persistent volume or something, and in that init container, using the volume mount, it can mount that particular persistent volume claim and then execute a command, because the volume presumably already has the file on it.

F
So if you've exported your large image to a persistent volume, then when your DaemonSet is initialized and scheduled on that node, the init container can mount that volume and run the command to load the image into the local node's Docker environment, and it's available to you. So it's just: create a persistent volume claim, save your image to that volume, and from that point on use your init container to attach it.
A
All right, so I found the persistent — the PVC docs. There's still the question of how to export it.
E
Yeah — well, I think what Chansey is saying is that you start it, but I don't think it has to be running. I think you can copy from it if it's halted as well. I'm not sure on that, though.
F
It
has
to
be
created,
I
think
yeah
yeah
yeah,
it's
the
docker,
create
start
command,
and
then
you
just
use
docker
cp
to
you.
You
need
to
know
the
path
to
whatever
is
in
the
file,
so
you
do
a
docker
cp
that
non-running
container
and
then
the
path
that
you
wanted
to
drop
and
it'll
copy
your
entire
directory
to
wherever
you
tell
it
to.
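A sketch of that flow — the image name and paths are illustrative. `docker create` makes a container without starting it, and `docker cp` works on stopped containers:

```shell
# Create (but don't start) a container from the large image
cid=$(docker create big-image:latest)

# Copy the data you need out of it onto the host / a mounted volume
docker cp "$cid":/data /mnt/volume/data

# Clean up the temporary container
docker rm "$cid"

# Alternatively, save the whole image as a tarball for a later `docker load`
docker save big-image:latest -o /mnt/volume/big-image.tar
```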
B
Yeah,
I
think
that
they
may
not
have
control
over
the
image,
but
I
think
yeah
if
they
extract
it
to
somewhere
and
then
just
use
a
different
image
and
try
to
consume
that
volume.
Somehow
it's
probably
the
only
path
that
they've
got.
I
I
I
can't
think
of
anything
else
either.
It's
a
really
tough
problem.
F
I
almost
feel
like
the
darker
load
is
going
to
be
much
faster
than
a
darker
pull
and
that's
why
I'm
suggesting
that
if
you
can
get
it
on
a
volume
that
could
be
attached
to
that
system,
you
can
still
docker
load
that
30
gig
file
in
a
much
faster
time
than
doing
a
git
pull.
I
mean
a
docker
pool
for
over
the
internet,
etc
and.
F
When you spin up a new node, it's not going to have that image on it. You're going to need a DaemonSet or something like that so that, as soon as the node comes online, the DaemonSet gets scheduled, and then the init container in that DaemonSet can load the image into that node's local Docker store.
B
Image,
I
wonder
if
flattening
the
image
would
get
the
many
performance
increases
and
just
having
it
one
layer
that
has
to
be
extracted
rather
than
multiple
layers,
and
I'm
not
sure
if
that's
going
to
get
you
from
50
minutes
to
40
or
50
minutes
to
20.
But
it's
definitely
something
you
could
experiment
with.
D
I
posted
docker
slim,
which
is
a
tool
that
basically
does
destructive
action
until
like
it,
no
longer
works
and
then
rewrites
it
and
like
deletes
more
things
until,
like
all
your
checks
are
still
working
but
like
yeah,
so
it
just
deletes
everything
that
is
not
needed.
You
need
to
have
a
comprehensive
test
suit
for
this,
but
this
might
help
you
to
slim
it
down
it.
I
just
don't
think
it's
super
easy
to
work
with
otherwise,
but
yeah.
Reducing
the
image
to
only
one
linear
might
help
actually.
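Two ways to experiment with the slimming and flattening ideas above — the image names are illustrative, and the flags should be checked against the current docker-slim and Docker CLI docs:

```shell
# docker-slim: probe the running container and rebuild a minimized image
docker-slim build --target big-image:latest --tag big-image:slim

# Flatten to a single layer by exporting a container's filesystem and
# re-importing it (note: this drops image metadata like ENV and CMD)
cid=$(docker create big-image:latest)
docker export "$cid" | docker import - big-image:flat
docker rm "$cid"
```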
B
Okay, thank you, everyone. All right, we'll give everyone a few more minutes to get any new questions in. I guess we can do a quick panel question just now: anybody working on anything interesting at the moment in the Kubernetes space? What's exciting for everyone?
H
The Heptio team — a lot of the folks from there, good friends from Google, Red Hat — everyone, basically, is in there.

H
Being able to start Kubernetes from Kubernetes: being able to say "kubectl create cluster" — or currently it's `clusterctl create cluster` — and get a cluster on whatever infrastructure you like, bare metal or AWS, Google, whatever. That is really, really exciting.
B
So
the
prerequisite
for
the
cluster
api
is
that
you
already
have
a
kubernetes
cluster,
though
any
any
best
practices
or
advice
for
that.
H
There's this chicken-and-egg problem there, right. I think the current solution works, it seems — or getting a managed one; I've run the management control plane in AKS or EKS once.
H
Yeah, and at least I think the AWS and Azure teams are also working on having AKS and EKS be startable from Cluster API — so the same API, but being able to start a managed cluster. I'm not sure if the Google team is.
J
I
don't
know
that's
something
I'll
have
to
look
into,
because
that
sounds
fun.
I
Maybe
this
is
happening
behind
the
scenes
already,
but
I
think
what
we're
trying
to
do
right
now
is
kind
of
a
git
ops
model
for
deploying
kubernetes
with
kubernetes.
This
is,
I
think
this
is
where
a
lot
of
focus
right
now
at
this
specific
moment.
H
Yeah,
I
think,
that's
exactly
the
the
exciting
part
of
cluster
api
is
being
able
to
use
the
same
approach
that
you
use
for
your
workloads
for
your
clusters,
so
basically
being
able
to
manage
the
whole
platform
from
one
api,
using,
for
example,
a
githubs
tool,
or
if
you
want
terraform.
But
at
least
you
have
like
one
consensus
api
where
all
sorts
of
tools
is
kept.
B
Yeah
they
do
provide
the
move
command
as
well.
Don't
they
in
the
cluster.
I
think
that
came
in
with
like
the
113
release,
maybe
of
cluster
api,
where
you
can
actually
use
docker
for
mac
or
mini
cube
and
then
have
it
move
itself
to
the
target
cluster.
That
kind
of
hopefully
tries
to
solve
that
chicken
and
egg
problem.
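A sketch of that bootstrap-and-move flow. The cluster and provider names are illustrative, and the `clusterctl` subcommands shown are from the v0.3-era CLI — check the current Cluster API docs, since the commands have been renamed across releases:

```shell
# Bootstrap a throwaway management cluster locally (kind, minikube, ...)
kind create cluster --name bootstrap

# Install Cluster API plus an infrastructure provider into it
clusterctl init --infrastructure aws

# Create the real, long-lived workload cluster from the bootstrap cluster
clusterctl config cluster prod-cluster | kubectl apply -f -

# Once prod-cluster is up, move the Cluster API objects into it so it
# manages itself; the bootstrap cluster can then be deleted
clusterctl move --to-kubeconfig=prod-cluster.kubeconfig
```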
G
Oh yeah, on that note, I've been playing around a lot with Crossplane and the Kubernetes Config Connector — same area, with a similar chicken-and-egg problem: create all your cloud resources with YAML, but you need a Kubernetes cluster to start, so similarly, kind or minikube.

G
All those things work great — it's a lot of fun just being able to dump the infrastructure I need for demos into YAML in a Git repo, and then, if I need to do something, spin up a little cluster and boom, I have my workload right there.
B
All
right
awesome,
thank
you
very
much.
We
have
one
more
question
on
slack
and
then
I
think
we'll
leave
it
there
for
this
afternoon.
The
question
is
which
tools
are
available
to
scan
container
images
for
security
issues,
especially
open
source
and
free
tools.
Anybody
got
anything
there.
H
I
think
okay
harbor,
includes
by
now
trivia
used
to
include
claire
quay
includes
claire.
I
think.
H
They check whether your image contains something, so they might come up with a lot of false positives — things that you're not really using in the image, but that are there and have some kind of security issue. Still better than nothing, definitely. I'm not really sure how deep they go on other sides of security — I don't think they look into dependencies too much. Snyk would be a good one; if you're open source, that's also free.
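For reference, a scan with one of the tools named above looks like the following — the image names are illustrative, and `trivy` must be installed locally:

```shell
# Scan an image for known CVEs in its OS packages
trivy image ubuntu:20.04

# Fail a CI pipeline when serious issues are found
trivy image --severity HIGH,CRITICAL --exit-code 1 my-app:latest
```

The `--exit-code` flag is what makes it usable as a CI gate: the command returns non-zero only when vulnerabilities at the listed severities are detected.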
B
Yeah,
I
think,
when
I
first
started
to
blind
things
in
containers,
I
made
like
a
classic
mistake
of
building
the
base
image
and
then
only
leveraging
the
cache
or
all
subsequent
builds,
and
it
turned
out.
I
had
like
a
base
ubuntu
that
hadn't
been.
I
hadn't
had
an
app
update
in
like
three
years
and
I
felt
very
ashamed
of
myself
and
I
eventually
got
into
scanning
it
so
remember
to
rebuild
your
base
emergencies
regularly
as
well.
E
I,
on
that
point
I
just
linked
systag
just
did
an
amazing
write-up
on
docker
file,
best
practices
and
a
lot
of
what
it
talks
about
is
stop
using
ubuntu
or
debian,
or
the
aos
package
and
the
the
shims
that
well,
I
can't
think
of
the
name
right
now
that
google
provides.
I
forget
what
we're
calling.
A
This,
yes,
the
distro.
E
List,
thank
you
our
I
mean
I
just
if
you
followed
everything
in
here,
you
have
a
pretty
pretty
tight
kind
of.
You
decrease
your
risk,
a
massive
amount,
other
things
like
dropping
permissions
are
in
here
as
well,
so
there's
a
lot
that
you
can
do
base
images,
get
really
tricky
because
of
the
management
side
of
things.
I
think
a
lot
of
this.
You
can
build
into
your
pipelines
by
default
and
how
your
developers
are
trading
these
things,
but
it's
much
harder
to
circle
back
around
and
do
this
later.
B
Awesome
advice.
Thank
you
very
much
all
right.
I
want
to
thank
everyone
for
joining
us
today.
It
was
a
great
session,
some
really
great
questions
and
even
better
answers.
Also
thank
you
to
all
the
companies
who
support
people
joining
and
donating
and
volunteering
their
time
to
the
office
hours.
Thank
you
to
spectrum
google
cloud
phase,
2
microsoft,
unusual,
vc,
giant,
swarm
carter,
vmware
and
equinix
metal.
B
Lastly,
please
feel
free
to
hang
out
in
the
officers
afterwards.
There
are
other
channels
are
too
busy
and
you're
looking
for
a
friendly
home
you're,
more
than
welcome
to
pull
up
a
chair
and
hang
out,
we
will
be
back
at
the
same
time
next
month
until
then
have
a
great
day
and
a
great
month
and.