From YouTube: Kubernetes Office Hours 20210519 (EU Edition)
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. It is a regularly scheduled meeting where people can bring topics to discuss with the greater community. It's great for answering questions, getting feedback on how you're using Kubernetes, or just passively learning by following along.
For more info: https://k8s.dev/events/office-hours
C
Sure, hi everybody, my name is Mario Loria. I work for Carta; if you have stock options, you've used Carta, hopefully. I have a thorough love of everything resilient infrastructure, cloud infrastructure, kind of networking, and everything in between. Lately I've been really focused on developer experience and how we can use Kubernetes and other cloud native tools to achieve a kind of developer nirvana. So.
B
All right, thank you, everyone! Okay, before we start here, there are some ground rules. This is a Kubernetes event, so the code of conduct is in effect; please be excellent to one another. This is also a judgment-free zone. Everyone had to start from somewhere, so please help out your buddy by keeping a supportive environment in the channel, where we will do our best to answer your questions.
B
The panel does not have access to your cluster, so we cannot do any live debugging, but we will do our best to get you some advice and get you moving on to the next step. Normally we provide shirts; however, the CNCF store is replenishing its inventory, so we will give you a shout-out and our undying devotion instead. Panelists, you're encouraged to expand on your answers with your experience and pro tips. Audience, please help by pasting in any URLs to docs, blogs, and anything that you think may be relevant to help us answer your question.
B
If you want to join us, rotate in, just let us know; reach out to Pop or myself or anyone else in the office hours channel. We'd love to have new people, so please get in touch. We're going to do a community shout-out. This is something that Pop brought in a few months ago as we started hosting the Kubernetes office hours. This month, we want to celebrate Marky Jackson.
B
Marky couldn't join us today. We want to thank him for all of his contributions to the Kubernetes community, so thank you very much, Marky. All right, to get started today, we have found some questions from Twitter and from Discuss, and we're going to work our way through them. Please, if you're watching us live, drop any questions that you have into the Slack channel, and we will tackle them as we go too. All right, question number one, panel.
B
This is from Arousal on Twitter: which would be the preferred way of creating a PersistentVolume and PersistentVolumeClaim when the default storage class is not in use? They've suggested a couple of options: one, create another storage class having annotations; two, creating labels with matching labels; and three, reserving a persistent volume.
G
I mean, I guess I don't really understand the question. I do have some experience here: the default storage classes available, for example in Azure, weren't satisfying our use cases, so the process that we had to follow was to create a new storage class that had the properties we required, and then from there we would follow the usual process of creating the PV and then the PVC.
G
So that was how we tackled that problem. Maybe I'm just not familiar with the annotations part and how that comes into play here, but it could be just something that I don't know. There's a lot of stuff I don't know.
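The approach described here, a custom storage class plus a claim that names it explicitly, might look roughly like this (a sketch; the provisioner, parameters, and names are illustrative assumptions, not from the discussion):

```yaml
# Hypothetical custom storage class; provisioner/parameters are examples only.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-retain
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
reclaimPolicy: Retain
---
# A PVC that selects the class explicitly instead of relying on a default.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: fast-retain
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```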
B
Yeah, Razzle, if you are watching and you can give us any more context, please try to do so. I believe the answer is: if you're not using a default storage class, you should just try and provide one in the spec. So yeah, let us know if we can help you any further; we'll do our best. All right, we have a question now from the Kubernetes discussion forums. Vasu said: I have created a static pod, and they see the static pod.
D
I'm not exactly sure if it's possible, but a way around that potentially would be to drop the config with the credentials for your upstream repository under /var/lib/kubelet, and the kubelet will read it from there and pull the image in, because it sounds like they're trying to pull an image that's behind a private registry. Otherwise it wouldn't be a problem if it was a public registry.
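For context, the kubelet can pick up node-level registry credentials from a Docker-style config file in its root directory, typically /var/lib/kubelet/config.json; a minimal sketch (the registry host and the base64 credential string are placeholders):

```json
{
  "auths": {
    "registry.example.com": {
      "auth": "PLACEHOLDER-base64-of-username:password"
    }
  }
}
```

Because this file lives on the node rather than in the API, it works even for static pods that the kubelet starts without talking to the API server.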
B
Yeah, I guess the challenge here is that static pods are spun up by the kubelet. The API server may not be available, so accessing secrets through the Kubernetes API probably isn't going to be an option. So yeah, I'm not entirely sure that would work myself. I think what Chauncey says is best: configure the host, or whatever the container runtime interface is.
D
Now keep in mind you introduce drift when you do that, because unless you do it on every server, you need to have some mechanism to deploy it, and then if a new node comes online, you've got to solve that problem there. So it may be something that you might want to manage as a DaemonSet, so that any node that comes up has that DaemonSet, running in privileged mode, to implement that functionality for you. But it's all going to be complicated.
E
That also might be something to submit an issue for, just because of the fact that it's not being seen, if I'm reading that properly; it might be something to think about. And yeah, on the external registries: if Docker Hub pulls aren't available, like I said, I totally agree with you on the rate-limiting aspect of it, but this is something where you'd probably want to spin up some type of internal registry of some sort anyway, right, to be able to do this. So I think it's a hack versus what should be a best practice.
D
That matches the current usage patterns and the way kubeadm handles it now for core services. Now, if you want to use a standalone kubelet for some sort of functionality, the static pod would be an appropriate pattern.
B
Their question is: the application is deployed using stateless charts, but the only concern from the client is to have the name of the pod look like a StatefulSet's. So I think what they're saying is they're potentially deploying with a Deployment, and they want to be able to control the pod names. Is this possible?
D
We're just going to mention the order of it starting up: you've got to make sure that hyphen-zero is going to spin up before hyphen-one, and hyphen-two is going to wait until hyphen-one is completed, et cetera. So you've got to just deal with those issues there, but those are very, very minor, et cetera.
C
Yeah, the only other thing I'd add is that StatefulSets fundamentally change the kind of paradigm when it comes to instances existing across the cluster. It tries to pin a little bit more, and there are maybe more affinity concerns if you don't want things to pin as much. And then the other side of this would be rolling updates, and making sure that that functionality works just like a Deployment would, and so you should be able to tune that.
C
But it's something you're going to want to look into, because again, StatefulSets are meant for more state-centric workloads. So.
E
I just don't understand, and I'm going to say this, I'm sorry to be the contrarian today, but I'm going to be the contrarian on the panel here. I don't understand why, because not having control of the name is a native mechanism of Kubernetes: it's ephemeral, and things are going to spin up and down. Why would I want to name that? I guess I need to understand that use case better, because that's basically abandoning all the best practice that we have in Kubernetes, right? So yeah.
C
...way, or the more, you know, Kubernetes-native way. But there's probably some monolith they have to deal with; there's some sort of sequencing of events within their platform, where things have to look just like, probably, the EC2 instances or whatever it is they're moving from, right. And.
B
All right, so to summarize what we said there: you can use the StatefulSet, and you'll get the naming semantics that you expect. There are caveats, particularly around ordering and the way that the pods are started and reconciled. I believe you can change the ordering to be parallel, so that's something for you to look into. But I think what Pop said is fantastic: maybe we should be looking at why you need that. Is it because you need headless services as a DNS? Is it something else?
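As a sketch of the options mentioned above, a StatefulSet gives you stable names like web-0, web-1, and podManagementPolicy can relax the ordered startup (the names and image here are placeholders, not from the question):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web               # headless Service that provides per-pod DNS
  replicas: 3                    # pods will be named web-0, web-1, web-2
  podManagementPolicy: Parallel  # start/stop pods in parallel instead of in order
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: app
        image: example.com/app:1.0   # placeholder image
```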
B
And if you can, we'll respond to your issue on Kubernetes Discuss; if we can get some more information, maybe we can help you further. But great question, thanks. All right, let's carry on. We have another one, this one from Alessandro Zantare: hi, I am playing with MicroK8s with the goal of having a cluster of nodes in two different locations.
E
I'm going to ask, I'm going to be the contrarian again, I'm going to say: why, right? I'm going to ask that again. You know, there are services, right, there are load-balancing services that you can use locally that kind of emulate those providers. Why would we use those load balancers? Wouldn't we use some type of convention that traverses both? And that's also, you know, again...
E
I love all the clouds. If you decide that your workloads are on cloud, instead of a hybrid type of thing, you're going to depend on that load balancer that the provider gives you. But if you're local, on-prem, you probably are going to use a load balancer and DNS paradigm other than those things, right? That is somewhat compatible, I guess, so can I use cloud load balancers for this? It's just that that's a mingling of the two worlds you're going to have to deal with.
E
I think maybe I'm incorrect on that, but I mean, I know there are on-prem versions: there's Anthos for Google, and then there's, I think, Outposts or whatever for AWS. So you have those options that kind of emulate those things. It depends: are you using native Kubernetes? What is it you're doing there?
B
Yeah, I think I'll tackle this from a bare-metal perspective. I think everything Pop said there is great as well, but I definitely wouldn't be using the cloud load balancer answer for this; maybe try and rely on BGP on your on-prem cluster to be able to advertise the addresses of both of your control plane nodes. MicroK8s or not, I think that is maybe irrelevant for the time being; I'd use BGP, or stick a load balancer in front of them and do a round robin. I echo Pop's advice.
B
Does anyone use MicroK8s in this way? Yeah? Right, all right, let's move forward, then. Thank you for all of that, Pop, you're killing it. All right, let's see, we've got another one from Discuss here, by The Piper.
B
This is more of a general question about container design. We have an application that we need n number of times. The application also needs its own database, and we have a one-to-one mapping. I'm assuming each instance of the application has a local database, since the application and the database will always be paired for each instance. Is there a best design to approach this?
D
I think there was a misread; I'm not sure if they want an init container for the database. An init container is a bad pattern for managing your database, because it's strictly to initialize your environment. However, having your database as a sidecar, with a persistent volume claim for that database, is an interesting pattern, and a pattern that I've followed before. But the init container is strictly for initializing or loading the database, or bringing config into your environment, in that particular context.
C
I was going to say, yep. No, sorry, I was just going to kind of say the same thing in regards to an init container not being long-lived. I see where this person is thinking: an init container is going to start before anything else, so I want that. But I don't think they realize that it's not going to live once your main containers, in this case the application container, get going.
C
So I think what they definitely should do is an application container and a database container, and there should be one or more init containers that maybe verify the state of the world, i.e., is everything happy, are our secrets in existence, et cetera, et cetera. And then I think there are other options, and I can't think of them right now.
C
Maybe someone knows, but I think the key thing here is they want the database to start before the application, and so I'm interested to hear what others on the panel think: how do we time that appropriately? Because those main containers aren't starting until the init containers are completed. So how do we do this kind of ordering? I think there's something, I can't remember what it is, but someone knows better than me.
D
I think you can use the startup probe. You've got your liveness probe, you've got your readiness probe, and you've got a startup probe. You could use that startup probe to kind of put a check in place; each container in a pod has its specific startup probe, so for your database...
D
You could potentially wait a few seconds until it comes up, because the startup probe is going to prevent anything else from happening until it completes. That would allow your database to actually come into a full production state and then be ready to deliver to whatever app it needs to send data to.
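A minimal sketch of the startup-probe idea for a database sidecar (the image, port, and timings are assumptions, not from the discussion). One caveat worth hedging: a startup probe gates that container's own liveness/readiness checking, it does not block the other container from starting, so the app container still needs its own readiness check or retry logic:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-db
spec:
  containers:
  - name: db
    image: postgres:14            # placeholder database image
    startupProbe:
      tcpSocket:
        port: 5432                # probe the DB port until it accepts connections
      periodSeconds: 5
      failureThreshold: 30        # allow up to ~150s for the DB to come up
  - name: app
    image: example.com/app:1.0    # placeholder application image
```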
F
Chauncey, the startup probe that you mentioned is going to be on the application container, correct? Because it would actually wait for the database; it will check for the health of the database. I think that if you look at the WordPress chart, it actually has that kind of a check: the PHP application, the PHP container, actually waits for the MySQL container to come up. So I mean, that's the way they've done it.
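The wait-for-the-database pattern mentioned here is often done with a small init container that polls the database before the main container is allowed to start; a sketch (the images, service name, and port are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    # Loop until the database answers on its port, then exit so the
    # main container is allowed to start.
    command: ['sh', '-c', 'until nc -z db 3306; do echo waiting for db; sleep 2; done']
  containers:
  - name: app
    image: example.com/app:1.0   # placeholder
```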
E
I'd say two things. One is, if you think of, like, Galera, and you think of the things... I think, again, we're getting into the weeds. You know, MySQL has this ability to kind of cluster out, right, with leaders and all those things, and you can deploy it on multiple things. I think having that managed, versus managing it yourself via deployments, is kind of the methodology.
E
Anyway, I just think in terms of the sequence of bringing things up and stuff like that, I don't know if I would go for a mechanism of deploying it in an init container or anything like that. I would just use some of the native mechanisms, or whatever the SQL mechanism is.
F
Let's say you have a Java application or a PHP application: you can just access the database using localhost. You don't want any external name or anything; you need not even expose it via a Service or anything. Also, I mean, that works if you want to really keep the database private and just expose it via the API. But I'm really curious: what's the use case where every pod needs to have its own database?
D
Yeah, I had a use case that required that. Our developers had a workload in a trunk-based deployment process, and we didn't want to have the pods spread across the entire cluster, so we had five sidecars that had the Redis database and all this stuff tied to it. So in a development environment where you're managing development code, that's a perfect workload pattern. Now, for a production workload, where your audience is the rest of the world...
G
Maybe I'll just throw in a suggestion, kind of off what Pop was saying: if an option is to externalize the database to a cloud provider or something like that, that's something you want to consider.
G
I think with Crossplane you can basically spin up a database for every deployment of your application, or something like that. So that might be another option to investigate, if that makes things easier.
D
It can be cheaper to just bring it in, and it's kind of dependent on the environment. Are you in a production environment where your audience, not internal people, are interacting?
G
Yeah, for sure, that's why I said consider it. But a lot of the cloud providers, for a simple database, might have a free tier, so the cost may be okay.
G
But if it's just for development purposes, like you were saying, and if in production you're using a managed service, then it might make sense to provision a small database in your development environment for every application, and then perhaps Crossplane might help you there. Just another suggestion to investigate.
B
Anyway, right, lots there. Hopefully we've answered the question and given you some more things to think about, so thank you, everyone. Okay, live debugging time. I hope you can all read the error messages. We've got a pod stuck on ContainerCreating, and I think there's one part here that is a bit of a giveaway; I'm sure I saw earlier "network plugin cni failed to set up pod". Has anyone seen that error message before?
C
Many, many times. I almost feel like I can call this: they are probably using EKS, or something with a CNI that depends on an IPAM which is leveraging an API to get more IP addresses, and they don't have enough. The node is not getting enough from its ENI. I have seen this many, many times, a lot of the time around cron jobs that run every minute or every three minutes or every five minutes, right.
C
They kind of continually cycle, and so you have a lot of churn going on, and the CNI is continually calling out to get more IP addresses, or even just to get, like, a basket of five, right, to be up to a certain value. This is a tricky problem to solve, and I'm making some assumptions here that it's the AWS CNI; it could be another CNI that just isn't able to provision for whatever reason.
C
I think this is a key point where you need to actually have some introspection on your CNI. I think that's really hard to do; it's very easy to just say, oh, it's deployed, the DaemonSets are running, and everything's great, right? But this is something where you really need some key metrics around your IP space, around what your IPAM is doing, and the amount of pod churn inside of your nodes, and which ones are kind of getting IPs, et cetera.
C
So I would say my first way to look at this would be the IP space that you have: make sure you're not maxed out in IP space. The other thing would be: what is the sort of job or the container, where is it coming from, what does it need, and is it being spawned ridiculously often?
C
So the other part of this would be: a lot of the time you can kill it, or wait, and it will eventually come up. That's not really a nice answer, but that's it when you're kind of thrashing and you've got lots of things going on.
C
Sometimes you just have to try to kill it and see if it will get respawned on another node, so trying to get it to respawn somewhere else, getting the scheduler to put it somewhere else. In more of the mid to long term, I would definitely look at your CNI settings. If you're using the AWS CNI, there are a lot of controls now, variables you can pass into the DaemonSet, that let you tune the thresholds and how many reserve IPs are kept on each node.
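For reference, on the AWS VPC CNI those tuning knobs are environment variables on the aws-node DaemonSet in kube-system; a sketch of the kind of setting meant here (the values are illustrative, not a recommendation):

```yaml
# Excerpt of the aws-node DaemonSet container spec (values are illustrative).
env:
- name: WARM_IP_TARGET        # keep this many spare IPs ready per node
  value: "5"
- name: MINIMUM_IP_TARGET     # never hold fewer than this many IPs per node
  value: "10"
```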
C
Of course, you need to be careful with this: you don't want to use up your IP space too soon, right. But I think there are more controls; you're going to have to dig a little deeper. So, great question, and a really frustrating error, that's for sure.
E
And I mean, the underlying thing, I think you said it, is the CNI provider here. We could probably provide more data if we knew: is it, I don't know, Calico? Is it Cilium? You know what I mean? We don't know any of that part, to be able to really give a logical answer. So, you know, again, I understand the frustration; next time, just add a little bit more data, and then it could help us debug this a little bit more.
B
All right, thank you. That was an extremely complete answer there, Mario, awesome. And yeah, just following on from what Pop said as well: when you're asking questions, please try to provide as much information as possible. It really helps us understand what the hardware is, where it's hosted, and what your configuration is for Kubernetes: CRI, CNI, CSI.
B
Okay, let's see, what else do we have here? We've got another one from our Discuss forum: hi all, why, after I shut down a node, do the pods that were running on this node remain running for six or seven minutes? After that time, the pods are scheduled to other nodes. But how do I make Kubernetes respond to this more quickly?
C
This is a good one. I'm going to be short, and then I want other people to answer as well. The pod eviction timeout is five minutes. If you add up all the overhead of calls and other timers in the process of the controller manager, six to seven minutes is actually pretty good; I've seen ten minutes, even more. So I will stop there, but there are lots of other controls, I think at the control plane level, that need to be considered.
F
And typically, I mean, if it is a planned shutdown, then you can obviously trigger a drain first, so that would speed things up.
B
Yeah, great advice there. Yeah, cordon the node if you do plan on doing maintenance, to help speed up this process. Does anyone have any tips on selecting an eviction time other than five minutes?
C
Well, so there's a reason it's five minutes: the developers want to fight flapping, right. You don't want a node that's in a bad network state, coming up and down, right; you don't want the pods to get killed. I tend to think more in a fail-fast scenario, so I'd say probably two minutes, maybe a minute and a half, something like that, because it's kind of not too long and not too short, necessarily. And honestly, I'd rather just get rid of the node.
C
If we're having problems with a node, I'd rather just get rid of it. And maybe there are other controls around there: node-problem-detector, I think, has other pieces where it can detect, okay, is there a kernel issue, you know, lower-level OS things that might be causing issues, right? And then maybe we can just get rid of that node quickly and get things up and running somewhere where we know the nodes are healthy, right. So.
B
Yes, so the advice here is: try and drain the node if you can, if for whatever reason it's going to disappear. If it doesn't, and, you know, the eviction time is five minutes, do they have any other options to speed up that process from that point? Can they use the Kubernetes API to tell it that the node is gone, even though it's already gone? Can they start deleting pods? What are their options?
C
Yeah, I think a kubectl delete node is their best bet. I mean, that's a very manual process: that's them noticing that this is going on and then forcibly deleting the node, and then that should cause those pods to be respawned pretty quickly. But I think, like, my other side of this is that it's 2021, like...
C
The first thing you should do is drain, because that will automatically do a cordon and then get those pods off of there immediately, right. And I think you probably need the ignore-daemonsets option for a drain as well, but that would be my first thing that I do. It also uses the eviction API, so pod disruption budgets will be considered as well. So.
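The sequence described above, as a command sketch (the node name is a placeholder):

```shell
# Mark the node unschedulable and evict its pods (respects PodDisruptionBudgets).
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# ...perform maintenance, then allow scheduling again:
kubectl uncordon node-1

# If the node is permanently gone, remove it so its pods reschedule quickly:
kubectl delete node node-1
```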
C
Yeah, really quick, and I'm going to search for it here as well: there was a talk a while back, and I'll link it for the channel. Also, I think there are some questions in the channel; I don't want to... yeah.
C
Good, good. So, node-problem-detector: it kind of aims to surface various node problems that aren't as visible to the kubelet, right. The kubelet is just kind of maintaining this network communication with the API, right, and that's kind of the core signal from which we understand the node is healthy. Node-problem-detector says, well, wait a second.
C
There could be hardware issues, there could be NTP service issues, there could be a kernel deadlock, there could be a corrupted file system, there could be runtime daemon issues, flapping, other things that are happening. And so it goes a little bit lower level and says: let's bring these lower-level things upstream so that we can say the node is not working, we need to shut it down, we need to get it out of the cluster, et cetera, et cetera. So, I just linked it.
C
I definitely think people should look into it if they are managing nodes. I think the cloud providers now have a lot more managed-node offerings, so if you're in cloud, this is a little bit solved, depending on your needs. But I think this is a savior if you are in a scenario where you have to manage nodes, whether on-prem or in cloud.
E
And, you know, shout-out to the observability and monitoring tools that are out there, right. If you have those, and there are problems with nodes, you can basically have that be an alert of some sort, and then you can trigger something like a webhook that does something, right, an action like cordoning the node and all those fun things. And so having that posture as well helps, natively, again, beyond what Mario was talking about.
B
All right, awesome, thank you both. Okay, now we've got some questions that came to us through the Slack channel. Please keep them coming if you're watching and you have anything you want us to discuss; drop it in the channel. So we have a question from Kekerpur: what is the recommended secret management strategy going forward? Should I be looking at Vault or a KMS plugin? Who's got a picture?
C
Yeah, we do at Carta, and I believe we use the sealed-secrets project from Bitnami. They just moved the namespace for that on GitHub, though; if I find it, I'll link it. We've found pretty good success with that for leveraging, setting up, external secrets and having people more easily tap into Vault. So.
G
Yeah, I mean, we're using Azure Key Vault. The way I would look at this question is: use something that's familiar to you, right.
G
So if you have Vault already, then try using Vault. If you are in a cloud, and the cloud provider provides a key management solution, try that out, right, because there's probably a lot of support and integration built in. If you're deployed to EKS or AKS or Google or whatever, there are probably built-in integrations that you can leverage, and simplifications regarding access policies and management and things like that, that will help you along the way.
F
Yeah, I second that. I think you brought up a very good point, and sometimes we actually forget that, hey, there are actually workloads outside Kubernetes as well. So there are.
F
Yeah, of course, surprise, but that's actually quite a good point. We have to also understand that a solution of this nature has to work across both those ecosystems. So if something like Vault is what you're already using, and it has integration into Kubernetes, obviously, you can use that. If you are going completely new about it, there's obviously sealed-secrets; I think that was the one that you were talking about, the Bitnami one.
F
So that is definitely something that I like using with Kubernetes. But yes, again, if you have things outside, obviously you have to look for more comprehensive solutions.
B
Yeah, I think, maybe I read the question differently, but I think there are two ends over here. Sealed-secrets is a really good way for teams and developers to, you know, use a GitOps pattern to get the secrets into the cluster, but the cluster itself is only going to be using base64 encoding in etcd, I believe. There are extensions to hook into KMS and Vault to encrypt them, so I think we've kind of iterated on it a few times now.
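As a sketch of the sealed-secrets GitOps flow mentioned above (file names are placeholders; this assumes the sealed-secrets controller and the kubeseal CLI are installed):

```shell
# Encrypt a Secret manifest into a SealedSecret that is safe to commit to Git.
kubeseal --format yaml < my-secret.yaml > my-sealedsecret.yaml

# Apply it; the in-cluster controller decrypts it back into a normal Secret.
kubectl apply -f my-sealedsecret.yaml
```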
B
But, you know, if you're on a cloud provider, use their KMS; all the IAM is going to be really nicely integrated, so use that as much as possible. If it's not an option, Vault is also fantastic; it's there, use it. Unfortunately, you have to operate it, but, you know, you can't win all the battles. Pop, do you want to add anything to that?
E
No, just, I mean, on the overall Vault thing: regardless of what cloud provider you have, they're also thinking of integrating with it. Like, I know Google, you know, does a lot of that in terms of integrations, like the out-of-the-box build of Vault they provide and stuff like that. So it's like, again, it's the Tywin Lannister thing, right: there's a tool for every task and there's a task for every tool, right.
B
All right, okay, we've got another question that came in live from the Slack, so this is from Narayan Khaled: my app dynamically creates Knative... K-native... do we want to have that argument? No, let's not. Let's...
B
...Knative services on demand, using the Python SDK in the back end. The user just provides the name and image and some other metadata, and we fill them in and create the resource. Part one: where would the resource definition files of the corresponding Knative services be stored?
F
I think that with Knative, I played with it as part of one of our old products, and again...
F
Under the hood, it is a set of pods, but these are like pre-warmed-up pods, and they kind of have a landing area. So if you send a function for, say, Python, you have a set of a bunch of pods which are ready to run something in Python. So yeah.
E
There are two things, right: there's Knative Serving, and there's Knative Eventing, right. And I just wish, like, Scott Nichols or Matt Moore were around to answer these questions, because they would just murder this, so shout-out to you all. But in terms of the definition files, I think you can code it in something called ko, I think, and it's, you know, a kind of syntax for doing that.
E
In terms of where they're stored, I'm not even going to venture to BS about that. But this one, maybe we can tag somebody from the Knative side, maybe Carlos Santana or something like that as well, to add some feedback here.
B
Yeah, we'll definitely pass this question on. I have read it again, and I'm going to take a guess at this. So the question seems to be: they're using the Python SDK to actually, I think, generate Kubernetes resources, and if that is the case, storing them in Git, I think, is going to be the best approach here. And then part two of the question is: what is the best way to migrate them to another cluster without having to regenerate or recreate the resources?
B
I'm assuming we could recommend some GitOps options, like Flux or Argo: any of these tools that will automatically monitor the Git repository for changes and apply them to the cluster that it's linked with.
B
Again, I'm not entirely sure I understand that question correctly. Pop and I will pass this on to someone on the Knative team, and we would...
B
What is the best way, or tool, to back up a Kubernetes cluster, to back up Kubernetes cluster resources in production? And they've got examples: deployments, secrets, statefulsets, config maps, etc. I want to be able to recreate the same cluster, with the same configuration, in case of disaster. There we go, we have a disaster recovery question, and I think this is one to answer. Who...
D
Big-time GitOps pattern. If you do GitOps it might be better, because you have all your manifests, and if you have your Docker containers in an upstream registry, you just spin up a new cluster, install your Flux or Argo, and it will repopulate the server.
B
Yeah — sorry, I should go back here. As you said, let's split this into the two different aspects of it. I think what Chauncey said about GitOps, and being able to automatically recreate a new environment with all the same resources, is a fantastic way to be doing stuff. I still think Velero brings a lot of value to the table, and they mention PVCs.
C
Yeah, absolutely! I didn't even know that — I hadn't really used Velero much. But I would say there's another command that a lot of people don't know about. It doesn't cover every single object in your cluster, but: the kubectl get all command.
C
You can actually run that, and it gets all of the major top-level API objects that you probably care about for most of your applications — although there's definitely a lot that it does not cover. But you could literally just do that across all namespaces, so --all-namespaces, and then -o yaml for the output, and just get that into a file. That's a really good, quick way.
C
It lets me just kind of back up what the current state of most of the applications in my cluster is. But yeah, definitely look into Velero — previously named Ark — which came from Heptio about three years ago, I think, maybe longer.
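C's quick-and-dirty backup can be scripted. A sketch — it assumes kubectl is installed and pointed at a cluster, and the function and file names are made up; the command list is the part worth noting:

```python
import subprocess

def build_backup_cmd(output="yaml", all_namespaces=True):
    """Assemble the `kubectl get all` invocation C describes."""
    cmd = ["kubectl", "get", "all"]
    if all_namespaces:
        cmd.append("--all-namespaces")
    cmd += ["-o", output]
    return cmd

def dump_cluster_state(path):
    """Run the command and save its output; needs a reachable cluster."""
    result = subprocess.run(
        build_backup_cmd(), capture_output=True, text=True, check=True
    )
    with open(path, "w") as f:
        f.write(result.stdout)
```

As C notes, `get all` misses many resource kinds (CRDs, RBAC objects, and so on), so treat this as a snapshot of the obvious application state, not a full backup.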
E
I think another one is Kubestr as well. I mean, that's more of a storage option thing, but they have things you can plug in if you want to do more of a DIY kind of methodology. And again, I'm not an expert on this at all, but I saw a pretty amazing webinar on it — definitely check that out as well.
E
I think it's actually on Cloud Native TV — we have This Week in Cloud Native, or Cloud Native Live — where they actually did talk about it and demonstrated it, and I thought it was pretty cool, the things you can do with it.
F
They're mentioning that they have an S3 endpoint accessible from the cluster, so I think Velero automatically becomes a very viable option, because typically what I have seen with customers — especially the ones who are starting out with Kubernetes — is that they don't have access to something like S3 on-premises. So if they have access to an S3 endpoint, Velero becomes a natural option. I have a lot of people using it in production quite successfully.
E
As well — so he amended his question: his main aim is to recreate the infrastructure. To me, that points more at infrastructure as code — again, a buzzword, but really, if you want to recreate it, you should have something like Terraform or Crossplane or whatever, something that's a provisioner of some sort that will be able to do that, and then have scripts that restore your data.
F
Yeah, obviously for the underlying infrastructure — say cloud infrastructure, VPCs and things like that — Terraform is an option, or SaltStack and all those others. For your Kubernetes clusters, if you're using Cluster API, it becomes very easy: you can actually have your manifests exported out, and then you can recreate the clusters from those. So the GitOps model works in that space as well.
B
I could drop in the Pulumi and SaltStack references, because they're two of my favorite tools in the world — and I'll just say Pulumi and SaltStack one more time. But okay, Emmanuel, you've got the last question we have so far, and I see you're following up in chat, so we are all yours — you can ask whatever you want. There was a follow-up there regarding Velero: "I imagine that the backups need a lot of storage, of course, depending on the size of the PV. Is that right?"
D
Because you have a couple of different patterns: you've got the restic version, which scrapes the entire file system, and that could be extremely large to push out there. So the pattern that you follow in that particular context is important. Now, if you —
D
— depend on your PVCs, you know that means if your cluster goes down, the PVC will go... never mind.
E
So yeah, let me add one more point to this. Portworx used to have this thing for cloud-native storage and stuff like that — I think they're part of Pure now, or some storage infrastructure company bought them. They had this kind of idea of PVs and storage functions and all of those types of things. So again, if you want a vendor-managed aspect, it depends on the storage that you have at the underlying layer.
B
Yeah, it depends on the layer where you want to handle the backups. Like Chauncey said, Velero is going to sync everything on the entire disk, but if you're working with a cloud-native database like CockroachDB, they also provide their own tools and controllers to do more compressed backups to cloud storage providers — they may even be able to do incrementals to try and reduce the cost. And then the CSI providers themselves also have backup and restore functionality that you could probably tap into.
G
Just another comment on that one: I think it really depends. With Velero you can configure how often you take the backups, right? So that's going to impact how much storage you require — that's a configuration you can control, depending on what your requirements are. And if I recall correctly, Velero will actually delete backups if they're older than a certain period, which you can also configure.
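G's two knobs — how often backups are taken and how long they're retained — correspond to the cron expression and TTL on Velero's Schedule resource. A sketch as a plain Python dict, with illustrative values rather than recommendations:

```python
import json

def velero_schedule(name, cron, ttl, namespaces=("*",)):
    """Build a Velero Schedule custom resource as a dict.

    `cron` controls how often backups run; `ttl` is how long Velero
    keeps each backup before garbage-collecting it.
    """
    return {
        "apiVersion": "velero.io/v1",
        "kind": "Schedule",
        "metadata": {"name": name, "namespace": "velero"},
        "spec": {
            "schedule": cron,  # e.g. nightly at 01:00
            "template": {
                "includedNamespaces": list(namespaces),
                "ttl": ttl,  # e.g. "720h0m0s" keeps backups ~30 days
            },
        },
    }

if __name__ == "__main__":
    print(json.dumps(velero_schedule("nightly", "0 1 * * *", "720h0m0s"),
                     indent=2))
```

Shortening either the schedule interval or the TTL is the direct way to trade recovery granularity against the storage footprint G mentions.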
G
So that's another thing you can use to control how much storage you're using, depending again on your requirements for data retention. The other consideration depends on what cloud you're using: for example, if everything is in S3, can you use lifecycle policies to change the tiering and pricing of your storage in the cloud to minimize the cost, if you do have requirements to retain the data for a significant amount of time? Those are just some other things to think about.
G
I don't know where the question is coming from, but I assume it's really a capacity perspective — or, if you're in the cloud, it's probably a cost perspective. So there might be features inside the cloud that let you mitigate some of those.
E
Constraints — like using Glacier for lower cost for the long term, but using S3 for the short-term ones. To me, that's more of just a cloud storage plan, in terms of that cost perspective. But I totally agree with Deborah, without a shadow of a doubt.
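The short-term-S3 / long-term-Glacier split that G and E describe is exactly an S3 lifecycle rule. A sketch in the shape boto3's `put_bucket_lifecycle_configuration` expects — bucket name, prefix, and day counts are all illustrative assumptions:

```python
def backup_lifecycle_rule(prefix="backups/", to_glacier_after=30,
                          expire_after=365):
    """S3 lifecycle rule: recent backups stay in standard storage,
    older ones transition to Glacier, and very old ones are deleted."""
    return {
        "Rules": [
            {
                "ID": "tier-and-expire-backups",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": to_glacier_after, "StorageClass": "GLACIER"}
                ],
                "Expiration": {"Days": expire_after},
            }
        ]
    }

# Applying it would look roughly like this (requires boto3 and AWS
# credentials; bucket name is hypothetical):
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-velero-backups",
#       LifecycleConfiguration=backup_lifecycle_rule())
```

One caveat worth weighing: Glacier retrieval is slow and billed separately, so it suits the long-retention compliance copies, not the backup you'd reach for first in a disaster.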
F
One thing to add, actually: Velero has CSI support now — it's in beta right now — so if your underlying CSI driver actually supports snapshots, then your backup sizes could be significantly smaller.
F
Obviously it's not ideal for a disaster recovery kind of setup — this is more for a human-error rollback kind of scenario — but (a) the backups are going to be much faster, and (b) the size is going to be really small.
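F's point rests on the Kubernetes VolumeSnapshot API, which references a volume rather than copying every file the way restic does. A sketch of the resource involved, again as a plain dict — the snapshot-class and PVC names are made up:

```python
import json

def volume_snapshot(name, pvc_name, snapshot_class):
    """Build a CSI VolumeSnapshot resource pointing at an existing PVC."""
    return {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": name},
        "spec": {
            "volumeSnapshotClassName": snapshot_class,
            "source": {"persistentVolumeClaimName": pvc_name},
        },
    }

if __name__ == "__main__":
    # Velero's CSI integration creates objects like this for you; shown
    # here only to illustrate why such backups are fast and small.
    print(json.dumps(volume_snapshot("db-snap", "db-data", "csi-snapclass"),
                     indent=2))
```

This also explains F's disaster-recovery caveat: the snapshot typically lives alongside the storage backend, so losing the backend can lose the snapshot with it.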
B
All right, well, thanks for all those answers, panel. It was an absolute pleasure tackling office hours with you today. I also want to thank all the companies supporting the community with developer volunteers: I want to thank VMware, I want to thank EPAM and Carta, and I want to thank Sysdig and Equinix Metal. Lastly, feel free to hang out in the office hours channel afterwards — we're always happy to answer questions even when we're not live on the stream. There are other channels if that one is too busy for you, and you're —