From YouTube: Kubernetes Office Hours 20200520 (West Coast Edition)
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. Office Hours is a regularly scheduled meeting where people can bring topics to discuss with the greater community. It is great for answering questions, getting feedback on how you're using Kubernetes, or just passively learning by following along.
For more info: https://github.com/kubernetes/community/blob/master/events/office-hours.md
A
All right, everybody, welcome! It's the third Wednesday of the month, and that means it's time for Kubernetes Office Hours. This is the livestream where we take as many questions as we can from the Kubernetes community and have this awesome panel of experts try to answer as many of them as possible. We do this every month on the third Wednesday, and we have two sessions: we had one this morning that was more time-friendly for the Europeans, and this one is the West Coast edition, as we like to call it. So we have two sessions a month.
A
Just a heads up: this June we're also going to have an additional session for ingress, networking, and related stuff, and we're going to have guests from SIG Networking. It's going to be great. Pierre, can we solidify a date? We have penciled in a date, but we'll just paste that into the Slack channel there. So let's go ahead and get started. Before we begin, let's start by introducing ourselves. Let's go: Marky Mario, Pierre, Samu, and then the bot doesn't have to introduce itself. Hello.
C
Hey, everybody, my name is Mario Loria. I've done many of these before, so nice to see everyone who's participated and come back. I work for StockX in downtown Detroit, Michigan. I work mostly in cloud, mainly AWS EKS, but I've dealt with GKE as well. Most of my focus is around ingress, autoscaling, deployment packaging, and definitely cluster optimization, things like that. So I look forward to your awesome questions today. Thanks.
D
I'm a senior software architect at Spectrum, mainly focusing these days on all the cloud-native things, basically Kubernetes. My focus is also currently on AWS EKS, and previously a lot of AKS, and yeah, basically all things automation and Kubernetes, a little bit of everything. That's me.
E
Hey guys, this is Samu from American Airlines. I've been working with Kubernetes for quite a bit of time now, and I've joined this awesome community as part of SIG Contributor Experience and the marketing team. I'm working on some internal projects, but mostly working on Azure AKS, IBM IKS, and on-prem Kubernetes clusters, and I'm open to any questions. So let's get on with it.
A
All right, and I'm Jorge Castro. I work at VMware as a Kubernetes community manager, and I'm the host of the shindig. So before we get started, we're going to go over some ground rules and I'll tell you how this works. This is a Kubernetes event, so the code of conduct is in effect: please be excellent to each other in the Slack channel, which, if you're watching the YouTube stream, is on the right here. This is also a judgment-free zone.
A
Everyone had to start from somewhere; there are no dumb questions, so feel free to ask whatever you need to get your thing working. And while we will do our best to answer your questions, the panel does not have access to your cluster, nor do we want it, so live debugging is off topic. We can't necessarily SSH to your thing and fix it, but what we can do is share our expertise and best practices so that, at least...
A
...your next step is clearer on what you need to do to solve your problem. Panelists, you're encouraged to expand on answers with your experiences and pro tips, especially your production ones. Part of the reason we like to have production people on here is to share the goodies outside of the home lab: my experience with my home-lab Kubernetes is totally different from those of you running Kubernetes in anger.
A
Someone asks a question, we read it, and then someone starts to answer it, or the panelists take turns answering questions. While that's happening, those of you who have expertise in the area, feel free to just start dropping URLs into the chat. Those will get shown on the YouTube stream, and then what I do at the end of the stream is collect them all together and publish them as show notes. Those URLs are awesome; I think we learn of a new tool every time we do this.
A
So it's just a fantastic way to help spread the knowledge, so that someday, when someone runs into this video and it's covering a topic they care about, they'll have reference materials available from the community, and we love that. You can also feel free to start posting your questions right now in #office-hours. The instructions are down here below: if you've never been in the Kubernetes Slack, just go to slack.k8s.io, follow the instructions, and join the #office-hours channel.
A
You can also post your questions on the Kubernetes forums at discuss.kubernetes.io, which is one of the places where I publish all the notes as well. I will post those in the Slack channel today after everything's done and we've finished curating all of the URLs and all that good stuff. You can also help us out by tweeting, spreading the word, paying it forward, helping somebody out; anything you can do to help spread the knowledge of Kubernetes is always appreciated, and this panel is made entirely of volunteers.
A
So if I ask your question live on the air, we will automatically put you in a little raffle to win a snazzy Kubernetes t-shirt, which none of us are wearing today. We give away two of those, but you have to wait until the end of the show for when we do the raffle, and you have to be present to win. So let's do that, and for that we need questions. How are we sounding there?
A
It looks like the chat's already buzzing and talking. Feel free to say hello in chat: where you're from in the world, what you're working on, where you're working on it; we're always interested in that kind of stuff. So in the meantime, let's start with our first question, which comes from Nerdy Sean, who asks: I've got a storage class that I initially set with the reclaim policy equal to Delete. Trying to edit the resource to Retain doesn't seem to work. Do I need to recreate the class, then, or am I doing something wrong?
E
I don't think it will let you, because once you have created this storage class, if there are any other things currently using it, anything attached to the storage class, any persistent volume claims or volumes using it, I don't think it will allow you to edit it. So create a new storage class and try to achieve what you're trying to achieve that way. For the existing ones, if there are other connections or other workloads trying to connect to it, I don't think it will let you.
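To put a concrete, hedged sketch next to that answer: in practice a StorageClass's `reclaimPolicy` is immutable, so the usual route is to recreate the class and then patch any already-provisioned PersistentVolumes individually. All names below are hypothetical, and these commands need a live cluster, so treat this as a sketch rather than a tested recipe.

```shell
# Hypothetical names: StorageClass "standard", PersistentVolume "pvc-1234".
# StorageClass.reclaimPolicy cannot be edited in place, so dump, delete, recreate:
kubectl get storageclass standard -o yaml > sc.yaml
kubectl delete storageclass standard
# edit sc.yaml so it reads "reclaimPolicy: Retain", then:
kubectl apply -f sc.yaml
# Existing PVs keep the policy they were provisioned with; patch them one by one:
kubectl patch pv pvc-1234 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

Patching the PV directly is what actually protects existing data, since the new StorageClass only affects volumes provisioned after it exists.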
A
Awesome. Any other opinions here? I don't know if, in fact, they're listening live on this one, so we can come back to it if anyone has further thoughts. All right, Tim Hunter asks: what approach would you take to a large jump update of the Kubernetes version? We are on bare-metal clusters that are still on 1.13.x and are ready for an update. I recently read a recommendation that clusters should be rebuilt from scratch for situations like this, transferring data with something like Velero. Any other recommendations?
B
I think, yeah, you definitely should. I would try to start from a new cluster and migrate the data over. One thing I will say is: if you do not choose to go that route, and you love pain, make sure you don't update going from 1.13 straight to, say, 1.18. You need to go one minor version at a time.
C
I guess this comes from the provider space, but some tidbits: yeah, if you're still upgrading the control plane, go one at a time, even in cloud land. The nice thing, though, is that your nodes and your control plane can be off by two minor versions. So you can have a control plane that's on 1.16 and nodes that are running 1.14; consider that. So, what we did...
C
What I just recently did: I took us to 1.15 on the control plane from 1.13, with the nodes around 1.12. I took the control plane as high as I could go, and then incrementally, when I was ready, made a plan to get the node groups taken care of, which makes it a lot easier. The node groups are the bulk of the work, I would say, especially if you have data and other things like that. So definitely move the needle...
C
...as far as you can from a control-plane perspective. Obviously consider things like, with 1.16, the deprecated APIs and things like that. Always read the deprecations from the release notes and the changelog; you should be laser-eyed on those if you're an operator. But yeah, definitely the control plane first, then the node groups.
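For a bare-metal cluster like Tim's, and assuming it was built with kubeadm (the question doesn't say), the one-minor-version-at-a-time loop looks roughly like this; the version numbers and package manager are illustrative.

```shell
# Illustrative in-place upgrade pass, one minor version per pass (1.13 -> 1.14 shown).
# On the control-plane node:
kubeadm upgrade plan              # shows which versions you can move to
kubeadm upgrade apply v1.14.10
# Then, for each worker node: drain, upgrade the kubelet, uncordon.
kubectl drain <node> --ignore-daemonsets
apt-get install -y kubelet=1.14.10-00 kubeadm=1.14.10-00   # assumes Debian/Ubuntu packages
systemctl restart kubelet
kubectl uncordon <node>
```

Repeat the whole pass for each minor version (1.14 to 1.15, and so on), checking the deprecation notes before each jump, which is exactly why the panel suggests a fresh cluster plus Velero instead when the gap is large.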
A
Let me get that changelog. Mario, while I move on to the next question, can you grab the changelog URL for the website and toss that in? That would be great. Yeah, I would say, if you're upgrading in place, let's say you're on 1.13 and you go to 1.14: every time you make a jump there are new deprecated APIs, and it feels like you've got to rip the band-aid off, right, because you're going to have to test all your stuff anyway since it's such a major difference.
A
It feels like, if you're testing out each step of the upgrade, you're just making the pain happen four times in a row, whereas if you recreate the new cluster, see if your stuff runs, and then fix that, you go through the pain once, as opposed to incrementally doing a lot of work for a version that you're not going to stay on. So yeah, any other upgrade thoughts? Random question: do any of you even do in-place upgrades?
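Checking for those deprecated APIs before each jump can be done with plain kubectl; a hedged sketch, using the pre-1.16 `extensions/v1beta1` removals as the example:

```shell
# Before each jump, check whether workloads still live on API versions that the
# next release removes (the pre-1.16 extensions/v1beta1 removals shown here):
kubectl get deployments.v1beta1.extensions --all-namespaces
kubectl get daemonsets.v1beta1.extensions --all-namespaces
kubectl get replicasets.v1beta1.extensions --all-namespaces
# Anything listed needs its manifests migrated (to apps/v1) before upgrading.
```

The `resource.version.group` form asks the API server for a specific version, so an empty result is a reasonable signal that nothing is still being served on the deprecated one.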
C
I'd say yeah, and as I said before, GKE at least makes it quite easy. GKE actually has it automated, and they just introduced surge upgrades as well. Just think about your upgrades being completely automated. I mean, you still need to worry about some of your workloads and APIs and other deprecations of tools and things like that, but for the most part, in GKE it's automated; in EKS it's a little bit more work.
D
Yeah, for me, for the last few updates I did, I tried to recreate the clusters from scratch every time and verify that everything was working beforehand. I have done in-place updates and upgrades before, and it works, but it's just too much of a hassle for me, and recreating also gives you practice at rebuilding your cluster on a regular basis.
C
Minor point releases? Yeah, we actually generally don't worry about those too much. What ends up happening is we're locked into the AMI, which is locked into a certain patch version, generally. So we don't really touch patch versions, because those are considered updates as well.
A
Okay, good to know. All right, anything else about upgrades? Tim, I hope that helps you out. Next we have a question from Meiosis, I hope I pronounced that right, sorry: anyone running Kubernetes at home? What kind of storage do you recommend using: NFS, iSCSI, hostPath? What's the most suitable for this kind of workload? I am trying my hardest not to run Kubernetes at home, but I'm failing hard at that.
A
Yeah, so just a quick follow-up from Tim, who says: thanks, guys, I'll definitely read and communicate the changelog, so at the very least I'll be able to tell the developers "I told you so," A-plus. Moving and restoring the PVCs and PVs, especially those with production data, seems like the biggest stress, so I wanted to go back to this question specifically to ask: how do you all deal with that?
E
Are there any things you can recommend? I think, again, for managing the storage, moving PVs and PVCs, it depends on what you are using for the persistent volumes. If it's something like Ceph, that is a volume solution which by default gives you replication, and in case you lose something, there's high availability and everything. So if it's one of those kinds of tools, like Ceph, any third-party tool like that you're using will really help you out.
E
You need some third-party solution which gives you all of that out of the box, because that is the main thing: PVs and PVCs are just a pipe that talks to your data, right? You're not storing anything in them. So where the actual data is stored is what's important.
D
I mean, this is exactly the reason why I don't run databases and that kind of stuff in Kubernetes; I just try to avoid it altogether. I know you can, and a lot of people do, and we have this debate frequently about running databases in Kubernetes, but for me, I just don't bother. I keep my process as simple as possible.
A
Yeah, and Q mentions: it's not an option for us, we have hundreds of databases in Kubernetes. So that's a thing people need to manage. All right, let me see if anybody else has recommendations here in chat. That site is the release-notes site; you definitely want to subscribe to it. One of the nice things about it, for those of you who've never used it, is you can narrow down per SIG, per release.
A
All right, anything else on storage? Tim, I hope that helps you out; if you have any follow-up questions, keep on typing. The next question comes from Amir, thanks for joining us, about the ClusterIP service type in Minikube: I have a microservice and I have a Helm chart to deploy it. There's a service defined for it that exposes a port for UDP packets. This works well in a proper Kubernetes cluster; I can see the service using kubectl.
A
I can also see that, under the hood, IPVS is properly configured on the worker node, and I can ping the service IP from within the pod. Unfortunately, this doesn't work when I do a helm install in my local testing environment that uses Minikube. I can list the service using kubectl get services, but beyond that nothing works: I can't ping the service IP from within the pod, and I realized that I don't have IPVS installed. So how does Minikube implement services under the hood?
B
I'd say port forwarding, and I put a sort of pseudo-command in the chat that the user can try out. The person is also referencing that they thought the Helm chart should take care of that; I did ask for a link to that Helm chart so I could see what the values look like.
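To sketch the usual workaround in hedged form (the service, deployment, and port names below are placeholders, not from the question): a ClusterIP address is only routable from inside the cluster's network, so from the Minikube host you generally forward a port instead of hitting the service IP directly.

```shell
# ClusterIP services are only reachable from inside the cluster network.
# For TCP, forward a local port from the host (names are placeholders):
kubectl port-forward svc/my-service 8080:8080
# kubectl port-forward carries TCP only, so for the UDP port expose a
# NodePort and send packets to the node address directly:
kubectl expose deployment my-app --name=my-udp --type=NodePort \
  --port=5000 --protocol=UDP
minikube ip   # node address to aim the UDP client at
```

The TCP-only limitation of port-forward matters here because the question is specifically about a UDP port.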
A
Anything else here? I think that's the best we can do, since we can't see the Helm chart, but it seems like a good place to start. All right, moving on. Nathan, welcome to the show. He asks: if you're currently using Helm but starting a move to Kustomize, is there anything that you feel Helm is better at than Kustomize? For example, perhaps Elasticsearch via Helm and Kustomize for your custom services. Any opinions here?
C
So I have not used Kustomize; I've heard a lot about it. I'm going to go out on a limb and say that if I see something like Elasticsearch, I'm fairly confident there's already a Helm chart for it.
Whether.
A
C
Officially
from
elastic
or
in
their
their
main,
like
helm,
stables,
Hertz
repository,
which
is
super
helpful
if
that
exists,
that
gets
me
further
than
trying
to
figure
out
doing
that
same
thing
for
customized
right,
depending
on
is
what
yeah
memos
are
out
there
and
what
blog
post
I
might
be
reading
for
the
most
part,
those
table
charts
are
fairly
up
today.
Just
depends
on
the
people
that
are
doing
you
know,
running
the
service
or
the
the
vendors
that
created
the
service,
etcetera,
but
I
almost
always
am
using
those
as
a
starting
point.
C
So this really depends on the upstream project that you're trying to leverage and the customizations you want to make to it. For the most part, the Helm chart is either flexible enough, or you can fork it and make it flexible fairly quickly, so I'd say Helm makes the most sense there. For your own applications, Kustomize might make a lot of sense, so you'd want to weigh it on that side. Hopefully some of those with more Kustomize experience can weigh in as well.
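One hybrid worth noting, sketched here with hypothetical chart, release, and directory names: render the upstream Helm chart to plain manifests, then layer your own changes on top with a Kustomize overlay, which gives you the upstream chart's maintenance plus Kustomize-style patches for your customizations.

```shell
# Render the upstream chart to plain manifests (chart/release names are examples):
helm template my-es elastic/elasticsearch --values values.yaml \
  > base/elasticsearch.yaml
# base/kustomization.yaml lists elasticsearch.yaml as a resource;
# overlays/production patches it. Then apply the overlay:
kubectl apply -k overlays/production
```

This pattern avoids forking the chart while still letting you make changes the chart's values don't expose.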
A
All right, any other tips here? Keep the questions coming, everyone; we're almost caught up to the queue, so keep on asking. Q says: multiple Helm charts exist for Elasticsearch, although they are challenging for a variety of reasons. But yeah, you're saying what I was thinking: if a stable chart exists, try to work with it, and use Kustomize for custom services. Appreciate you, Mario. I have a follow-up question for you, Mario, since you are a heavy consumer of Helm: you had mentioned that you deploy Elasticsearch with Helm, but I've also seen people use operators for Elasticsearch.
C
I personally prefer an operator. However, the operator needs to be well developed and well maintained for me to adopt it, and actually, I don't even think we're running any operators right now; there are a couple that I just haven't had a chance to really deep-dive, test, and get implemented. But I think operators are a great path moving forward, and they really take a lot of stress off my head, because once you helm install, you're kind of hands-off; you don't know what's going on.
C
Lifecycle upgrading is like a whole new thing eight months later: you're like, I've got to get the chart version updated; what do things look like now? Do I still want these configuration options, or is there more to deal with? An operator makes all of that a lot more streamlined, especially long-term, throughout the lifecycle of whatever you're trying to run. So I would say, for Elasticsearch, if there's an operator, start looking at it: has it been committed to recently?
C
Take a look. A lot of services you're seeing, like Banzai Cloud, for instance, have an HPA operator, and you can say, I want a default HPA for all these services, enable that using annotations, and then the operator takes care of it for you. I love that model; I think it's getting a lot better moving forward, and we're starting to see a lot more applications.
C
You can just scroll through OperatorHub and see that there's a lot of cool stuff. One of them, related to horizontal pod autoscaling, is the KEDA project that just got into the CNCF, which lets you scale on something like an AWS SQS queue size, which is super cool. Having an operator takes care of handling the Deployments, DaemonSets, and other objects for you, so you don't have to worry about the Helm chart.
A
All right, the next question comes from Juan, who says: hey there, how can I schedule different pods on different nodes by metrics? For example, I don't want more than three pods on the same node, and I don't want to use a DaemonSet. Please help me; I have had a lot of nights without sleep.
C
I actually believe I answered that one in the thread: it's just the good old affinity and anti-affinity stuff. There are some fairly standard models for doing that, for things like "don't put two of the same pods on one node, spread them out as much as possible," and then doing the same thing from a zone perspective for availability zones in whatever cloud you're using. So I linked what we use.
C
We apply it to all our Deployments, including a couple that are in Helm charts. And even for pod spreading, I think pod topology spread constraints, in the new versions of Kubernetes, are beta or almost GA, which builds this into Kubernetes by default a little bit. So I would look into that as well; I'll link it here in the channel.
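As a sketch of what that looks like in a pod spec (the label, image, and replica count are hypothetical, and topology spread constraints need a cluster recent enough to have the feature enabled), this spreads pods evenly across nodes without a DaemonSet:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Keep the per-node pod count within 1 of the least-loaded node,
      # rather than pinning one pod per node with a DaemonSet.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: my-app
          image: example.com/my-app:latest
```

On older clusters, a `podAntiAffinity` rule with `topologyKey: kubernetes.io/hostname` is the classic alternative, though it only expresses "not together," not a numeric skew.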
E
It again depends, right: do you want co-located databases in the cloud, or are you going back to your hybrid architecture, where you come back to your on-prem and the databases are the existing ones your applications are already using? There's SQL Server or other databases already there in the enterprise, and you're making connections back to your on-prem data center from the cloud; in that case...
E
In that case, definitely, your database is outside and you just have the application workload running in AKS or IKS or Azure or wherever. In those scenarios, your application is running somewhere in your cloud, your database is running somewhere else, and you're connecting back to it; your data is not in Kubernetes. Your data is in a different layer, not in Kubernetes, so you have more things you need to take care of.
A
And I'm going to link this for those of you interested in databases in Kubernetes, because we do get a lot of questions about it: Josh Berkus's three-part series on running Postgres in Kubernetes. That's from 2018, and obviously Kubernetes has advanced since then, but he gives lots of tips there.
E
For myself and across the panel, let me pose this question: so you're running the databases inside the cluster, right, for example the Postgres database itself. Do you want to run a single instance of the database co-located with your application inside the deployment, or do you want to run an enterprise kind of installation of, say, Postgres on the Kubernetes cluster in a different namespace, and use it from different applications? Again, it depends.
B
That depends. From my perspective, it depends on what your cluster architecture is going to look like. I'm more of a fan of a tiered architecture, where you separate it by namespaces, and that helps with debugging and just general management. Yeah, I prefer a tiered architecture, because...
E
I keep getting these questions too: is it better to run my database with my application, with a single instance of, say, Postgres right there, or do you want a clustered database in a different namespace, and just use it as an endpoint to connect to internally, Kubernetes to Kubernetes, as a separate thing? I guess.
D
It also comes down to roles and responsibilities. If you have teams that are able to run their own databases, say their own database hosted deployment, why not do it that way? In general, for me, I would like to have a standardized way of deploying my database, defined for everyone that is consuming database workloads. So I would also go with a tiered architecture, where you have a base infrastructure layer (Kubernetes, databases, and so on), and then a second layer which is basically...
B
People choose to do it differently. I see somebody in the Slack channel saying everything should be in one namespace, and some architectures make sense doing that. I have found that it becomes problematic when you have everything in one namespace; a tiered architecture has just been easier to manage, not only from an administrative standpoint but from a cluster-wide standpoint.
E
Yeah, and again, it depends on the situation; we've had lots of situations in the enterprise. How do you want to run it? Like you said, tiered. It completely depends on the enterprise: if you're trying to move an enterprise database into Kubernetes and run it as a cluster that's shared across multiple applications, then yeah.
A
It's sort of an impossible challenge to actually set anything so that it doesn't kill your stuff when it shouldn't. I feel like autoscaling is going to make more sense; it's not about the tools, and more about the organizational challenge of having a huge number of people all working in the same environments. So, a question for all of you: we're a large team, and everyone has different requests for what they want running on the cluster, right, Marky?
A
You're going to ask me how much RAM my app needs, and I'm going to lie to you and just say, give me the most you can give me, right? And then you're going to have certain people saying, well, my thing needs to run spread out across pods, and the thing you mentioned with the Cassandra cluster needs to be run this way. How did it...
A
It's more of a human question, but how do teams normalize all this stuff, right? You've been given all these requirements, and then you have to make them fit inside a cluster. Or do certain workloads go on a certain cluster because they have special needs compared to other ones? Do you have more general-purpose clusters?
B
In large organizations, like, I've worked in super large organizations where application profiling is part of the flow, and if you don't have the application profile set, as well as the data to back up why you have that profile, meaning, oh, we've tested it here in the dev environment, here are the requirements, here's where you can verify that... In a small shop, setting those types of things may not be possible.
B
I have feelings about that, because I think application profiling should be done from a developer perspective. You should know how your application is going to perform in a given environment, and maybe it's not the same as prod, but you should be able to mimic something as close to prod as possible, to say, oh, I've tested it, it needs XYZ.
E
Because I maintain, I mean, we maintain an enterprise cluster; we call it the enterprise container service. In it, we create namespaces for each app team, and they're limited there: first, you get your compute on the namespace itself, since you can set compute limits while creating the namespace. If not that, then going down to the pod level, you can also set your memory and CPU limits there, right?
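The per-namespace compute limits described there are typically expressed as a ResourceQuota; all names and numbers below are made up for illustration.

```yaml
# Cap a team namespace's total compute (hypothetical team and values).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-compute
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
```

With a quota in place, pods in that namespace must declare requests/limits for the quota'd resources, which is part of how it forces the profiling conversation the panel is describing.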
A
Someone says we have profiling, but the reality is that this changes over time; it's always a moving target, and we don't have an auto-updating feature. I could have sworn, Marky, tell me if I'm wrong, that there was a feature somewhere for being able to adjust limits based on changing metrics, I believe.
B
I'll also add a second on application profiling and drift; I think that's what Q is getting at, that the profile drifts over time. That happens, and that's really where, when you have a profile, there should be regular syncs where you look back at it: does it still meet the same criteria? I understand that's not for everybody, and people sometimes don't have time to do that.
B
You can paint yourself into a corner, and I've been part of this: you start having an application that has no profile. Let's say you're in AWS or GKE, and you're starting to spin up new resources to support this ever-growing application. At some point the CFO is going to come to you and say, why the hell is the bill so high? Then there will start to be an artificial lockdown, and you'll have to profile that application better over time.
C
So if they need something custom, they can get it, and they can get it easily without having to bother your operations team, etc. I think the big thing for us is our service teams have no clue why resources are important at all; they just want to deploy their code, and we provide for them. Goldilocks is going to take a huge burden off of us, who have one-by-one gone through services and tried to apply these, because we can give them a recommendation of what it should be, and actually the VPA...
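For context, Goldilocks surfaces recommendations from the Vertical Pod Autoscaler, and a VPA in recommendation-only mode looks roughly like this (the target name is hypothetical, and the VPA CRDs and controller must already be installed in the cluster):

```yaml
# Recommendation-only VPA: observe usage, suggest requests, never evict.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-service-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  updatePolicy:
    updateMode: "Off"   # recommend only; don't resize running pods
```

With `updateMode: "Off"`, the recommendations show up in the VPA object's status (and in Goldilocks's dashboard) without the controller ever touching the workload, which is what makes it safe to roll out broadly.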
C
So we're kind of down in the resource weeds, but I think the core thing to learn here is that providing the utility for other people to make the decisions, in a safe sort of way, and to understand the impact those decisions have on their services and what they're trying to do, is a huge part of this. If you look at the Borg papers from Google, from the early days of doing this stuff, it's all about giving the users, the developers, flexibility on their workloads.
D
I actually also have a question: are people actually using resource limits based on namespaces? I've never used them; we have very rigid rules for pods and deployments, but we don't use them for namespaces. What I came up with instead is artificial alerts based on namespaces: if the namespace is consuming a certain amount of memory or CPU, then I get an alert saying, hey, we have a high concern.
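For anyone weighing the namespace-level option Pierre is asking about: besides quotas, a LimitRange gives every container in a namespace default requests and limits whenever a deployment doesn't set its own; the namespace and values below are invented.

```yaml
# Per-namespace container defaults via LimitRange (hypothetical values).
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: team-a
spec:
  limits:
    - type: Container
      defaultRequest:      # applied when a container sets no request
        cpu: 100m
        memory: 128Mi
      default:             # applied when a container sets no limit
        cpu: 500m
        memory: 256Mi
```

This pairs naturally with the rigid per-deployment rules he mentions: the LimitRange is the safety net for workloads that slip through without any settings.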
B
I've done it both ways, depending on when a team is requesting to put an application in the cluster. I've been in shops where there's been the idea of tenancy, whether that's shared tenancy or dedicated tenancy, and that determines how it gets deployed: if it's shared tenancy, then there will be a namespace arrangement. It's all about your policies and how you're looking to do things. So if somebody says, I want to deploy this Cassandra cluster and I want to do it in a shared namespace...
A
Try not to move your dog, that's perfect, keep it there. All right, anything else on resources and limits and stuff? I'll give the chat a look; let's see if anybody else has anything. I see Samu dropped a link there from the OpenShift folks on VPA. Kelly, we'll get to your question in a second. Let's see what else we've got. Anything more to say about Goldilocks, Pierre? You've been using it; thumbs up so far? How are you finding it?
D
I've tested it; I just have not used it day-to-day in production yet. I feel good about the experience, and I know it works well enough that I'm coming around to adopting it. But I also had an experience blow up in my face yesterday, with Quay being down, so I'm scared right now, or scarred at least. Basically, I try to have super short-lived nodes, and we have them rescheduled, like, every hour.
A
For the panel and the audience, you can just write in Slack or whatever; I'm just curious how the past few days have been for people as far as decoupling their deployments from a public service. Are you going to proxy stuff now? Or are you just going to say, you know, it's a SaaS, but they're running it way better than I would myself anyway? I'm sure those of you running very large clusters always have your on-prem things, but I'm just curious.
C
That's interesting. So I see it both ways. If you're running Cluster Autoscaler, it's preferred that you have a node group per zone rather than one node group that fans out. We actually do run one node group that spans zones, and we disable the AZ-rebalance function that comes with ASGs, and Cluster Autoscaler works just fine in that manner. Because of the churn, each zone might not have exactly...
C
You know, 33/33/33 percent, but it's close enough for our needs, and the added burden of doing a node group per zone really doesn't make sense for us. Now, I'm not considering things like your storage requirements per zone, right: you need to look at EFS or EBS, and EBS volumes are specific to a zone, so your data might just be sitting in zone A and not zone C, and you might have requirements around that. But I think those two models are the ones I see most.
C
I
seems
that
are
most
common,
but
I
think
having
three
no
two
groups:
three
individual
node
groups.
Each
one
list
in
three
zones
is
doesn't
really
seem
like.
It
makes
a
lot
of
sense.
I
think
the
core
thing
here
is
like
having
nodes
in
the
cluster
that
are
in
each
zone
like
that.
That
is
the
core
thing
of
like
I
want
to
be
safe
and
case,
one
zone
dies.
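If you do go the node-group-per-zone route instead, the cluster autoscaler can keep the zonal groups evenly sized with its balancing flag. A sketch of the relevant container arguments (the image version and cluster name are illustrative):

```yaml
# Fragment of the cluster-autoscaler Deployment's container spec.
# --balance-similar-node-groups makes scale-ups spread across node
# groups with identical instance types and labels, i.e. the three
# single-AZ groups discussed above.
containers:
  - name: cluster-autoscaler
    image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.18.1
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --balance-similar-node-groups
      - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/demo-cluster
```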
C
The other thing to consider here, that most people don't realize, is that by default you only have one NAT gateway in one zone for egress. So that's pods going out, or the runtime going out to get a Docker image, or your instances that are calling out to the internet or to a third party. If zone A goes down and your NAT gateway is in A, you're not going to be able to get out to the internet, so that's definitely something you need to consider.
D
For me it was actually a nice experience. You see it blowing up, and basically, oh yeah, we have a weakness in our cluster operations. Since it only hit a dev environment, for me, actually, I am NOT super worried. I just have to adjust, and maybe as a first step mirror everything into a local container registry every Tuesday or something. But let's...
A
Let's see. Oh, and Q matches says he's not running a proxy; they have this thing called Kapil, which is a regionally federated multi-tenant container image registry, and he's dropped that link, sorry, they dropped that link in the Slack. That is good to know; we'll make sure that makes it into the show notes. Thanks. How about you, Mario? You were next.
C
I was just gonna say, wait, yeah, Quay going down. It stopped our deployments for basically the entire day yesterday, because we have no caching. I'd never really heard of this Nexus stuff before. So we've been thinking a lot; we're actually thinking about getting off of Quay. The original reason we went to it was for the security scanning, and we don't really need that anymore. That's almost kind of a baseline now across most container services.
C
However, you know, we do wonder whether to cache at global scale or locally, and what that might look like. Because we can't really have that sort of outage when there's always constant churn: we're always deploying, there are always things going down and inflating back up. You know, we need to be able to pull images. So yeah, thanks for the links, everyone.
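One common way to decouple pulls from a hosted registry is a pull-through cache running in or near the cluster. A hedged sketch using the open-source Distribution registry (`registry:2` image) configuration format, here mirroring Docker Hub; the upstream URL would change per registry, and not every hosted registry supports being proxied this way:

```yaml
# config.yml for the CNCF Distribution registry acting as a
# pull-through cache: the first pull fetches from upstream, and
# later pulls are served locally even if upstream is down.
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
proxy:
  remoteurl: https://registry-1.docker.io
http:
  addr: :5000
```

Nodes would then be pointed at the cache via the container runtime's registry-mirror setting.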
A
Awesome. Anything else before we move on to Jim's question? I feel this is one of those learning moments where, before you were knee-deep into this stuff, it seemed like an obvious thing that you would always make sure that you can deploy and not depend on third-party services, but in the real world you find yourself in this situation where it's like, wow...
A
And then you fix all that, and then you realize you didn't cache your operating system updates, so something else breaks. Right? I just find it very interesting that, to this day, here we are, and things like this really actually affect us in real life. In your brain you're like, I thought...
A
...this was like DevOps rule number one, right? Make sure all your dependencies can deploy, blah blah blah. The gap between what you read in the book and what you're experiencing, I always think, is very interesting. So thanks for your insights there. And Jim says, yeah, lesson learned: mirror it to your on-prem.
C
I guess we just got Vault. Another team member of mine just got Vault set up as a service, and we're working on getting the secrets in. Our main thing is getting the center mass of our secrets into services securely. Right now, instead, we use the client-side tool sops from Mozilla, which does encryption with a KMS key, and then those get committed to a repo, which is gross. Well, it's not gross; it's a lot more overhead than we'd like to deal with, and getting developers to use it is tedious.
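For reference, the sops workflow Mario describes is driven by a small `.sops.yaml` policy file committed next to the secrets. A sketch, with the KMS key ARN as a placeholder:

```yaml
# .sops.yaml: files matching the regex are encrypted with the given
# KMS key when running `sops --encrypt`; values are encrypted while
# the YAML keys stay readable for code review.
creation_rules:
  - path_regex: secrets/.*\.yaml$
    kms: arn:aws:kms:us-east-1:111122223333:key/00000000-0000-0000-0000-000000000000  # placeholder ARN
```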
C
So there's an API and a UI, as far as I can tell, that developers are going to work with, and they can tap into that to get secrets in. We're still working on the best ways for that, the best patterns, etc. So definitely look for more on that, or feel free to ask me in the future, but I'm not 100% in that project, so I'd have to touch base and see what's the latest there.
E
I mean, we use HashiCorp Vault, and if it's on cloud we use it as well, for sure. It's like a fairly default Vault setup which we go and use, but we manually key in the secrets. Once, like, an AD account is generated, we just get the password and manually key it in there, and then we use it. But yeah, we don't have any API integration as such.
A
We appreciate everyone joining today. When we're done today, panel, stick around, I've got stuff for you. Kelly asked: so does Vault implement the secrets interface for Kubernetes? This will be our last question of the day. Out of the box, how does this integrate?
C
I can't really answer that myself. We're not using Kubernetes Secrets right now; we're going straight to Vault. There's probably some provider that can do that, or some operator or something, but I don't think we've been down that path, so...
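For what it's worth, one integration path is the Vault Agent sidecar injector (the vault-k8s project), driven purely by pod annotations rather than Kubernetes Secrets. A hedged sketch, where the role name and secret path are hypothetical:

```yaml
# Pod template annotations for the Vault Agent injector: a sidecar
# authenticates using the pod's service account and renders the
# secret to a file under /vault/secrets/, bypassing Kubernetes
# Secrets entirely.
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "myapp"                                          # hypothetical Vault role
    vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/myapp/db"   # hypothetical KV path
```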
A
Yeah, well, we'll definitely look into that. Now, as always, we encourage everyone to stick around in the channel. If you have questions throughout the month, whack them in there and we'll get to them. So with that, it's time for the raffle; we are wrapping up. I'd like to thank everyone for listening in, great session. We had over 150 of you listening on the live stream, and we always appreciate your support. Our t-shirt winners are Nathan and Kelly. I will PM both of you.
A
After this, I'll give you a little code. You can go to store.cncf.io, get your Kubernetes shirt, and then apply that code; it'll set the price to zero. We give away two shirts every session, so please come back to win a shirt, or you could just buy one on there. The CNCF store has many, many dope items, including this, my favorite hat, so definitely check that out. And as always, thanks to the CNCF for sponsoring these t-shirt giveaways.
D
Just want to say, yeah, very great time as always. Like I said, we are planning the ingress office hours in the first week of June. So if you have any questions that are particularly related to ingress, we'll have SIG Networking people joining that session, and you can fire away your questions, and you'll be able to ask very specific ones.
A
As always, this show is on the third Wednesday of every month with our two sessions, one for the EU and one for the West Coast; this is the last one of the day, the West Coast one. So with that, thanks everybody, we hope this was valuable for you. Please share, like, and subscribe. I was waiting for that one. And let us know how we're doing; we always appreciate feedback from the community and hope this is useful for you. With that, we'll see everyone in a month. Thanks. Thank you, guys.