From YouTube: Kubernetes Office Hours (West Coast) 20190220
Description
Third Wednesday of every month we do an hourly livestream where we try to answer as many user questions as we can!
Post your question to this thread for us to check it out: https://discuss.kubernetes.io/t/kubernetes-office-hours-for-20-february/4711
More info here: https://github.com/kubernetes/community/blob/master/events/office-hours.md
A: All right, welcome everyone to today's Kubernetes Office Hours, West Coast Edition, where we answer your user questions live on the air with our esteemed panel of experts. You can find us in #office-hours on Slack — check the topic for the URL for the event information — and we also have a Discuss thread that we will link to shortly before we begin. Let's start by introducing ourselves. I'm your host, Jeff Sica, and then let's talk to our panel: first up, Bob, here on the right on my screen. That works.
[B, C, D, and E introduce themselves.]
A: Awesome, awesome. So before we start, here are some ground rules. This is a judgment-free zone — everyone had to start from somewhere, so please help out your buddy by keeping a supportive environment in the channel and on stream. While we do our best to answer your questions, we the panel don't have access to your cluster, so live debugging is kind of off topic, but we'll do our best to get you moving down the next step in the right direction.
A: Panelists, you're encouraged to expand on your answers with your experiences and pro tips — you are on the panel. Audience, you can help us by pasting in URLs to official docs and blogs, or anything that might be relevant to the topic at hand. Our Slack invite page is down, so if you're unable to use Slack, please post your questions to the Discuss site. I will tweet out the link as well, just to make sure there's additional exposure. You can help us out by tweeting, spreading the word, and paying it forward.
A: Each session is recorded and available on YouTube, if you're using this as a work resource. Please let us know how we're doing so we can try and make it better. We're always looking for marketing help, so if you're awesome at social media, please help us, because a bunch of us aren't. And we'll be holding a raffle for the audience at the end — we like to give away a t-shirt every session, and we're looking to do more, so it pays to come back and stick around.
A: All right, so we want to prioritize all the questions that are in #office-hours. However, we realize that a lot of the time questions don't just happen during office hours, so we tend to scour #kubernetes-novice and #kubernetes-users for questions in the meantime. So first off, we have a question from Mac in #kubernetes-novice: "Hi, what happens if I do a rolling update and one of the users of the app is on the old pod and hits submit as the new pod comes up? How is that submit handled correctly during the rolling update?"
A: All right, there's no real way to control that, so it depends on your deployment; but generally, if your pod goes down and then the new one comes up, the submission is going to hit the new pod. So if there are things like database migrations that happen as part of the rolling update, you have to make sure that those all complete before the new pod comes up and the old pod goes down. Oftentimes you actually want to have it set up so the new pod comes up before the old one goes down.
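To make that last point concrete, here is a minimal sketch of a Deployment configured to surge a new pod up before taking an old one down; the name and image are placeholders:

```yaml
# With maxSurge: 1 and maxUnavailable: 0, Kubernetes starts a new pod (and
# waits for it to pass its readiness probe) before terminating an old one.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp                 # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # allow one extra pod during the rollout
      maxUnavailable: 0        # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: example.com/webapp:1.2.3   # placeholder image
        readinessProbe:                   # gates traffic to the new pod
          httpGet:
            path: /healthz
            port: 8080
```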
[A reads an audience question about controlling which PersistentVolume a claim binds to; part of the exchange was not captured.]
B: I don't believe so. Usually you will define a set of volumes in your PV or your storage class — not your PVC; in your PV or your storage class — and then your PVC is just going to be the thing that consumes one of those. If you want all of them to sort of have a specific one, I know it's possible to create multiple storage classes backed by the same storage, at least with something like storage plans or something like that, with those sort of different settings in place.
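A sketch of that split — the StorageClass (or pre-created PVs) defines what storage exists, and the PVC just consumes from it. The provisioner and parameters below are illustrative:

```yaml
# Two StorageClasses backed by the same provisioner, differing only in their
# settings.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
# The PVC only asks for storage from one class; it does not define the volumes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-claim
spec:
  storageClassName: fast
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```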
[A question about long-term Prometheus storage follows; part of the exchange was not captured.]
B: That way, your actual Prometheus instances themselves don't have to have much in the way of disk space available, and everything's stored in, you know, object storage. And however you want to have that move — you know, if you want to move it off to something like Glacier — you can move it through different storage tiers and things like that. The one thing I will say is that you do have to expose the Store API externally on all the instances for them to be able to communicate with each other.
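This sounds like the Thanos sidecar model, so, purely as an illustration of "expose the Store API on all the instances," here is a hypothetical headless Service exposing the gRPC Store API port (10901 is the Thanos default; the names and labels are assumptions):

```yaml
# Headless Service giving each Prometheus+sidecar pod a DNS record so other
# components (e.g. a querier) can reach its Store API over gRPC.
apiVersion: v1
kind: Service
metadata:
  name: thanos-store            # placeholder name
  namespace: monitoring
spec:
  clusterIP: None               # headless: one DNS record per pod
  selector:
    app: prometheus             # assumes pods are labeled this way
  ports:
  - name: grpc
    port: 10901                 # Thanos' default Store API port
    targetPort: 10901
```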
A: While you are grabbing a link, I just want to yet again tell everyone that happens to be listening: we are open to answering any questions. You can hop into the #office-hours Slack if you have already joined the Kubernetes Slack, or you can hop on to discuss.kubernetes.io, where there is an office hours thread — we would be happy to answer your questions on it. Bob did pull up that link — awesome.
A: It's fine. We can table that one, try and look it up later, and give them an answer in the #office-hours Slack. David just asked: "I've got another question about ingress. If I create a wildcard DNS entry and point it at the API server, does ingress traffic flow into the cluster via HTTP/HTTPS on the API server, or do we need to open additional ports on the nodes themselves? Are there any docs that happen to make this clear? Thanks."
F: It goes to the nodes running the ingress controller. So basically you can use things — it depends — like MetalLB, or just use the node IP with a NodePort, or the ingress controller can sit on the host network so it uses port 80 or 443 directly. But yeah, it's an issue in itself how to route traffic to the ingress, especially if you're in a corporate environment or—
B
In
general,
I
would
I
wouldn't
avoid
sort
of
sharing
the
master
services
or
that
you
know
the
control
plane
services
with
things
that
you're
hosting
in
there.
Unless
you
absolutely
you
know,
have
to
general,
you
want
your
ingress
or
other
services
that
you're
spending
up.
It's
actually
consuming
it
to
consume
like
a
different,
IP
or
different
means
of
entering
the
system
entering
the
system.
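As a sketch of "give the ingress its own entry point instead of the API server": a hypothetical NodePort Service for an NGINX ingress controller (on bare metal, MetalLB with type: LoadBalancer is the common alternative). Names, labels, and ports are placeholders:

```yaml
# The wildcard DNS record would point at the node IPs (NodePort) or at the
# MetalLB-assigned address (type: LoadBalancer) -- never at the API server.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort              # or LoadBalancer when MetalLB / a cloud LB exists
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080           # traffic enters the nodes here
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443
```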
A: One for me as well — seems like we answered David's question. If there are any other questions, please toss them in #office-hours or in that Discuss thread. Otherwise, we will be moving on to another question, from #kubernetes-users. Jan asks: "Hi, I upgraded from 1.11 to 1.12 as per the documentation, but now HPA seems broken. kube-controller-manager logs say: Warning, reason FailedGetResourceMetric — unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)."
A: Didn't that look like a Heapster warning? Heapster was deprecated, but it wasn't removed — Heapster wasn't fully removed until 1.13. That's the point at which Heapster is not available and you have to install it manually; but until then, Heapster was the default, and I think metrics-server was recommended, or optional.
A: That I don't know, and I also don't know if that's part of the release notes — if the release notes actually state that this is a new flag they need to enable when they upgrade from 1.11. That would be something to look into, but I definitely think it is some wonkiness with Heapster and metrics-server, and the migration path from Heapster to metrics-server, that's causing it. At that point we'd need a little more information about their cluster, I think.
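One way to check the suspicion above: the HPA pulls from the resource metrics API, which only exists if something — normally metrics-server — registers this APIService. A sketch, assuming a standard metrics-server install:

```yaml
# The APIService metrics-server registers. If it is missing, or its status is
# not Available, the HPA's "unable to fetch metrics from resource metrics API"
# error is expected.
# Check with: kubectl get apiservice v1beta1.metrics.k8s.io
#        and: kubectl top nodes
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
```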
[A question follows about baking configuration into images versus supplying it at deploy time; the first part of the exchange was not captured.]
D: I mean, I guess if we're talking about putting it in the images, etc., for deployment — realistically, when I do something I'm honestly doing kind of a mix, because this obviously comes up a lot with database containers, since databases have eight million configuration variables. The thing is, there's like 90% of the database configuration that I know I'm never going to want to change at, you know, Kubernetes runtime, because I'm not going to be using obscure storage hardware.
D: And I'm not going to be changing, you know, the transaction isolation, and I'm not going to be tweaking tons of things. But there's a handful of stuff that I will potentially change at deployment time, or even after deployment time, and I put all of those in a ConfigMap, right? Some of the database tooling on Kubernetes is specifically designed — and some of the operators are designed — to pull ConfigMaps in order to push down configuration changes to the individual pods. Yeah, and that's obviously not going to work with a build-it-into-the-image approach.
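A minimal sketch of that mix — defaults baked into the image, with the handful of deploy-time settings in a ConfigMap mounted over them. Postgres is used as a hypothetical example; the keys and mount path are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pg-tuning                # placeholder name
data:
  # Only settings likely to change per deployment live here; everything else
  # stays at the image's built-in defaults.
  tuning.conf: |
    shared_buffers = 1GB
    max_connections = 200
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres-example
spec:
  containers:
  - name: postgres
    image: postgres:11           # defaults come from the image
    volumeMounts:
    - name: tuning
      mountPath: /etc/postgresql/conf.d   # illustrative include directory
  volumes:
  - name: tuning
    configMap:
      name: pg-tuning
```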
A: Also in the #office-hours Slack, Omicronian said this maybe falls into self-templating, where on launch a pod authenticates itself and then has a config template; once running, an init container reaches out to a tool like Vault or another tool to template out the configs. That's probably best practice, but most people use ConfigMaps directly. But yeah, configs baked into containers are usually bad.
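A sketch of that self-templating pattern: an init container fetches values from an external source and renders the final config into a shared volume before the app starts. Every image, path, and command here is hypothetical:

```yaml
# Pod whose config is rendered at launch instead of baked into the image.
apiVersion: v1
kind: Pod
metadata:
  name: self-templated-app
spec:
  initContainers:
  - name: render-config
    image: example.com/config-renderer:1.0      # placeholder tooling image
    # Placeholder command: authenticate, pull values, render the template.
    command: ["sh", "-c", "render-template /templates/app.conf.tmpl > /config/app.conf"]
    volumeMounts:
    - name: config
      mountPath: /config
  containers:
  - name: app
    image: example.com/app:1.0.0                # placeholder app image
    volumeMounts:
    - name: config
      mountPath: /etc/app                       # app reads the rendered config
  volumes:
  - name: config
    emptyDir: {}                                # shared between init and app
```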
A: Let's go back to our list of questions from #kubernetes-users. Vandagriff asks: "I have an app using an emptyDir volume for scratch space. Is there a way to prevent kubectl drain from stopping if it finds that volume? Ideally I'd be able to mark it as disposable, so kubectl drain could still warn operators about other pods with local storage."
A: I'll reread it for the stream as well: "I have an app using an emptyDir volume for scratch space. Is there a way to prevent kubectl drain from stopping if it finds that volume? Ideally I'd be able to mark it as disposable, so kubectl drain could still warn operators about other pods with local storage."
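For context, a minimal pod with that kind of scratch-space emptyDir. As far as I know there was no per-volume "disposable" marker at the time — kubectl drain only offered the blanket flag shown in the comment:

```yaml
# kubectl drain refuses to evict this pod unless told local data may be
# discarded, e.g.:
#   kubectl drain <node> --delete-local-data
# (a blanket flag; it cannot single out one "disposable" volume)
apiVersion: v1
kind: Pod
metadata:
  name: scratch-worker              # placeholder name
spec:
  containers:
  - name: worker
    image: example.com/worker:1.0   # placeholder image
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch
  volumes:
  - name: scratch
    emptyDir: {}                    # contents are lost when the pod is deleted
```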
[The panel discusses how kubectl drain treats pods with local storage; parts of the exchange were not captured.]
A: Omicronian says client-go. Maybe I would want to just mention: client-go is actually just the client libraries in Go. That's not actually where kubectl or any of the CLI things live — those are actually in a separate repo. client-go is just for building a Go application on top of Kubernetes.
D: I mean, it's the reason why local persistent volumes are a feature — so that people can differentiate. Because likely, once we get local persistent volumes, somebody is going to file a PR to say that kubectl drain should just automatically delete any emptyDir without asking. That's the behavior I would personally want, so, yeah.
D: They're not graduated, and I don't see any good reason why not — it's probably just conformance testing that's the only reason local PVs are not GA yet. So once those are GA — or if you want to use them as a beta feature — the answer is, well, how kubectl drain handles them is what matters.
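For viewers, a sketch of the local persistent volume feature being discussed (beta at the time); the path, capacity, and node name are placeholders:

```yaml
# A local PersistentVolume is explicitly tied to a disk path on one node,
# which is what lets tooling distinguish it from throwaway emptyDir storage.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1          # placeholder device path
  nodeAffinity:                    # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node-1"]       # placeholder node name
```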
D: Good users, right? Yeah, yeah. And to follow up on this, because this user has a specific use case: I could make a pretty good argument that, once local PVs are GA, the behavior of kubectl drain should be to automatically delete emptyDirs and to require a force flag to delete local PVs. Completely agree.
A: Head shakes, head shakes. Mr. Moon — Jenkins, yep. So I don't have personal experience with Jenkins X either, but just for the viewers I will say: Jenkins is, you know, a very extensible Java-based CI/CD system — job system, build system. Jenkins X is a flavor of Jenkins that is tooled specifically for Kubernetes, so it's all about doing Jenkins on Kubernetes and being Kubernetes-native.
A: Unfortunately, none of us have much experience with it. What I will ask the panel is: what CI/CD system are you using with Kubernetes? I will go first, since I'm the one asking the random question. At work, Bob and I are using GitLab, and GitLab's integration with Kubernetes, to manage a lot of our stuff.
F: At my previous job I was using an open-source tool from Zendesk called Samson — it's a little more than that, but we were using GitHub, which ran tests on Travis on every pull request, and when one was merged it fired a hook to Samson, which built the very same image, deployed it, and so on. Samson is not very well known, but it was pretty good.
D: We have our own s2i, which is part of OpenShift, yeah — and that gets used both for OpenShift stuff, and it gets broken out (I'm trying to remember the name of it when it's not part of OpenShift) and used as part of the Fedora infrastructure. And we have a couple of other things floating around as well for different purposes, including Ansible Container, and one that we created specifically for the Atomic rpm-ostree images, whose name I don't remember right now.
A: My immediate thought is you would just delete the pod and have it restart — like, the pod lifecycle. I see two nods, three nods, four nods — all right, that's consensus. So you would delete the pod if it's hung like that, and that would trigger whatever higher-level object to create a new pod, and then start the pod lifecycle again.
A: All right, next up from #kubernetes-novice we have Jason: "Hey all, I'm having trouble wrapping my brain around why pods share Linux namespaces. If you have a set of processes you want to share namespaces, why not just put them in the same container instead of in the same pod, since the namespace interaction is the same for containers whether they're running in Kubernetes or outside, in a regular Docker setup?" I kind of want to reread that again, because that sounded a little odd.
B: Let's see — Omicronian is saying stuff in Slack: yeah, they don't share process namespaces by default. Shared process namespaces are a newer feature coming online, mostly from the perspective of debug containers, yeah.
D: To give you a simple example — you know, how I got into Kubernetes was deploying databases on Kubernetes. So, for example, one of the things that's frequently used with Postgres is a connection pooler called PgBouncer, and depending on how I'm setting up my connection environment, sometimes I want one pooler per backend node for Postgres; and in that case it actually simplifies my deployment a lot and improves network performance.
D: I have those two sharing a network namespace in a pod, but I don't want to put them both together in the same container, because it's two separate binaries — two separate executables; they have their own individual configuration files, and they have their own separate release schedules upstream. So all the stuff that goes into building and configuring those two applications is going to be separate. It's just that I want to deploy them together, and that's the reason that pods and their shared network namespace exist.
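A sketch of that PgBouncer example — two separately built containers in one pod, so the pooler reaches Postgres over localhost. Images and ports are illustrative:

```yaml
# Two containers, one network namespace: pgbouncer can dial 127.0.0.1:5432
# even though it ships as a completely separate image from Postgres.
apiVersion: v1
kind: Pod
metadata:
  name: postgres-with-pooler
spec:
  containers:
  - name: postgres
    image: postgres:11                  # its own image and release schedule
    ports:
    - containerPort: 5432
  - name: pgbouncer
    image: example.com/pgbouncer:1.9    # placeholder pooler image, with its
    ports:                              # own configuration and releases
    - containerPort: 6432               # clients connect to the pooler
```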
F: Yeah, one other example, I think from the docs: if you have a service that exposes, via an HTTP server, files from a Git repository, updated every time the Git repo changes — you could have it all in one container, but using the pod abstraction you can have just the web server in one container, and another container that just knows how to pull from the Git repository, and a shared volume. And that's the important thing — it's also shared between both containers in the pod. So you have these reusable containers.
A: I think that has been answered. Next up, from #kubernetes-users we have Alex: "Does anyone have any familiarity with standing up a bare-metal cluster in a corporate network where there's a root CA of the organization that's trusted? I would like the Kubernetes CA to be a subordinate CA off of the corporate CA; but while the corp CA is trusted on the Linux node, the API server cert is signed by the subordinate CA. Normally you can add a second cert of the subordinate CA to the HTTP service's certificate."
A: Gotcha. All right, then we will move on. From #kubernetes-users — I am NOT going to be able to pronounce this name — Zufar asks: "Hi all, I just want to ask a basic question. I deploy a deployment with Docker images. What happens when the Docker images are updated in the repository? Is the deployment affected by this update?"
[A's initial answer was not captured.]

F: Yeah, it makes sense. What I was going to add is: you should really use a version for your image. Something that might slip by is, if you use latest in the Kubernetes deployment and you just push a new image, then later, say, a pod crashes or something — the new image might be pulled, and you might have a weird mix. So you want to be sure to use explicit versions.
B: Nick says you could grab the new image unknowingly if the pod crashes and gets restarted — right, so it's safest to avoid using latest. Then Omicronian chimes in saying that depends on your imagePullPolicy as well. That I would agree with — like, if the pod just restarts and it had successfully pulled the image, it won't necessarily re-pull.
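Putting that advice together, a sketch of a pinned tag plus an explicit pull policy (image name and tag are placeholders; note that Kubernetes defaults imagePullPolicy to Always only for the :latest tag):

```yaml
# A pinned version plus an explicit pull policy means a crashed-and-restarted
# pod cannot silently pick up a newer image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: example.com/webapp:1.4.2   # pinned tag, not :latest
        imagePullPolicy: IfNotPresent     # default for non-latest tags; with
                                          # :latest the default becomes Always
```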
A: All right, in #office-hours, jlaswell asks: "We're moving from kops to a managed Kubernetes solution. As part of this, we're working to standardize a lot of the CI pipeline and Kubernetes resource specs at the same time. Besides the recommended Kubernetes labels, are there other labels you all have found useful during your time managing Kubernetes — like app.company.io/internal=true?" And they also linked to our docs that show the common labels.
B: And I know others will do things like the git commit hash, or some of the other stuff in there — that way, when you're doing an update or something like that, it gets added to your selector for, you know, services and things like that. I think it's very highly business-process specific.
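For reference, a hypothetical label set combining the recommended app.kubernetes.io common labels with org-specific ones like the question's example (the company domain and all values are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: login-service
  labels:
    app.kubernetes.io/name: login-service   # recommended common labels
    app.kubernetes.io/version: "1.4.2"
    app.kubernetes.io/part-of: storefront
    app.company.io/internal: "true"         # org-specific, from the question
    company.io/git-commit: "3f9a2c1"        # placeholder: commit that built it
    company.io/team: identity               # placeholder: owning team
spec:
  containers:
  - name: login-service
    image: example.com/login-service:1.4.2
```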
C: It's usually a name — so it's either a namespace per team or a namespace per app; we have folks that do both, that we service. And if you're doing a namespace per application, then what portion of the URL is the service — so if, say, the login portion is a specific container in your microservices, it's useful to have a label noting that this specific container is servicing that URL.
A: Got you. From #kubernetes-users, Cinder asks: "I have a question regarding the CronJob resource. In the spec I've set both successfulJobsHistoryLimit and failedJobsHistoryLimit to three, and the backoffLimit to zero — so, basically, always keep a history of at minimum the last three jobs, and don't retry immediately as they fail. Even though the jobs are kept according to the history limits, according to the Job docs: when a Job completes, no more pods are created, but the pods are not deleted either.
A: "Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The Job object also remains after it is completed so that you can view its status. This seems to contradict the behavior I'm seeing, where pods are deleted. Is this by any chance different for CronJobs, or is there a bug?" For the record, they're using 1.11-point-something.
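For reference, a minimal CronJob sketch with the settings Cinder describes (batch/v1beta1 was the CronJob API on 1.11; the schedule and image are placeholders):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: "*/5 * * * *"           # placeholder schedule
  successfulJobsHistoryLimit: 3     # keep the last three successful Jobs
  failedJobsHistoryLimit: 3         # keep the last three failed Jobs
  jobTemplate:
    spec:
      backoffLimit: 0               # do not retry a failed Job's pods
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: task
            image: example.com/task:1.0   # placeholder image
```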
F: So, maybe before you answer — in the Discuss thread they said they changed these settings from three to one, and the restart policy to never, and the result was as expected: they could see logs. And I asked them whether the same was happening for successful pods or only failed pods, and whether changing back to three makes it work again — because, you know, there's no magic answer; they're going to try. But so far, with one, it's working great. They didn't try it with successful pods; they just had sample jobs that fail.
A: Because now that I've had time to think about it — I was trying to remember the CronJob spec: there's, you know, the timing, and then there's a job spec, and then the success... Yeah, I guess we're going to have to see what happens on Discuss, because I've used successfulJobsHistoryLimit and failedJobsHistoryLimit and it's worked fine for me — but again, different environments.
B: Go ahead — one quick thing on that: I know the CronJob controller is kind of janky, and they are looking for someone to completely rewrite it. There is a Kubernetes role board post in Discuss specifically looking for someone that might want to take ownership and improve that.
A: All right, last question. Josh from #kubernetes-users asks: "If I have three nodes with an Azure AKS master and the NGINX ingress installed by Helm with a static IP, what happens if one of the nodes goes down? Does the AKS master remap the static IP to one or more functioning nodes, or something like that? Or is this not the best place to ask that?"
[The panel's answer was not fully captured.]

A: ...and the winner of the raffle goes to jlaswell! jlaswell, you will receive a code to the CNCF store, where you can get an awesome community shirt. That's probably going to come from Jorge, who, as I said, couldn't be here. With that: panel, thank you for being awesome; viewers, thank you for being just as awesome; and everyone have a wonderful day.