From YouTube: Kubernetes Office Hours 20190417 (West Coast Edition)
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. Office hours are a regularly scheduled meeting where people can bring topics to discuss with the greater community. They are great for answering questions, getting feedback on how you’re using Kubernetes, or to just passively learn by following along.
For more info: https://github.com/kubernetes/community/blob/master/events/office-hours.md
A: I'm gonna move... yeah, all right. Everybody, welcome; it's the third Wednesday of the month, and that means it's time for Kubernetes Office Hours, the livestream where we hop on YouTube and a Slack channel and try to answer as many questions as we can from our Kubernetes users. Thanks for joining us. So here's how it's gonna work.
E: About that, yeah. I am Mario Laurie; I currently work at StockX in Detroit, Michigan. I kind of specialize in a little bit of everything Kubernetes, especially in the networking and ingress area, as well as application workloads, and Helm and CI/CD in some of those areas as well. So, love, love storage, and love some of the other aspects of Kubernetes as well. So yeah.
A: All right, and I will be your host today. My name is Jorge Castro, and I work at VMware as a community manager. Some regrets here: Val wanted to join, but a work emergency came up for her, and Rodrigo was going to join us, but he is sick, so let's send good thoughts their way. Okay. So before we start, let's just give you some quick ground rules. This is a judgment-free zone.
A: So if you're hanging out in the channel, just remember that the Kubernetes code of conduct does apply; we kind of want to make a safe zone for anyone to just ask whatever questions you have. We all had to start from somewhere, so let's all be supportive of that, and we will do our best to answer your questions.
A: The panel doesn't have access to your cluster, so live debugging is going to be off topic; what we will try to do instead is kind of give you the next step that you need to take to hopefully help you resolve your problem. Panelists, you're encouraged to expand on your answers with your experiences and pro tips; that's why you're here. And audience, you can help us out by pasting the URLs to official docs, or if people start recommending tools or things like that, feel free to just use the chat as a live note-taking thing.
A: What I do at the end is take both sessions, all the URLs that we discussed today, and whack them into the show notes, which enables us to keep a little set of topics. So if you see the stream and you're interested in something, you can always go back and reference that. Keep in mind that the Slack channel is being live streamed, so feel free to just post your opinions on there.
A: We like to see what people think. Feel free to also post your questions on discuss.kubernetes.io; Bob's about to whack in the URL for that. We use that as kind of the thread where we try to tie things together, and that's where all those URLs and stuff I was talking about, and the notes, will be published. You can also help us out by tweeting, spreading the word, paying it forward.
A: Each of these sessions is recorded and available on YouTube, so feel free to check out the playlist of all the sessions that we have; we've been doing this twice a month for almost two years now, so pretty, pretty awesome. This panel is a set of volunteers, so if you feel like you want to come on board and help share your experience or your expertise, we would love to have it.
A: The commitment is one hour a month, and you don't even have to show up every month if you don't want to, as long as we have enough people to cover; we try to have a rotating band of crazy Kubernetes experts out there that can rotate in and out. And if we do read your question aloud on the air, we will be choosing a random winner from everyone, using a totally scientific method.
A: And while you all are thinking about that, those in the channel, please feel free to let us know where you're from, as most of the panel today is from Michigan, what we call southern Canada, and one of us is from Ontario. So please let us know; we'd love to know where you're from and what you work on.
E: The core thing is editing something in the pod spec that exists for the Deployment object or DaemonSet object; generally, I believe, an annotation on the deployment object itself will work. There's actually a fancy command, which I will try to find here and paste in chat, that we use as a one-off command: it updates a last-modified variable in the pod spec, and obviously you just use the epoch time, and that causes a roll of all the pods for that object. So that's the best way.
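[Show notes: a minimal sketch of the one-off command described above. The annotation name is illustrative; any change to a pod-template annotation triggers a rolling update of the Deployment's pods.]

```sh
# Stamp the pod template with the current epoch time to force a rollout
kubectl patch deployment my-app -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"last-modified\":\"$(date +%s)\"}}}}}"
```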
E: There's also a facility in Helm that can kill all pods at once, which you usually don't want. So keep in mind: for a Deployment, you want to maintain the rollout model, the rolling-update model, for your stuff. You don't want to take an application down entirely to re-roll everything.
A: Okay, any other information here: if we ask your question and you need to do a follow-up, feel free to just post it right into the channel and we'll keep track; sometimes it takes us multiple rounds of going back and forth to get to an answer, and we will do that. All right, thanks for joining. Next up is Dylan; thanks for showing up. He asks: if a Secret is updated, what is the best way to refresh that Secret as an environment variable?
E: So if you're mounting something, and I think Secrets are the same, although I'm not sure on that, the big thing is that the application most likely will have already read it at boot and will not be rereading the filesystem. So there's the whole thing where you can send a SIGHUP or whatever the call is to the application; that's probably dirty, and you probably don't want to do that. So yeah, doing a roll is the best thing, with Helm.
E: If you're using Helm, you can make it a little bit easier and have annotations in the template for your Deployment: have the annotations do a SHA-256 sum of the actual ConfigMap or Secret YAML file, and then on generation, if it's a changed value, Helm will see that and redeploy the Deployment, which will roll the pods.
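[Show notes: a sketch of that annotation, following the chart-development tip in the Helm docs; configmap.yaml stands for whichever template renders your ConfigMap or Secret.]

```yaml
# Deployment template snippet: the pod template changes, and the pods roll,
# whenever the rendered configmap.yaml changes.
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```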
C: All right, go ahead: if you mount the secret as a volume, instead of as an environment variable, there are some things that will automatically re-read it via inotify. I forget what it's called; there's something that will sort of act as, you know, PID 1 within a container and check that file automatically for you, as a workaround.
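[Show notes: a minimal sketch of mounting a Secret as a volume rather than as environment variables; all names are illustrative. Kubernetes refreshes the mounted files when the Secret changes, which is what makes such file watchers possible.]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx              # illustrative image
    volumeMounts:
    - name: creds
      mountPath: /etc/creds   # files here update when the Secret updates
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: my-secret   # illustrative Secret name
```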
A: All right, thanks. The next one's from Dylan and is a Stack Overflow link, but as I click through, it's really long. Has one of you looked at this one, and can you kind of give us a TL;DR?
A: Do one of you want to take this? Then we can get back to it, because it seems rather long.
A: The next one is a link to a Super User question, which is really great; I love when people put things on Stack Exchange, because it lets them update them in real time and kind of gives us a chance to see them. So the second one says: I'd like to create a kubeadm config file, multiple nodes. And this is a horizontal pod autoscaler one.
A: Ralph, is this the one you recommended today? I think it's pretty good. I got some nodes up, so I can test. Okay, good, all right. So you take this one, and then we'll get back to you on the Super User one. Who's got the Stack Overflow one? Bob? Okay, we'll get back to both of you here in a minute. The next question comes from Dylan: where can I find the original manifest for kube-dns for EKS on 1.10? What generates the secret for this deployment?
E: Okay, so I've been using EKS for a few weeks at least, or maybe a month now; we're moving from StackPoint-generated clusters to EKS-generated ones using eksctl, the fantastic utility from Weaveworks. You're on 1.10; in terms of getting manifests for kube-dns, I don't really know, but I'd spawn a one-time cluster. eksctl makes this happen in 20 minutes, so spawn a cluster for the version.
E: If you want to kind of see how things are deployed for that sort of DNS, look at the ConfigMap for kube-dns once you deploy that one-time cluster; you can kind of see how they configure things for that generation.
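[Show notes: a quick sketch of that inspection step on the throwaway cluster, assuming kube-dns lives in the kube-system namespace as it normally does.]

```sh
# Dump the kube-dns ConfigMap and Deployment from the scratch cluster
kubectl -n kube-system get configmap kube-dns -o yaml
kubectl -n kube-system get deployment kube-dns -o yaml
```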
I've actually done that for 1.14 as well, where I've needed to look at what a future ConfigMap looks like; for some reason, maybe I want to update it, etc. As for what you're risking by not upgrading, I mean, I don't know.
E: But yeah, that's the TL;DR to kind of answer his question quickly. And then, considering a future migration to GKE: this is the whole question of how soon you're actually going to be migrating, right? If you're telling yourself and your team, yeah, we're going to do it in the next few months, probably double that, you know, depending on how busy you get and things like that. We always try to say, oh yeah, we'll do that next week, and then it's two months later, right, depending on importance.
E: So I'd say you probably want to get to 1.12 right now, honestly, since we're at 1.14; that gives you a good six-plus months where you can get to GKE and be on GKE's latest there, which I think is 1.12. So 1.12 is the latest on EKS as well as GKE, and I'd lock that version.
C: This is going back to the Prometheus question, the first Stack Overflow one. Honestly, that is likely not an issue with Prometheus itself; I'm pretty sure that's the Prometheus Operator they're using to deploy it. I'm not exactly sure what prom-ks is, or whether it's from EKS, but the response on that issue is actually pretty good, and that's to troubleshoot: look at the events on the cluster itself and see why the cluster thinks it can't schedule it. Okay.
E: I don't use Prometheus much, so... our engineering teams are so optimized that we only write microservices that need compute; they don't need storage, right? So we don't actually delve into storage much, so I can't answer that question; I really don't know. If you're using EKS, though, keep in mind that you have support from Amazon, so if you've got a support contract or whatever you've got set up, they should be able to answer questions like that for you.
E: I don't know if there's any early documentation, so I apologize I can't be more helpful, but the Kubernetes docs do a decent job of talking about some of the AWS integrations, especially relating to ingress and load balancers. Some other things are a little bit light, though, so feel free to ask, you know, the SIG; they should have a good answer for you. Anybody...
A: For those of you wondering what he's talking about: each Kubernetes special interest group (SIG) has a channel in Slack, so you can just go search for AWS in your little Slack quick switcher and hop in there, and they're usually pretty helpful there. All right, next question; keep the questions coming. Dylan asks: how do I safely lock in a version of a Helm deployment, e.g. redis-ha, without copying the entire Helm Redis repo into our own? This feels like a common question to me.
E: People do ask that. So what does he mean by lock in? If he wants to lock in a version: there are two versions when we talk about a Helm chart, right? There's the Helm chart version, and then there's the version of the app that is represented by that version of the chart, and each Chart.yaml, which exists in the root directory of the chart, has both those variables.
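[Show notes: for illustration, the two fields in a hypothetical Chart.yaml; the values are made up.]

```yaml
# Chart.yaml at the root of the chart
name: redis-ha
version: 3.3.2        # the chart version
appVersion: "5.0.3"   # the application version this chart deploys by default
```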
E: If you're looking at the stable charts, the main charts, which I presume he is (yep), and he wants to lock in both (yep): so on deployment, all he's going to do is call the version of the chart, and that locks in the version of the application right now. If he goes in and upgrades, let's say six months later, obviously both of those may have changed, and so a new chart version will probably have a newer app version.
E: If there's been a release since, right, that's something to think about. But if you want to lock in the chart, lock in the actual version of that application, just using the chart version when you go to deploy is one way to do that. I know a lot of people use helmfile, and tools like Reckoner, which is from ReactiveOps; that's a single YAML file that defines multiple Helm charts that you want to deploy, and it's great for bootstrapping a new cluster.
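[Show notes: a minimal sketch of pinning the chart version at install time with plain Helm 2; the release name and version number are illustrative.]

```sh
# Install a specific chart version instead of whatever is latest
helm install stable/redis-ha --name my-redis --version 3.3.2
```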
E: In there you can specify a version, and that version is the chart version, not the app version. Most client tools, I think, go by the chart version, because the chart version itself picks an app version to deploy. Obviously you can change that yourself, but again, then you're forking, or you're bringing those files down locally in some manner and bringing your own kind of Helm setup, right.
A: So I want to expand on their question a little bit here. Let's say it's six months later; there's a new version of the Helm chart, and it might fix some bugs they care about, but they want to stick with the version of the service that they have, in this case redis-ha. At that point, is it a forking decision, or...
E: So technically you could... so, like, if you're using Reckoner, let's say, in that config you can say: I want to use chart X, and then you can pass a value to that chart when it launches, when it installs, that says, you know, image tag, I want it set to this, the newest image version for the application. That would probably be the way you do it; it's dependent on the application generally, and then on how that chart is set up.
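[Show notes: a sketch of that value override with plain Helm. The image.tag key is typical of the stable charts but, as he notes, chart-dependent; the versions are illustrative.]

```sh
# Take the newer chart for its fixes, but pin the application image
helm upgrade my-redis stable/redis-ha --version 3.4.0 --set image.tag=5.0.3
```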
E: Absolutely. I want to reiterate that, you know, while it makes life a lot easier, and it is okay, maybe a little bit more so for core applications, for things that do need to be updated every so often and are very heavily managed, like, you know, Kafka or MySQL: definitely be aware that just because the chart is in a stable directory doesn't mean that every time you go to interact with it you're going to get the best, 100 percent experience, right.
E: I've deployed some charts, a few AWS-centric ones, you know, autoscaling and load-balancer stuff, where there's a variable that I didn't pass, and it just doesn't start at all, right; and that wasn't really defined in the readme, nowhere really to be seen, etc. So definitely look before pulling things down. Also, you're relying on servers that are out there, run by other people, and on their Helm servers; that's something to be mindful of, I know.
A: So, the reason I'm bringing this up is that it seems like I hear this question a lot. What do you do at that point? Do you hard-fork, do you just do that, or are there methods for people that do this, say, you know what, every six months I'm going to check out upstream and do cherry-pick merges? What does this look like day to day for you? Is it just forked?
E: Yeah, well, it'd be forked, and yeah, before you fork it, make it what you want: have a kubernetes-catalog repo in your namespace and, you know, deploy from there; don't ever touch it otherwise. And that's just a repo; you could also run a Helm server, right, that represents that repo, and your Helm client tools call that, and you'd maintain it that way a lot of the time.
A: Awesome. Dylan, I hope that helps; please feel free to keep following up with questions. The next one is from Dax, and we're caught up on questions at this point, so please feel free to keep asking them in the channel. The question is: what is a good first step to contribute to Kubernetes? So we actually have a sister program, Kubernetes Meet Our Contributors, that is specifically based around that, and Bob is going to get you a link for that, because I'm going to link something else which I thought was really interesting.
A: So Tim Hockin, one of the first committers to Kubernetes, wrote this post, and I think it's pretty awesome because it's a very succinct paragraph; I'm just going to read it off here. It says: I've given this advice many times; that's why I'm repeating it. So, you know, hopefully we won't have to repeat it as often, you guys. Start with a piece you find interesting. If you like CLIs, start with kubectl: start at main, read it, and if it doesn't make sense, play with it and add temporary printfs.
A: Until you understand it. Then write better comments, or rename variables or functions, or refactor until the next person won't struggle. Proceed. If you can make it an hour without finding something to fix, I will be shocked. Before long you'll be a foremost expert in it and start reviewing other people's code and tackling bigger bugs and features. None of this code is rocket science; just start reading it. So I really like that, because he kind of just says: grab something and dig until you can figure it out.
A: So I thought that was really good. Now, the way we help implement Tim's idea here is that we have a bunch of programs that we run as a community to help people with that. Meet Our Contributors is a start; it's similar to this, except the questions aren't about running Kubernetes, they're about contributing. So you can ask someone, you know, what should my first pull requests look like? We do code-based tours... code base tours, thank you very much, where people will explain how the stuff is laid out, how you should configure your GitHub.
A: And the last bit I added, of course, is the Kubernetes contributor guide, which is a written version of everything we just said, except with a bunch of other stuff in there. So you can click through and look at, like, good-first-issue bugs and all the programs that are available to you. But definitely, definitely...
C: Look in there. If you can also make it to any of the KubeCons, there is a new contributor workshop, which is like a, you know, six-hour session (yeah, six hours) where they sort of walk you through the whole how-to-start-contributing-to-the-project thing, and they'll go over other things like CI, how we use labels, how you use the bots, and things like that. Yeah.
B: I wasn't able to get the kubeadm YAML that was provided in the question to work. Interesting, okay. So I changed it a bit and answered the question on there, because there's nothing worse than going back to a question and finding no response; I've actually taken to even answering my own questions when I figure out the answer, in places like that. There's a famous xkcd on that, if someone wants to paste it.
A: ...in. It's the "what did you see, DenverCoder9?" one; it's like my favorite thing ever. Thanks for answering that, and of course the link to that xkcd, over on the old question, will be in the notes. Seana, thanks for dropping by; she asks: anyone using Cilium? Testing it out for BPF, but trying to get around the network policy stuff; I'd like to do default allow versus deny for testing, until we get to the policy-testing aspect of it. Anyone know how to do that? Bob's already smiling; I know he's Cilium fan number one.
E: I'll give a quick answer here: that is dependent on a lot of things. You know, I don't work for Google or Amazon; I've been an outsider, kind of looking at both of them and working with them. I personally prefer GKE; I think it is the premium offering. Google has been the leader in containers; they helped foster Kubernetes and bring it here, and they've been doing this for over ten years with Borg, Omega, etc.
E: In terms of staying up on security updates and enhancements, getting things like SSO and other integrations, making it easy to deploy and configure clusters quickly, things like that, Google seems to be the leader in my eyes. EKS is a little behind; it's almost like it was kind of a "we need to be in the Kubernetes realm, let's get a product out there," and you can see that with some of the community, some of the client tools, etc. eksctl, like I mentioned before, is super powerful.
E: It's not done by Amazon, it's done by Weaveworks; it really does help you, and if it didn't exist, EKS would be quite a bit harder to use. I don't know in terms of Terraform; I know GKE is supported by Terraform, but I don't know how well EKS is supported. You know, the other big thing that I'm noticing, from an administrative, sysadmin-and-DevOps perspective, is that EKS gives you zero control over the masters. You don't even see them: when you do a kubectl get nodes, you don't even see the masters there.
E: At the end of the day, it's a rivalry kind of thing; based on, you know, the type of cloud that you're using, they're each going to be optimized for that cloud. We made the decision to go with EKS because we just live in Amazon, and it does get us pretty far; it does get us a cluster in 15 minutes when we spawn one.
E: You kind of manage the nodes yourself, and that's great; but in terms of just the leader, and what seems to be the premium offering, I would say definitely go with GKE if you can, and start playing with it. It costs you very little to get a simple three-node cluster going, or something tiny.
E: Both of them are on 1.12 right now, but if you look at the EKS roadmap, which I think is on GitHub, there's just a lot that's kind of lagging behind, that's still being worked on. So definitely just do a day of research and playing around and get an idea of what things look like, and you'll get a better sense; but definitely, you'll find things are more fleshed out, I think, with Google. So I...
A: Chris, and I know Ralph is bare-metal fan number one. All right, so it looks like Dylan is typing; good, let's take a look, let's give it a little bit of time here. Mario, since you mentioned it, let's talk about eksctl, because I've seen releases on this from the Weaveworks folks. Can you kind of walk us through what that looks like? Absolutely.
E: Yeah, so eksctl is basically like a Terraform-style command, but it's specific to EKS itself. It also interacts with EC2 to spawn your worker nodes, and what it's actually doing behind the scenes is generating CloudFormation and then deploying that for you. So if you go to the eksctl readme and/or the eksctl.io page, there's this long page of "here's everything you can pass on the command line," and that's great and all, but no one wants that, right?
E: So you want to, you know, write a file and put it in a repo, and you can do that: eksctl supports config files, and pretty much every option you can pass on the command line can go in a config file. But you have to go to the Go doc API docs to see all those options; it's not very well documented, and there are some things that are broken. Dry run is not supported.
E: You have to pass an SSH path, which represents the key pair you might have in AWS, and there are other little intricacies that don't work as well. RBAC isn't really fleshed out for per-user stuff, so you have to kind of generate and keep a kubeconfig per user, and there are just some other things like that. But yeah, it's a client tool, and it does everything for you: you basically write up a config saying I want a cluster and a node group of this size, and you can do auto scaling, which is great.
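[Show notes: a minimal sketch of such a config; field names follow the eksctl ClusterConfig schema (the exact apiVersion varies by eksctl release) and the values are illustrative.]

```yaml
# cluster.yaml, consumed by: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: staging
  region: us-east-2
nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3
    minSize: 3
    maxSize: 6
```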
E: It sets that up automatically for you. Sadly, we just found out that the cluster autoscaler does not work with the auto scaling group it creates, right; so there's another little gotcha there. It's like, well, I have this auto scaling group, but what can actually happen, especially when you're spreading nodes across availability zones, which everyone should be doing for redundancy: in that case Amazon's auto scaling does a kind of strict balancing act across zones, and the cluster autoscaler actually negatively impacts how that works.
E: So you can actually have nodes that just kind of go away. So there are little things, and relatively bigger things, that you have to keep an eye on; but, like, I think the main configs are like 20-30 lines, and they get us everything we need for our, you know, staging and production clusters, and it's really kind of cool. So we have an eksctl config.
E: We have a config for Reckoner slash helmfile, and then we'll have any other YAML files that we need to apply to the cluster; we put those in a directory, and that directory represents a single cluster. Then I have, you know, three or four steps in a readme, how do I set up a cluster, and it's eksctl create cluster.
E: Maybe you pass the AWS credentials to use, for, like, your sandbox AWS profile or whatever it is, and off it goes. Waiting is supported for deletions, and there's a verbose option you can pass so that you can see the confirmation, debug info, etc. So it's extremely powerful, but it's a pretty early version, and there's still a lot more kind of instability, room to improve. So, have you found yourself ever having any...
E: So I read a Medium article from a guy yesterday who tried to make, like, an auto scaling group for each zone, so that the cluster autoscaler would work, and he went at the CloudFormation. I think that's easy to do; I think you can just export the CloudFormation from eksctl, but I haven't tried it, and I don't want to get there in life. So...
E: Yeah, and eksctl is super powerful, so you can pass a CIDR value for the VPC that you want that's not, like, a /16 but a /20, and it automatically makes the subnets, one in each availability zone; it'll make those as /22 or /23, and it handles all of that, the route tables and everything. So at the end of its run you have a completely working cluster with outside access, etc.
E: You know, VPC peering or anything else you want to do is up to you, but everything is set up and good to go, to kind of do whatever you need to do, which is just super powerful; there's nothing from Amazon that does that. And this is why I kind of said, you know, GKE just does that automatically, for both the nodes and the control plane, whereas EKS is just the control plane and then you kind of work on getting the worker nodes set up yourself.
A: All right, anything else before we move on to Dylan's next question? All right, we're running about 20 minutes in; we have about 20 minutes left, so keep them coming. Dylan asks: what's a good process for a staging-to-production workflow at the cluster level? For example, if I wanted to upgrade EKS and make sure everything works before switching from 1.10 to 1.12. I'm running ETL workers that are making a ton of external API requests; would I just clone the cluster, upgrade, and somehow neuter the external requests while I test that everything else is functioning properly?
C: So in general, you're not going to be able to upgrade directly from, like, 1.10 to 1.12 or something like that; you're going to have to go to 1.11, and then you're going to have to go to 1.12. Also, stuff to make sure of: you know, that you're switching over to use the new kind API versions. So, like, you don't want to use, I think it's extensions/v1beta1 for the apps resources; that's getting deprecated in 1.16. Mm-hm.
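[Show notes: a minimal sketch of that apiVersion migration for a Deployment; names are illustrative. Note that under apps/v1 the spec.selector field is required rather than defaulted.]

```yaml
# Before (deprecated for these kinds, removed in 1.16):
#   apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: etl-worker
spec:
  selector:              # required under apps/v1
    matchLabels:
      app: etl-worker
  template:
    metadata:
      labels:
        app: etl-worker
    spec:
      containers:
      - name: worker
        image: example/etl:1.0   # illustrative image
```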
E: ExternalName is fantastic. I want to make a quick note here: be aware of how your pods are querying for DNS, and be aware of ndots:5, which is in the resolv.conf file (and I wish they would take it out). Basically, when you make an ExternalName, you can name it awesome-name, and then if you put just "awesome-name" into, let's say, the environment variable for your pod, that will be queried as a search domain against cluster.local, svc.cluster.local...
E: ...you know, the namespace, and services. So you want to make sure to use the fully qualified cluster domain for that ExternalName, so it would be awesome-name.namespace.svc.cluster.local if you can, so that you can avoid the extra, you know, six DNS queries that you're going to have there. Just a general best practice. Yeah, and you know, I like it; it's a great thing.
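[Show notes: a sketch of an ExternalName Service and the fully qualified in-cluster name that sidesteps the ndots search walk; all names are illustrative.]

```yaml
apiVersion: v1
kind: Service
metadata:
  name: awesome-name
  namespace: prod
spec:
  type: ExternalName
  externalName: api.example.com   # the real external host
# Clients should use the FQDN (a trailing dot skips the search list):
#   awesome-name.prod.svc.cluster.local.
```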
A: It looks like some people are typing, so we'll give them a little bit of time there to file their questions. I do have a backlogged one that I'd like to see addressed by the panel: can anyone assist me or provide me guidelines to add a node with a GPU enabled in a kubeadm cluster? Desperately looking for anyone's assistance; I have an NVIDIA GPU server and installed a single-master cluster.
E: Updating DaemonSets: something I just saw in a GitHub Helm issue that I'd scrolled down into is that DaemonSets, I think by default, have a manual update strategy. So if you do want kind of the same rolling-update behavior that a Deployment has, you know, one at a time, max unavailable, etc., you're going to want to set that update strategy to rolling update.
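[Show notes: a sketch of that setting. Under the older extensions/v1beta1 API the DaemonSet default was OnDelete, i.e. manual; apps/v1 defaults to RollingUpdate. Names are illustrative, and the selector and pod template are omitted for brevity.]

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # roll one node at a time
```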
B: Some of the nice things about having kops instead of EKS is actually having control plane access. So if you're running a workload that you need audited, for a whole bunch of random behind-the-scenes business reasons, you're able to enable audit logging and all of the pod security policies and all of the other bits that you'd want; and then also the auto scaling groups are pretty easy to put into an HPA as an add-on through kops, like, right off the bat.
A: I feel like if you're doing a k8s cluster on AWS and you want the control plane access, your options are literally kubeadm by hand, and kops, right? You know, you can do Kubespray... oh, that's right, yeah, that's right, Kubespray. This will kind of lead into our next question, where people ask about Kubernetes distributions; but let's give kops another few minutes there. Bob or Chris, do you have any opinions on kops? I've...
E: You know, we had an external group; we had ReactiveOps actually kind of get the kops stuff out for us, so we didn't do a lot of it. But looking through the configs, there's just a lot to configure; it's a little bit tedious-looking. Sure, you get more control, but with that comes more responsibility at the same time, so you have to kind of weigh the pros and cons. I...
E: ...think, overall, in terms of things you can do, and kind of, you know, having that control, kops is going to win out over EKS. But if you just need a cluster going, and you can trust someone for a little bit, and you're not running anything that's a special snowflake or anything like that, you're probably going to be okay with EKS; I find we're not really missing a lot.
E: You know, is there some stuff that would be nice to configure? Sure. But kops makes everything a little bit more explicit, so defining subnets and all of those things is just a little bit more tedious. It depends on your AWS kung-fu as well, and how much you want to configure. I can tell you right now, we have a lot more YAML for the kops stuff that ReactiveOps kind of set up for us than we do for EKS, so keep that in mind.
A: Okay, yes, I just realized our raffle person isn't here, but Dylan asked a lot of questions today, so no matter who wins, I also want to get Dylan a t-shirt. So I'm giving away two shirts today, and this other one will be a thank-you for asking so many great questions. But I do want to segue into this question from earlier. Okay...
C: Yeah, you skipped over someone. Oh, I did? Where? Jason Carter. Oh.
A: So, last question of the day, unless someone comes up with a really good one in the next few minutes: is there any pros/cons list of all the Kubernetes distros out there? I see a lot of promising things in k3s, microk8s, kind, and all the others, but either they are really bleeding edge or too minimalistic considering my current needs.
A
My
current
needs
also
seems
like
I,
had
to
discover-
and
you
proof,
a
concept
using
a
new
distro
every
week
or
so,
and
I
want
to
mirror
this
on
my
kubernetes
subreddit
that
people
were
asking
what's
up,
you
know,
what's
the
best
distro
I
should
use,
but
I
don't
want
to
turn
into
a
distro
distro
war
here.
But
let's
just
talk
about
it
for
a
little
bit
while
we
wrap
up
but.
D: So I just dropped a spreadsheet into the Slack. Somebody did a massive spreadsheet that kind of lists all the pros and cons of all the different distros out there, even down to price point; they baseline it at what OpenShift costs. So you get an idea of what things do and how much they cost, and you can assess what suits your needs best. Yeah, that's a good one. What do you use, just curious?
E: Minikube, still, if I need something local and quick. My go-to for the proper way is to spawn a GKE cluster, and then obviously eksctl. I haven't used kind yet; I want to. And then for most things, like if I have one node, like a dedicated server where I just want to run some things, I'll keep a repo with docker-compose files for each sort of application and just roll with that; no Kubernetes there, and it's a very simple docker ps, and that's everything.
A: So you've won the t-shirt, Dylan; you've won the t-shirt, so I'll go ahead and PM you right after this and give you the code that you need. And with that, we're going to wrap it up. I'd like to thank everyone for joining us; we do this the third Wednesday of every month, and as always, this will all be posted on YouTube. We really appreciate you helping out, if this was useful for you.
A: All right, is that it? Going once, going twice. Good luck, everyone. Still not sure whether we'll have one next month due to KubeCon; I have to see how close we're cutting it based on people's travel, but we'll keep everyone updated. You can just subscribe to the Kubernetes channel if you want, or this playlist, and you'll get a reminder. So, with that.