From YouTube: Kubernetes Office Hours (West Coast) 20180919
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. Office hours are a regularly scheduled meeting where people can bring topics to discuss with the greater community. They are great for answering questions, getting feedback on how you're using Kubernetes, or just passively learning by following along.
https://contributor.kubernetes.io/events/office-hours
A: And I'm kicking it off. It is the third Wednesday of the month, so welcome, everybody, to the Kubernetes office hours. This is a monthly livestream where we hop on YouTube with some Kubernetes experts, and you all join in #office-hours on Slack, ask your questions, and we will try to answer as many of them as we can. So before we get started here, let's do some quick introductions. Let's go Ralph, Jesse, Josh, Dims: tell us a little bit about yourself, where you work, and maybe your area of expertise.
A: All right, sorry about that. Ben Elder is trying to join us, but he's having some laptop problems, so unfortunately he might or might not be able to join. Lastly, I'm Jorge Castro. I work as a community manager, and I kind of help wrangle these sessions that we do once a month. Thanks very much to all the panelists for showing up to contribute. Before we start, just some new ground rules.
A: Remember the channel is a no-judgment zone; please obey the Kubernetes code of conduct. And while this panel of experts is pretty good, we actually don't have access to your cluster. So there might be situations where we really can't help you, and you may not have the information about your environment, and things get complicated, and distributed systems are hard.
A: Panelists, you're definitely encouraged to not just answer a question with a link to the docs, but maybe share some of your production expertise, or any story that might help somebody that's trying to use the technology. And with that, it looks like Dims got his Zoom figured out; we were just finishing with our introductions, so feel free to introduce yourself and then we'll get started. As people ask questions in #office-hours, we've got a bunch of questions lined up already.
A: Okay, and actually the very first question is for you, because this morning we did get a GKE question and a bunch of us were stumped, so I'm gonna go ahead and post it in the channel real quick. Just repeating here for everyone: please keep in mind that there is about a 10-second delay between the Slack channel and our live stream.
A: So the question is: this might be more of a GCP question, but does anyone know which GKE permission, if any, controls a user's access to create certain CRD resources that fall under certain APIs? I'm guessing container.customResourceDefinitions.create allows users to create CustomResourceDefinition objects, but I'm after controlling permissions to create something like apiVersion: certmanager resources. You can read the YAML there on the channel. Sorry, I guess!
A: So, if you can help us with that. And while you're thinking about that: we actually discovered earlier today that just about all the cloud providers have a channel on the Kubernetes Slack. So if you're looking for provider-specific stuff, that was basically the answer we came up with this morning: hey, check out #gke, but there are channels available for other providers as well.
F: So I believe most RBAC things are done through RBAC. You can set up users with the identity integration, but if you want to set what the RBAC rules are, you use normal Kubernetes RBAC. I do remember seeing, and I'm trying to pull it up, there was a recent launch for integrating Google Groups with RBAC, which might be helpful. But generally speaking, I think with pretty much all providers, if you want to set up RBAC rules for, especially, a custom Kubernetes resource, you just use Kubernetes RBAC. Okay.
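As a sketch of that last point: RBAC rules for a custom resource look the same as for built-in types, you just name the CRD's API group and resource. The group and resource names below (`certmanager.k8s.io`, `certificates`) and the user are illustrative and depend on what CRDs are actually installed in your cluster:

```yaml
# Hypothetical Role granting create on cert-manager's Certificate resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: certificate-creator
rules:
- apiGroups: ["certmanager.k8s.io"]   # the CRD's API group
  resources: ["certificates"]          # the CRD's plural resource name
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: certificate-creator-binding
subjects:
- kind: User
  name: jane@example.com               # hypothetical user from your identity integration
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: certificate-creator
  apiGroup: rbac.authorization.k8s.io
```

Note this controls creating instances of the custom resource; creating the CRD itself is a separate, cluster-scoped permission on `customresourcedefinitions` in the `apiextensions.k8s.io` group.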
A: And this person was from this morning, and they were able to find that GKE channel, so I was mostly kind of asking in case you happened to know off the bat. Okay, moving on. Those of you just joining us: we are answering questions live for the next hour in #office-hours and on the YouTube channel. Basically, keep asking your questions, I'll read them off to the panel, and then we all continue to go. Let's see, next question: Srampol asked a question for the office hours.
E: I'll take a stab, sure. The OIDC one, that's the easiest place to start, I think. The OIDC integration should work out of the box, so that's where you would start. And then, if you have any additional things that you would like to do, you write custom webhooks, and custom webhooks are always useful when you are integrating with things that are not already there in the box, so to say.
A: Yeah, and users, feel free to always ask a follow-up question. If you have any clarifying information or anything like that, feel free to either post a follow-up, or we'll do the whole "we'll wait for you while you type furiously" thing. So all right, moving on; Srampol, let us know if you have a follow-up to that. Jim Angel, welcome to office hours, asks: I would like feedback from anyone currently using a service mesh in production. What challenges did you face? Can you share your most painful learning experience of implementing it?
B: Any feedback on this? I've only ever worked with folks on proof-of-concept service mesh items, not any production ones; it's always been proof-of-concept clusters and test clusters to see if it was feasible. Usually Istio, to be honest, was the one that I've worked with, and a lot of it was getting all of the endpoints to talk as expected, based on how everything was open in the past and they had locked everything down.
A: All right, hopefully that helps answer your question, Jim; sorry we can't give you a lot more detail there. So Srampol asks, on SIG Cluster Lifecycle future direction: is it expected that there will usually be a custom cluster management application running on top of the Cluster API, or is the expected/recommended direction that customers directly use the Cluster API to specify clusters to be created, along with CRDs for non-standard enhancements?
E: So, just like, you know, there were a lot of install tools before kubeadm, and we kind of got everybody to start using kubeadm; even if they present another UX, internally they should use kubeadm, right? Even Minikube uses kubeadm now. So, similar to that, we want to have something that people can rely on, and it will work in the same fashion across the different cloud platforms. That's the reason for doing the Cluster API.
E: Yes, once we do the API, we will be providing the CLI tools as well, so you can use the CLI tools and do whatever you need to do, or you can write a custom cluster management application if you want to. But the idea is that we will end up getting kops and Kubernetes Anywhere and other things like that to switch to the Cluster API. So essentially the management tools will use the Cluster API, and then kubeadm.
E: It could, depending on what they are trying to do, yeah, sure. Okay, just to start with: in SIG Testing we are constantly running into problems with some of the tools that are not fully maintained, like Kubernetes Anywhere, for example. So that is where we would like to start first, saying: okay, we want to switch over from Kubernetes Anywhere to the Cluster API, use the Cluster API with both AWS and GKE, prove that this is possible and works fine for our use case, and then kind of expand from there. So currently there are OpenStack folks, AWS folks, there's GCE and GKE stuff; there are a lot of people working on the Cluster API for different cloud providers, so I think it's an exciting time to join and help out.
A: Moving on to our next question, from Tim W, who asks: I've inherited a kops-managed cluster on version 1.7 running on AWS. I'd like to upgrade to the latest versions, but I'm a bit shy of just running the standard upgrade commands untested in production. (I hear ya.) Is there a proper way to handle these types of updates? Should I be cloning the entire cluster in order to test upgrades? We have 10 to 15 main interconnected services.
A: Okay, so this is actually the second upgrade question we've had today. The first one was, I don't think it was kops, it was something else. I'm starting to think, and this is a loaded question I'm asking on purpose: it has started to look to me like the default upgrade path has always been "spawn a new cluster, move your stuff over, kill the old one." Is it?
D: I wouldn't call wholesale spawning a new cluster necessarily the way of performing the upgrade. And it's okay, it's not his cluster, he didn't set it up originally, sure; so he doesn't even know at this stage whether the cluster operates strictly according to a kops spec file, whether it was deployed with kops and then somebody modified it.
B: And he does have the CloudFormation, though. I also want to do some load testing; let me cut off my video. You'll also want to load test those applications on the staging cluster, just to make sure you don't see any weird behavior. And as an upgrade option, I've seen before that when you do an upgrade on a kops cluster, it does give you a very nice printout of everything that is going to be upgraded to the next version. So you'll want to roll 1.8 to the next one, and you'll want to do that on the staging cluster.
B: Not that I'm aware of. There is actually a flag, before you perform the upgrade, which I would definitely suggest using on the staging cluster: it does tell you everything that might be updated before it goes through and does the upgrade itself. So there's a verify step before you actually go through.
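A rough sketch of that flow with kops, rehearsed on a staging cluster first. The cluster name is a placeholder, and exact flag behavior varies by kops version; the dry-run preview is the printout the panel is referring to:

```shell
# Preview what an upgrade would change (kops prints the plan without applying it).
kops upgrade cluster --name staging.example.com

# Apply the version bump to the cluster spec.
kops upgrade cluster --name staging.example.com --yes

# Push the updated configuration, then replace nodes one at a time.
kops update cluster --name staging.example.com --yes
kops rolling-update cluster --name staging.example.com --yes
```

Repeating this one minor version at a time (1.7 to 1.8, then onward) matches the supported Kubernetes skew policy.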
A: Okay, you cut out a bit, Ralph; I think you might just want to kill your video for the rest of the session. But he did say something that makes me want to ask again, because I noticed a little smile out of Dims in the corner of my eye. I saw him smile: in-place upgrades, or a kind of new cluster? I'm purposely asking this controversial question.
E: In SIG Testing we do have upgrade jobs, but the upgrade jobs are specific to certain tools that we have, right? And different tools do different things, so you can't tell for sure whether what you have will actually do the upgrades in a proper fashion: changing etcd versions, pulling all the stuff in from the older versions of etcd, from etcd 2 to etcd 3, things like that.
E: So it's hard to tell. But then, we've always had this problem of: suppose I have a really big cluster; then I can't really recreate a new cluster, right? So yeah, that's definitely something we need to work on better as a community. How do we support really large node clusters? How do we make sure that upgrades work seamlessly? And skip-level upgrades is another hot topic: we just talked about someone who has to upgrade four versions, right? So how do we go from there?
F: There's a migration in one there, yeah. So I would add, for SIG Testing, for the tooling that we've got to do the testing, which also runs on clusters: we would prefer everywhere possible to just use new clusters. It also helps you guarantee that what's running is actually from your config. But we do have that same problem where, when you have a large cluster, it's kind of disruptive to say, well, we're just gonna delete this cluster and make a new one.
F: We have a cluster that's been upgraded on GKE for so long that we don't actually have RBAC on it, and as part of the CNCF migration we're going to finally do some of those things, but we expect it to be pretty disruptive. So yeah, for large clusters, in-place upgrades can go pretty smoothly, but you might want to test that somewhere first.
A: Of course, one of my favorite features of the managed cloud services is automatic upgrades. Okay, Tim, I hope that answers your question. If you are doing this, we do encourage you to blog about it and let the community know; I'd be very interested in how people do this, what precautions you take, that kind of stuff. So hopefully that gets you going in the right direction. As always, anyone, please feel free to ask follow-up questions, even if it's not your question; that's kind of how I learn as well. Srampol asks the next question.
A: Looking for some recommended network policy recipes or models for providing a secure-by-default cluster that has some level of isolation and protection between the control plane namespaces, such as kube-system, general application namespaces, and maybe some extra services namespaces that are running monitoring services for apps, for example. Anyone know of a good recommended design or recipes? This is a slightly open-ended question; happy to chat offline with anyone on more specific details. So this sounds like RBAC and namespaces.
E: This one we'll have to take offline. There is a sig-security channel, and I remember a lot of URLs being posted there. So if you go right now to sig-security and go through the history, you will see a few things being talked about, with URLs and pointers to different resources. I don't know any off the top of my head, but while we are talking I'll throw some links out if I see them. Okay.
B: Yeah, I put a link in the channel to the one that I use to base most things off of. It's a pretty good starting point, with really nice-looking examples in markdown documents, to give you a place to start, and then you've got to kind of tweak based on your namespaces and things like that. But it has some recipes that are pretty useful. Okay.
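One common pattern in that kind of recipe collection is a default-deny policy per application namespace, then opening only what's needed; a minimal sketch, with the namespace names and labels as placeholders:

```yaml
# Deny all ingress to pods in this namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app            # hypothetical application namespace
spec:
  podSelector: {}              # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
---
# Then allow only the traffic you actually want, e.g. from a monitoring namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring
  namespace: my-app
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: monitoring     # assumes the monitoring namespace carries this label
```

Keep in mind NetworkPolicy objects only take effect if the cluster's CNI plugin enforces them (Calico, Cilium, and similar do; some do not).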
A: And that is github.com/ahmetb/kubernetes-network-policy-recipes; that's in the channel, so hopefully that'll give you something to steal and get started. Dylan has four questions, so we're gonna try to power through these, all right. The first: on EKS, should I stick with the default max pods as defined in the CloudFormation template? If not, how would I determine a better amount? I don't know how familiar the panel is with EKS; anyone want to take a stab at this one?
B: I don't believe it takes them into account; it's just all pods that get scheduled to a node. So it's not going to exclude any DNS pods that get added, or any replica sets. And I don't believe the CNI has a pod that lives there, because it's actually running kind of in a special way that attaches every pod to a VPC IP address.
A: This next one: he's asking about EC2 instance types, but I want to generalize this, because we do get this question quite a lot. What is a good general-purpose process for choosing an instance type for nodes? He's having a problem where jobs that take five seconds on a MacBook Pro locally will take over a minute in a pod. Huge bottlenecks are the load time when Python is importing the pandas library, and delays in S3 handshakes; and he's using the Alpine Python 3 Docker image, not sure if this is more of a Kubernetes config thing or an Alpine issue. But we do get this question a lot: how exactly am I supposed to figure out how to size my nodes here? Does anyone have any general tips in this case?
E: I mean, you can always play with resource requests, and limits, and things like that too. But basically you start with something where you can monitor the CPU and the memory, right? So you start there; Prometheus and things like that are there for this. And then you figure out how many of these you want to fit onto a node, right? That will give you a rough estimate of the size that you need, plus, you know, the typical overhead for things that are running on the node, like the kubelet itself.
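That starting point, as a container spec fragment; the numbers here are placeholders to be tuned against what your monitoring actually shows:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
  - name: worker
    image: python:3.7          # illustrative image
    resources:
      requests:                # what the scheduler reserves for this pod on a node
        cpu: "500m"
        memory: "512Mi"
      limits:                  # hard ceiling: CPU is throttled, memory overage is OOM-killed
        cpu: "1"
        memory: "1Gi"
```

A rough node-sizing rule of thumb that follows from this: node capacity should cover (per-pod request times pods per node) plus system and kubelet overhead.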
A: Sure, and that kind of leads to his next follow-up question, which is a question we also hear: our setup has worker pods pulling jobs off a Redis queue; new jobs are added every minute, and the goal is to keep the queue at zero. How do you manage scaling the pods versus the nodes? I assume it's always best to add workers up to the max pods limit and just scale the nodes.
A: Are there instances where I would not want to have max pods be the number of workers? So I think what he's trying to get at is: how do I figure out whether I'm trying to pack more pods into a node, or whether I make bigger nodes with a smaller number of pods? It feels like the kind of eternal question there.
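For the pod side of that trade-off, a minimal HorizontalPodAutoscaler sketch; the `worker` Deployment is hypothetical, and note that scaling on queue length rather than CPU would need a custom metrics pipeline:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker                       # hypothetical Deployment of queue workers
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70   # add replicas when average CPU exceeds 70%
```

The node side is then typically handled by the cluster autoscaler, which adds nodes when pods are pending because nothing has room for their requests.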
A: Okay, so hopefully that'll give you an idea of a place to start, Dylan. Please do continue to post in the channel if you have follow-up questions; it looks like Ralph is also doing the text-only follow-up, so thanks, Ralph, for doing that.
Moving on to two more questions here. fdo xyz says: hello, not sure if it's a valid question, since I'm pointing you at the Kubernetes add-ons. What's your general stance on the EFK stack add-on that lives inside the official repo? Is it a good template to personalize, or should I try rolling out something else here? It has still been maintained, but I also think the add-ons are no longer officially supported by the project as they were before. And then here's the GitHub URL to what he or she is talking about, and I'll post that in the channel.
E: In general, we are trying to take things out of the main Kubernetes repository, right. It was a good effort, the samples that got into these add-ons, so you can use them as a starting point, but then, yes, please contribute back as well. It's mostly people who were trying to do something; they got it working, so they said, okay, maybe it's a good idea to maintain it in the main Kubernetes repository. But then we are going in the opposite direction.
E: Now we are proposing that people take things out of the main Kubernetes repository and maintain them in their own little repositories, because, you know, getting review cycles is hard, things like that. So some of these are not maintained, because people have moved on to other things. So yeah, take what you can, see if you can make it work, and we would appreciate it if you start a new repository and take care of things.
A: It does look like it was committed to 21 days ago by neolit123, who's actually always active on Slack as far as I can tell. Actually, it looks like he's been maintaining this for at least a year; I would just ping him on Slack and be like, hey.
A: Any other comments on this one? This one seems pretty straightforward. So thanks, fdo xyz. Those of you that are answering questions: we're a little bit over halfway done, so stick around to the end so you can be entered in the raffle, and I will give you a Kubernetes t-shirt, which we always forget to wear during the actual show.
A: Okay, and with that, while we're waiting for more questions from the channel, let's see what's going on in #kubernetes-users, which always has questions. Let me go ahead and ask for more questions in #office-hours, without overly spamming on this, and there's the stream link. So while we're waiting for more questions, with the delay there: what are you all looking forward to in this next version of Kubernetes? I know we have a release coming up. What wakes you up in the morning, Dims?
E: Oh, I'm excited about doing things on architectures other than Intel. We did a lot of work trying to get images right; you know, the images that we use typically have a suffix saying amd64 or arm. So typically, when you write your YAML file, the image is very specific to the architecture. So, for example, kube-proxy: if you have a YAML for kube-proxy, you will see that kube-proxy has a suffix in the image name.
E: So now, if you just say "pause", that is enough to give you the image that will work on all architectures. We actually switched all the test images that are used in our test infrastructure for conformance testing to be images of this kind. We have like 20 or 30 images like that, which we had to modify, adding manifest lists, so they would work on different architectures. So this is something that will really help, I think.
E: The first step is to make sure that the same things that we use for the Intel architecture will work on ARM, right? So a full ARM cluster will work off of the same YAML files that you had for your Intel architecture. Then, after that, in 1.13, we will start experimenting more with mixed clusters, where, like, the master might be on Intel, but then you will have POWER nodes or s390x nodes, or things like that.
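In such a mixed cluster, nodes carry an architecture label you can schedule against; a sketch of pinning a workload (the label was `beta.kubernetes.io/arch` around this era and later graduated to `kubernetes.io/arch`, so check your version):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: arm-worker
spec:
  nodeSelector:
    beta.kubernetes.io/arch: arm64     # pin this pod to ARM nodes
  containers:
  - name: app
    image: k8s.gcr.io/pause            # manifest-list image: no per-arch suffix needed
```

With manifest-list images, the same pod spec without the nodeSelector would also run fine anywhere, since each node pulls the layer for its own architecture.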
D: We have a release team roll call. And when we do the actual release, after the day of the actual release and after the postmortem, the release lead steps down and that person takes over the current one. The one for 1.11 is foxish on Slack; I'm blanking on what his real name is.
D: The patch release manager is actually a very important role, and if you're interested in being on the release team, but you need to limit your time in a way that doesn't allow you to be super distracted for the last weeks of the release cycle, it's something people should take a look at, because the dot releases are a little bit more predictable.
A: Someone mentioned that the ARM channel isn't that busy. I think it depends on whether Ed from Packet is around or not, that's right. So, Ed's local to me, and Packet does some really good ARM stuff; I'll just leave it at that. But yeah, Ralph, anything you're looking forward to in this release? Well, before we get to Brian Topping's next question, yeah.
E: On that front, if somebody can help us kick the tires on Docker 18.06, it would be really helpful. We cut an RC1 release today, so please, you know, kick the tires and let us know if you have trouble; you can find us on the sig-release channel and we will escalate it to the right place.
A: Do we have any best practices? Like, I know that the latest release that we test and guarantee is the one that's in the repo, but I've also heard people saying, well, we use the upstream one and it kind of just works. How does that play out? Other than, I know what Josh's answer is gonna be, because he's just gonna tell us CRI, all right.
A: Good to know. That's a great question, Jim, thanks for that. I'm taking down everyone's questions for the t-shirt raffle, which we'll have in a few minutes here. Brian is asking, and we had this question this morning: generally speaking, how do you all build and run your at-home clusters? kubeadm on bare metal? Anything; intentionally vague, an open question. This is actually an interesting question, because we get this one a lot. We had a question this morning with someone having a problem with Minikube, and then someone said...
A: ...that's why I use kubeadm, because it kind of gives me the real thing, and then someone's like, but Minikube is moving to kubeadm. So I'm just kind of curious; I know a lot of this is personal taste, developers have their favorite tools. So just a quick survey: how do you run yours? Just spawning a bunch on a managed cloud server does not count as an answer.
D: I have a micro-cluster of little x86 boards as one of my primary test platforms, and for that I tend to either use kubeadm, if I'm testing current Kubernetes, or a bunch of Ansible, if I'm testing OpenShift. And when I'm deploying stuff in the clouds, it's between kubeadm and, sometimes, Kubespray. Okay.
C: Same as Ralph. Working with customers, you obviously have to, you know, follow whatever deployment or bootstrapping tool they're using, but on my own, kubeadm, just like Ralph said. It works a lot better for me in a bare-metal situation in the home lab, and it's where I cut my teeth in the first place, so that's what I like to keep using. And Brian, you have an opinion?
G: I have gear I bought off eBay, and it's literally sitting in the colo and stuff like that. So I've been using MetalLB, and going with kubeadm to start with. But I guess the thing that's challenging is really getting to that first deployment, where you're sitting there saying, like, wow, everything's running and I'm deploying stuff and it's working.
A: All right, so let's look. Srampol asks: hey, I was idle but joined live; did my questions get answered? Don't worry, we have all of these on YouTube, and there's an entire archive and playlist; you can literally watch hours and hours of us just answering questions. Ropetry asks: what is the best way to deploy on premise?
E: On DNS records, let's start from that, because I don't know how to start from the other end. So, DNS records: I think there is an external-dns project which can help you with integrating with your corporate DNS service; let me find the URL. It has a bunch of integrations, with, you know, non-traditional ones as well, and you can probably write your own plugin if it comes to that. I don't have an answer to the rest of the question. Okay.
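For the DNS part: external-dns watches Services and Ingresses and creates records in whatever DNS provider it is configured against. A sketch of the annotation it keys on, with the hostname as a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # external-dns creates and maintains a record for this hostname
    # in the configured DNS provider.
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
```

The external-dns deployment itself is configured separately with the provider and zone it is allowed to manage.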
A: So we are running short on time; hopefully you got the URL, and that will get you at least started, and you can come back next session, hopefully with a follow-up, if you need any help. And then the last question is gonna go to Mati Cruz, who asks: what's the best practice to deploy an HA Kubernetes cluster on bare metal, on-premise? I get the feeling...
A: ...the answer is gonna be the same thing that we did last time, although one thing I have learned is that HA is such a loaded term, because when I actually went in and looked at the specification that the kubeadm folks wrote up on how they're gonna support HA and stuff, HA, depending on who you ask, seems a little bit too general of a term. But, opinions here?
D: The other thing I'll say is, if you're doing kubeadm, right now it only provides one configuration option. There are actually two degrees of Kubernetes HA: one is, will you make just the etcd backing store HA; the other one is, will you actually have running failover nodes for the singleton control plane services, like the controller manager. The second one is fully automated HA, and that's what's described; part of it is currently manual.
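For context on the kubeadm direction being discussed: the rough shape is a load balancer in front of multiple control plane nodes, expressed through the kubeadm config file. Field names differ across the kubeadm config API versions, and the endpoint below is a hypothetical address, so treat this purely as an illustrative sketch:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
# A load balancer fronting all apiserver instances; hypothetical address.
controlPlaneEndpoint: "lb.example.com:6443"
etcd:
  local:
    dataDir: /var/lib/etcd   # stacked etcd, one member per control plane node
```

Each additional control plane node then joins against that shared endpoint, which is what makes both the etcd quorum and the apiserver failover work.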
A: Right, before we get to the raffle, just a quick outro here. Thanks to our panel for volunteering; this is a volunteer event. So if you're deploying Kubernetes or something, and you feel like you want to give back to the community, it's an hour a month, and it's a kind of laid-back show. As long as we have enough people to have a panel, you don't have to show up every single time, but it is a lot of fun. I'd like to thank the following companies who've donated engineering time to this.
A: So thanks to Giant Swarm, Heptio, StockX, Packet.net, Pusher.com, Red Hat, Weaveworks, VMware, Xing, Huawei, and the University of Michigan. These organizations have volunteered their engineers to help us out, so give them a thumbs up if you would. And we are ready for the raffle, so I am rolling, and the winner is actually Mati Cruz, the very last person to join; you won a Kubernetes t-shirt, so stick around.