From YouTube: Kubernetes Office Hours (West Coast US) 20180418
Description
Join us on the third Wednesday of every month! All experience levels: https://github.com/kubernetes/community/blob/master/events/office-hours.md
Audio sounds good. First try! We are getting better at this. All right, welcome everybody to the Kubernetes Office Hours, our once-a-month community livestream where I grab a bunch of experts, like these gentlemen here, and we answer your questions live for an hour in Slack on #office-hours. So let's go through intros.
All right, I am Mario Loria. I work for a company called Liquid Web here in Ann Arbor, Michigan, and I also run a local meetup group, which includes some of the people in this chat here. We get together, eat pizza, and talk about Kubernetes. So we just like to do that, you know, every so often.
All right, my name is Jeffrey Sica, also at the University of Michigan. I'm mainly a developer, so I focus on architecture and applications.
I'm Josh Berkus. I'm on the release team with Kubernetes, currently the release lead for the 1.11 release, which is in process, so I can answer people's questions about how releases work. I work at Red Hat on the community team there. Also, one of my personal passions is running databases on Kubernetes.
Awesome, awesome. And I'm your host, Jorge Castro. I work at Heptio, where I'm the community manager, and I help organize and run this whole thing, and I'm the streamer.
So with that, let's get started. We have a few rules on how the livestream works, so let's just go over them real quick. This is a judgment-free zone, so when people ask questions, we want to be supportive of what they are trying to do in the best way possible.
So please, please be cognizant of that and obey the code of conduct, even if someone is asking a question that you might not like. We'll do our best to answer your questions, but our panel doesn't really have access to your cluster, and some questions are really hard to debug. We might end up defaulting to showing you how to get the information that you need, and maybe giving you enough information to figure out where you can get started.
So apologies for that, but distributed systems are hard and networking is hard, so we'll do our best. Given the format, the audience can totally help out by pasting in URLs of things that you think might be useful, like links to official docs or maybe a blog post that's related to the question being answered. Please feel free to toss any relevant information into the channel; we would really appreciate that, as would the person who's asking the question. All of these sessions are available on YouTube.
So if you're using this as a work resource, or it's helping you out, please give us some feedback on how we can make this a better resource for you, because we're always trying to tweak the formula and see what works and what doesn't. And let us know if you want to sit on this panel, if you've gone through a really hairy installation or something and you want to give back to the community.
Let's see, we'll be holding raffles soon, as soon as I sort out my raffle code issue, but we're going to make it so that if you ask a really interesting question, we'll send you a t-shirt or something cool like that, so stay tuned for that. And as always, feel free to hang out in #office-hours throughout the month.
So when we have time here between questions, I do have some questions for you about the release, specifically where you could use volunteers and help.
So Vice Key asks a question: "I need to run rtpproxy on Kubernetes, which proxies RTP TCP/UDP streams on port pairs allocated for each session by an outside controller. This requires a large port range to be routed one-to-one to the rtpproxy pods. I know there's a few hacky and cloud-y ways to steal the node network namespace, etc., but what would be a real Kubernetes way to approach this?"
Right now, as far as allocating port ranges and things like that goes, Kubernetes is definitely not the best. The way we have done something similar for another application is to use MetalLB and, you know, host networking, but the application itself binds to a specific IP that's taken from the MetalLB-provisioned load balancer IP.
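As a rough sketch of the pattern described above (all names and the address are invented, and this assumes MetalLB is already installed with a pool covering that address):

```yaml
# Hypothetical Service that asks MetalLB for a specific IP; the rtpproxy
# pod (running with hostNetwork) then binds its media ports to that address.
apiVersion: v1
kind: Service
metadata:
  name: rtpproxy
spec:
  type: LoadBalancer
  loadBalancerIP: 192.0.2.10   # example address from the MetalLB pool
  selector:
    app: rtpproxy
  ports:
  - name: control
    port: 22222
    protocol: UDP
```

Note that a Kubernetes Service cannot express a whole port range, which is why the panel's suggestion falls back to host networking and letting the application bind the session ports itself.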
And in the meantime... sure, yep, sorry about that. We can't answer everyone, but we will try, and at a minimum we'll at least get eyeballs on your questions on Stack Overflow to get you some help.
Hariharan Subramanian, hold on, Hariharan Subramanian, I hope I didn't butcher that, has a really long question typed in. He says: "Hello, I created a pod security policy and added it to a cluster role; here's the ClusterRoleBinding."
So what exactly is happening is that he is setting up the Kubernetes system, including flannel networking. When he removes allowPrivileged, which you want to do in a secure installation, flannel stops running, because it's trying to escalate to a privileged container and does not have the correct permissions. It'll take me some time to find this; I know there was a specific recent bug fix around this problem. Okay, and I don't know whether that bug fix is in 1.9.3.
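For reference, the shape of the objects being discussed looks roughly like this (a sketch; the names are illustrative, not taken from the question):

```yaml
# A permissive PodSecurityPolicy for system daemons such as flannel,
# which genuinely need privileged access; everything else can be bound
# to a restricted policy instead.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged-system
spec:
  privileged: true          # removing this is what breaks flannel
  allowPrivilegeEscalation: true
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-privileged-system
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["privileged-system"]
  verbs: ["use"]
```

The ClusterRole is then bound (via a ClusterRoleBinding) only to the service accounts of the system components, so removing allowPrivileged from everything else doesn't take the network plugin down with it.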
One of the things I do want to share with the audience here, since we have some time while we're waiting for questions, and I will paste this in the Slack, is the list of SIGs in Kubernetes. Each section of Kubernetes is divided into what are called Special Interest Groups, and each of those groups owns a section of code, or is in the process of claiming the sections of code that they own. Sessions like this are kind of a general catch-all for questions, but for the really hairy, specific issues...
...each part of Kubernetes has a SIG associated with it, and each SIG has a public meeting, mailing lists, and a Slack channel, so if you know exactly where your problem is, you can usually get some really specialized help there. Of course, the problem is: how do you know exactly which SIG to go to? That's where sessions like this can help point you in the right direction. So we have a few people typing here in the channel. Any interesting questions in #kubernetes-users, Jeff? Wow.
Wait, hold on, so are they doing any sort of... do they just have two separate, independent Kubernetes clusters? From the question, that's what it said. Interesting. I mean, off the bat I'd say no, it shouldn't really matter as long as they're actually separate. Yeah, it really should not matter, because any workloads it might be talking to that live on another cluster are being accessed through some sort of ingress or external IP address, etc. Internal cluster IPs for internal services never matter outside the cluster, and those shouldn't be accessible from the outside anyway.
Roth, do you have audio? All right, I do have audio. I was actually going to say it sounds like a use case for Ark, where you back up what's currently there, create a new cluster in the new location, and then siphon workers off. I don't think there's a clean way to move a cluster in place like that. It is possible, I guess, to add more nodes to etcd to keep warm, but I'd be afraid of things getting hairy when moving it, to be honest. Yeah.
I think there might be a way to do it, but the kind of precautions that you would have to take on the path to get there would be tricky. So, for instance, you could take down the primary etcd instance, the one that's handling writes, first, then take down the read-only instances, and hold on and try to kind of move things slowly and get that going.
All right, before you move on to the next one, Jeff, I've got one from Vice Key again: "A bunch of people here on Slack seem to have issues with their certs expiring, and then their clusters are dead, and they just bash at kubeadm or something, but I've not seen a clear guide on what to do when those one-year certificates die. Asking for a friend, as I started with Kubernetes less than two weeks ago." First of all, welcome to the Kubernetes community; thanks for hopping in. So what are we doing with certs?
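For context, a minimal sketch of what you can do on a kubeadm-built cluster of this era (the paths are the kubeadm defaults; the exact subcommands varied between releases, so treat this as an assumption to verify against your version's docs):

```shell
# See when the API server certificate actually expires
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt

# On kubeadm of this era, the certificate phases could be re-run to
# regenerate expired serving certs from the existing CA, e.g.:
kubeadm alpha phase certs apiserver
kubeadm alpha phase certs apiserver-kubelet-client
```

The CA itself is valid much longer than the one-year serving certificates, which is what makes regeneration from the existing CA possible.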
I'm just constantly scrolling up looking for new ones. I have one that we kind of covered earlier today, but I feel it's worth covering again. Sure, sure. I think it was gosim at 12:36 who asked:
"Hopefully a quick question: trying to lock memory in Elasticsearch. Tried passing in limits, etc., and the security settings provided by the Docker image, but it doesn't seem to respect it, as Elasticsearch still complains that it can't lock JVM memory." Have you dealt with memory locking at all?
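One pattern that has worked for Elasticsearch memory locking (a sketch under assumptions: the names are illustrative, and Elasticsearch must also be told to lock memory on its side):

```yaml
# Grant the container the IPC_LOCK capability so the JVM is allowed
# to mlock its heap; bootstrap.memory_lock makes Elasticsearch ask for it.
apiVersion: v1
kind: Pod
metadata:
  name: elasticsearch-example
spec:
  containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    env:
    - name: bootstrap.memory_lock
      value: "true"
    securityContext:
      capabilities:
        add: ["IPC_LOCK"]
```

Even with the capability, the node's memlock ulimit has to allow it (for example via the Docker daemon's default ulimits), since Kubernetes itself does not expose ulimit settings.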
All right, just grabbing people from #kubernetes-users while waiting for more questions. Yeah, and a lot of the Java stuff that we looked up was just generic running-Java-in-containers material, and I don't think we're seeing enough Java 10 in the wild yet to make recommendations, stuff like that.
All right, so we still have plenty of room for questions. You can just ask them in #office-hours if you're joining the live stream; that's on the Kubernetes Slack, at kubernetes.slack.com. We are still looking for questions in #kubernetes-novice as well; I'm looking through those.
Yes, we'll do that. So I feel that kubeadm gets kind of a bad rap, because its page is actually almost too honest: it tells you exactly what the limitations are, and currently the one that people harp on the most is HA on the control plane. But look at the other features: they recently updated the page for kubeadm with a feature grid that will tell you whether they consider each feature production-ready or not. So if you don't need HA, then kubeadm has been production-ready for a long time.
There's a full proposal that you can check out if you look at SIG Cluster Lifecycle for the progress that they're making on HA, and they are making good progress; things are getting better. Despite all that, a lot of tools build on kubeadm, so if you use Kubicorn you're using kubeadm, and things like that. A lot of community work is going into it; it's just that people tend to hang on that one feature.
Yeah, I think a lot of people kind of blur the lines between production and HA. A lot of people think production should be inclusive of HA, and that's just not true for most software that you see come out. When a release is made, they don't say: okay, you know, this is the greatest thing in the world, it supports every use case, and HA is completely inherent.
That's not going to be the case, so you really have to look at what your needs are while you're deciding on your approach, etc. And to what Bob was saying, you shouldn't think of clusters as some massive thing that takes weeks of planning and designing before you even spawn your first one. You should really start playing as soon as humanly possible, a little bit, whether that's in GKE or elsewhere, to get an understanding of what's going on, especially if you're doing bare metal using kubeadm and whatnot. So yeah, I think...
Part of the problem is that HA is such a loaded term, right? It can mean a lot of things. If you actually look at the kubeadm HA document, the number of use cases and corner cases and stuff it covers keeps mutating, right? So at some point the whole thing just becomes this huge document.
Really, I know it sounds like a lame cop-out to say it depends on your use case, but in a lot of cases, especially if you're doing this internally at work and you have bare metal and things involved, it's really kind of a homework assignment to figure out whether it covers your use case, which is unfortunate.
There's a quick follow-up question. This is skipping Salsa's question, but it's relevant to what we're talking about here: so what actually happens if you have one master with no HA and it goes away for half an hour? Right, yeah.
One thing that I like: say this is my home lab and I just turned the master off. The cluster can't schedule anything new, but Kubernetes has this mantra that if something's broken, everything just keeps on cruising; it's like cruise mode. Joe Beda has a cool term for it. But basically, you can't schedule anything new, while everything that's running stays running. So, you know, unless you're trying to schedule something new...
For me it was a stupid power thing I saw in my home lab here. You know, I unplugged a machine and hadn't realized it or whatever. I was like: oh no, my whole cluster, what happened, you know, my whole state. And I went to an endpoint and everything was working, and I thought that was really cool. So that's like your cool use case, but...
It's a nice surprise, which is pretty cool. Man Squab asks... all right, now we're getting some questions, now we're heating up; I've noticed that on YouTube it usually takes a while for the live stuff to really hit people's radars. "I want to know where I should start with Kubernetes. I'm going to be spearheading its setup for new development and potentially migrating legacy products as well."
So if you're worried about adoption, the biggest thing that I've always said is that it starts with the developers. Being a developer myself, when I had a team of devs under me and I was helping them get onto a container footing, I made the transition for them as easy as possible. So it's not necessarily the technical answer, but it is this: you want to focus on your devs' workflow before you focus on the infrastructure your applications are going to be running on in production.
It has been a real win on the developer experience. You know, where the devs became important is when we were able to give them a test and staging setup where they could just deploy whatever they wanted, coming out of a previous org where that had to be scheduled by someone else. So give them that kind of freedom and they'll be really enthusiastic about it.
And we do that at Meijer: we use namespaces, and use them very liberally if you need to, you know, to create the separation you need. If you give developers the flexibility to go from zero to 60 in no time, without having to talk to one or two people, put in requests, or ask questions... they just want to get things deployed; they want to figure out if their thing is working and get it moving into the pipeline. So definitely listen to your developers and then kind of go from there.
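A minimal sketch of that liberal-namespaces approach (the names here are invented for illustration):

```shell
# Give each team or app its own namespace so devs can deploy freely
# without touching anyone else's workloads
kubectl create namespace team-payments
kubectl create namespace team-payments-staging

# Deploys are then scoped to that namespace
kubectl --namespace team-payments apply -f deployment.yaml
```

RBAC roles and resource quotas can then be attached per namespace if you need harder separation than convention alone.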
The other thing that comes to mind is: if you are the only person really spinning up this Kubernetes cluster, and you do have a bunch of developers that are going to be working with Kubernetes, find one or two that could maybe help you with the maintenance and help you spin it up, just so you're not saddled with maintaining it yourself. That'll also kind of help build the Kubernetes knowledge within your organization as well.
All right, so that should get you started, but feel free to ask more follow-up questions, because that's a topic we could talk about all day. We have a few more questions queuing up, and then we'll come back to a follow-up question if you have it. Let's see, Pia Vailsburg asks: is there a way to prevent an Ingress resource from creating a load balancer?
Yeah, I was going to say I don't really understand the question that well. A load balancer is created by you, right? And the Ingress resource is just a resource saying: hey, load balancer, I have a thing you should redirect to. It's basically traffic routing. So maybe a little bit more clarity on what he's trying to achieve would be awesome.
So, Pia Vailsburg, if you could give us a little bit more detail there. Yep. Two more in the queue. Josh... Jay Davis would like to add: "I would second that; having developers deploy is a huge plus." Ding ding! Dustin Bachrach asks: "I'm currently placing all of my Kubernetes manifests in a single repo with setup and teardown scripts that will provision a cluster and then execute all the manifests. This is great for development: bring clusters up and down, and spin up all our deployments and services in a single command."
"There's a single-repo source of truth for the cluster. However, I'm wondering, as more and more projects and people start working on our Kubernetes cluster, whether it would be better to co-locate the manifests with each individual project's repo. Any suggestions or thoughts on how to structure things, and how to manage a cluster if manifests are scattered across many repos?"
It's like he's just calling out: Mario, I need you to talk for 10 minutes. So this is funny: I was actually just looking at a post, and I'll link it here as well, comparing all the utilities that are helping people approach this problem. And this problem really is a packaging problem, or a how-do-I-define-my-workload problem. Helm is one of many tools out there now, and in relation to Git-based workflows there are tools that cover some of that as well.
The idea with Helm is that you template out your workload, and then you can write a single file that has values and configuration for how it should be deployed, and then you run one command to deploy it. And all those assets, you know, whether it's secrets, config maps, a deployment, and a PVC, all get deployed, right? It's a single command to handle shooting that out. So I will link this article here; I'm not going to talk too much about it, as there's tons of documentation out there.
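The workflow described above looks roughly like this with Helm of this era (the chart and file names are invented):

```shell
# Template a workload as a chart, keep environment-specific settings in a
# values file, and deploy everything (secrets, config maps, deployment,
# PVC, ...) with one command:
helm install ./mychart --name myapp --values values-staging.yaml

# Roll out a change by editing the values file and upgrading the release:
helm upgrade myapp ./mychart --values values-staging.yaml
```

The values file is what gets co-located with a project, so each repo can carry its own chart while the release history lives in the cluster.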
D
There
is
hash
tag
home
users
here
and
like
to
let
you
slack,
which
is
a
fantastic
community
and
very
supportive
as
well.
That's
a
great
place
to
start.
That's
where
I
have
kind
of
I
mean
we
do
flow
things
in
production
using
helpful
as
well.
It's
a
great
utility!
So
there's
a
lot
more
development,
though
coming
up
for
helm
in
the
future,
which
will
be
more
secure,
more
efficient
and
and
quite
nice
and
again,
a
lot
of
all
their
tools
can
help
you
solve
this
problem.
It's an area where there's a lot of development happening from a lot of different projects. So I do have a follow-up question on his behalf. It sounds like in his existing setup he also has provisioning scripts in his repo, so, you know, if you're a developer it also spins up your cluster for you. How would you break that down in a Helm world?
D
This
is
this
one's
hard
because
it
depends
on
the
environment
and
it
dependent
depends
on
you,
trusting
your
developers
right.
So
you
can
specify
the
name
space
at
at
launch
time
at
right
at
run
time,
and
you
can
specify
different
options
for
that
limit,
and
so
your
developers
worry
on
their
laptop
and
they
need
to
be
working
in
the
belvane.
You
know
clustered
not
the
production
cluster.
D
They
need
to
know
that
they
need
to
either
change
that
option
or
maybe
make
your
default
option
on
development
cluster
and
that
when
you're
actually
ready
to
go
to
prod,
you
actually
say:
okay
go
to
the
production
cluster.
Does
it's
explicit
name
space?
You
know
as
your
production
occurred.
You
know,
they'd
switch
your
context
here,
the
culture
you
want
them,
the
namespace
that
you
want
to
deploy
it
to
so
the
help
them
doesn't
make
any
assumptions
about
that.
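In concrete terms, that separation looks something like this (the context and namespace names are invented):

```shell
# Point kubectl, and therefore Helm, at the dev cluster by default
kubectl config use-context dev-cluster
helm install ./mychart --namespace myapp-dev

# Only when you're actually ready for prod do you switch explicitly
kubectl config use-context prod-cluster
helm install ./mychart --namespace myapp-prod
```

Making the dev cluster the default context is what keeps an absent-minded deploy from landing in production.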
It's all based on your current context for kubectl, which is, I mean, they can't do a ton in that realm; it's really up to the end user and how they approach that deployment. If you're backing up a little bit, to just creating clusters and managing clusters, like kubeadm and whatnot, that's not something Helm does. Helm is, you know, let's say it's at the API level.
You know, if it works at the endpoint, it talks right to the API. So I'm not sure what other tools might be coming out, maybe for handling multiple clusters. Yeah, that would really just be orchestration on top of kubeadm and its options, so maybe there's some stuff coming there. But yeah, Helm is definitely... you know, once you've got the cluster up and running, once you have, you know, the accounts created and you're using kubectl from the laptops, that's where Helm really shines. Yep.
Who's typing so hard? Is that you, Bob? Mute! Someone's typing really loudly. I think that's you, Bob. Thanks.
He asks: "Hello, I need to build and join a new node to the AWS QuickStart Kubernetes cluster. I can't find any information on how to configure the node, and the preflight check is failing. Any pointers?" I've asked him for a copy of the error that the preflight check is giving him; that's what I'm waiting for right now.
So, you know, I'll post those. Next question: what are the cases where a pod spec change is categorized as an update?
Now, for something like a DaemonSet, the spec template is what you're going to want to update in terms of getting a fresh rollout going, especially when you're doing a rolling update; being aware of that is super important. DaemonSets do not fall under the same reconciliation loop per se, because a DaemonSet is really just "make sure this is running on every node," right? So when you do a rolling update, which came out in 1.6 of Kubernetes, you really have to make sure that the change you want...
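To make that concrete, the rolling-update behavior being described has to be opted into on the DaemonSet, and it is edits under the pod template that trigger a rollout (a sketch; the names and image are invented):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  updateStrategy:
    type: RollingUpdate     # older versions defaulted to OnDelete
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: node-agent
  template:                 # changing anything under here triggers a rollout
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: example/agent:1.2.3
```

With OnDelete, by contrast, pods are only replaced when you delete them yourself, which is the older "just make sure it runs everywhere" behavior.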
You just spin up Telepresence on the cluster and you spin up Telepresence on your laptop, and then, boom. Another really big use case is developers that don't have a really powerful laptop that can spin up Minikube and, like, spin up their application on top of all of that; it makes a lot of sense to spin up Telepresence, because then everything runs remotely but appears local.
Interesting question. But I guess it would also give people consistency, right? So yeah, if I have ten developers, and two of them grabbed Minikube a month ago but didn't bother to upgrade... you know what I mean? It kind of helps you keep everything consistent if everybody is using Telepresence. Right, yep.
So we're not using Telepresence, but we should, and the reason is standardization on versioning in our pipeline. So when we want to test a new change, you know, using the latest version of Docker, right, in our core image that we rolled out to dev: the developers on their laptops might not be using that exact same version, and therefore their "latest" might be far from the latest, and so you may be running, you know, a different version of one of our apps or one of our microservices.
D
It
makes
sense
for
them
to
use
teleport
something
like
Phillip
residents
and
run
it
in
the
cluster,
where
you've
got
the
latest
kind
of
changes
that
are
being
tried
and
grouper
pipeline
for
production
right.
So
it's
really
making
sure
that
we're
doing
our
due
diligence
here
for
ensuring
our
workflows
run
effectively
on
whatever
our
latest
kind
of
we
call
it
below
B
API
changes
right,
whether
it's
a
node
and
OS
bubble,
change,
intercept
or
some
sort
of
kubernetes
core
asset
right
like
an
ingress,
Controller
ng
nginx
upgrade.
Yeah, I mean, we're trying to get to a point where every change is accounted for and you can see it, just like a commit, right? You can see it as it moves through the pipeline as well, and whether it impacted things. And so we're trying to really get a battery of tests that work well, which is, I mean, challenging, right? So yes, it is for any org, but we actually do have the isolation between the environments.
Now we can see things moving through it. Yeah, we've run into things like: oh, this shouldn't make a difference, and then, you know, someone rolled it out and all of a sudden things started, you know, freaking out. And this is also because we treat our nodes more like pets, and, as I think you guys kind of know, we're doing a hyper-converged storage route, right? And so there's a lot more going on with, you know, kernel module handling, reliability of Docker, things starting up in node-rebooting cases.
Right, because then you have moving parts on top of moving parts. So that's interesting. Telepresence has been on my list of stuff to play around with, but I haven't really had time lately. More questions: we've got about 10 or 15 minutes left. You can ask them in #office-hours if you're joining us on the YouTube stream, and we're also kind of scanning #kubernetes-users for questions that are reusable. Anything new there, Jeff or Josh? Anything in the IRC questions?
Without the error message, I don't think that we will really be able to do much. Once you get the error message, I'll actually paste it and proceed with that question, Josh.
Thanks for that, Roth. Any other questions? Anything, anything? Well, we have time. Josh, anything interesting that you're looking forward to in 1.11? What are we on, 1.11? 1.12? I just...
Well, because it's a minor feature that I use a lot, I'm actually really looking forward to the overhaul and tremendous performance improvements in affinity and anti-affinity. A lot of people have actually not made much use of node affinity for scheduling pods, simply because once you got to any kind of scale it was really slow; it really slowed down scheduling new pods. In 1.11 it will not be slow anymore.
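For reference, the node-affinity feature being discussed looks like this on a pod spec (a sketch; the label key, value, and names are invented):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  affinity:
    nodeAffinity:
      # hard requirement: only schedule onto nodes carrying this label
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: accelerator
            operator: In
            values: ["nvidia-tesla-k80"]
  containers:
  - name: main
    image: example/app:1.0
```

There is also a preferredDuringScheduling variant for soft placement hints, and pod (anti-)affinity uses the same shape against other pods' labels, which is the expensive case the 1.11 work speeds up.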
So I heard something interesting during the status update: that the number of features for this last cycle was down by comparison, and that it was more a bunch of finishing of features and stuff. Can you talk a little bit about 1.10? And I'm going to ask this question because I keep getting asked it: do you foresee, in the near future, something like an LTS release for Kubernetes, or is it just going to keep on, you know, without LTS releases?
So, for everything else... I mean, actually, 1.10 being a little light on features was unsurprising in hindsight, for two reasons. Number one, January is always a slow month for Kubernetes development. Or at least I say "always" given that we only have three years of history, but the previous two Januaries were also slow: people are coming back from the holidays, they're restarting their work, and they don't necessarily get a lot done.
Some people work for companies where the fiscal year is the calendar year, and so they're doing budgeting during January. And then the other thing that happened was that 1.9 was supposed to be sort of a cleanup-and-stability release, right? It was a short cycle for that release, only nine weeks instead of twelve, and people were expected to focus on bug fixing and getting stuff from alpha to beta and that sort of thing. And they didn't; instead, we got a whole bunch of new features in 1.9.
That's really interesting, Josh; thanks for that. I mean, I don't think that's a bad sign. I think everyone wants to work on new features because it's very exciting, but I think we see that shift now with 1.10, where we kind of come back and say: okay, nope, I need to go and verify this to make sure it works a little bit better, make it more robust. And there's nothing wrong with that; that's fantastic. That's just how development cycles go, though. So for anyone who's listening...
Thinking, oh, that's weird, I don't trust 1.10 because it might be flaky: that's not really the way to look at it. Getting new features in as alpha as early as possible is really great, because you have people that use them, they get some visibility, and then you get the feedback to make them better. So that's really what...
You know, a feature in your cluster might be very alpha and rough, but the next person doesn't use that feature, and to them the release is like the most rock-solid thing ever. That wasn't clearly obvious to me when I first started. Coming from a more traditional open-source background, it was like: okay, you know, the odd-number releases are the stable ones, or the even-number ones are the ones that you would trust, you know, things like that.
Okay, "I'm the singular person that's going to focus on the testing for this certain type of feature," or whatever they're trying to do and how it works in Kubernetes, and they try to take the reins on that, and they pretty much do that with kubectl, right? They use the tool that's easiest for them. So I would also say...
B
This
is
one
of
those
areas
where
the
add-on
tools,
both
both
proprietary
and
open
source,
have
a
lot
of
presence.
I
mean
effectively.
If
you
look
at
open
shift,
that's
one
of
the
big
things
that
open
just
adds
to
kubernetes
in
terms
of
management
of
that
stuff.
If
you
look
at
them,
who
are
the
Frog
people,
J
frog,
hey
frog!
Thank
you.
If
you
look
at
deep
wrong,
that's
what.
I think so. That's why I think it's super important: using a VCS to hold your assets is awesome, but being careful that multiple people don't run in and try to do something at the same time is another question. That's why something like Helm, with, you know, Tiller (and, you know, there are other ways of doing that), can say: oh, I know what's in the cluster now; I know its current state.
"This is what it is." And making sure that the end user is aware of that, through some sort of really nice utility, is the best way to go. I mean, our developers wrote a ton of zsh aliases and functions around making it a little easier to use as well, so there are definitely tools out there that can make this better and make people's workflows much easier. Yes, but there's always that training time, if you will: the time to ramp up and learn those things and, you know, actually deploy them.
Sweet. And with that, in Slack we have time for one more before we go into the outro. I'd like to thank everybody who's asked a question so far; like I said, in the future we'll have swag for you and all sorts of neat, interesting things. Anyone have the Kubernetes fidget spinner handy? I always say I'm going to bring one, but I don't.
All right, a few announcements before we go. Again, thanks everyone for joining us. We are going to be doing this event live at KubeCon in Copenhagen, so we'll actually have a panel: I'll be in the audience with a microphone, we'll have a panel of experts up on the stage, and you can ask all sorts of questions and things like that. So do join us there.
If you are going to Copenhagen, also check out some of the workshops and things like that that people are running on Monday; there's always good, valuable training that happens around a KubeCon. We'd like to thank the following companies for supporting the community with their developers and volunteers for this event: Amazon, Bitnami, Giant Swarm, Heptio, Liquid Web, Northwestern Mutual, Packet.net, Pivotal, Red Hat, Weaveworks, the University of Michigan, and VMware. And lastly, feel free to hang out in #office-hours.
Let us know if you have any strong opinions on how you can help Jeff with a question bot, so we could collect questions over the course of the month and then kind of spit them all out during the event itself. I think that'll help us become more efficient and spend more time answering questions instead of trying to fetch questions from all over the internet. We're also investigating whether streaming to Twitch is a good idea or not, so more to follow there.