From YouTube: Kubernetes Office Hours 20181017
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. Office hours are a regularly scheduled meeting where people can bring topics to discuss with the greater community. We have these on the third Wednesday of every month. They are great for answering questions, getting feedback on how you're using Kubernetes, or just passively learning by following along.
https://contributor.kubernetes.io/events/office-hours/
A: All right, welcome everybody. It is October 17th, the third Wednesday of the month, so it's time for Kubernetes Office Hours. This is the live stream where we take your questions from Slack on how to use and deploy Kubernetes, and then try to answer as many of them as we can for you. Real quick before we start, though, let's go through some intros. We'll go Joel, Peter, Reuben, Michael, Justin, Ilya.
B: Hi, I'm Joel Speed. I work at a company called Pusher, doing a bunch of community stuff at the moment around controllers — I've been in the kubebuilder channel for a month or two, just helping out there. I've also done a lot of stuff on auth and on scaling in the past, so those sorts of questions can be fielded to me.
G: And I guess I'm last — I'm Justin Santa Barbara. I now work at Google, but I originally got involved on the AWS side: I started the kops project, and I'm also very involved in SIG AWS and SIG Cluster Lifecycle, that sort of stuff. So AWS is pretty much fair game for me as well.
A: Awesome — it's about time we got someone from kops on here. I always get kops questions; kops and storage, I feel, are our number two topics, we get a lot of those. So welcome, everybody. Here's how this is going to work. I'm going to lay out some ground rules. We're hanging out in the Slack channel — that's #office-hours on the Kubernetes Slack. You can get to it if you just go to slack.kubernetes.io. Ask your question with NEW QUESTION in all caps and a colon, so I can see it, and then we will go ahead and answer them in the order that we received them. If you have a follow-up question — or sometimes we might say something like, hey, we need more information — feel free to just go ahead and respond, and we'll get back to you. This is a judgment-free zone, so when you see people asking questions in the channel, remember: we all had to start from somewhere.
A: So try to be supportive of everybody, and we will do our best to answer your questions. Unfortunately, we don't have access to your nodes and things like that, so there's only so much troubleshooting we can do. In those cases we'll just try to help you out — maybe send you to the right SIG, or get you started on how to debug the problem.
A: Panelists, you're encouraged to expand your answers with your experiences and pro tips — you know, if you've helped a customer, or if you know how the features sometimes behave, that sort of thing. Any advice you have from running these clusters in production, our users always appreciate. Audience, you can help out by tossing URLs into the Slack channel.
A: So if you see we're discussing a topic that you might have written something on, and you remember a blog post, or something useful, or a piece of docs, feel free to share it. We always have people who go and find the relevant page of documentation and just drop it into the Slack channel. That helps us collect a corpus of information around the topic and helps us spread the word. Also, if we talk about things and you have tools you can recommend, please do.
A: Tools are always useful — I think every single week I've seen a new tool that I've never seen before for some problem, and I just have to spread the word among the community. You can also help us out by tweeting: spreading the word, paying it forward, saying what a great time you had asking your question here. Each of these sessions is recorded and available on YouTube; we have an entire playlist going back almost a year now. I feel like we're due for an anniversary or something like that.
A: So please feel free to check out that back catalog. If you're using these resources at work, I would love to get feedback on how we can make the format better for you. Also, if you want to sit on this wonderful panel — they're all volunteers, so the commitment's pretty much an hour a month, if you can. We have enough volunteers that you don't have to sit in every single time, but it's a good way to pay it back to the community by helping out someone who might be stuck on a problem.
A: Let's see — and if you ask a question and we read it live on the air today, you'll be entered in a raffle to win a Kubernetes t-shirt, which we all continue to not wear; it's not this one. At some point, one of us will actually wear the Kubernetes t-shirt that we are giving away. What happens is I give you a code, you go to the CNCF store, and then you can say, yay, someone helped me at work and I won a t-shirt. And lastly, feel free to hang out in #office-hours in between sessions.
A: We actually keep track of the questions you ask in there throughout the month, so when we have the show once a month, we'll go back and look at questions that went unanswered. I know it can be intimidating to be in #kubernetes-users with, like, 45,000 people in there, so #office-hours is kind of our little spot we've carved out, where you can feel free to ask questions, and we can close the loop on them once a month.
A: Okay — he, or she, might just be listening in. All right, panelists, are you ready? All right, our first question comes from Anfan: when running a Canonical Kubernetes cluster on localhost, what would be the simplest storage class provisioner setup that has dynamic provisioning, unlike "local"? When I was running a managed Kubernetes cluster on Azure, I just used azure-disk; I'm looking for something similar, but on localhost.
A: They also say: "If you could @ me when the question gets brought up, so I can go back through the VOD, I'd appreciate it — not able to stream this a.m. due to meetings." Thanks, Anfan. So, I believe the Canonical Kubernetes cluster is just installing the components via snaps onto disk. I haven't used CDK in a while. Anyone have any ideas on this one?
G: For me, the interesting thing is that they said localhost — I think he or she is talking about a single-node cluster, and my guess is that, for whatever reason, dynamic provisioning is not supported for local volumes. There are very interesting provisioners that you can use on bare metal — I've personally played around with Rook, and I've heard great things about others.
G: That may be overkill for the single-node case, but I guess there are two questions here. There's the single-node case — "I just want a Minikube-type experience" — and I honestly don't have an answer for that. But there are also solutions that do multi-node; they work on bare metal and don't rely on a cloud. I personally don't have experience with any of those.
E: So, if I understand correctly, the person asking the question is trying to replicate what they did over on Azure, and this is obviously the dynamic provisioning part. The question I would ask in this context is: can you replicate what Minikube does for you? Because there you have essentially the same setup. I don't know the answer — I'm not really a big Canonical user, sorry — but maybe you can replicate that somehow.
A: Mm-hmm. I really have no way to help other than that I know the people who wrote it, so worst case I will DM you — I will send you the information in Slack. They have their own channel on the Kubernetes Slack where they help you out. I would actually be interested in how the local tools handle this. Like, does Minikube do this? Does it have something built in, or is it one of those cases where it's, sorry, you need a real cluster at that point?
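To answer that Minikube aside: Minikube does ship a default StorageClass backed by a hostpath provisioner, which is essentially the single-node answer. A sketch of that object (provisioner and names as Minikube shipped them at the time — verify against your version; for other single-node setups, a similar hostpath-style provisioner such as Rancher's local-path-provisioner plays the same role):

```yaml
# Minikube's default StorageClass: dynamic provisioning backed by
# host-local directories, suitable only for a single-node cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Delete
```

With the is-default-class annotation set, PVCs that name no storageClassName bind against this class automatically.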
F: If the answer is that this is not the place we should be looking — it would be very nice if there were an API-level way to get it, if it exists. You can try to get the name of the pod and parse out the last portion of it, but then you can probably just use the hostname to the same effect.
E: The problem is always that they need the current one, right — he or she needs the current ordinal, whatever they are on. So unless they are using the Downward API — I'm going to provide a link in a moment — that's about the most direct way I'm aware of.
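The Downward API route mentioned here looks roughly like this (a minimal sketch; for a StatefulSet pod, the ordinal is the numeric suffix of metadata.name):

```yaml
# Exposing the pod's own name to its container via the Downward API.
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME && sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
```

In a StatefulSet template, the same fieldRef yields names like app-0, app-1, and the container can strip the suffix to recover its ordinal.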
C: It depends which use case it is — what do they want to do with it. Because if they want to use it in the Helm templating — to reuse what was created in the templating of the next object, or to create a list of, for example, all the pods that were created in the service — then they might need to build a small script that gets that out.
G: I'd also say it would possibly be better not to rely on Helm for this, because then, if you were to resize the StatefulSet afterwards, Helm is no longer in the picture. It's sort of the same thing we have with kops: kops is no longer in the picture once you've created the cluster. Same thing here — Helm is no longer in the picture once you've created the StatefulSet.
A: All right, moving on. For those of you who just tuned in, welcome to the Kubernetes Office Hours. We do this the third Wednesday of every month; feel free to hop into #office-hours, ask your question, and our panel will get to it as quickly as possible. Moving on — a kops question, yay. For kops, could I skip the DNS setup, e.g. the Route 53 setup, and create a Kubernetes cluster for now? What would be the side effects?
G: Yes. The big caveat is that it is not possible today to change the name of your cluster, which includes the DNS domain name. One of the decisions that kops made pretty early was to use DNS for discovery, which is sort of why you need this. You can use an internal Route 53 domain name, so you don't have to create a real domain name, but it is a real problem for people to set up domain names in general.
G: It's hard — IT departments often need to approve it — so we added this magic thing called k8s.local. If you name your cluster something ending in .k8s.local, it goes into a different discovery mode, called gossip mode: effectively, instead of using DNS for discovery, it uses a gossip protocol based on the wonderful Weave Mesh library from the folks at Weaveworks.
G: The really nasty thing is that it puts the host names into /etc/hosts, which is maybe a little sketchy, but it seems to actually work in practice, and you're not losing a lot by doing that if you want to. It's not the recommended deployment mode for production, but it does seem to work fine. The big gotcha is that you definitely need an ELB to access your API server at that point — that's sort of the big gotcha, of course.
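The gossip mode Justin describes is triggered purely by the cluster-name suffix. A sketch of the invocation (the state-store bucket and zone below are placeholders):

```shell
# Any kops cluster whose name ends in .k8s.local uses gossip-based
# discovery instead of Route 53, so no DNS setup is needed.
export KOPS_STATE_STORE=s3://my-kops-state-bucket
kops create cluster \
  --name=mycluster.k8s.local \
  --zones=us-east-1a \
  --yes
```

Everything else about the cluster works the same; only the discovery mechanism changes.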
A: Cool, that is good to know. All right — and he is listening, he or she is listening — so thanks for answering that, Justin. Moving on: Anwatson asks, is anyone using Kubernetes Federation in production, across cloud regions or even cloud vendors? Reuben, you want to — thank you, you hopped on this one pretty quick... and now he doesn't want to.
A: Things are in a state of flux with Federation, as most of the development is going into Federation v2, which is very different from Federation v1 from a usability standpoint. Normally Jeff and Bob work with Federation just about every day, but this is the one month neither of them could be here for office hours. They do hang out in the channel, though.
G: I can't say — there's also, I think, churn as well as the flux. There's a team working on Federation v2, which I don't know if it's ready or not; my belief is it's not, or at least not recommended or meant to be used in production. The other thing is, I know there's a team at Google working on some of this stuff, and they do have something called multi-cluster ingress, which is sort of a way to set up an ingress across multiple Kubernetes clusters, so that you can have one hostname.
G: You don't necessarily want to run all your clusters exactly the same way, because if you do them all in lockstep, then they're all going to fail at the same time, right? So ingress seems like a pretty canonical use case, and there's this tool — I think it only supports Google at the moment, I should say, but hopefully other people will add support. I'll drop a link to it in the Slack.
E: We essentially relaunched the whole effort and, as Justin said, tried to ground it a bit more in things that people actually want to do and can do. For me, if someone brings up Federation, the question is always: what do you want to achieve? What is your use case? Do you want disaster recovery? Do you want high availability? Depending on what you want to do, there are multiple options.
C: The cluster registry itself is defined, but there is not much use of it, in terms of people or tools entering the clusters they've set up into the registry. But, like you said, it basically comes down to knowing your use case, and maybe there are other options: putting a global geo-DNS in front of different clusters gives you region-wise failover, and you can have pipelines pushing into different clusters, too.
E: One low-hanging fruit — and that's actually something real, I've seen it quite often with customers — is where you have something on premises, which might be your main production cluster, and then for seasonal reasons, like Black Friday or Christmas or whatever, you want to have two or three or four times the compute capacity in the cloud, and you want to federate that. This kind of use case — in most cases, for stateless stuff — is not a problem.
A
Dana
has
gravity
I
like
that
when
you
use
that
alright.
That
next
question
is,
is
what
I've
been
looking
forward
to.
Jimmy
Joe
asks
welcome
back
Jim.
How
do
you
handle
tracking
user
adoption
over
time
or
charge
back
for
lack
of
a
better
term
at
any
given
Cates
cluster
I'm
curious?
If
anyone
is
looking
at
aggregated
data
of
namespace
activity,
number
of
events
number
of
code
pushes
or
any
other
metric
to
quantify
the
productivity
value
of
kubernetes
within
the
org.
E: From the very beginning there was — I believe it was mainly Tim who coded it — Spartakus, for telemetry within the cluster. It just gives you statistics, kind of like how many pods and whatnot are running; this is something you can readily use. There are newer projects out there that do more or less the same, and I think the question really always is: what do you want to measure?
E: Do you want to produce a number of vanity metrics, or do you want to — for example, if it's about making the point, "look, boss, we're really using our cluster" — you probably want to see how many applications per namespace, per developer, or whatever. There are multiple tools; I'm happy to provide a few pointers there. But that's the easy part; everything else, I don't know.
C: There are kind of two sides to the question. One is showing how productive you are, or what it gave you — you might even want to measure something like how many deployments to production you do during a day, before and after. That's sometimes something that C-level or management find really compelling, where you can say: before, we used to do a release every three months; now we're doing 50 releases a day. That's usually something management really likes.
C: The chargeback side is always really, really hard. One of our customers built their own system based on namespaces — basically getting metrics out of the namespaces with tags and working from that. Or, if you can separate by cluster, it's super easy: you just measure how much the cluster costs you on AWS and put a service fee on top. There are different ways — I guess it's all very custom.
B: I looked at this about six months ago and ended up giving up, because it was just too difficult at the time — mind, I wasn't the one working on it. There's some project that came up in my feed recently that was talking about this, and I can't find what it is, so I've just been googling for the last five minutes trying to work it out. If I find it, I'll post it, but I haven't seen it yet.
A: Ilya, before we even get to the k8s cluster — is there anything in the GitOps world that kind of helps you track this kind of stuff, like what just merged, or...? Well...
F: Yeah, you could potentially do that, because then you could basically use anything that can analyze a git commit log. There are a few tools out there — I haven't looked into this particular problem specifically — but with GitOps, essentially, because git is now your source of truth, you can see what's been happening through the commit log, and with some math you can figure things out. What I was actually going to say, though, was potentially Prometheus.
F: You could also tell the kind of general usage, as long as you have some general way to differentiate what a team has really been doing — if you have a namespace per team, or something like that, you have that sort of convenient dimension and can use it. If it's only on a per-application basis, you could just count unique deployments in some way. For that you might want to use external storage, for example.
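One concrete way to get the per-namespace numbers discussed above: if Prometheus scrapes kube-state-metrics, recording rules like the following give rough per-team usage whenever namespaces map to teams (metric names follow the kube-state-metrics conventions of the time — verify against the deployed version):

```yaml
# Prometheus recording rules: per-namespace activity and requested CPU,
# usable as a crude adoption/chargeback signal.
groups:
- name: chargeback
  rules:
  - record: namespace:pods:count
    expr: count(kube_pod_info) by (namespace)
  - record: namespace:cpu_requests:sum
    expr: sum(kube_pod_container_resource_requests_cpu_cores) by (namespace)
```

Graphing these over months shows adoption trends; multiplying requested CPU by a per-core rate gives a simple service-fee model.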
E: You just triggered an association, and that is auditing. You can also switch on audit logging, if you have access — I don't actually know about managed environments, like, I guess, EKS or GKE, whether you can actually get at the audit log, whether you can get to the control plane to enable it. But auditing is also something to consider. So I would definitely try out multiple of these tools and see.
E: Maybe it's a combination of them. But, as I said initially, vanity metrics are kind of easy to produce — the question is, what's the impact? What does it really mean? Can you actually correlate it with the actual revenue you're making? It might be a tiny application that makes 80% of your revenue stream.
A: I feel like a lot of the things people would want to charge on would backfire — for example, it would be silly if: hey, we moved to Kubernetes so we could do multiple deployments a day, but Ilya wants to charge me per deployment, so I'm not going to deploy. It feels like you'd have to have those awkward conversations covered.
A: We should definitely do that. Thanks for the question, Jim, and welcome back. Danby asks: what are the best practices for handling credentials — certs, keys — for service accounts, for programmatic access to the cluster in unprivileged use cases, i.e. a CI/CD system that wants to install Helm charts on a cluster? I've been looking around but have not yet pieced it all together.
B: What I was going to say is: use a service account token rather than a certificate. Kubernetes doesn't have any sort of sense of certificate revocation, so if you give a CI system a certificate and it gets leaked, there's no way to revoke it — you've basically got to re-key your Kubernetes cluster, which is not fun. At least with a service account token, if you delete the token, the service account controller will recreate a new one, so you can revoke access very easily. So: definitely service account token over certificate on that one.
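Joel's suggestion — a service-account token with narrowly scoped RBAC for the CI system — can be sketched like this (all names and the single rule are illustrative; widen only as far as the pipeline actually needs):

```yaml
# A service account for CI, limited to managing Deployments in one namespace.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: apps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer
  namespace: apps
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer
  namespace: apps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ci-deployer
subjects:
- kind: ServiceAccount
  name: ci-deployer
  namespace: apps
```

Revocation then amounts to deleting the token Secret attached to the service account; the controller mints a fresh one, and the leaked credential stops working.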
E: I don't know if this is too wide an interpretation, but there is a very nice project by Bitnami around Sealed Secrets, which allows you to essentially put credentials — any kind of sensitive information — on GitHub; it gets encrypted for you, and it handles that kind of transparently, which is nice. So I'd put that out there as well.
B: I think you've hit the nail on the head, really: rotation is the thing that no one has solved at the moment. I've not seen any solution to this where you've actually got rotation — service account tokens don't get rotated, certificates don't really get rotated that often, and there's no automation around it. Actually, having seen the Vault stuff around Kubernetes, that's probably the easiest way to get some small, secure sort of almost-rotation, though I haven't used it.
C: You just need to take care that you use ingress classes once you have more than one controller, because if you don't, the controllers will fight, or will concurrently set up the ingress for you, and that's bad. So you definitely need to set an ingress class on all your ingress objects. You might also use something more advanced — a few people are working on very nice ingress controllers based on CRDs; that's something to look at, maybe.
B: We run two ingress controllers on each of our clusters at the moment — one that has a certain set of features, based on NGINX, and another with a different set of features, based on Envoy. We found that, for our production traffic, the NGINX ingress controller doesn't balance properly when you've got auto-scaling, so we've got Envoy in front of the production stuff. But then things like the Kubernetes dashboard, Prometheus, and Grafana we host behind NGINX, because it's got the sort of single-sign-on stuff that we needed for that. So there are a few use cases.
B: It's normally about the feature sets of the ingress controllers, I think, but yeah — the ingress classes are what keep them separated.
A: It sounds like this would be something that a lot of people would do — so, I don't know if both of your co-workers are interested in the same question, but I'd love to see a blog post on setting up multiple ingress controllers, with a little chart of the trade-offs. I always think that would be useful. Hopefully that gets you started. All right, any other comments on ingress controllers before we move on?
A: All right, Anwatson asks — oh, real quick: if you're just joining us on the stream, this is the Kubernetes Office Hours, the third Wednesday of every month. Hop into #office-hours on slack.kubernetes.io, ask your question, and we will get to it as soon as we can. If we read your question live on the air, you'll be entered to win an awesome t-shirt, but the raffle is at the end of the show — so we make you sit through the whole show. All right, moving on. Anwatson says: question — in large organizations, what approaches are folks taking in deploying Kubernetes, in terms of large shared clusters versus smaller clusters shared by fewer teams? This is a question I actually see commonly asked. You know: do I want one big cluster with a lot of namespaces, or a lot of little clusters — but then I have a management overhead? What do you think?
D: It's a trade-off analysis that you have to do, and it really depends on the org — on your own process, given the limitations, I would say, of the story around Federation; you can't really say, well, we can just hand it off and manage it that way. So you need to decide whether you want one huge Voltron cluster with, you know, five hundred nodes, or five clusters with a hundred nodes each. Again, it's the trade-off between operational costs and blast radius: if one cluster goes down, do you want the other four to go down as well? So I don't think there's a clear answer right now, other than: what matters to you the most? If disaster recovery or high availability is the issue, I would say it's better to have more clusters, as opposed to fewer, in the sense that if something happens and a cluster gets sick, you have another one to jump onto.
C: It cuts costs, kind of — you will have fewer clusters to manage — but then you need to set up your clusters a lot more carefully: you need to put in RBAC, give out very specific roles, to mitigate the blast radius, which is possible. We've seen both: customers with, like, ten clusters, and customers that replaced those ten with one or two; customers moving between such models, even. It depends.
F: With the one-large-cluster approach, there's also still the question of how much multi-tenancy isolation you expect, because there's a tremendous amount of effort that applies to that, and by some definitions the strongest isolation is sort of impossible at the moment — or very, very hard, or at least technically hard. There have been multiple talks, which you can find on YouTube, on the topic of isolation and multi-tenancy.
F: The general bottom line is essentially that it's fairly hard. If you want very strong multi-tenancy, it's going to be super hard — and if that's what you're in for, you can do that. If you're fine with some basic isolation in terms of RBAC — namespace-level RBAC rules, per team or whatever arrangement suits you — that's fine. But if you need something quite complex, it may end up pretty complicated, and it might make the process easier to run with multiple smaller clusters.
G: I think there's a long-term trend here, which is that in the beginning — like in Kubernetes 1.0 — there were not a lot of features that would have made running multiple tenants safe. RBAC wasn't there. Quotas were early — I think they were in 1.0, I don't remember; quotas were certainly one of the first things in there. There were a lot of unprivileged ports, or sort of insecure ports, listening, that anyone could hit. There was no network policy for network isolation.
G: And, you know, there are other things happening, like different container runtimes that might be more secure. There's a general trend towards making it safer to run more workloads in one cluster, and I think one of the things driving that — from Google, I think — is Google's experience with Borg: by putting lots of workloads into one cluster, you're able to achieve higher utilization, particularly if you're able to mix batch and real-time or web-type workloads.
G: I'll try to put some links in there which I think have some of the numbers on utilization, but they're much higher than you'd normally expect, and I think that's the big carrot in terms of running one big cluster rather than ten small clusters. Definitely.
F: If you actually want to go to the extreme, where you try to blacklist everything and whitelist all the network connections that are allowed — if you want to do that, it takes quite a bit of effort, and there may be things you're simply unaware of, and those are hard to track down. So it's not that easy; that's something to keep in mind. A lot of the APIs exist, but using them isn't trivial.
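The whitelist-everything approach starts from a default-deny NetworkPolicy per namespace, with explicit allows layered on top. A minimal sketch (it only takes effect if the CNI plugin enforces NetworkPolicy):

```yaml
# Deny all ingress traffic to every pod in the namespace; further
# policies then whitelist specific flows.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}   # empty selector = all pods in this namespace
  policyTypes:
  - Ingress
```

Policies are additive: each subsequent allow-policy selects pods and re-opens exactly the connections you enumerate.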
E: There's another thing — we've mentioned it already — and that is pod and node affinity. If you have a bigger cluster, in addition to the stuff Justin mentioned, like network policies and other things, it helps you in terms of placement: you can have a policy saying that, for a certain rack or whatever, only pods from user X go there. So that is certainly something. And to put what Justin said about utilization in perspective — I'm all for it, but not many people are Google, right?
E: Sixty, seventy, eighty percent utilization — that's kind of a "whoa, that's a year or two away" thing. I think people would already be happy with thirty or forty percent above what they currently have. So this is certainly true, but for me it's almost like saying "let's write a custom scheduler" — rather, use the simple tools first.
G: Yeah — I'm just trying to explain my view of where we are and why we're going in a certain direction. And I don't think we have the right tools today for batch workloads in Kubernetes, for example. I mean, there's Jobs, but that's weak; I think we can do more there. We need to make it easier to get more batch projects onto Kubernetes — so yeah, we don't have anything to fill that capacity with yet.
A: Okay, we are down to our last fifteen minutes. I want to get to these two questions, and especially this one, as we're kind of trying to scope it down here. Peddler asks: how would I go about emulating kubectl apply using client-go? And it looks like we're still trying to find out what exactly he's trying to do here — lots of opinions already.
G: Just to go for the easy answer first: there's something called server-side apply, which is a long-lived feature branch that should hopefully merge in either 1.13 or 1.14, and it will then be an API available to client-go. There is a ton of logic in it, though, and currently your best bet is either to vendor in kubectl or to shell out to kubectl apply — sadly, until server-side apply lands. My view, when I've done this, is to shell out to kubectl apply and wait for server-side apply.
D: I just want to echo what Justin said. I tried, for my own edification, to really implement some of these things — especially the merges, and especially trying to reconcile stuff that is already present on the server, that I need to patch and then add other stuff to — and the logic just becomes pretty mind-numbing. So, yeah: just shell out to kubectl.
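A minimal sketch of the "just shell out" advice from Go (the binary name is a parameter so the plumbing can be exercised without a cluster; in real use it would be kubectl with `apply -f -`):

```go
// Rather than re-implementing kubectl apply's merge logic in client-go,
// delegate to the kubectl binary and feed it the manifest on stdin.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// applyManifest pipes a YAML manifest to the given binary's stdin and
// returns the combined output.
func applyManifest(bin string, args []string, manifest []byte) (string, error) {
	cmd := exec.Command(bin, args...)
	cmd.Stdin = bytes.NewReader(manifest)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
	// Real invocation (needs kubectl on PATH and a reachable cluster):
	//   out, err := applyManifest("kubectl", []string{"apply", "-f", "-"}, manifest)
	// Demonstrate the stdin plumbing with `cat`, which echoes its input:
	out, err := applyManifest("cat", nil, manifest)
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

The trade-off is a runtime dependency on the kubectl binary, but you inherit its three-way merge behaviour for free.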
A: The big lesson is: follow Michael's personal GitHub namespace — hausenblas, got it — you'll find a lot of little tools in there; I'm browsing it in between the questions. All right, so hopefully, Peddler, that helps you out — and then his second question, about the three-way merge... that's scary. Okay, OG asks: can someone explain the integration between MetalLB and an ingress controller? What is the role of MetalLB in this use case?
C: I'm typing — I think it's something we've done before: putting a proxy in front, kind of as an ingress of ingresses, to load-balance to different ingresses, or to different clusters, in a bare-metal environment. I think Heptio's Gimbal is doing something similar, if I've understood the project right.
G: On AWS you have the ELB, which is like a TCP, layer-4 load balancer that actually gets the traffic into your cluster. There's also now an ingress controller that drives the AWS ALB, for example, which is HTTP, layer 7, and it sort of pushes the HTTP piece — the NGINX component — out into AWS, so AWS runs all of that for you. So that combines the TCP routing and the HTTP routing.
G: My understanding is that MetalLB is the sort of equivalent, for the on-prem world, of the AWS ELB — the layer-4 TCP load balancer. With it, you would still run an HTTP, layer-7 load balancer like NGINX: you'd run the NGINX ingress controller in your cluster, or one of the other ones — Envoy, or Contour, or others I'm forgetting, please speak up. MetalLB then does something similar to the AWS ELB — in other words, routing TCP traffic to your nodes in a BGP environment, so in a sort of bare-metal, real-hardware, Cisco-type environment.
C: And you don't want to have L7 in front of L7 — especially if you have clusters with different kinds of needs, where some might want to terminate SSL directly. In that case you want to just tunnel the TCP stream through and then decide inside the cluster what you do with it.
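MetalLB's layer-4 role was configured at the time through a ConfigMap; a sketch in the 2018-era format (the address range and protocol are placeholders to adapt — check the MetalLB docs for your version):

```yaml
# MetalLB config: hand out LoadBalancer IPs from a local pool.
# protocol can be layer2 (simple ARP-based) or bgp for the Cisco-type
# environment described above.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
```

The ingress controller's Service of type LoadBalancer then receives an external IP from this pool, exactly as it would receive an ELB on AWS.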
A: Okay, the next question comes from Alan — welcome to office hours; we're starting to run out of time here. He did have a dangling question, and then there's a link: what's the correct way to get annotations needed by metrics scrapers — in my case, added to pods produced via Deployments and StatefulSets from a Helm chart? Should the chart expose values for arbitrary annotations (many don't)? Should PRs be made? Should PodPreset, or some other admission controller, doctor the pod just before creation, without touching the Helm chart? Something else?
C: I usually don't use upstream charts — I keep forks of them, because I have my own production-level charts for those. You could use something like a mutating admission webhook to add things on top, or you could just change the templating and do upstream PRs to add that — though I'm not sure how easy, or how fast, it is to get a PR merged into a chart, especially an unstable chart.
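For Alan's case, charts that do expose annotations usually take them through a values key, which can be set without forking (podAnnotations is a common convention, not guaranteed by every chart; the prometheus.io keys are the usual scrape annotations):

```yaml
# values.yaml override passed via `helm install -f`, assuming the chart
# templates a podAnnotations value into its pod spec.
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9090"
  prometheus.io/path: "/metrics"
```

When the chart lacks such a hook, the choices really are the ones the panel lists: fork, upstream a PR, or mutate at admission time.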
F: If you are seeing inconsistency between charts, that implies that they assume a certain Prometheus config, and so the way to fix that would be either fixing what the chart assumes, or supporting the different options in your Prometheus config. Unfortunately, Prometheus config is fairly complex itself, but you could do certain things there — you could do special-casing there as well. So you could say, in the Prometheus config...
E: I'm not a hundred percent sure I understand what exactly the problem or the use case is, but why not do it not with the workload annotations, but on the Prometheus side, with relabeling? Why not do that? I'm wondering what exactly this is — is it just a question about good practice, or...?
G: ...layering on top — the idea that there's an upstream that has a base, and then you layer your particular changes on top, or pull them from someone else who has layered their Datadog things on top. So I'll put a link to Kustomize in there. It is another way of doing things, though sadly it's a different ecosystem — and a more nascent ecosystem than Helm.
A: The following companies have been supporting the community office hours with these developer volunteers: Giant Swarm, Heptio, [inaudible], Packet.net, Pusher.com, Red Hat, Weaveworks, VMware, [inaudible], Huawei, and the University of Michigan. Also, special thanks to Google for sponsoring our t-shirt giveaway today. And with that, the winner of the t-shirt is — can I get a drumroll from someone — Anwatson! So I will go ahead and ping you. Thanks, everybody who listened to the live stream.
A: I think this is pretty much the busiest livestream we've had so far — I'm still waiting for the metrics to crunch — but thanks to those of you who tuned in. We are having another session in about six hours for the US West Coast folks, with an entirely new panel and an entirely new set of questions, so we hope you join us then. With that: thank you, panel, and we'll see everyone next month. Bye, everyone.