From YouTube: Kubernetes Office Hours 20200219 (EU Edition)
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. Office Hours is a regularly scheduled meeting where people can bring topics to discuss with the greater community. They are great for answering questions, getting feedback on how you're using Kubernetes, or just passively learning by following along.
For more info: https://github.com/kubernetes/community/blob/master/events/office-hours.md
A: You are the epitome of great examples, and with that, let's get started. Welcome, everybody! It is the third Wednesday of the month, which means it's time for my favorite day of the month: the Kubernetes Office Hours. This is our monthly live stream where we get on this thing called YouTube and answer as many Kubernetes user questions from the audience as possible in an hour. With that, we've gotten a ton of new people. How is the livestream sounding?
A: If you are listening in on the channel, we would love to know. Those of you tuning in should see our livestream chat here on the side. If you are listening in, please say who you are and where you're from, so that while we're doing the intro we have everyone introducing themselves. Before we start, remember: this is a Kubernetes event, so the code of conduct is in effect. Please be excellent to each other. All right, before we begin...
B: My name is Jeffrey Sica; I'm a senior software engineer at Red Hat. I have been helping out with the Kubernetes community for a few years now. I have special knowledge of the dashboard, batch jobs, cron jobs, things like that, but I have general knowledge of a lot of different stuff. So here I am.
D: Hey everyone, this is Samudrala. I'm a senior software engineer at American Airlines, based out of Dallas. I started working with the Kubernetes community three or four months ago, and I'm really enjoying it; there are very good, cool people around in the community. I've been working with Kubernetes for the last three years or so with American Airlines, and started with OpenShift just three months ago, and I'll see how I can help.
A: If at some point you ask a question and it's obvious that I missed it or didn't see it, feel free to just ping it again in chat, and one of the panelists will notice it eventually. I will be keeping track of the chat, and as always, remember: we are live streaming on YouTube. All right.
A: This is a judgment-free zone; everyone had to start from somewhere. So please help everybody out. Not everyone is going to know everything; we're all doing our best to figure this stuff out, so please be supportive of each other, and we will do our best to answer your questions. We don't have access to your cluster, so there are certain things we can't do, like SSH in and figure out why your network is busted, that sort of thing. But what we will try to do is at least get you started on figuring out what the problem is, to help you move forward a little bit, so you can at least make some sort of progress on whatever your problem is. Panelists, you're encouraged to expand on answers with your experiences and your pro tips; same for the audience. A lot of you are doing this kind of stuff. If you end up talking way too much in chat, we might invite you to just be on the panel up here. We always love that kind of information.
A
So
if
there's
information
that
you
think
is
relevant
to
a
topic
feel
free
to
put
the
URL
just
whack
it
there
and
chat
as
we
talk
discuss,
questions
feel
free
to
look
up
things
in
the
official
Docs
things
on
github.
You
might
have
seen
awesome
blog
posts
about
a
certain
topic.
Just
start
whacking
it
all
in
there.
A: What I do at the end of each show is grab all the URLs, and we publish them with the show notes so that you have a nice set of stuff. We've been doing the show for about two years, and every single time there's at least one new tool that we discover. So that's the power of community, right?
A: If you have a premade question on Stack Overflow that already has a bunch of information, feel free to whack that in there too. Sometimes the best thing the show can do is help get eyeballs on your question, so we can at least help there. So if you have stuff premade, we love that as well. This panel is made up entirely of volunteers, so if you want to rotate in, please let us know. We have some people that are brand new today; I think we have four new people, plus the old hands.
A: What we try to do is have enough panelists standing by for when we go on the show. You know, sometimes people have work stuff come up; I think one of our panelists' car is broken, and he's standing at the side of the road right now. So that's why we like to have a lot of volunteers around. It's a great way to give back to the community. It's an hour a month that you can do, and you know, if you do it, you do it; if not, it's fine.
A: We have a pool of people that we tap from, so if you're interested, doing this is a fantastic way to get started in Kubernetes. That's how Chris got started; then he started speaking at KubeCon, and now he's doing all this stuff, and he got a CKA, and he's all awesome. And lastly, we're DevOps people too, so we like to measure how we're doing as well. So please give us a like and subscribe.
A: I do have some viewer counts, but it really does help when you retweet, or tell a co-worker about us, or tell someone, "Hey, this event is a thing." This helps us look at metrics, look at numbers, and that lets me do things like get more money to give out more t-shirts and equipment, so that, if possible, someone somewhere in the world who at some point wanted to do this has everything they need to replicate it in their time zone. So we're kind of thinking about sustainability and things like that.
A: So please do remember that every time you tweet, or thumbs-up, or share something, you're really helping us out, and we appreciate it. And lastly, I'd like to thank the companies supporting the community with these developer volunteers. They are Giant Swarm, StockX, Pivotal, Pusher.com, Weaveworks, VMware, the University of Michigan, Red Hat, Spectrum IO, American Airlines, and Utility Warehouse. Special thanks, as always, to the CNCF, who's sponsoring the t-shirt giveaway.
A: What we'll do is, if you ask a question during the show, we'll automatically enter you in the raffle, and at the end of this session we'll give away two t-shirts. Two hours after that, we're going to have another session, and we're going to give away two t-shirts there too. That's something we're trying new this year: having two sessions. This is the EU time slot, and then we have a West Coast US time slot happening a few hours after that.
C: I think that Kubespray with Ansible is pretty good. I know of one large organization that's using it, and I know one of the maintainers for that project; I believe his name is Chad Swenson, and he's from that organization. I can say personally that I have used it, because I worked for that organization, and it's a super solid tool and very well maintained, so I like it. Ansible's always a good config management tool.
H: We use RKE, Rancher's installer, here, and it's been pretty good. But yeah, whatever suits your needs. I'm just going to find a link; I can't remember who it was, but they had a little thread on Twitter about the different installers, and everyone kind of chimed in on that. So it might be a good spot to do some research.
A: The one thing I am looking up is to see if it uses kubeadm. This is a personal rule for myself: whatever deployment tool you're using should use kubeadm or Cluster API, eventually. That makes me feel better about whether a SIG is maintaining something. And of course, Kubespray is under the Kubernetes SIGs org, so automatically I'm going to rate it a little bit higher than I would another tool. Anybody else have comments on Kubespray? Anybody in chat have strong opinions?
A: I have seen on the Kubernetes community calendar that there is a kubeadm office hours; I don't know if there's a Kubespray office hours, but hopefully Max, if you have information about an office hours, will be happy to say. Let's see what some of the people are saying. Vinda Yakka says he's been using it for the last three years.
A: Yeah, he seems pretty happy with it. "I think it depends on who owns the installation of clusters." That is a great point to make. I believe so, yeah. Welcome, Max; we'll just give them some time. And yes, of course, they don't have an office hours, but the channel is pretty active, and that's #kubespray on the Kubernetes Slack.
A: Okay, anything else before we move on? We've got Leonard saying hello from Finland: "Any good tools for making big kubeconfigs more manageable, cleaning up old clusters that don't exist anymore, etc.?" He's got some feedback here. Chris says: have you looked at kubectx and kubens? So, kubectx I've heard of.
E: Yeah, kubectx is awesome; look at it to manage multiple contexts in one config. I use it daily and really love it. It keeps track of them and enables you to quickly cycle between two, for example, as well. And this other one, kubens, does the same for namespace switching: basically you can set your namespace in the kubeconfig; that's all it does. It's quite nice to use. But for deprecating, like, forty old clusters, personally I'd say cleaning them up may be complicated: you delete the cluster entries and then the contexts.
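The cleanup described here can also be done with plain `kubectl config` subcommands; kubectx and kubens just wrap the context and namespace parts. A rough sketch (the cluster and context names below are made up):

```shell
# Fast context/namespace switching (what kubectx/kubens automate):
kubectl config get-contexts                               # list everything in the kubeconfig
kubectl config use-context prod-cluster                   # switch context
kubectl config set-context --current --namespace=team-a   # switch namespace

# Pruning a cluster that no longer exists; the three entry types are separate:
kubectl config delete-context old-cluster
kubectl config delete-cluster old-cluster
kubectl config unset users.old-cluster-admin
```

These commands only edit the local kubeconfig file, so they are safe to run against entries for clusters that are already gone.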
A
Leonard
says
he's
been
using
both
with
a
bf
works,
really
well,
but
got
a
bunch
of
clusters
that
have
been
some
downed.
In
my
current
calm
sundown,
that's
a
good
one,
sundown
door
sunset,
it
I
think
sunset.
It
is
also
incorrect,
English
correctly
hug
that
one
okay,
anything
else
to
say
about
cube.
Ctx
I,
see
this
tool
mentioned
a
lot.
When
people
asked
this
question
on
the
forums
or
on
reddit
or
something
so
it
seems
to
be
pretty
popular
there,
hello,
everybody.
A
Jeremy's
passing
along
some
few
interesting
links
they
found
this
week,
telepresence
thought
I,
oh
I,
don't
know
how
many
of
you
use
that
I've
seen
that
used
a
few
times.
It's
pretty
awesome
cheap
set
that
IO
and
something
called
Goldilocks
for
right
sizing.
Your
pods
I
feel
we
have
looked
at
Goldilocks
on
the
show
before,
because
I
remember
that
name
hangover
near
ho,
ask
great
name.
So
there
was
an
interesting
question
that
another
user
asked
in
the
event
that
you
lose
all
your
masters
but
separate
at
CD
cluster.
That's
still
up
and
running.
G: We actually had an outage where all of our control-plane nodes went down, and I think it was a readiness probe that was the cause: every single Kubernetes apiserver would go into a crash loop. But once we got rid of that readiness probe and the liveness probe, basically everything recovered automatically. So yeah, basically, if that etcd is there and it's running, you should be good to go. Okay.
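The "as long as etcd is running, you can recover" point works because etcd holds all of the cluster's state, which is also why taking regular etcd snapshots is worthwhile. A hedged sketch, assuming a kubeadm-style certificate layout (adjust paths for an external etcd cluster):

```shell
# Snapshot etcd over its client port; cert paths are the kubeadm defaults.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot.db

# Verify the snapshot is readable before trusting it.
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db
```

With a recent snapshot, even the etcd cluster itself becomes recoverable, not just the control-plane nodes around it.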
A: All right, yeah, I don't want to have to fake the lottery at the end because we didn't keep track of who was asking the questions. Okay, I'm moving on here. Next up: Matt's talking about having problems with certs not renewing and expiring; if you give us some more information, we'll address that. Gorka asks: "What's your recommendation for a multi-cluster dashboard?"
H: For our multi-cluster management it works quite well. It's a lot, though, if you're just debugging stuff; Octant I use on the side, and it does debugging very effectively. And I'm looking forward to the new multi-cluster kube dashboard; I'll try that out when it's live. Yeah.
A: If someone in the audience could dig up the URL for Octant, that'd be really useful. And Jeremy mentions Kontena Lens was awesome; is that not being made anymore? I seem to remember that tool. But yeah, I think that pretty much covers all the major dashboard things. Is there anything upcoming from the K8s dashboard that you can talk about? Give us a little TL;DR sneak peek, Jeff.
B: So the big thing there is: we are not going to visualize multiple clusters in a single pane of glass. You will be able to see that you have access to multiple clusters and then, like, peek into them. It won't show all these pods running across five different clusters. It'll be...
A: All of these tools do have channels in the Kubernetes Slack, so you can hang out and ask questions if you want. All right, anything else about dashboards? See, Jeff, we were going to talk about stuff you work on, I told you. If you have any batch questions in the audience, now's your chance. Okay, moving on here. Tettr'mante, hope I pronounced that right, says: "I'm trying to administer a cluster, currently on a single node on-prem, and have hit the storage conundrum."
A: "How do you all recommend handling storage on-prem, particularly with a single node? I've looked at Gluster and Ceph, but they state multiple nodes are necessary, and I stumbled on Rook, which uses Ceph but didn't explicitly require three-plus nodes. For my current issue I could use a key-value store as well, and I like the look of Consul, but again, it wants multiple nodes. Is my best option just to bring on two more nodes, or can we do it on a single node?" The great on-prem storage question has reared its head once again. Opinions?
H: I can give you some of my experience. I don't know if you have the resources wherever you are, but we are backing onto our NetApp installation using their Trident plugin, so we got around it through those means. When we first started kicking off our tests, we just used NFS drives to meet those needs. Everything else seemed kind of like overkill for just a single-node install, but you know, it might work. I'll try to think of one of the smaller ones.
A: Anybody else in the audience? It looks like people are starting to paste stuff. And Bob just says that on a single node you could just use local volumes. But would that be bad? Well, ignoring the fact that you're only on one node, so you only have a single failure domain anyway: is just putting everything in local volumes bad at all?
A: Yeah, it's just a homelab; they'd be hosting some related apps and websites and stuff. So actually, Jeff and I have talked about this for a while. At home I've run similar services, but I just run them with Docker containers, and it was like, hey, wait a minute, can I do a single-node Kubernetes cluster and kind of get all the cool benefits from that?
A: But then, if you were to do something like that, you know, you have a deployment and then you have a service and stuff, and I was like, I don't want to do that for a homelab. I know I'm a Kubernetes person, but I'm...
E: If you're learning, it's totally fine. I was thinking more like a small production deployment or something, but that's right: yeah, I would go one node and then use local storage. You can still learn the same stuff with only one node and local storage, and anytime you need to scale up, you can then look into it. I think Kubernetes is very good at teaching you only the things you need to know at the moment, and you can always add new things later without having to break everything. It's quite straightforward.
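The "one node plus local storage" advice maps to Kubernetes' built-in `local` volume type. A minimal sketch, assuming a node named `node1` and a disk mounted at `/mnt/disks/vol1` (both made-up names):

```shell
# Write a StorageClass + local PersistentVolume manifest to apply with kubectl.
cat <<'EOF' > local-pv.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local PVs are statically provisioned
volumeBindingMode: WaitForFirstConsumer     # bind only once a pod is scheduled
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 50Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1                   # pre-mounted disk on the node
  nodeAffinity:                             # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - {key: kubernetes.io/hostname, operator: In, values: [node1]}
EOF
echo "wrote $(grep -c '^kind:' local-pv.yaml) objects"   # → wrote 2 objects
```

`kubectl apply -f local-pv.yaml` would then make the volume claimable by any PVC that names `storageClassName: local-storage`.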
A: Well, if you're looking for a reason to get that 10 GigE for home, now's your chance. Okay, anything else about single-node storage? Thanks for the question, really handy. Okay, moving on: Amir's question is Calico-related. "Hi all, first-time visitor, new to Kubernetes" (welcome!) "trying to deploy microservices at work on Kubernetes clusters using different implementations, like Minikube when testing locally on our development machines, and then a cloud setup in our test lab. Don't know how this K8s cluster is set up."
A: Okay, let's move on then and come back to this one. Thank you. Venkat says: "I've used Velero for Kubernetes cluster backup and restore, not on a production-grade cluster but a small one for learning/dev purposes, where I'm using a dynamic NFS provisioner for persistent volumes. Velero does backup and restore of manifests and volumes, and only supports object storage like S3, MinIO, and a few others. They mention restic integration, which makes it possible to back up NFS persistent volumes as well. Has anyone got any experience using restic with Velero in an NFS PV environment?"
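For context on the restic question: Velero's restic integration works by annotating pods with the volumes to back up, which is how it reaches volume types (like NFS-backed PVs) that have no native snapshot support. A hedged sketch; the namespace, pod, and volume names are illustrative:

```shell
# Velero must have been installed with restic support, e.g.:
#   velero install --use-restic ...

# Tell Velero which of the pod's volumes restic should back up
# (my-pod / data are hypothetical pod and volume names):
kubectl -n my-namespace annotate pod my-pod \
  backup.velero.io/backup-volumes=data

# A normal backup then picks the annotated volumes up:
velero backup create nightly --include-namespaces my-namespace
velero backup describe nightly --details
```

The restic data still lands in the configured object store (S3, MinIO, etc.); restic is the transport for volume contents, not a separate backup target.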
A: Okay, let's move back then to Amir's question here. I'd like to point out there is a huge thread here. Okay, so Alex points out: when you use Calico, all your pods will get a /32 IP address; that's by design. Is this affecting your applications in any way? Alex mentions all traffic coming out of the pod is routed to the pod's default gateway, which goes over a veth pair to the node where the pod is running. Once a packet reaches a node, it is routed to the destination using the node's route table.
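Alex's description of the /32-per-pod routing is easy to verify from a shell; a sketch with hypothetical namespace and pod names, and the outputs shown in comments are typical Calico patterns rather than guaranteed values:

```shell
# Inside a pod: the default route points at a link-local gateway over eth0.
kubectl exec -n my-namespace my-pod -- ip route
#   default via 169.254.1.1 dev eth0      (typical Calico output)

# On the node: one /32 route per local pod, via its cali* veth interface.
ip route | grep cali
#   10.42.0.7 dev cali1234567890a scope link
```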
A
And
a
deer
has
some
information
here.
You
want
to
look
at
multi,
so
if
you
want
to
use
DP
DK,
this
will
allow
a
primary
pond
networking
via
Calico
but
would
allow
you
to
assign
an
SR
I/o,
be
interface
as
a
secondary
pod
network
interface
and
then
a
bunch
of
links
that
I'm
gonna
put
in
the
main
channel
here.
So
everybody
sees
them
and
I
will
check
back
on
this
thread.
It
looks
like
we're
getting
a
lot
of
information
here
and
I
will
toss
that
there,
okay,
we're
about
halfway
done.
A
So
those
of
you
joining
us
on
YouTube,
welcome
to
the
kubernetes
office,
iris
Joanne,
hash
office,
hours
on
the
slack
Channel
they're
on
kubernetes
slack
and
ask
your
question
and
we
will
queue
them
up.
We
are
getting
to
the
end
of
our
queue
of
questions,
so
feel
feel
free
to
start
asking
them.
Siberian
crane
is
nest
which
a
nice
multi
pipe
question.
A
My
favorite,
no
that's
Michael
Larson,
will
be
after
with
the
multi-part
question,
but
Siberian
crane
asks
I
recently
came
across
cute
Tetris,
which
claims
to
handle
/,
avoid
resource
fragmentation
starting
a
new
node
in
a
cast
enable
cluster
when
it
could
be
managed
and
exists.
He
knows
by
rearranging
pods
on
nodes.
Isn't
there
a
native
solution
for
this
in
Qumran?
Are
there
any
other
tools
which
do
the
same
so.
A: ...periodically performing a swap or migration of pods across nodes to keep a healthy balance of the different kinds of resource requests placed together on a node. Okay. So my first question for the group is: do you want to do this, or do you want to let the scheduler do its thing? I figure, if your deployment descriptions are very explicit about the resources that you want and need, would you ever need to go here? I guess...
B: This feels a lot like an HPC-scheduling wannabe, like the idea of absolutely and totally packing a machine in the most optimal way, and honestly, the Kubernetes scheduler doesn't really do that well. It does some things really well, because it's meant to schedule things like, you know, web servers and other long-running but simple processes, not necessarily hard-hitting things. So that's just my quick take on it: yes, there is a need if we want to be truly and completely optimal, but currently the scheduler doesn't necessarily do that. Yep.
G: So the descheduler does the opposite, I think: it sort of tries to spread pods in a way that, if one node goes down, you are still fine and you still have enough replicas. It tries to find nodes with low utilization and sort of move pods there. Basically the exact opposite, yeah.
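The descheduler-style rebalancing described here is driven by a policy file. A minimal sketch of its `LowNodeUtilization` strategy, using the older v1alpha1 policy format; the threshold numbers are made-up placeholders, not recommendations:

```shell
# Write a descheduler policy that evicts pods from overused nodes so the
# scheduler can re-place them on underutilized ones.
cat <<'EOF' > descheduler-policy.yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:          # nodes below all of these count as underutilized
          cpu: 20
          memory: 20
          pods: 20
        targetThresholds:    # nodes above any of these are eviction candidates
          cpu: 50
          memory: 50
          pods: 50
EOF
echo "strategy: $(grep -o 'LowNodeUtilization' descheduler-policy.yaml)"
```

The policy is mounted into the descheduler, which typically runs as a Job or CronJob, so the rebalancing happens periodically rather than continuously.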
A: So Joe mentions: if you're trying to minimize the number of nodes you're using for cost reasons, you may want to pack them smaller on dev/test clusters. Cluster autoscaler will repack nodes, but by default it's configured to be quite safe and will need some tweaking to be more aggressive with the packing. Good tip.
A: I feel like if I were using one of these tools, I would really want to pay attention to the resources I'm asking for. Sometimes ("skeptical" might be too strong a word) it's like: you know what, give as much information to the scheduler as possible and then let it do its thing.
B: Best practice is that you actually define what your application needs. Yes.
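"Define what your application needs" concretely means setting requests and limits on every container, since requests are the only signal the scheduler packs by. A minimal sketch; the numbers are placeholders, not recommendations:

```shell
# Write a Deployment whose containers declare explicit resource needs.
cat <<'EOF' > app-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: nginx:1.17
        resources:
          requests:          # what the scheduler reserves on a node
            cpu: 100m
            memory: 128Mi
          limits:            # hard cap enforced at runtime
            cpu: 500m
            memory: 256Mi
EOF
grep -q 'requests:' app-deploy.yaml && echo "requests set"   # → requests set
```

Without requests, every pod looks free to the scheduler, which is exactly how the unbalanced clusters discussed above come about.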
A
Right
yeah
but
I'm
just
wondering
like
it,
but
then
in
real
life
you
go
and
you
have
an
unbalanced
cluster
and
then
you're
like
wait.
A
minute
right.
So
I
understand
why
that
is,
but
that's
definitely
very
interesting,
but
these
are
two
tools
that
could
help
there
any
any
other
opinions
from
the
crowd.
Aaron
Eaton
says
to
some
environments.
The
high-water
mark
for
traffic
could
be
a
hundred
times.
The
low-water
mark,
in
that
case,
scale
up
at
scale
down
to
leave
us
with
pods,
spread
across
many
underused
instances.
Good
point,
yeah,
yeah,
I.
A: Yeah, although, as Joel Pett mentions, I totally understand why the default is safe: better to be more expensive and have the services running, I guess. Okay, so moving on: Mike Larsen has a three-part question, awesome. "What is the recommended approach when you have multiple teams?" They have about eight teams deploying to Kubernetes. "Should each team deploy to stage/prod clusters only running their loads, or is it better to have each team get granted a namespace on a common, organization-wide stage/prod cluster?"
B: I want to go first and ask: are your developers doing anything with kind or Minikube locally? Are they actually testing the deployments and everything locally before they're pushing up to any sort of CI? That would be my first question, and then my second question, assuming that answer is yes, you are validating that locally, these...
C: I have many opinions; let's do it! That's why we do the show: to give people opinions. You first? Okay. So, I have worked at companies where they've done it multiple different ways. One of the ways that I have found works best is local deployment first: you have kind or Minikube, and you test that everything works. Then you push that to some shared namespace, where your team tests features against one another.
C: Then, if it's been approved by, let's say, a QA department or some type of regression testing, it's pushed into some type of staging environment. There it's tested by your full QA and end-to-end testing, and it's gated; gated means it has some type of approval, which could be a Jira approval or something, and then it gets promoted to the next environment. I have found that works best and introduces the fewest bugs into prod. That's just my opinion.
D: Yeah, I think with eight or ten teams there are different applications deploying across the clusters. What Maki and Jeff and Pierre mentioned is just the right strategy: first go and deploy somewhere locally. If local is not available, there should be a dev environment where the first testing is done, and then it's pushed over to stage for the load testing, and there's a process around it.
D: It then just gets pushed to prod. And the second part of the question I would like to answer is: "Is each team granted a namespace on a common, organization-wide stage/prod cluster?" Yes, I think there should be a namespace division distinguishing each app that's being deployed, or each team that is using it. That offers a lot of distinction between the apps, and also scopes the privileges the teams run with. Okay.
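The namespace-per-team split described here usually pairs each namespace with a ResourceQuota so one team can't starve the others. A sketch with made-up team names and numbers:

```shell
# Namespace plus a quota capping what all of team-a's pods may consume.
cat <<'EOF' > team-a.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"       # total CPU the team's pods may request
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
EOF
echo "$(grep -c '^kind:' team-a.yaml) objects for team-a"   # → 2 objects for team-a
```

RBAC RoleBindings scoped to the namespace then give each team only the privileges mentioned above, without cluster-wide access.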
H: It depends on how many resources you have and the isolation you need. We've done the cluster-per-staging-environment route, so we kind of box things in, so we don't have someone running a resource hog in, say, staging and then taking out your prod cluster. You can compartmentalize them by whatever risk you're willing to take for those areas, yeah.
B: A common topic that Office Hours usually tackles is "how many clusters do I need?", and the sad answer is: it usually depends. One of the common things that we normally all agree on is that per-environment is usually the line per cluster, and I completely agree with that, just because you usually want an environment where everything soaks, and that's not prod, and you want that very much separate from an environment where everyone is just pushing and it's the Wild West, because that will always introduce some sort of resource problem.
H: So, to kind of go along with Jeff's point: we have, say, a QA branch, a dev branch, and a prod branch. The QA branch syncs with the QA cluster, the dev branch syncs with the dev cluster, and prod with prod, so it kind of boxes it in like that. You could go a step further and have version tags, and have something like Argo or Flux sync against those Git artifacts. So it really doesn't matter where the cluster is, as long as you have something there that's syncing with the appropriate Git branch or tag.
F: I think one thing that helps companies in their approach to namespace isolation, and infra isolation too, is having good visibility into what's going on inside those isolated environments, so in each namespace. Because once you have several namespaces within an environment, it becomes difficult to track what resources are hogging up your CPU and your memory. So look into monitoring for those namespaces.
E: Yeah, it also might make sense to run some loads in production clusters. For example, what I used to do is something like: every nth sample goes to the second service, but it doesn't send a response, it just saves the result, so you can actually validate. It was mostly Kafka, so basically it was a new consumer, but it actually validated against the other responses to make sure. Then you...
C: In terms of namespace separation versus cluster separation, it really depends on what your application is doing and on your workload. I think larger development teams will use a separate cluster, but depending on how resource-intensive the application is, maybe for a dev environment it makes more sense to just do namespace separation. But as you start to get closer to prod, you want to be more isolated, so then it makes sense to have, you know, your QA as a separate cluster and your prod as a separate cluster, for better isolation. Yeah.
D: But from the cost considerations, and also from the infra perspective: within a cluster there's the node-pools concept, right? If they don't want isolation within a single cluster but still want to run it that way, they can run it based on node pools, like a pool of nodes for each application team, and then move the load based on that.
H: It would depend. I'll find the link, but Calico had a nice little post about managing your network policies through GitOps. The pattern would be to have your master repo with all your network policies, and the clusters would sync up with the repo, so you have that central auditing place. Per-app network policies would probably live with the application's repo, though, yeah.
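The GitOps-managed network policies mentioned here are just NetworkPolicy manifests kept in the synced repo. A minimal default-deny plus one allowance, as a sketch; the namespace and labels are hypothetical:

```shell
# Default-deny ingress for a namespace, then allow one specific flow.
cat <<'EOF' > netpol.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}            # empty selector = all pods in the namespace
  policyTypes: ["Ingress"]   # with no ingress rules, all ingress is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
  namespace: team-a
spec:
  podSelector:
    matchLabels: {app: api}
  ingress:
  - from:
    - podSelector:
        matchLabels: {app: frontend}
EOF
echo "$(grep -c 'kind: NetworkPolicy' netpol.yaml) policies written"   # → 2 policies written
```

Committing files like this to the synced repo gives the central audit trail the panelist describes, while the CNI (Calico here) enforces them.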
A: I wonder how many of these answers are like, "Well, I work with Kubernetes all the time, so I have a bunch of Kubernetes resources," and how that translates to people who are maybe not as comfortable or don't have that. I can't imagine it's a comfortable conversation to say, "Hey, I figured out our isolation thing; I need to triple the number of clusters we're using right now," right?
C: It can work like that. Again, it really depends. I like what Jeff said about the cost and the cost ratio, and it depends on the level of experience on your team, on the application, and on how quickly you have to get a product up. A lot of times people don't realize that they're working with a product, and you may not have the luxury of testing everything out; you may have to just go super fast.
C: There are a lot of considerations to take in. I will say: isolation for your application is the best thing. So dev can sort of be the Wild West, but as you go to your QA or stage and prod, it's going to be best to isolate it and maybe even eat that cost, because that means you're getting something to prod, and especially with paying customers, you don't want to introduce something bad that's going to take a customer down. Yeah.
C: So at one particular company that I worked for in the past, we would have to do that on a weekly basis: every release we made to prod would be backported. We used something called the value stream, so that backporting would be our feedback loop, and we would backport every change to each environment. That worked because we had great automation.
C: I've worked at smaller companies where you don't have that luxury, and dev sort of just stays in a very bad state for quite a while, but you get very stringent once you get to, like, QA or staging, and the changes come more frequently. So your automation to keep the changes to a minimum in that cluster, and to fix them, becomes really critical. So: dev, not quite as often for changes; then staging, or a QA-type environment...
A: Max has more info: "Here's something to consider: the life cycle, or velocity, of each separate team. Different teams could have different requirements (Kubernetes version, operators, ingress controllers); it could be complicated to make them all happy in a single cluster." That's a great point as well.
C: Correct, and really what that is, is not only a cluster-design perspective but also how your development or engineering flow works, and you have to take all of those factors in; you have to understand that. So really, what happens here is each team has to be part of what I call the value stream and know what's coming up the chain from a release perspective, and what team is working on which feature request, because that helps you, as a cluster administrator, better design for and have impact on that flow.
A
That
is
awesome.
That
is
a
tastic.
What
so
this
is
why
I
started
the
show
questions
like
that,
because,
like
that's
that
stuff,
that's
just
hard
to
get
from
a
blog
or
from
Doc's
right,
I
want
to
know
from
someone
who's
like
doing
this
stuff.
Is
this
a
good
idea?
So
that's
really
love
that
question.
Mark
or
Mike
I
would
love
for
you
to
come
back
and
let
us
know
how
this
is
going.
This
is
something
I
would
love
to
just
keep
checking
in
on
to
see
how
people
are
getting
on.
I.
A: All right, so he's going to do the raffle, and then we're going to give away two shirts. I'm going to do a quick follow-up first. This is a question that we had back in December, where a DaemonSet was not starting up on some nodes. It was due to a bug that has since been fixed: after a cluster upgrade to 1.15, they have not observed this issue. Awesome, thanks for that. That is Vinh Yakka, and the original question was from Ravi Shankar.
A
Remember
not
being
able
to
answer
that
so
it
it
relieves
me
that
that
was
a
bug
we
are
gonna.
Do
the
raffle
Amir
we're
out
of
time.
However,
we
can
queue
that
up
for
the
next
session,
which
will
be
about
two
hours
from
now
and
we
will
have
totally
different
panelists
and
things
like
that
unless
someone
wants
to
hang
out
in
the
West
Coast
edition
as
well.
So
with
that
roll
roll
twice
and
give
me
two
winners:
here's
what's
gonna
happen,
we're
gonna,
announce
the
winners
and
then
PM
me
or
I'll.
A
Pm
you,
I'm,
caster,
Jeon,
slack
and
I
won't
get
the
information
that
you
need
to
get
your
kubernetes
shirt,
which
we
all
neglect
to
wear
on
the
day
that
we
are
giving
them
away.
But
it's
a
cool
shirt.
It's
just
the
Tim
Hocking
shirt,
I
call
it
so
logo
right
in
the
middle
and
then
that's
it,
but
the
winners
are
Matt
and
David.
So
congratulations
to
our
winners
and
I
will
PM
you
after
that
panelists.
Do
we
have
any
any
further
comments
here.
A
We're
gonna
take
about
a
two
hour
break
and
then
the
show
will
be
back.
Thank
you.
So
much
for
joining
us
you're
always
welcome
to
come
back.
Those
are
you
listening
if
you
want
to
come,
hang
out
on
this
panel,
it's
a
volunteer
thing.
If
you
do
enough,
eventually,
I
get
you
this
totally
awesome.
Kubernetes
water
bottle,
so
pierre
you're
gonna
want
to
stick
around
for
that
water
bottle
because
it's
like
double
vacuum-sealed,
it's
it's
the
bomb.
Oh,
no,
oh
david
has
a
shirt
already
so
roll
again
awesome.
A
Thank
you
thanks
for
giving
up
that
shirt,
so
the
winners
are
Matt
with
one
tee
and
who's
the
other
winner
Venkat,
you
have
won
a
t-shirt
and
I
will
follow
up
with
you.
Those
of
you
listening.
You
always
feel
free
to
hang
out
in
the
channel.
We
like
to
hang
out
there
in
the
months
before
it's
like
kubernetes
users,
except
a
little
bit
smaller,
makes
it
easier
for
you
to
get
to
know
people
things
like
that's
a
considered
best.
A
This
is
a
place
of
learning
where
you
can
hang
out
and
with
that
we
are
gonna
sign
off.
We
will
be
back
in
two
hours,
I'm
gonna
collate.
All
these
questions
feel
free
to
keep
asking
you
questions
I
will
get
all
the
notes
and
everything
and
publish
them
and
I
will
be
pushed.
That
would
be
letting
you
know
the
URLs
all
that
good
stuff
here
in
the
channel.
Any
last
comments
welcome
first-timers,
marki,
toon
de
Pierre.