From YouTube: Kubernetes Office Hours 20200520 (EU Edition)
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. Office hours are a regularly scheduled meeting where people can bring topics to discuss with the greater community. They are great for answering questions, getting feedback on how you’re using Kubernetes, or to just passively learn by following along.
For more info: https://github.com/kubernetes/community/blob/master/events/office-hours.md
All right everybody, it is the third Wednesday of the month, and that means it's time for the Kubernetes Office Hours, our monthly livestream where we hop online with a panel of Kubernetes experts and try to answer as many of your user questions as possible. We are going to go for about an hour answering your questions, and we are hanging out in #office-hours on the Kubernetes Slack. If you're on YouTube, follow the link below and the inviter there will send a little invite to the email address you provide.

You can hop in the Slack channel, which we are broadcasting here on the side, and feel free to start typing. If you are in chat, say hello and tell us where you're from; there's nothing we love better than having that chat scrolling as fast as possible while we're on. So welcome, everyone, to today's Kubernetes Office Hours.
So before we begin, let's start by introducing ourselves: let's go Marky, Chris, Mario, Bob and Bob, hello. And I'm fan number one, Jorge Castro; I work at VMware as a community manager and I will be your host.

Now that we've introduced ourselves: if you're listening in on the Slack, feel free to say hello. Welcome back, Christian; David says hello from Virginia, thanks for dropping by. All right.
So before we start, here are the ground rules. This is a Kubernetes event, so the code of conduct is in effect: basically, please be excellent to each other. This is also a judgment-free zone. Everyone had to start from somewhere, so there are no dumb questions. Please help out your buddies by keeping a supportive environment in the channel. And while we will do our best to answer your questions, the panel doesn't have access to your cluster, so live debugging will be off topic; we can't really SSH into your stuff and fix it, that kind of thing. However, we will do our best to get you at least moving in the right general direction.
Panelists, you're encouraged to expand on your answers with your experiences, your pro tips, your production experience. Audience, you can help us out by pasting URLs to the official docs, blogs, or anything that might be relevant to the topic at hand. This is something we do that's pretty cool: while the panelists are answering questions, a bunch of us are furiously googling the subject, and people just start putting the URLs and stuff into the chat, and that streams for the people who will be following along with the video later on. What I do at the end of each show is collect all of the URLs, and then we slap that into the show notes so that we have a nice reference there moving forward. So the more links the better; we also learn about new tools literally every time we do a session, and that's always awesome when we can share expertise like that.
So please feel free to post your questions directly in the channel; we have one question so far, so please just start typing away. If you want, you can also post your questions on discuss.kubernetes.io, the Kubernetes forum, where I have a thread on this, and that's also where we'll be posting the show notes; I will link to all that stuff at the end of the show in the Slack channel and the YouTube notes. You can help us out by tweeting, spreading the word, paying it forward, doing whatever it is you need to do to help us get the word out; we always appreciate it. This panel is made up entirely of volunteers, so if you want to rotate in, please let us know; we love to have new people come in and help out.
This is a great way to get involved in the Kubernetes community, and a nice one-hour-a-month commitment to give back. We try to keep that on-ramp slow: what happens is, one week you tell me, hey, I'd be interested in this session, and then the week before, I ping all the panelists, and whoever can make it can make it, and whoever can't can't, but we have enough people to keep a nice rotation going.
A
So
with
that,
before
we
start
I'd
like
to
thank
Jayant
swarm
stock
acts,
pivotal
pusher,
we've
works,
VMware,
the
University
of
Michigan
Red
Hat
and
utility
warehouse,
ooh
and
I
forgot
to
add
Microsoft
for
a
lot
of
their
engineers
to
participate
in
our
programs.
We
appreciate
your
support
very
much
and
as
always,
special
thanks
to
C&C
F
for
sponsoring
the
t-shirt
giveaway.
So what happens is, if you ask your question and we address it live on the air, we'll put you into the raffle; then we roll a very, very scientific Dungeons & Dragons die to determine the winners, and we will get you a Kubernetes t-shirt, which Chris is modeling today. Awesome; I think that's the first time in a while someone's actually worn the shirt we give away during the show. I wore mine yesterday and totally forgot. So with that, we're going to get started. Panel, how are you feeling today? Audience, how do we sound, Bob?
Okay, so basically you're just going to add a month to each of the rest of the release cycles this year; that's three months, which is usually one Kubernetes cycle. Okay, good to know. Alrighty, with that, let's start with the first question, from Christian Roy, welcome back, who says: I want to change my single-zone GKE cluster into a multi-zone one. It looks as if I just have to check more boxes in the cluster settings and it will start more nodes in those zones.
A
The
problem
is
with
the
stateful
sets
with
persistent
volumes.
Isn't
it
always?
These
pods
will
always
start
on
the
same
zone
where
the
PVS
are
right.
So
my
question
is:
is
there
some
official
guidelines
about
how
I
could
redistribute
my
MongoDB
replica
set
pods
into
multiple
zones
with
minimal
downtime
and
no
data
loss?
So
far,
I
was
thinking
make
sure
the
Peavey's
are
in
retained
mode
delete
the
stateful
set
in
my
case
I
have
three
uninstall
make
it.
This
snapshot
using
g-cloud
create
a
new
disk
in
the
new
zone.
So I think Google actually supports regional disks, so you don't really need to do all that: if you switch from zonal disks to regional disks, the disks can be mounted in different zones. That would be one option; I can share a link to the documentation around that. The other option is the one you described: basically create three disks in different zones and go from there.
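As a sketch of that regional-disk option: on GKE a StorageClass can provision regional persistent disks, so the volume can be attached from either of two zones. The class name and zone values here are illustrative, not from the discussion; check the GCP docs for what your cluster's provisioner supports.

```yaml
# Illustrative StorageClass for regional persistent disks on GKE.
# "replication-type: regional-pd" replicates the disk across two zones.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - europe-west1-b
          - europe-west1-c
volumeBindingMode: WaitForFirstConsumer
```

New PVCs using such a class would be mountable from either zone; existing zonal disks would still need the snapshot-and-recreate steps described in the question.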
One thing I would ask is where you're storing that data. If you're using a local disk within the GKE node, you might be in a little bit more of a painful situation, but otherwise, if you're using just the standard storage volume with GKE, you should be fine; the approach papa sloth mentioned should work pretty well.
I think so. We haven't really done a ton with PVs, but the same sort of thing exists with EBS; if you're using it, it makes this easier. Yeah, EFS makes it even easier, but if you're using something like Ceph on top and just using node storage, it gets much harder. I actually just had a question yesterday from a friend who is running Ceph, and that's what he's dealing with.

Easy, as long as I don't have anything stateful, yeah. I've yet to meet someone who actually has a 100% stateless setup, but whatever, it's all good. All right, Christian, I hope that answers your question; feel free to ask follow-up questions. The way it works is, as we are addressing questions, you can feel free to keep adding your own, and we'll add them to a queue.
If you need something clarified, just ask again in the channel and we'll go back to it, and so on. All right, the next question is from Aaron Eaton, thanks for joining us, who says: working with multiple EKS clusters and running our monitoring as DaemonSets, is anyone choosing to bake their monitoring (log collection, node metrics, etc.) into their instance AMIs instead of relying on successful node bootstrapping to begin getting data? Is this a Kubernetes anti-pattern?
Just because it mentions EKS and I live in this world: I would kind of say anti-pattern. I'm really interested in the reasoning for why he wants to do this, but the default AMI is good; sorry, I use eksctl, and it just pulls the latest EKS-enabled AMI, and it's good to go from there. Of course, you could run bootstrap commands on that, and you can tune the kubelet in your eksctl config, but I don't really see a need to go off and create my own AMIs; that is a huge boatload of work, and I would really need a strong reason to do it. So I would recommend against that; I would probably call it an anti-pattern, in the cloud environment at least. If you want to do it, that's completely up to you; you can tell eksctl to use whatever AMI you want.
For us, we're not actually using Prometheus at the moment, but we have a set of core services that are installed on every cluster we create. I don't know if you guys have heard of tools like Autohelm or Reckoner; there are a couple of projects where basically a single config file defines multiple charts that you want installed all at one time. We use that to define core services that are critical to the cluster operating for the workloads we're putting on it, and Prometheus would live in there. There's also the Prometheus Operator and other things like that, so those would all be accumulated in a core set of services for us.
I see everyone nodding; anyone have anything to say other than nods? Yeah: are you using the operator pattern for this? I'm just curious; it feels like with a lot of these monitoring tools, when you go to install the default, it always kind of tends to send you towards the operator. So I was wondering if that's the dominant pattern these days for this kind of stuff. Sorry, just interested, Pavel, yeah.
I think the Prometheus Operator was one of my first operators, or maybe the etcd operator was the first one, but anyway, yeah, I think it's basically a high-quality product, so definitely check it out. Regarding baking in a monitoring system: one big disadvantage of AMIs is, imagine you need to upgrade your monitoring system; now you have to roll the whole cluster just to update the image, right? So that's one big anti-pattern, just for this reason. Typically the way we do things is: we have DaemonSets which just run on each node and give metrics back, we have Prometheus, which collects those metrics, and that's it. You can do updates and test everything; it's just regular Kubernetes.
I see typing, but we'll see. Okay, all right, well, Aaron, I hope that answers your questions; if you have a follow-up, feel free to post it, that would be awesome. Moving on: those of you just joining us, welcome to the Kubernetes Office Hours. We have plenty of room in the queue today; usually we're like 15 questions behind, so feel free to ask your question, and as always, if you ask your question, we'll enter you in the raffle to win the community shirt. All right.
Moving on, David asks: Istio sidecar containers need to be privileged, which conflicts with my PodSecurityPolicy. Is there a service mesh that does not require privileged containers?

So I'm thinking of the service mesh called Maesh, from the folks behind the ingress controller Traefik (I don't know how to pronounce it), but basically they built a service mesh and it's non-invasive: it doesn't have any sidecars. So I think that one definitely shouldn't require any special permissions.
Plus there's also Consul from HashiCorp; Consul Connect is their service mesh offering, and I believe it's based on node agents, so it's effectively a DaemonSet. That gets you away from the sidecar pattern. I'm not sure if that DaemonSet particularly needs privileged access, possibly, so definitely something to look at, but there are other options out there besides just Istio and Linkerd, so definitely take a look around; I'll post a link to a great comparison and breakdown webpage on this.
Just letting the kubernetes-users channel know that we are live on the air, to get more questions. Okay, anything else as far as service meshes without privilege? Is anyone playing with Istio? I'm wondering why it's like this: is it a temporary limitation, or is it designed like that on purpose?
I think the last time I looked at it, they were designing an Istio CNI plugin which is supposed to solve this, but it was in early beta, and I don't know what the state is currently; I believe it's still not production-ready. Basically, once they finish this CNI plugin, I think you shouldn't be required to do any of that.
And then someone, I think he's here, mentions iptables; and Max, welcome back, says: yeah, it needs additional permissions to intercept traffic. Max says one could use OPA to have a little bit more fine-grained policies than PSP, like allowing the sidecar certain capabilities but not your workloads; and then Wally, welcome back, has a link here on that.
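The capability-based idea mentioned here could look roughly like the PSP fragment below: Istio's init container uses iptables, which needs the NET_ADMIN and NET_RAW capabilities, and those can be granted without allowing fully privileged pods. This is a sketch with illustrative names; OPA/Gatekeeper can express the same thing with finer scoping, as Max suggests.

```yaml
# Sketch of a PodSecurityPolicy that allows the iptables capabilities a
# mesh init container needs, while still forbidding privileged: true.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: mesh-sidecar
spec:
  privileged: false
  allowedCapabilities:
    - NET_ADMIN
    - NET_RAW
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - "*"
```

You would bind this policy only to the service accounts that run meshed workloads, keeping the stricter default PSP for everything else.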
Sure. I can't speak to Istio right now, but we actually just put together a test environment in our sandbox, and I built four clusters; each cluster is getting a service mesh, and the four we're going to test are Istio, Linkerd, Kuma and Consul. From everything that my teammate and I are working with and everything that we read, Istio is always the much more complex one, and there are so many emerging third parties coming out to provide consultation, support and articles around installing Istio. From what we're seeing, Istio still seems like the biggest lift to get going, and we're already leaning against it, because we don't need something that complex, really; I can't see what added gain we were getting from using Istio over something like Linkerd, given that extra complexity. So that's from someone looking in and doing the research side of things.
Yeah, you guys just came up on 1.16, I believe; GKE has 1.17 on the stable channel, so I'm glad you asked. Jorge, just to finish up and close the thread here, what we thought of (and this was specific to us, so we kind of got together and got our brains thinking): mTLS is an absolute requirement, and we're looking for a relatively intuitive UX that easily gives us the observability we're looking for.
There are other products out there, like Service Mesh Hub from Solo, that we definitely want to look at. Then canary and everything that goes with that: circuit breaking, rate limiting, the kind of intelligent retrying; I know Linkerd has this, from a presentation that Brendan did for our meetup, so those extra features around safety under an influx of requests. If there is a sidecar being used, really we want that to be Envoy, just from what we know and what we've used before; and ingress controller support, so we have the flexibility to ensure it's going to work with whatever we want to do. We don't want something like privileged mode with what Istio is doing. We're coming more from a security and observability standpoint; those are the core things we care about, plus a fully declarative nature and ease of deployment.
You know, using Helm or something like that, and then good interaction with maybe Cilium or the AWS VPC CNI or whatever we end up using; I don't think there's much there that we really need to look at, but obviously we just want everything to mesh really well, and if we can get continued support from AWS and keep using their CNI without losing anything, that would be a big thing. I guess the last thing is support: recently, as we've gotten bigger, we do want someone to lean on if something is not working well. It's not that we need four-minute response times; it's just that if we want to ask wider questions around something we don't really understand, we want to have someone to go to for that. So we've been looking from that side as well, and Solo also provides that; I've put links in here to the Solo stuff, they're doing a lot of great work there.
That is awesome. So, if you'd like to see Mario write up the results of his bake-off, feel free to leave an emoji on the messages that I left in the Slack. I'm bringing that up for two reasons: I know a lot of you out there are doing this professionally and get to do cool things like a bake-off, and the Kubernetes blog is something anyone from the community can submit content to.
You can kind of take a page out of the SRE book and share with all your buddies, so I'm really looking forward to seeing how that works out for you, Mario. So it was Consul, and the other two are Linkerd and Kuma with a K; if someone could drop the links to those in the channel, that would be fantastic, and then we'll move on to the next question.
Let's just see what people are saying real quick; anything left to say about Istio? Wally says that according to Weaveworks, Istio is getting much adoption, and he's got a link to a bunch of articles there; if you're listening to this and curious about Istio, definitely check those out, and I'll make sure those links make it to the show notes.
Before we get to the next question, I want to talk about this a little bit. Kubernetes Long says: Kubernetes on Docker for desktop on Mac broke and couldn't boot because k8s.gcr.io went bad yesterday. Is there a dependency anyone would know of, Docker for desktop on Mac needing registries to be set up? Just curious how the outage yesterday has affected anybody today, or is there anything you could do on your local laptop, like a proxy?
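One local-laptop mitigation in that proxy direction (sketched here for a kind cluster, since Docker Desktop's containerd config is harder to reach) is pointing k8s.gcr.io pulls at a local pull-through mirror; the localhost endpoint is a placeholder for wherever your mirror actually runs.

```yaml
# Illustrative kind config: route k8s.gcr.io image pulls through a local
# mirror so a registry outage doesn't break cluster bring-up.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
      endpoint = ["http://localhost:5000"]
```

With the mirror caching images, the upstream registry only needs to be reachable the first time a given image is pulled.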
You'd remove another external dependency from your life, which is also awesome. All right, moving on: FC, welcome back, says: any idea how the Ingress API might develop in the future? It seems like a lot of edge proxies have ditched consuming Ingress resources in favor of their own custom resources. Good question.
Rob mentions that Dims wrote a mitigation script to mirror the images from k8s.gcr.io for the Kubernetes project, and they are looking at it now. Here are some links; these links are fantastic, everybody, keep them coming.
In the meantime, we will move on to the next question. Have we missed any questions? Arjun's question is next, right; let me just double check. Yeah, so Arjun asks: what's the difference between not setting a CPU limit and disabling the CFS quota at the kubelet level?
I've seen some posts that say that basically, if you tweak the container CFS period (cpu.cfs_period_us) to a lower value, you'll have a better-performing cluster, because Linux won't throttle your pod as often. But again, it sort of depends on whether you are comfortable with that, because CFS is a hard system to configure and get right.
I would definitely practice that before actually running it in production. The difference between disabling CFS quota in the kubelet and not setting limits on the pod is basically: if you disable it in the kubelet, the kubelet won't throttle any pods, which means if somebody accidentally deploys a Bitcoin miner, or something goes to two hundred percent CPU, you'll probably end up with an unresponsive node or something like that; you lose that extra protection from the kubelet side.
We had this problem where we would see 700-millisecond tail latency: with lots of hops between microservices, all getting throttled for a hundred milliseconds at a time, it adds up. Bringing the quota period down, we saw latencies drop because pods were being paused for less time. Best practice these days seems to be telling teams to just turn off limits. Pavel has dropped a link there to some good slides about tuning CFS, some good reading. Wow, there's a lot more information on this than I was expecting; any other information here?
Yeah, the GitHub issue itself has links to tons of stuff, and then there are 435 hidden items, so we're not going to review that during the show, but hopefully that is more information and answers the question; Arjun, feel free to post a follow-up. Joel just mentions: I know Zalando and Pusher, both of which were advocates for the initial CFS quota implementation, have both turned off limits.
Here's the tl;dr: we would recommend removing CPU limits in Kubernetes, or disabling the CFS quota in the kubelet, if you're using a kernel version with the CFS quota bug unpatched. There are seriously known CFS bugs, so definitely check out that article to see if it affects your environment.
All right, our next question is more ingress. Quick sidenote: we are definitely working on an ingress session; we have one penciled in for early next month, in June, so we're going to have a session that's nothing but ingress. We're going to have panelists from SIG Network joining us, we'll give away t-shirts, it'll be fantastic; it'll be just like this. We will be doing the normal office hours next month as well, so just pay attention in the channel, and I will make sure I spam the information when that date is set.
So we use nginx ingress, which a lot of people use, and for us it's the last one applied that wins, because that's partially how canary works too. The last one applied is the one that's going to take over completely; there won't be any merging, right, it's just the last one you define that is what lives.
Oh, this is interesting. Yeah, to come back, Long also mentions that on the latest Debian, nftables is causing broken CNI networking, and bonded interfaces require the bridge set to active-passive mode; none of this is detected by kubeadm during installation. He's personally had three such cases in the last week, the latest just four days ago on Debian Buster. Long, do you know if there's a bug reported for this? I feel like I should ping someone on the kubeadm team and let them know this is an issue out in the wild. If someone can find a bug for that, I'd be happy to point the right resources at it, or if not, maybe we should file one. But hopefully that answers your questions about the ingress rules.
We don't, actually; we just use them for layer-4 functionality, and they route right to our cluster nodes, with default health checks for each of those nodes, where we're running nginx ingress as a DaemonSet. I can't find the question, because I'm blind today, apparently; can you reread that, Jorge?
I would say, if you really just want to use an NLB and TLS termination with the NLB, then just use that; they're easiest in manual configurations, and NLB support with ingress definitions isn't exactly there for all the features; you can enable it, but there's not much else you can do. Note that NLBs are different from ALBs, and ELBs are no longer a thing; I think they still exist in EC2-Classic mode, but ELBs are no longer a thing in AWS.

ALBs cost more money because they're layer 7, so you get more control over your traffic. NLBs are great, though, because if you have your ingress controller doing layer 7 already, you don't really need that. The other side of it is that NLBs are just as redundant as ALBs; you can do cross-zone load balancing with them as well, so from an ingress perspective they're just as good.
You could have them all matching hosts, and, I mean, I guess do whatever you want; I would say use labels, though, and be very explicit about what's going on, so when you're troubleshooting and trying to understand where things are going (logs, etc.) you have a little bit more to go on. We've done canary within nginx ingress and that works really well, but it's really labels pointing to two different backends, so separate ingress definitions, etc. So yeah, I guess: be explicit.
All right. Max, who looks a lot like Charles Leclerc, the F1 driver for Ferrari, just saying, says: you can use OPA to block `kind: Ingress` with the same hostname plus path; there's a tutorial for something much like that in the OPA docs, and he has the links there. Thanks for that. Walid has a question for a friend, so Walid, if you win the shirt you have to give it to your friend: do you have some idea about load-balanced gRPC in EKS? In GCP...
A
It's
not
an
issue
as
I'm
doing
gr,
PC
micro
service,
hence
micro
services,
and
since
it
uses
HTTP
2,
this
will
become
as
the
traffic
will
be
load
balanced
before
establishing
the
connection.
But
once
the
connection
has
been
established,
there's
no
load
balancing
and
doing
server-side
load
balancing
it
doesn't
make
any
sense.
So
the
alternative
is
to
use
something
like
engine
access
to
your
link
or
D,
etc,
but
can't
seem
to
find
good
docks
for
it.
So basically, the issue is that because gRPC uses HTTP/2, it requires layer-7 load balancing, and if you are using a regular ClusterIP, the regular service in Kubernetes, it will effectively have a sticky session to a pod: when a client sends traffic to that service, it gets load balanced to one pod, keeps that connection open, and then just always sends all its requests to that single pod.
You're cutting out, Mario. Audience, we're caught up on questions, so if you have any, we have time for maybe one or two more before we get to the raffle and start to wrap it up. All right, Arjun, next question: what's the right approach to have time sync in the pod? Just set the time zone, use an NTP client and configure it, mount the host's /etc/localtime, or something else? How do you all do this?
Yeah, because I know CoreOS and Flatcar definitely use NTP by default; I've seen it, I just happened to set that up yesterday. Arjun says host time is synced and correct. So is there another problem? That's interesting; I wonder if that's just a symptom, then.
I see Arjun is typing; we'll get back to them. Someone asks what you need to do to join the raffle: you have to ask a question during the show and have us address it, but don't worry, we give away two every month. So while Arjun is typing, let's address Daniel's question real quick: is there any recommended way to handle a huge spike in traffic in Kubernetes? We have HPA set up, but we have to deal with spikes at certain times.
Ten minutes prior, we, you know, double the size of a deployment, say; that's kind of the model. We built this in-house, and we're looking at possibly open sourcing it when we get there; there needs to be a lot more work done on it, but right now there's no GitHub project that I could find that does this. Actually, there is one, and I will link it in the channel: a predictive autoscaling, kind of AI sort of model. Interesting.
An ML sort of model that someone is working on out there. There's also Spot.io, which used to be Spotinst, that handled using spot instances; they're now getting more into visibility and application autoscaling, and they've just rebranded as Spot. They've got something they're claiming does this as well, doing predictive autoscaling based on, obviously, past performance and influx of traffic; we're looking into that. So predictive autoscaling is one of the things; it's not going to be perfect, but I think if you have some sort of feed, you can probably fairly easily write something to do this yourself. It really isn't that difficult: you're just taking in a signal, setting a schedule, and making a call to the API, right?
So I think you have to really dial in your reactive autoscaling before you can start thinking about the other side of it, because if you're not reactive yet, you need to solve that first. Being proactive is kind of the sugar on top that helps when you know what's coming, but you're not going to know all the times when you actually see a demand spike. So yeah, okay.
I need to go buy some sneakers, okay. All right, we are running out of time here; let me see if Arjun has followed up. He says: I will check again; by default, most of the images come with UTC set, and our hosts are using CET, so generally I see a time difference. When I set the time zone to CET in the pod, I see the right time; I just want to confirm this is the right approach.
Yeah, I just hate everything about time zones in general. Okay, so we are running out of time. Those of you who have asked questions: we are going to go live again in about another two hours for the West Coast edition, so there are some questions here; if you can't make it to the next session, I will definitely queue up your questions. Nerdy Sean, your question is next; I will make sure we ask it this afternoon, and it looks like Long has some more questions as well. Unfortunately we have a hard cutoff, so we need to take a break. We will queue up those questions, and as always, I will take all of the show notes and links and publish them with this YouTube video, and I will post that in the channel as well.
All of these are always recorded and put on the playlist on YouTube, so you can always go back and check them out. Oh, Wally says: it is hard, we use time zone UTC. Awesome. And then Tim Hunter, I see your question; we will definitely address that in the next one. So thanks everybody for coming; let's do the quick raffle here.
You see, you won a t-shirt; I hope I'm pronouncing that right. I am not sure if you wanted a t-shirt or not, but if you did, we'll go ahead and run the raffle in Slack afterwards, so I will PM both of you. The way this works is: you go to store.cncf.io, I'll give you a code, and you get a t-shirt.
This is your first time, congratulations; I know you've been watching the show for a long time, so glad you finally won the shirt. And with that, we'll be giving away two more shirts this afternoon, so come back in two hours, and remember, it's always the third Wednesday of every month. Feel free to hang out in the office hours channel; it's a lot smaller than the kubernetes-users channel, so it's a good place if you're feeling overwhelmed, or it's too flooded, or you can't get your question in.
Yeah, a lot of great links; we always get a lot of links, but a lot of good ones this time. This is like the first time I feel overwhelmed; I've got a lot of homework on the CFS stuff to catch up on. All right, and with that we'll see everyone in two hours; thanks panel, thanks listeners, and keep on truckin', keep on deploying, everybody stay safe out there. Thanks panel, stick around.