From YouTube: Kubernetes Office Hours (EU Edition) 20180620
Description
Join us the third Wednesday of every month: https://github.com/kubernetes/community/blob/master/events/office-hours.md
A: All right, we're doing this, everybody, we are live. Please feel free to tell us on the Slack channel how we sound. Oh, I already have to mute my 75 monitors of different things I'm watching. Welcome, everybody, to Office Hours for Kubernetes for June of 2018. I'm Jorge Castro, I'm going to be your host. Jeff and Bob, please introduce yourselves.
A: I'm Jorge Castro, I work at Heptio as a community manager, and I'm kind of wrangling this whole thing together. We had two people who sent their regrets, so unfortunately for this EU edition we literally have no one from Europe here, so we'll do our best with who we have available, and hopefully Solly Ross can drop by earlier or later on today. Before we get started, let's go ahead and go through the ground rules for the event. We're gonna go for about an hour today. This is a judgment-free zone; everybody had to start somewhere.
A: So if you see someone ask a question that you don't like or approve of, try to be supportive of people learning this stuff; there's a lot of complexity here that we're trying to work through. While we do our best to answer your questions, the panel doesn't have access to your cluster, so live debugging is kind of off topic. We do our best to get you moving in the right direction or figure out a best practice, as opposed to, you know, digging through every single line of your YAML sort of thing.
A: Panelists, of course, you're encouraged to expand on answers with your expertise and pro tips and deployment tips and all the stuff that you've been learning at work. Audience, you can help us by pasting in URLs to the official docs, blogs, or anything that might be relevant to the topic at hand. So, you know, any expertise, anything we can help with, toss it up in the channel.
A: You don't have to whack it into the notes, we'll go back later and fix all that up, but you can always help us out by, you know, helping us Google in the background kind of thing. Feel free to post your questions in the channel; we're also monitoring Stack Overflow and discuss.kubernetes.io. If you look at the recent threads, you'll find the thread for this show. We do one thread per show to kind of keep that organized. You can also help us out by tweeting, spreading the word, paying it forward, telling your co-workers.
A: We do record these on YouTube, so it's always useful. If you find something that's really useful, it might be helpful for your team at work, so feel free to let them know that they can subscribe. I like to put this kind of stuff on my phone, so when I'm traveling somewhere I'm always getting and learning information.
A: If you want to sit on this panel and spread your knowledge, you're more than welcome; this is all staffed by volunteers. So if you've just done something really great at work or you have some level of expertise that you want to share with the community, please feel free to let me know, and then you can sit on the panel as well. The commitment is one hour a month, so it's definitely a lot of fun. I've put the URL to the notes
in the Slack channel there; that's #office-hours on Kubernetes Slack. We're trying something different today: we're gonna put our notes on HackMD, kind of kicking the tires. Really enjoying the service so far, so we'll see if it works for us. We're always looking for marketing help, so if you're good at social media, you want to retweet, you're good at helping us spread the word, please let me know so you can help us out; we always appreciate any help in that regard. And of course, at the end we'll be holding a raffle.
A: If we read your question on air, you're automatically entered into the raffle. What happens is Jeff's incredibly non-weighted randomized script will pick a winner out of everyone that's asked a question, and I'll give you a code for the CNCF store so you can get yourself a fancy Kubernetes t-shirt, which I put on yesterday; in the thread I put a picture of what it looks like. It is a snazzy piece of gear. Lastly, feel free to hang out in #office-hours afterwards.
A: We do monitor that channel all month, so if you have a question in between sessions and you want to post it there, we'll get to it the next month or whatever. It's also really good because there are so many people in Slack; #kubernetes-users has 30,000 people. You know, office hours is kind of a quieter channel, and you can feel free to get to know each other and help each other out, 'cause that's how we build community. With that, we are ready to go, so is everybody ready?
A: So we have a bunch of questions, so feel free to start asking them now in the Slack channel. Just prefix it with "Question:" so it's obvious for us to pick out. We already have a few; we have quite a large queue, so we're gonna have to get going here. We're gonna try to answer as many questions as possible over the next hour, and then something like eight hours later today we'll have another session and we'll do it all over again. So yeah, let's get started.
A: How do you guys want to do this? Do you guys want to take a question and read it, or do you want me to read it? How are we doing this?
A: I'll start first, with March, who asks: considering best practices, would it be recommended to have the binary in a pod point to localhost for a database proxy container, and have a proxy container handle TLS, either nginx or Traefik? It feels to me like a lot of overhead per pod, but it would be easier to switch from development to production.
B: So that's kind of how things like Istio and other service mesh services work; that's the preferred method. For things on GKE, Google has their own proxy. This does have some scaling limitations, though, and depending on the database, like for Postgres, you may want to set up PgBouncer or Odyssey and then have Istio or something else handle the encryption between your application and that service.
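A minimal sketch of the sidecar pattern being discussed, assuming a hypothetical application image and an nginx container terminating TLS; the names, ports, ConfigMap, and Secret are illustrative, not something from the show notes:

```yaml
# Hypothetical example: the app talks to 127.0.0.1, the sidecar handles TLS to the database.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-tls-sidecar
spec:
  containers:
  - name: app
    image: example.com/my-app:1.0        # assumption: your application image
    env:
    - name: DB_HOST
      value: "127.0.0.1"                 # app points at the local proxy container
    - name: DB_PORT
      value: "5432"
  - name: db-proxy
    image: nginx:1.15                     # assumption: nginx acting as a TLS-terminating proxy
    ports:
    - containerPort: 5432
    volumeMounts:
    - name: proxy-config
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
    - name: tls-certs
      mountPath: /etc/nginx/tls
  volumes:
  - name: proxy-config
    configMap:
      name: db-proxy-config              # assumption: holds a stream{} block proxying to the real DB over TLS
  - name: tls-certs
    secret:
      secretName: db-client-tls
```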
A: Okay, question two: I think it is generally considered best practice to kill a pod when the database connection or other required service is missing. Would that retry logic be best embedded in the application logic, or could you also have the pod restart after an interval, say 60 seconds or something, to avoid those steps?
C: So this is sort of a non-answer, honestly, but I think it would be somewhat dependent on your application and your application logic. You can have a pod kill itself and restart, but depending on how frequently it does that, it might go into CrashLoopBackOff, and then, you know, it could be starting up much later than you wanted it to. Whereas if you shove that logic into your application, you can just have it keep retrying, not necessarily indefinitely, but you could do it there.
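For context on the CrashLoopBackOff behavior mentioned above, a minimal sketch: a container that exits when its dependency is missing gets restarted by the kubelet with an exponentially increasing delay, capped at about five minutes. The image and command here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-dependent-app
spec:
  restartPolicy: Always            # kubelet restarts the container, backing off up to ~5 minutes between attempts
  containers:
  - name: app
    image: example.com/my-app:1.0  # assumption: exits non-zero if the database is unreachable
    command: ["/app/server", "--db=postgres://db.example.internal:5432"]
```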
A: People typing, everyone furiously typing, tap tap tap... two thumbs up, yes, okay, cool. All right, just the excuse I needed to get a new i7. Okay, so I guess the person then went back and updated a bunch of stuff, so we're still on March's question here: in the past week I've been reading into and playing around with Istio for the TLS handling, in relation to the first question I posted, and everything else a service mesh has to offer. I guess the Envoy sidecar containers provide functionality similar to what I required from nginx or Traefik. For the retry logic question: I only start serving the gRPC or HTTP server when all required connections are made. If those fail, it will try again in 10 seconds. The readiness of the server is also used for the Kubernetes health check. Would you say that this is a fair approach, or do you have an alternative to share?
B: So the way around that, where, you know, your server reboots and the initializer was running on the server that reboots, it tries to come back and the initializer fails; that's gonna happen unless you're running multiple instances of the initializer pod. You should be able to point an initializer at multiple pods, but I'm looking for the syntax on how to do that. Otherwise, you're absolutely right, it's gonna deadlock because it's trying to start itself up and it fails.
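As a reference for the readiness-gating approach described in the question, a minimal sketch of a readiness probe, assuming a hypothetical /healthz endpoint that only returns 200 once the app's required connections are up:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grpc-or-http-app
spec:
  containers:
  - name: app
    image: example.com/my-app:1.0    # assumption: serves /healthz only when the DB etc. are connected
    ports:
    - containerPort: 8080
    readinessProbe:                   # pod stays out of Service endpoints while not ready
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10               # roughly matches the "retry every 10 seconds" in the question
    livenessProbe:                    # optional: restart the container if it wedges entirely
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 6
      periodSeconds: 10
```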
C: As far as I know, you can create a PV with, like, the backed-up data, and then, as long as you label your PVC appropriately, the StatefulSet can pick it up, although you might have to switch from using the volume claims template; it might try and recreate it. But if you get the UID and everything right, it shouldn't. Another possible option would actually be to mount that volume in something else and copy the data over. Then I think there are also a few applications that will sort of do this for you, at least in certain environments; I actually think Heptio Ark will handle the backup and restoring of the snapshot and stuff for you as long as you're in AWS, and then I think there are a couple of other storage vendors and storage providers that do something similar.
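A minimal sketch of what pre-creating a PV from existing data might look like; the EBS volume ID, sizes, and labels are hypothetical, and whether a StatefulSet's volumeClaimTemplates will bind to it depends on matching capacity, access modes, and selector:

```yaml
# Hypothetical: expose an existing (e.g. snapshot-restored) EBS volume as a PV.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-data-0
  labels:
    app: my-db            # label the PV so a selector on the claim can target it
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # assumption: the volume you restored from your snapshot
    fsType: ext4
---
# A claim that selects that PV by label instead of asking for a fresh volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-db-0       # name must match what the StatefulSet expects (<template>-<pod>)
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      app: my-db
```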
B: With initializers you can do one of two things: you can either have a pod initializer, or you can have, like, an admission webhook. So if you need something that's HA, that won't deadlock, you actually spin up a service within Kubernetes that you point the webhook to, so you can have multiple instances of that pod running that you can point to. So if, say, two servers go down out of five, you still have something running that can serve that webhook.
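A rough sketch of the webhook-behind-a-Service setup being described, assuming a hypothetical namespace, Service, and Deployment already serving the webhook over TLS; the names, rules, and CA placeholder are illustrative:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1   # the API group/version current around Kubernetes 1.10/1.11
kind: MutatingWebhookConfiguration
metadata:
  name: example-pod-mutator
webhooks:
- name: pods.example.com
  clientConfig:
    service:
      namespace: webhook-system      # assumption: where the webhook Deployment/Service live
      name: pod-mutator              # a Service can front several replicas, so one node dying doesn't deadlock admission
      path: /mutate
    caBundle: <base64-encoded-CA>    # CA that signed the webhook's serving certificate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  failurePolicy: Ignore              # assumption: fail open so admission isn't blocked if the webhook is down
```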
A: Bob, anything to add? Nope? Okay. Danny Boy asks: I've been using kops for some time now; I've never had any issues deploying to AWS at all. Lately I've been experimenting with Google Cloud, so I've been trying to use kops to deploy to GCP. I have not succeeded; I've been having DNS errors. And is that the only information we have? Oh, there, okay. So this is a recent one, from this morning, yeah.
C: And they actually responded this morning, okay, where it's, you know... oh, looks like it was updated an hour ago, so it's still valid. So honestly, I would update that thread with that issue, but yeah, I would not use kops in any production capacity on GCP at this point in time.
A: Danny, if you want to share, I'd be interested in knowing why not GKE, just curious. I know there are a lot of reasons for people to run a cluster on instances themselves, but I'm always kind of interested in seeing the trade-offs that people make when they're doing those deployments; just kind of an interesting thing.
A: If you're willing to share. So we're going back: 1mbrand asks a Kubernetes performance question: I understand that all API calls must go through a master node for processing. While one could scale up a master vertically, i.e. a bigger machine, is there any advantage to scaling horizontally, i.e. more masters? Because my understanding is that even in a multi-master cluster there is one master which handles the API calls and the others are just redundant spares. Is my understanding incorrect?
C: So the Kubernetes masters are made up of multiple components. Usually on a master you will wind up running the kube-apiserver and, the most important thing, etcd. The kube-apiserver you can actually load balance between all the nodes; you can toss a load balancer in front of it, it's no big deal. You just have to configure it correctly for etcd.
C: When you start deploying, you know, more than one node, you get your quorum going. One of those nodes will actually become the leader and that will handle writes, but all nodes can handle reads. And when it comes to scaling horizontally, generally you want an odd number of nodes, three, five, seven, and as long as your workload isn't, like, creating lots of stuff over and over and over again, it honestly should be fine.
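To illustrate the load-balancing point above, a minimal sketch of a kubeconfig cluster entry that points clients at a load balancer fronting several kube-apiservers rather than at any single master; the address and certificate paths are hypothetical:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    # assumption: a TCP load balancer (or DNS round-robin) in front of all kube-apiserver instances;
    # the API server is stateless, so any instance can serve the request and talk to etcd.
    server: https://api.my-cluster.example.com:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
contexts:
- name: admin@my-cluster
  context:
    cluster: my-cluster
    user: admin
current-context: admin@my-cluster
users:
- name: admin
  user:
    client-certificate: /etc/kubernetes/pki/admin.crt
    client-key: /etc/kubernetes/pki/admin.key
```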
B: Unfortunately not. First and foremost, it seems kind of basic, but what do, like, the kubectl events show, and then the logs of the container? Because if you're trying to exec into it, yeah, it seems like it might also be a DNS resolution issue. So outside of the audit logs, yeah, I'd start looking at event logs and see if, like, the container is restarting on and off.
A: And so if we could get more information on that, Subhan, that would be really great. Also, just generally speaking, if you do have an issue on a cloud provider, going to their support is always a good step. But if you do find yourself finding an issue and filing it, obviously these public cloud providers are involved in the Kubernetes community, and we have no problem passing on issues to them to help them
help you out. But in order to do that, I need something written down, something I could forward to someone, so that could be a GitHub issue, Stack Overflow, or something on Discuss, or something like that. So if you find something, help us help you so we can pass it on. All right, let's see, Emery... oh awesome, so he has filed a GitLab issue. One of you guys go ahead and start reading that one, and then we'll get back to your question.
A: Okay, Emery asks: regarding auto-scaling, we have very spiky usage of our service, with certain events triggering sudden peaks many times over usual loads. Due to this behavior, one of our requirements has been to always have an empty buffer node available to be able to quickly scale up our services, because node creation is very slow. Our current solution works, but it's kind of hacky: a cron job runs every 10 minutes and requests resources equal to one node. I've heard of others doing similar hacks. Is there a more official solution available or in the works?
C: There's actually... like, I think you can use priority classes for this, and I know there's an issue out there too to, like, document it better. Here, I'll link that in here real quick. I'll preface that with: I don't know the cluster autoscaler all that well.
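For reference on the priority-class idea just mentioned, a minimal sketch of a very low priority class (scheduling.k8s.io was in beta around Kubernetes 1.11); the name and value are illustrative:

```yaml
apiVersion: scheduling.k8s.io/v1beta1   # assumption: a cluster version where this beta API is enabled
kind: PriorityClass
metadata:
  name: overprovisioning
value: -10                              # far below the default of 0, so these pods are preempted first
globalDefault: false
description: "Placeholder pods that only exist to keep spare node capacity warm."
```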
A: Hey, Solly just joined the chat, so maybe he might be able to help us out here. All right, let's give him some time to, like, get ready and join us and stuff. All right. So sorry about this one, this is still kind of a bummer.
A: All right, Rahul asks: I would like to know about getting started with contribution; shall I ask this during office hours? Yes, of course, you can ask anything during office hours. I just wanted to take this time real quick to talk about our sister show, which is called Meet Our Contributors, and I'm getting the link for you now and tossing that in the channel.
A: So this is our sister show, run by Paris Pittman at Google, and it runs monthly, the second Wednesday of every month, and then we go the third Wednesday of every month. It's this exact format: you join #meet-our-contributors on Slack and you ask questions about contribution. So it's just like this, except instead of asking how to use Kubernetes, you're asking, you know, how do I format my patches right, how do I do a pull request, all that sort of contribution stuff.
A: Chris Short is posting in chat that Aqua Security has a Sonobuoy plugin, but I'm pretty sure that's just, like, a benchmark plugin to test performance; I don't think it's actually a security thing. Sonobuoy itself will run the Kubernetes e2e tests, and it will do some of that stuff, like: is your RBAC set up correctly.
A: Hold on, it sounds like Chris Short really knows this a lot better than we do. Okay, so Chris goes on to say: kube-bench is a Go application that checks whether Kubernetes is deployed securely by running the checks documented in the CIS benchmark. Same thing, we're good. Okay, so benchmark in this case isn't performance, it's
a set of checks to do this right. Yep, so if anyone has any opinions on registry security tools, feel free to toss them in chat. Chris would also like to mention that kube-bench was written by the awesome Liz Rice, so big shout-out to her. And Chris Short is actually pasting a bunch of good information here about scanning images and stuff like that, and there are links to VMware's Harbor, and GCR does scanning, and Quay is an on-prem one as well, right?
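As a reference for the kube-bench discussion, a rough sketch of running it as a one-off Job; the exact mounts and flags vary by version and platform, so treat this as illustrative rather than the official manifest (Aqua's kube-bench repository ships maintained job manifests):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true                      # kube-bench inspects processes and config files on the node
      restartPolicy: Never
      containers:
      - name: kube-bench
        image: aquasec/kube-bench:latest
        command: ["kube-bench"]          # runs the CIS benchmark checks and prints the results
        volumeMounts:
        - name: etc-kubernetes
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: etc-kubernetes
        hostPath:
          path: /etc/kubernetes
```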
A: Chris says he may or may not have an article in progress talking about this, and Chris will make sure that he drops the link to the article in #office-hours when it's finished. Thank you, Alexander, for that link. Okay, that seems to be a ton of recommendations, D; I hope that answers that question for you. Let me know in the chat. Marques asks: is it reasonable to have an odd number of API servers, running etcd and API servers together on the same node? Not specifically a Kubernetes node.
A: Okay, I'm starting it one last time. Here we go, the final time, here we go, all right. So I have the encoding set to ultrafast now, so this should be... this is literally the fastest I can go without hardware encoding, which I'll need to figure out. I don't think I can stick an Nvidia card in a NUC, but actually the Intel chip should have... well, anyway, I'll figure that out later. Let's continue, all right.
A: So let me just repeat the question, because it'll end up being a separate video: is it reasonable to have your odd number of API servers run etcd and API servers together on the same node, not specifically a Kubernetes node? Are there some benefits to not housing etcd and the API server on the same machine? Are there examples that you know of for using one etcd cluster to serve multiple API masters? So we just... we decided that was probably not a good idea, yeah.
B: So, definitely, I've never heard of one etcd cluster managing multiple API clusters, because etcd itself is so lightweight it doesn't really make that much sense, and you do want to keep things separated, like, in a clean sense. So one etcd cluster, one Kubernetes cluster. Is there a benefit to not housing etcd and the API server on the same machine?
A: I was gonna say, normally it feels like there's a control plane and people just consider all of that together, yep, right. And of course, with cloud-managed servers that's all managed for you; you're kind of only working with the worker nodes anyway, right? Really good question, though. If you have any follow-up, Marques, please feel free to ask in the channel. Solly's here, awesome.
A: So we had a few. We had an auto-scaling question, Solly, that I think we want you to field, but if you need to go get caffeine and stuff, we can come back to it. Let's do this next question and then we'll come back to auto-scaling, and then I wanted to hear your take on another one back here.
B: ...and alerts based on that, yeah.
A: Yeah, hold on, let me see. Alerting isn't the problem, it's understanding the audit log info. The audit log question was regarding an example of the audit policy file and setting up the right selectors, and then Chris says... oh, so it sounds like he knows what to say. Also, real quick, everyone's asking how's the feed quality: this time it should be blurrier but a lot faster.
A: So it looks like Chris Short is gonna dig through some notes. Any other... okay, good. So definitely switching to ultrafast was the way to go here, good to know, thanks for the feedback, everyone. Okay, so it looks like Chris is digging through his notes. Do we have any other opinions here on audit logs?
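For the audit policy question, a minimal sketch of an audit policy file (the kind passed to the API server via --audit-policy-file); the rules here are illustrative, not the example Chris posted:

```yaml
apiVersion: audit.k8s.io/v1beta1   # audit policy API version in the 1.10/1.11 era
kind: Policy
omitStages:
- RequestReceived                  # skip the duplicate "request received" stage events
rules:
# Log Secret/ConfigMap access with metadata only, so payloads never land in the audit log.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Log full request/response bodies for changes to workloads.
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
  resources:
  - group: "apps"
    resources: ["deployments", "statefulsets"]
# Everything else at the metadata level.
- level: Metadata
```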
A: Let's go back to our next question for Solly here. Ready? Emery asks: regarding auto-scaling, we have very spiky usage of our service, with certain events triggering sudden peaks many times over the usual loads. Due to this behavior, one of our requirements has been to always have an empty buffer node available to be able to quickly scale up our services, as our node creation is very slow. Our current solution works, but it's kind of hacky.
A: We have a cron job that runs every 10 minutes and requests resources equal to one node. I've heard of others doing similar hacks. Is there a more official solution available or in the works for this use case? It feels like it could be a configurable value set when installing the cluster autoscaler, for example min available CPU and min available memory.
D: I don't think I have a specific answer here, but I believe this actually got brought up at KubeCon. Someone asked a very similar question, and I think the state of the art was doing kind of similar hacks: requesting resources, or using a preemptible, super-low-priority pod (or set of pods) to reserve the resources you need, and then that pod would just get preempted when there was something that actually needed those resources, and that would keep the cluster autoscaler happy.
D: I think the follow-up for that answer was that, last time I heard, we thought that was perhaps an acceptable solution and we were seeing how that played out. But if you have feedback on something like that, I encourage you to file an issue with the cluster autoscaler repository.
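A rough sketch of the placeholder-pod approach Solly describes, assuming the low "overprovisioning" PriorityClass sketched earlier and a pause-style image; the resource requests are illustrative and should roughly match the headroom you want to keep free:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capacity-buffer
spec:
  replicas: 1                       # one placeholder roughly the size of the spare node you want
  selector:
    matchLabels:
      app: capacity-buffer
  template:
    metadata:
      labels:
        app: capacity-buffer
    spec:
      priorityClassName: overprovisioning   # assumption: the low-priority class defined earlier
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1          # does nothing; it only holds the resource reservation
        resources:
          requests:
            cpu: "3500m"                     # assumption: close to one node's allocatable CPU
            memory: "12Gi"                   # assumption: close to one node's allocatable memory
```

When a real workload needs the space, the scheduler preempts this pod; the placeholder then goes Pending and nudges the cluster autoscaler to add a fresh node, keeping a warm buffer without a cron job.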
A: So if he's around, please still follow up. Okay, audit logs: Chris Short has provided an example in the chat; let's see if that's what Monty Xero is looking for, and we'll come back to that. YP20 asks a question: how do you guys feel about controlling objects that are not running in Kubernetes with Kubernetes-native objects, like controlling a GCP Cloud SQL instance by adding a GCP Cloud SQL instance kind in Kubernetes and managing the creation through the API via an operator, to be able to deploy all dependencies in a consistent way?
D: I mean, being able to keep a declarative configuration for your application all together at once is a really big advantage, right? Being able to say this file here describes all the setup for my application; if I need to rebuild my cluster or I want to deploy a new instance, whatever, I just, you know, kubectl create -f this directory. That's a huge advantage, so I'm in favor of it, yeah.
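A minimal sketch of what the custom-resource side of that pattern could look like; the group, kind, and fields are hypothetical (a real Cloud SQL operator would define its own schema):

```yaml
# Hypothetical CRD: teaches the API server about a CloudSQLInstance kind.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: cloudsqlinstances.databases.example.com
spec:
  group: databases.example.com
  version: v1alpha1
  scope: Namespaced
  names:
    kind: CloudSQLInstance
    plural: cloudsqlinstances
    singular: cloudsqlinstance
---
# An instance of that kind; an operator watching these objects would call the GCP API to reconcile it.
apiVersion: databases.example.com/v1alpha1
kind: CloudSQLInstance
metadata:
  name: orders-db
spec:
  databaseVersion: POSTGRES_9_6
  region: europe-west1
  tier: db-custom-2-7680
```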
A: Solly, I wanted... I think I've heard you talk about how you first got started learning Kubernetes, and I remember you having a good answer if someone has this question, so I was waiting for you to show up. The question is: can you direct me to any link or document where the internals of auto-scaling, Services, Ingress, and NodePort are explained? I do not understand Go, so understanding the source code will be a challenge for me. So our recommendation was to start with the API docs.
D: So the API docs are a decent place to start. I would also, if you can... some of these are out of date but oftentimes still have the general concepts down: go and find the design proposals, some of the proposals that originally proposed the features. A lot of those have drifted over the years where the actual implementation has changed a little bit, but the general concepts are often still good, and, you know, obviously, going into the future, the more we follow the KEP process...
A: Chris should know how to file a bug as well. I recommend maybe grabbing a copy, just copy and paste the conversation that you're having, cut it out, kind of mention that the examples are out of date, and we can probably start from there. CC me on the ticket, Chris, and I'll help clean it up and maybe link to the video.
A: So if someone wants to do that, yeah. Worst case, even if we can't help you, we can help find where we're coming up short and write it down; that's kind of how it works. So thank you, and sorry that that isn't working for you, but we'll get the ball rolling in the right direction. We have about five minutes left for questions, and then we can do the raffle. Jeff, you want to go ahead and start that in the background?
A: Dude, do we have any questions? Have we tapped any from #kubernetes-users yet that we want to address?
B: No, we have not pulled from them yet; we've actually had a bunch today.
A: So, Jeff, why don't you see if you can find a question? We'll maybe do one or two more, yeah, and then we'll raffle off a shirt.
D: You shouldn't... running pods should have different, globally unique IP addresses. IP addresses can be reused, obviously, between dead pods and running pods, because, you know, eventually we'd run out of IPv4 addresses, but you shouldn't have two running pods with the same IP address; that's very wrong.
B: All right, Summers asks, and I'm already looking at you, Bob: a small and quite simple question regarding Kubespray. If I've got another address I would like Kubespray to generate certificates for, so that I could later work with that address with kubectl, where should I update that address in the hosts.ini? I saw only binding to IP addresses.
B: Is there some standardized, common way for pods running in a cluster to access the cloud provider / cloud API that the API server and kubelets have access to via their --cloud-config setup? As it is now, only the API server and kubelets have that access directly, which would make it hard to deploy services that interact with the cloud directly, for example setting up load balancers; you'd always have to patch that into the API server.
A: We do have a live question, though, so no, let's go back to this one; it will be the last one. Monty wasn't on the live stream... oh, we haven't answered his first question. I'm interested to see if Solly has an opinion on this: is there any way to attach an existing volume, which I just restored from a snapshot, to a PVC?
C: Let's see, so as far as I know, if you, like, create a PV with the backup data, and if you have all the UID stuff and all that created, it should be able to pick it up again, at least if it's, like, a StatefulSet. If not, you can create, like, a one-off, and instead of using the volume claims template you can have it mount the PVC directly. And then I know that there are other tools, like Heptio Ark, which can back up some of that stuff on specific cloud providers.
D: Yeah, you can, I mean, if the PV is already there, you can manually bind a persistent volume claim to a particular persistent volume, if you have permissions. So if you know, like, I need this PVC to attach to this PV, and you don't just want to mount the persistent volume directly, yeah.
D: So if you know you have a particular PVC that you need to bind to a particular PV, you can do the same thing that the PVC controller would do, because, you know, controllers are just like normal users that have perhaps a different set of permissions from normal users. But if your user has that permission, you can just say, okay, yes, this PVC is bound to this PV, and then you can map it into your pod.
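A minimal sketch of the manual binding Solly describes, assuming the snapshot-restored PV already exists; setting volumeName on the claim pre-binds the pair instead of letting the controller match dynamically. Names and sizes are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-data
  namespace: default
spec:
  volumeName: restored-data-pv      # bind this claim to a specific, pre-existing PV
  storageClassName: ""              # assumption: keep dynamic provisioning from kicking in
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi                # must not exceed the PV's capacity
```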
A: Okay, and then Matthew says, ooh, that could be good enough. Let us know how that goes, and feel free to follow up with a question. I feel like I should try harder to get a storage person more often; it feels like every time we get a storage question, we get kind of stumped at the end. So we'll ask. Jeff has done the raffle and we're ready to give away the shirt, so who's the winner, with your guaranteed-to-be-random Python script?
A: So I've pinged you on Slack, congratulations. The way it works is, if you ask a question, we'll put you in the raffle and you can win a shirt. If you show up often and start pasting a lot of notes, helping people out and stuff, or someone's just doing something really great, let me know and we can just send them a shirt as well, because we're here to help out. So congratulations, D; I'll ping you on Slack to get your shirt.
A: I do have to give thanks to the following companies for supporting the community with developer volunteers: Giant Swarm, Heptio, Liquid Web, Red Hat, Weaveworks, the University of Michigan, Packet.net, and the CNCF itself. Thanks to Google for sponsoring the t-shirts. Let's see... and feel free to hang out in #office-hours; I always appreciate everyone showing up. If you have any feedback on how we can make the show better, as always, let us know. And with that, thanks very much, and thanks, Solly, for jumping in last minute when we needed you.
D: No problem.