From YouTube: Kubernetes Office Hours (West Coast Edition) 2019-01-16
B: Happy 2019, everybody! Welcome to today's West Coast edition of Kubernetes Office Hours, where we answer your user questions live on the air with our esteemed panel of experts. You can find us in #office-hours on Slack; check the channel topic for the URL with the event information. Before we begin, let's start by introducing ourselves: I am your host, Jeff Sica, and I'm with the University of Michigan. And now the panel; let's start with Josh.

A: Hi, I'm Josh Berkus.
B: ...good to have you all on. Before we start, here are the ground rules. This is a judgment-free zone; everyone had to start somewhere, so please help out your buddy by keeping a supportive environment in the channel. While we'll do our best to answer your question, the panel doesn't have access to your cluster, so live debugging is off-topic, but we will do our best to get you moving along with next steps for debugging and whatnot. Panelists, you are encouraged to expand on your answers with your experiences and pro tips; you are on the panel, after all. Audience, you can help by pasting in URLs to official docs, blogs, or anything that might be relevant to the topic at hand or that we happen to mention. Please post your questions on discuss.kubernetes.io or in the Slack channel. You can also help us out by tweeting, spreading the word, and paying it forward.
Each session is recorded and available on YouTube. If you're using this as a work resource, please let us know how we're doing so we can try and make it better.
B: If you want to sit in on the panel and spread your knowledge, you are more than welcome to, and you can earn this fabulous water bottle. I have it this time because I'm at home. We're also always looking for marketing help, so if you're awesome at social media, please come and help us out. We will be holding a raffle at the end, and it will be for a t-shirt; hopefully we'll be doing a little bit more as the year goes on, so it does pay to come back.
B
Lastly,
feel
free
to
hang
out
in
hashtag
office
hours
afterwards,
if
the
other
channels
are
a
little
too
busy
for
you
and
you're.
Looking
for
a
friendly
home,
you
are
more
than
welcome
to
pull
up
a
chair
and
hang
out
with
us
with
that.
B: With that, let's get started. So, Ed (ed-packet) asked: is there a best place in this Slack to talk about federation? Just reflecting on the multiple sources of advice to, quote, "run more clusters" and "run smaller clusters". The context was earlier.
C: Beyond that, there are other federation technologies that are also being developed. Probably the big one of note right now is Crossplane; I think, I might just mean Crossplane, but it's something the Rook folks have developed, a tool for sort of helping manage things across multiple clusters.
D: Yeah, where I work, we are using a third-party provider to monitor the whole Kubernetes cluster. They give you an easy integration: there's a DaemonSet, they give you the YAML and you just run the DaemonSet, that's all. There you can configure cAdvisor and some Prometheus metrics, and in the managed service you can just configure alerts for everything. Like, we usually find that for every deployment we have metrics about the CPU usage and the memory usage.
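For reference, the shape of that kind of vendor integration is usually a single DaemonSet manifest along these lines; everything here is illustrative rather than any particular vendor's actual YAML:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent           # hypothetical agent name
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: vendor/agent:1.0    # illustrative image
        env:
        - name: API_KEY            # credentials for the hosted backend
          valueFrom:
            secretKeyRef:
              name: vendor-api-key
              key: api-key
```

One agent pod lands on every node, scrapes cAdvisor/kubelet metrics locally, and ships them to the hosted service.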
D: For example, whether the CFS (you know, the kernel CPU scheduler) is throttling anything, or something like that that might get in the way. Also, we are using everything in a third-party provider and nothing in-cluster. In our case there are, of course, advantages and disadvantages, but for our use case it fits to have everything in a third-party provider. I don't know if the name helps, or maybe it's not okay to say it.
B: All right, I think that's that. Next up we have D. Anderson: okay, here's a topical question for me. What's the roadmap for pod security policies? Is there a future in which the suggested default configuration for a cluster (i.e., roughly what kubeadm does) has PSPs enabled by default? If not, is there any problem with shipping PSPs in my project's manifests, similar to shipping RBAC rules? Will they just be gracefully ignored on PSP-disabled clusters, or will it cause issues?
A
So
if
you
want
it
to
progress,
I
would
say.
The
first
thing
to
do
is
to
jump
into
signo
to
end
or
sig
off
I
and
offered
to
help
out
ask
questions
there,
etc.
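For reference on the "gracefully ignored" part: a PSP is an ordinary API object, so on a cluster that serves the policy/v1beta1 API but doesn't enable the PodSecurityPolicy admission plugin, it is stored but not enforced. A minimal sketch (the name and rules are illustrative):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example         # illustrative
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```

Note that, much like RBAC, a PSP only takes effect once something binds to it: pods are admitted against a PSP via RBAC "use" permission on the policy.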
B: All right, hot off the presses, ezy: hey, I have two questions :smile:. The first one was a quick one, re: the pod affinity/anti-affinity that I have below (they posted some YAML): if I change it to "required" rather than "preferred", I see "missing required field topologyKey", but it looks correct to me. The second question was: is there a better way of exposing TCP/UDP services externally, in this case on GKE, other than NodePorts and ensuring the firewall rules are set up correctly?
A: Yeah, well, one of the things I also do is maintain LWKD, that's Last Week in Kubernetes Development (you can subscribe at lwkd.info if you're interested in that), and one of the things that we print in there is deprecation notices, and there has not been one for ingress-nginx.
B: All right, Bob, while you're looking up the second question, I think we will move on. BlackDragons7 has asked: I'm having some issues with Kubernetes on AWS; I'm not sure if I just don't understand how to do this or what. Essentially, I'm trying to expose my cluster via a load balancer and it changes the external IP every time the service restarts. How am I supposed to make sure the external IP is always the same? We are essentially going to have an iOS app hit this cluster, so the external IP can't change. Josh?
D: Okay, so type LoadBalancer: a Service with type LoadBalancer will create an Amazon load balancer, really an ELB, and that has a dynamic IP, so there's nothing you can do about that. Amazon does provide a way to use a load balancer with a static IP: that is the Network Load Balancer, and I think there's an annotation you can use on a Service to provision a Network Load Balancer instead of an Elastic Load Balancer.
D: But yeah, for sure the issue should be that the type is LoadBalancer, and that provisions an Elastic Load Balancer, and Amazon, I'm assuming, changes the IP address. So that's probably the reason, if I understand correctly, and using a service annotation would do the trick, if that exists. But, if I remember correctly, I don't use it myself, so maybe I'm just a little confused; but I think so.
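The annotation being half-remembered here is most likely the AWS load balancer type annotation, which does exist in the in-tree AWS cloud provider; a sketch, with the service name and ports illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Ask the AWS cloud provider for a Network Load Balancer
    # instead of the default Classic ELB.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443
    protocol: TCP
```

An NLB keeps stable addresses per availability zone, which is usually what an "external IP can't change" requirement is really after.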
C: Yep; I was looking up the API spec, and it looks like it should be in there and it should work. I honestly couldn't tell you why it wouldn't work when switching it from preferred to required, because both of them just take a pod affinity term and they have the same spec.
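One detail that commonly trips this up, and may be what's happening here: "required" terms are bare PodAffinityTerms, while "preferred" terms nest the same fields one level deeper under podAffinityTerm, so the indentation has to change when you switch. A sketch, with the label illustrative:

```yaml
spec:
  affinity:
    podAntiAffinity:
      # "required" terms are bare PodAffinityTerms:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: my-app
        topologyKey: kubernetes.io/hostname
      # "preferred" terms wrap the same fields in weight/podAffinityTerm:
      # preferredDuringSchedulingIgnoredDuringExecution:
      # - weight: 100
      #   podAffinityTerm:
      #     labelSelector:
      #       matchLabels:
      #         app: my-app
      #     topologyKey: kubernetes.io/hostname
```

If topologyKey stays at the podAffinityTerm indentation level after the switch, the API server can report exactly the "missing required field topologyKey" error quoted above.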
B: Right, I think with that we will hop onto the next question. This one is from Froze Bay: hi, is there a way to execute a bunch of scripts from the host into a pod? I want to deploy a Postgres database in my Kubernetes cluster, and I wanted to create a database using several SQL scripts to import my whole data (yes, in CSV files) into the database. I found a solution to initialize the database using a ConfigMap, but I don't think it can import the data from my file.
A: Well, it can import the data from a file; the file just needs to be in a location that's actually accessible to Kubernetes. So either it needs to be in a location that's available via the network, and addressable in a way that the stuff inside the Kubernetes network can see it even though it's outside the Kubernetes network, or, alternately, you can load it onto some form of shared file system that Kubernetes has access to as persistent volumes.
A: So, depending on where your cluster is running: if you're running on Amazon, you could put it somewhere in EBS and have EBS persistent volumes, where you then share that volume with the database when it starts up so that you can load it. Or, you know, if you're doing this on bare metal, then you might use something like Ceph or just NFS.
A: Honestly, it's just that you'll want to drop that data somewhere it can be loaded inside a container, whether that's over the network or through your shared file storage. The other thing that I would actually recommend is: don't bundle those load scripts into your general database container. Instead, create a separate container, a sort of prerequisite container for launching that database, that only exists when you're doing the load.

A: A lot of the various systems for running Postgres on Kubernetes, whether it's Patroni/Spilo, whether it's the Crunchy DB stuff and that sort of thing, have a place for these sorts of initialization containers, and so you'd want to add that to the initialization container, not the operational database container, because once the database is loaded, if you restart it, you don't want it to load again.
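One concrete way to get that load-exactly-once behavior with the stock postgres image is its init-script directory; a minimal sketch, assuming the SQL and CSV files are staged on a persistent volume (the claim name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
  - name: postgres
    image: postgres:11
    volumeMounts:
    # The stock postgres image runs any *.sql / *.sh found here on
    # first startup, and only when the data directory is empty, so
    # the import does not re-run on restarts.
    - name: seed
      mountPath: /docker-entrypoint-initdb.d
  volumes:
  - name: seed
    persistentVolumeClaim:
      claimName: seed-data    # PV / NFS share holding the SQL and CSV files
```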
D: Right, an alternative to what Josh was saying is using the hostPath, right. The question, I think, says a bunch of scripts from the host, and if you're already on a host that's running Kubernetes, you can use a hostPath volume and just mount that on the pod. Then you can just connect to the pod, like exec a bash, and run whatever you need.
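A minimal sketch of that hostPath approach (the path and names are illustrative); note that hostPath ties the pod to whichever node actually holds the files, so it's mostly useful for single-node or ad-hoc setups:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
  - name: postgres
    image: postgres:11
    volumeMounts:
    - name: seed
      mountPath: /seed        # scripts visible inside the container
  volumes:
  - name: seed
    hostPath:
      path: /opt/seed-data    # directory on the node itself
      type: Directory
```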
B: yomateo, and I'm sorry if I'm pronouncing that wrong, said: ideally you'd want to build this into your CI/CD and connect over the network to the Postgres service. Is that necessarily true? It depends; like, to me, I think the problem is you're just starting to spin up this database for the first time and you want to inject, you know, init data into it, and that wouldn't necessarily lend itself to a CI/CD job, yeah.
D: Yeah, in minikube, last time I used it, that was not supported, because the LoadBalancer type tries to create a load balancer and usually depends on the cloud, like you're saying. But I'm not sure whether it was on the roadmap, so maybe newer versions of minikube support it in some way; I don't know, really.
B: Right, moving on; the next question is from Anton of the Woods: my Google-fu is lacking today and I couldn't find the best way to make a large (50-megabyte-ish) read-only text data file available to pods. The images are going to be publicly available, but some of these files can't be. Are there any pointers?
A: I would honestly do it by deploying a clustered file system across the cluster, I mean, assuming that multiple pods need access to the same files. So I would deploy a clustered file system across the cluster and just load the files onto that. There are several different ones that work, you know, Ceph, Gluster; there's one that's supposed to actually be optimized for read-only caching and I can't remember what it is. That would allow the files to be available to multiple pods, assuming that's the actual structure here. Oh...
C: The limit, I believe, for Secrets is one megabyte, because the secret is stored in etcd, so I don't think you want to dump, like, 50 megs in there. What you could use, like, if you have it in an S3 bucket or something like that, is an init container with a secret, to, you know, pull it down from the bucket and...
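A minimal sketch of that init-container idea; the bucket, secret, image, and paths are all illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: needs-big-file
spec:
  initContainers:
  - name: fetch-data
    image: amazon/aws-cli             # any image with an S3 client works
    command: ["aws", "s3", "cp",
              "s3://my-private-bucket/big-file.txt", "/data/big-file.txt"]
    envFrom:
    - secretRef:
        name: s3-credentials          # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: my-public-image            # illustrative
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true
  volumes:
  - name: data
    emptyDir: {}                      # scratch space shared between init and app
```

The file stays out of the public image and out of etcd; only the (small) credentials live in a Secret.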
B: All right, let's scroll down a little bit more for the next question. Jody asks: is it correct that, when I want to give a colleague access to a service in my namespace, I will just give him the hostname of my ClusterIP service (e.g., mysql.<whatever-namespace>.svc.cluster.local) and the port which the service exposes (3306), and then his request gets routed to one of my instances that are selected by my service?
A: Which is correct, right. Then you're just treating it as an external service, and obviously the service needs to be reachable from outside your namespace for that to work, but yes.
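For reference, a sketch of the setup Jody describes (names illustrative); the in-cluster DNS name follows the <service>.<namespace>.svc.cluster.local pattern:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: team-a          # illustrative namespace
spec:
  type: ClusterIP
  selector:
    app: mysql               # pods backing the service
  ports:
  - port: 3306
    targetPort: 3306
# From any pod in the cluster this resolves as:
#   mysql.team-a.svc.cluster.local:3306
```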
B: Stormwar wonders how Kubernetes can identify preemptible versus standard nodes. yomateo says: GKE preemptible nodes are per pool, so you could easily fail over to another standard node, but you're going to have an outage. Preemptible nodes are labeled as such; GKE preemptible nodes get a label.
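The label in question on GKE is cloud.google.com/gke-preemptible: "true", so a workload can steer away from (or onto) those nodes with node affinity; a sketch of a pod spec fragment:

```yaml
# Require scheduling onto non-preemptible nodes only.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cloud.google.com/gke-preemptible
            operator: DoesNotExist
```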
B: D. Anderson said it will prefer the smallest node kind that would allow the unscheduled load to be scheduled. I have a feeling the listeners in Slack know more about this than us; I think so too.
B: We'll let them riff on that, and we will... well, let me say: D. Anderson says, for example, if you have an n1-standard-1 pool and an n1-standard-64 pool, and one n1-standard-1 is enough to accommodate the new load, it'll create one of those, not the 64. So I think that's where we will leave that. I will say there is a Slack channel in the Kubernetes Slack, #gke.
B: They will probably be able to answer it or, at the very least, point you in the right direction as to which SIG might be able to answer it; pretty sure they'll know more. Next up we have xcq1: is there a way to globally force the scheduler to try harder not to schedule pods of the same deployment on the same node? I know of pod anti-affinity, but having to change all deployments seems a little redundant.
C: Yeah, so we did cover this this morning; the TL;DR version is: you still want to use the pod anti-affinity stuff, and you target the hostname. But what you can do to automate that, without having to touch all your files, is use a tool like a mutating admission controller, something that can mutate the pod request and add that in there for you, at least as long as the pods have the right labels.
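The fragment such a mutating webhook would inject is just the usual per-deployment anti-affinity, added centrally instead of in every manifest; a sketch of the soft (preferred) form, keyed on an illustrative app label:

```yaml
# Patched into each pod template by the webhook, using whatever label
# identifies the deployment (here, "app").
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-app
          topologyKey: kubernetes.io/hostname
```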
A: Yeah, well, there's also the migration process. I would actually, honestly, look for presentations from any of the recent KubeCons, and look at the various MySQL conferences, to see if somebody's done a presentation about the migration process. I don't do a lot of MySQL, but we did actually migrate one database from RDS to MySQL running on Kubernetes, and we were able to somehow get replication working; I know that Amazon put some obstacles in the way of that.
B: Thinking about it, there isn't a way to do it in kubectl, yeah. Oh, I love the links; I have no idea how to pronounce that handle, I'm sorry. kubectl exec does not yet support the user specification, but you can SSH to the node and, you know, docker exec. yomateo is laughing at our pronunciation of kubectl ("kube cuddle"); laugh all you want. And then evc actually linked to the issue, that is, to support the user flag from docker exec in kubectl exec, which is fantastic; thank you.
A: Yeah, and it is possible, and it's actually relatively easy if you are already using an automated high-availability system for Postgres or MySQL. In the case of MySQL, that would be something like Vitess or the MariaDB operator, which handles failover automatically; in the case of Postgres, it would be something like Patroni or the Crunchy DB operator, which also handle automated failover. And then, at that point, draining just becomes a matter of doing failover.
A: Make sure that it's up and running for a while before you move on to the next node, because if you're going to fail over to a new database instance on the upgraded node, that failover time is going to be a lot longer than it would be for, like, you know, a JavaScript app, right; because on that node your replication system needs to copy all of the data onto the node before it can actually bring up a live instance.
D: Yeah, I'm not sure if something like this exists, but when you're upgrading a cluster one node at a time, you might force a failover operation, like, if you have n nodes, n times. So maybe, if you can do a smart upgrade and just locate your database and its replicas on already-upgraded nodes, then draining all the other nodes will be a smooth operation without recovering that database again. Yeah, like, if the failover is not so smooth, maybe it's something you want to, yeah, consider.
A: Kubernetes needs to start that running database node somewhere. Now, you could obviously tell Kubernetes where to start it; the problem is that's a little hard to do in the context of a StatefulSet, okay, right, because Kubernetes is going to start StatefulSet pods where it believes it has capacity, not based on what's already upgraded, yeah.
A: But obviously this is one of the things we discuss when we ask: if we had lots of time to spend on making StatefulSet better, what things would we want to do? That is definitely on the list, right; doing rolling upgrades intelligently is one of those hard problems. It gets even harder when you start talking about things like sharded databases, where, well...
B: That was awesome; I think we'll move on. BBQ asks: I'm using Nexus as a local registry for containers. The problem is that I'd like to have Cilium images on Nexus and also run Nexus on top of the Cilium network, so I have a bit of a chicken-and-egg problem, since instances can't access Nexus until Cilium is up and I can't set up Cilium until I can access Nexus. Should I just bundle Cilium images and upload them as part of the deployment and download them from object storage, or is there something else?
C: I think, you know, if the image is already cached locally on the host; like, if you're bootstrapping your hosts and you have a way of seeding the local image store, that would probably be the way of doing it. So, if you're building your image template for your nodes, you can stick the Cilium version or whatever you want on there, at least for bootstrapping your cluster.

C: That way, when you deploy the DaemonSet and all that, it doesn't have to, you know, always pull; I forget the exact thing in the pod spec for it, honestly, but so it's not always trying to pull every time it starts up, it should pull from its local cache first. That's honestly my best guess at solving that sort of problem. Yeah, thanks: imagePullPolicy: IfNotPresent.
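That setting in context, as a fragment of a hypothetical Cilium DaemonSet pod spec:

```yaml
spec:
  containers:
  - name: cilium-agent
    image: cilium/cilium:v1.3    # illustrative tag, pre-seeded into the node image
    # With IfNotPresent the kubelet uses the locally seeded image and
    # only contacts the registry when the tag isn't already cached.
    imagePullPolicy: IfNotPresent
```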
D: I think that's a really good guess, and I think kops, the tool to manage Kubernetes on, basically, mostly AWS, uses that trick for several things, like for the kube-proxy images: the image is already in the local Docker store, so it doesn't need to have access to anything; it's just there and it just works.
D: So I think that's a trick that is used. Another way, though maybe it's the same thing, that I can think of is just having the Nexus registry on another cluster, or somewhere in a different place. So if you have another cluster where you have this Nexus registry, then the problem is solved, I think. But yeah, if bootstrapping...
D: Yeah, but yeah, I think the trick you said is really good; kops is using it and it works, right. It even worked, like, some time ago I was using a really old Kubernetes version, and the Kubernetes registry moved from GCR to some other domain, and everything continued to work just because the kube-proxy image was baked into the nodes. So I tried that and it really works, yeah.
B: All right, I think that's resolved that. fdoccz asked: is it possible to implement pod placement on nodes separated by namespaces, i.e., never mix pods from different namespaces on a single node? Is there an existing implementation for that, or maybe you guys know where I could research it? And I believe Josh is taking up this one and is looking, yeah.
A: I'm looking at it right now, actually. I was originally thinking node affinity, but no, definitively, you actually probably want pod anti-affinity, right. Because what'll happen is that one pod will get scheduled to a node, and that pod will have a namespace, yeah; then you basically put in an anti-affinity rule that says: if a pod from any other namespace is there, you can't schedule on that node.
C: The other place I've seen, well, the only place I've actually really seen this sort of thing, is more from a compliance side of things, where you have requirements like "this person owns this node" or "this workload cannot be on the same node as something else", just for a security and compliance reason. In which case, like, we did this on our HIPAA cluster: we literally bound specific groups of workloads to groups of nodes, so that way they are the only thing ever running on those. It's not a, you know, good answer.
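A minimal sketch of that node-binding approach, assuming a dedicated node group per namespace (the label and taint values are illustrative):

```yaml
# On the dedicated nodes (e.g. via your provisioner):
#   kubectl label nodes <node> team=hipaa
#   kubectl taint nodes <node> team=hipaa:NoSchedule
#
# Pod spec fragment for workloads in that namespace: the nodeSelector
# pins them to the group, and the toleration lets them past the taint
# that keeps everything else off.
spec:
  nodeSelector:
    team: hipaa
  tolerations:
  - key: team
    operator: Equal
    value: hipaa
    effect: NoSchedule
```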
B: I am, and that apparently answered his question, yep. And Christian asked another question that I think will be fun: what was each of our favorite talks, even if you were the one that gave the talk, at the latest KubeCon? I will happily go first: it was BenTheElder, and, oh my god, I can remember his Slack handle but not his real name; it was Benjamin Elder, with Sen Lu, and they gave a talk about SIG Testing and how everything is fine.
B: This is fine. And it was a really well-done talk. It was also really insightful, because I knew that the project called kind (Kubernetes in Docker) existed, but I hadn't actually seen its practical uses, and they went over that. It was very insightful and also super fun, and there was the "this is fine" dog.
A: In addition to some of the same ones as Jeffrey, I'd also say I really got a lot, technically, out of the one on controller-runtime, just because I'm working on building a custom controller, and that allowed me to junk a whole bunch of half-written code and start over. Plus, I also liked it because it was people from two different projects who were saying: hey, we have two different projects, but they both needed this thing, and so we collaborated on making a library together instead of two conflicting approaches.
B: Bob?

C: I'm a little bit biased, but I enjoyed everything from the contributor summit. If you are interested in, you know, contributing to Kubernetes, I highly recommend watching that series on YouTube; they're under both the CNCF and the community channels. Outside of the contributor summit, though, my favorite by far was the one on running serverless HPC workloads on top of Kubernetes. That was a super interesting one, and anything HPC-related is near and dear to my heart, yeah.
D: I don't really remember, to be honest. I started playing with Kubernetes, I messed with the code, and that was, like, the best: messaging in Slack, making a proposal, and just debating it with them, talking a lot about the stuff; that was, like, the best for me. And the community was very welcoming, and that really was, just, yes, awesome.
B: Sarma asks: hi, I have multiple deployments in a Helm chart and separate values for each of them in values.yaml. When I want to update the Docker image for one of them and upgrade the chart, it recreates the containers for all the other deployments as well. Is there a way that I can control it so as not to recreate pods for the deployments that don't have a change?
B: On upgrading the chart: the reason I ask is, we were doing something with our JupyterHub deployments that caused this, and there were a couple of options that we would have to change in order for it to only upgrade or change the deployments that the values affected; and, this sounds weird to say, it would figure out which deployments the changes in values.yaml affected and only change those.
B: Stormwar, with more knowledge: something in those pod templates changed as part of the helm upgrade, and he suspects it's a change to an environment variable or an annotation for a config map; that's kind of what I was getting at. So, what will happen when you do a helm upgrade is that it will actually look at those pod templates and everything, essentially all the generated YAML in the background, and if anything changed it will just automatically apply it; so that will affect any deployment.
B: Even if just a simple, you know, environment variable changed. Nick does suggest that you could render the templates and look at the diff, and that would be an option to try and figure out which change was actually affecting it.
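One common culprit worth checking, as a sketch rather than a claim about Sarma's particular chart: a checksum-style annotation computed over shared values re-renders in every deployment's pod template whenever any value changes, rolling all of them; scoping the hash to each deployment's own values avoids that:

```yaml
# Helm deployment template fragment. Hashing the whole values file (or
# a shared ConfigMap) into every pod template rolls every deployment on
# any change; hash only this deployment's own section instead.
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ .Values.myapp | toYaml | sha256sum }}
```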
B: I think that's where we can end it this week. So, we have a fantastic raffle to do, and (one, two, three, four, five, six, seven) the winner of the t-shirt for the West Coast edition is...
B: ...just for being awesome, asking questions, and hanging out with us. And with that, we want to thank the following companies for supporting the community with developer volunteers: Giant Swarm, [unclear], Packet.net, Pusher.com, Red Hat, Samsung SDS, Weaveworks, VMware, [unclear], Huawei, and the University of Michigan; and thanks also to the CNCF. Special thanks to Google for sponsoring the t-shirt giveaway. And, lastly, feel free to hang out in #office-hours afterwards if the other channels are too busy for you and you're...