From YouTube: Kubernetes Office Hours 20200715 (EU Edition)
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. Office hours are a regularly scheduled meeting where people can bring topics to discuss with the greater community. They are great for answering questions, getting feedback on how you’re using Kubernetes, or to just passively learn by following along.
For more info: https://github.com/kubernetes/community/blob/master/events/office-hours.md
C
Hi everybody. My name is Mario Loria. I'm a senior SRE, the DevOps owner of our Kubernetes environment at StockX, which is a startup in downtown Detroit in the e-commerce field. I focus on our clusters in EKS, autoscaling, network redundancy — you know, ingress, things like that, other problems — and as of yesterday, I am a Certified Kubernetes Application Developer.
E
Hi everyone, Chris here. I'm a Noogler with the Google Cloud Canada public-sector team. My focus is on containerization and hybrid environments; my background has been primarily in on-prem. Also a big fan of GitOps and CI-related things. Happy to help, looking forward to answering some questions today. Let's go with Pierre next — cool.
F
Hey, Pierre here. I'm working at Spectrum, mostly owning our Kubernetes infrastructure there and migrating our infrastructure to Kubernetes, so that's my day-to-day job. I'm trying to help out SIG Contributor Experience a little bit — I'm still dipping my toes into that. So that's what I'm doing. I think the next one would be Povilas, to be honest. Yeah — hey.
A
I was muted — okay, sorry. Yeah, I'm Povilas Versockas from Lithuania; people call me Pov for simplicity. I'm a Certified Kubernetes Administrator and I know a lot about monitoring and observability. I'm also a maintainer of the Thanos project, which is a long-term storage solution for Prometheus. Quite recently I played around with the Vertical Pod Autoscaler — basically autoscaling of pods' resource requests and stuff like that. So yeah, that would be me.
B
And is that it — were you the last one? Yeah? All right, everybody, I'm your host, Jorge Castro. I work at VMware as a community manager, and I work in SIG Contributor Experience. So first we're gonna go over some quick rules on how this works, and then, those of you in the audience, we'd love to see you typing in stuff — so tell us where in the world you're listening from, what you do, and start asking your questions. Here's how it's gonna work: you're gonna ask your question in the channel.
B
It helps if you write "Question:" in all caps so we can see it, and then Pierre's gonna paste those into a document, and then we basically just take them in the order that we received them — that's how that works. So before we start, however, I do want to thank the following companies for donating and lending us these experts for this community program. Thanks to Google, Spectrum, Microsoft, VMware, StockX, Giant Swarm, and UW for helping us out. We do have a new West Coast edition, I know.
B
Most of you are probably listening in Europe — this is the European time zone — but we are experimenting with a much later West Coast edition stream. So if that's useful for everyone, we are doing that tonight; it's gonna be at 10 p.m. tonight for me. It's gonna be awesome, it's gonna be great, so yeah. So let me just explain how the whole thing is gonna work here.
B
So here are the ground rules. Like I said before, this is a Kubernetes event, so the code of conduct is in effect — please be excellent to each other in the channel. This is also a judgment-free zone; everyone had to start from somewhere, so please help out your buddies by keeping a supportive environment in the channel and things like that. There are really no dumb questions. We have no problem answering the same questions over and over again — if only we could, because that'd be way easier — and we will do our best to answer your questions.
B
The panel doesn't have access to your cluster, so live debugging is gonna be off topic, but we will do our best to get you moving on to the next step. So if you have logs or something like that, use a pastebin service or a gist on github.com, or something like that — usually that might help us. And then we do try to thread the conversations in the Slack, but if you are gonna paste a huge monster thing, please use a pastebin for that.
B
Panelists, you're encouraged to expand on your answers with your experience and pro tips — after all, what we're really interested in is that sweet, sweet production experience. And audience, you can help us out by pasting URLs to the official docs, blogs, or anything that might be relevant to the topic at hand.
B
And
as
always,
we
do
monitor
discuss
that
kubernetes
I/o
for
questions
there,
I'll
go
ahead
and
post
that
thread
once
we
get
started,
and
you
can
always
help
us
out
by
tweeting
spreading
the
word
Pei
forward.
This
is
a
volunteer
program
that
we
put
together.
So
for
us,
success
looks
like
a
lot
of
people
listening
and
getting
help.
So
anything
you
can
do
to
help
us
get.
The
word
out
is
always
appreciated
and,
as
always,
this
panel
is
made
entirely
of
volunteers.
B
If you want to rotate in or out, please let me know. What I do is just basically say, "hey, who's in for this month?" and then, as long as we have over three or four people, we make it happen. So while we do ask you to, you know, help us out in pasting things in the channel and stuff — if you do too good a job, you might end up on the show, like what happened to Pierre and Pov, which is great. So with that, are we ready?
B
It
looks
like
audio
is
fine
I
see.
Wayne
is
here,
welcome,
Wayne,
welcome,
Kris
from
Broomfield
Colorado
and
yogi
from
Singapore
awesome
thanks
for
coming
in
all
right.
So
let's
look
at
the
first
question
here
is
from
Shai
Katz
says:
welcome
off
hello
office,
our
zeroes
thanks,
here's
my
question
for
you
today
out
of
scaling
workers
that
reads
from
Q
in
kubernetes,
we
tried
Keita
as
Akita
Arcada,
but
the
behavior
is
always
a
Radek
Bob's
going
up
and
down
frequently,
although
tried
multiple
configurations.
B
The
reason
for
that
erratic
behavior
is
that,
once
the
met
the
Q
fills
up,
we
get
more
pods,
but
then
they
can
process
all
the
messages.
The
Q
goes
back
down
to
zero
messages
and
the
pods
scale
down
and
the
cycle
never
ends.
I
couldn't
find
any
articles
or
blog
spots
like
about
how
to
correctly
scale
your
deployments
according
to
Q
pace.
That
is
unpredictable.
That
is
an
interesting
question.
I'll
be
happy
to
hear
some
advice
from
people
on
that
topic,
so
I.
D
Maybe it's because it takes too long to spin up workers again and the workers can't be reused — or is that the problem there? Yeah, I'm also trying to find out what the actual problem is, because usually in queue-based jobs you have a worker that runs and then exits, and this sounds more like they're trying to reuse the workers, so them scaling up and down is hurting them because they need warm-up or something. Is that the case?
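One knob worth pointing the asker at — assuming KEDA is still what they're running — is KEDA's cooldown and minimum-replica settings, which exist precisely to damp this scale-up/scale-down flapping. A minimal sketch (queue name, deployment name, and thresholds are hypothetical, not from the question):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-worker-scaler
spec:
  scaleTargetRef:
    name: queue-worker        # the worker Deployment (hypothetical name)
  minReplicaCount: 2          # keep a warm floor instead of dropping to zero
  cooldownPeriod: 900         # wait 15 min after the last trigger before scaling back down
  pollingInterval: 30
  triggers:
    - type: rabbitmq          # illustrative scaler; connection/auth settings omitted
      metadata:
        queueName: jobs
        mode: QueueLength
        value: "50"           # target messages per replica
```

A longer `cooldownPeriod` plus a non-zero `minReplicaCount` keeps capacity around through the empty-queue troughs, which is one way to break the fill-up/drain cycle described above.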
B
They haven't responded yet — so, Shai Katz, if you're around: they posted the question earlier this morning, about forty minutes ago, so we'll give them a chance to respond if they're listening. Yeah.
A
Like
usually
I,
really
like
dynamic
environments
as
much
as
possible,
that
means
things
never
get
stale
so
basically,
for
me,
is
kill
and
get
like
the
market
scale
up
and
scale
down.
I
like
that,
because
that's
just
sort
of
I
like
that
dynamic
environment
means,
if
one
note
goes
down,
new
possible
will
be
brought
up.
Basically,
no
a
single
note,
though,
going
down
won't
impact
the
whole
operations
right.
So
this
is
usually
a
good
thing
for
me
from
a
city
point
of
view,
right.
C
I just want to say, I know in 1.18 there are some new options for HPA that control the timers by which HPA definitions will take action and scale in and scale out — that was a never-before thing. We've actually seen issues at StockX where we have a push notification, a massive influx of traffic; we scale up, but then we actually scale back in before that traffic is actually done, and we had no control to say, "no, actually give a grace period of 15 minutes before you make any hard decisions."
C
Then
there's
no
new
outages
I'm
in
the
docks
right
now
and
I'm,
not
seeing
where
those
are
for
118
I
forget.
So
if
someone
can
find
them
link
them
I
think
they're
they're,
either
alpha
or
beta,
but
maybe
that
will
help
this
person
ID
again
yeah.
But
if
the
actual
problem
is
a
little
bit
hard
to
decipher
yeah.
B
I'm wondering, from a cloud perspective — maybe you're just trying to figure out what's right. If you have five reserved instances, you could just set that to five; you're willing to wait, but you always know what you're using. You know what I'm saying? It's like, well, if I have five reserved instances for the course of a month, that's probably better than bursting to 20 every once in a while. They don't really say what the timing is, you know — like, are these long processing jobs?
B
Yeah, okay. Well, they haven't responded yet, so hopefully they are listening — and if you're listening out there, Shai Katz, feel free to just respond. If we do have questions that are kind of complex, where they take a long time, we'll just move on to the next one and then revisit this as we get more information. So if we say something you don't understand, feel free to ask a follow-up.
B
We
always
like
to
do
that
to
make
sure
that
you
know
what's
up
all
right:
Felix,
DPG,
ass,
hello,
office
hours,
mates
I
need
to
know
if
there's
a
way
to
use.
Config
mat
versioning
in
my
deployments,
for
example,
to
apps,
use
the
same
config
map,
but
I
need
to
change
the
config
Mac
from
one
app.
The
other
app
means
to
use
the
original
config
map.
A
So by the looks of it, the way I would approach this problem is basically I would have two ConfigMaps, because if one application needs one ConfigMap and the other needs a different ConfigMap — maybe with slightly different parameters — this feels like two different, separate ConfigMaps. Then you can manage them differently, update them differently. You don't need any kind of controller here; you just basically put your ConfigMaps into Git, version them, and make sure that you know what's happening.
B
And
you
think
sorry
I
was
gonna.
Ask
you
a
follow-up
question.
Cuz
he's
not
listening
live.
Do
you
think
that
maybe
they
have
very
similar,
config
Maps
and
they're,
just
changing
a
few
things
and
they're,
probably
trying
to
like
reconcile
that
so
they're
not
forking
into
a
bunch
of
config?
Is
there
like
a
templating
yeah.
D
Exactly
I
was
also
thinking,
most
probably
some
better
templating
like
and
whatever
you
want
to
use,
customized
helm,
kpt
tanker,
whatever
is
out
there
to
get
like
your
base
conflict
map,
plus
like
your
additional
versioning.
It
might
also
be
a
use
case
where
they
are
doing
something
like
a
Bluegreen
deployment
where
they
wanna
update,
like
the
one
conflict
map
is
already
like.
The
one
deployment
is
already
using
the
new
config
and
then
the
other
one,
so
I
think
most
probably
the
way
to
solve
it.
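As one concrete sketch of the "base config plus per-app variation" approach just described — here with Kustomize, and with all file, key, and directory names hypothetical — an overlay can merge its own values onto a shared base ConfigMap:

```yaml
# kustomization.yaml in one app's overlay directory
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                 # base holds the Deployment and the shared ConfigMap
configMapGenerator:
  - name: app-config
    behavior: merge            # override only the keys this app changes
    literals:
      - LOG_LEVEL=debug
```

A side benefit: the generator suffixes the ConfigMap name with a content hash, so each app's Deployment rolls automatically when its config changes — which gives a form of the versioning the question asks about.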
B
So, two options there. Since we don't know what their ConfigMap looks like: if it's really simple and small, consider just forking them into their individual ones, and if it's something bigger, you could use Helm for it — that's the way to go. All right, any other questions on this one? Audience, if you have any tools you want to recommend on this, let us know. Philippe Martin says hello from Paris and wants to drop a link to k8sref.io, making the Kubernetes API docs more accessible.
B
This
is
awesome.
I
am
going
to
PM
you
after
this,
because
this
is
great
and
I've
wanted,
just
like
I
like
how
it's
organized
already
just
right
off
the
top
of
the
bat.
This
is
a
great
job,
so
thanks
for
letting
us
know
about
Kate
craft
at
I/o,
all
right
next,
we
have
Manny
asks.
Is
there
any
way
to
show
a
set
of
apps
with
similar
labels
in
a
CRD.
B
Right, right — so to fill in everyone who isn't doing Kubernetes day-to-day: this is a thing that was running on Google's infrastructure and is in the process of being stood up on the kubernetes.io infrastructure under the CNCF, and that's just been a process that has taken a while, and things like that. So somewhere in here is where it is, and I know we're publishing things.
B
I know there's an issue — we're in the process of migrating there — but that should help you find what you need. All right, while we sort that out, we definitely have room for more questions, so keep them coming. All right, and Ray also has another question: for people who do not use kubeadm and have their own way of installing Kubernetes, is there a recommendation to migrate away from hyperkube, which will be removed in Kubernetes 1.19? So, for people not using kubeadm?
B
We're caught up on questions, so let me do a quick thing here. If you have more questions, feel free to ask them. I see Andrea is typing, so we're getting more information out of that — that's good, that's always good; feel free to keep on typing — and Manny is typing as well, so we'll give them a chance to catch up. In the meantime, Pierre's got a question for the group.
F
Let's
discuss
this
topic,
maybe
so
in
the
recent
weeks,
like
two
big
dog
histories
have
been
down
a
couple
of
times
like
cray
and
get
lab.
Many
people
have
been
affected
and
how
do
you
manage
this
like?
How
do
you
basically
either
have
back
up
in
a
second
registry
and
use
this
as
a
tool
for
registry
or
something
else,
access
cube,
aeneas
operators
that
do
this?
D
Currently it's only for Docker Hub, and I think that's the ongoing open issue at Harbor — it could be that the official implementation just supports Docker Hub, and as long as you're not using that, the caching doesn't work; we were also looking into using it. I think there are some ways around it, but they're a bit hacky. What we did in the end — to have a real, direct failover without having to manually replace the registry the images come from — was use the fact that Docker and containerd support registry mirrors.
D
So for us, the Docker daemon does it, if you use that solution — which is the easiest; we didn't want to do a failover. Yeah, a good DNS failover or the other options are more complicated, but it's not possible to do for everyone. Really, we only did it for critical images that are used for cluster scaling, for example — anything that is needed on a per-node basis.
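The Docker-daemon mechanism being described here is the `registry-mirrors` setting in `/etc/docker/daemon.json` (which, matching the caveat above, only applies to Docker Hub pulls). A sketch, with a hypothetical internal mirror URL:

```json
{
  "registry-mirrors": ["https://registry-mirror.internal.example.com"]
}
```

containerd has an equivalent per-registry mirrors section in its own config file, so the same pattern works on containerd-based nodes.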
B
You know, depending on a third-party source that's having a bunch of reliability issues — you might have to weigh that yourself; that's an exercise for you. All right. So, Manny, I see that you are back — let's come back and address your question. Initially they asked: is there any way to show a set of apps with similar labels in a CRD?
B
So
we
asked
for
more
information,
since
we
are
thinking
of
using
a
helm
operator,
but
my
question
is
that
how
can
I
do
things
like,
for
example,
creating
a
backup,
upgrading,
etc?
What
I
saw
from
the
tutorials
that
we
deploy
an
operator
which
looks
into
an
object
containing
values
at
yamo
and
make
sure
is
that
the
objects
created
adhere
to
those
values
and
reload
objects?
If
we
make
any
changes,
but
I
want
my
operator
to
do
more
than
that,
PS
I
don't
know.
Is
there
any
way
to
achieve
this?.
D
It
seems
they
wanna
build
or
they
would
like
to
have
a
an
operator
around
help,
but
I
think
the
upstream
project
was
thinking
of
building
one,
but
currently
is
still
busy
with
everything
I'm.
Sorry
we
built
one,
but
it's
built
for
a
very
multi
cluster
use
case,
I'm,
not
sure
if
that
works
for
them
yeah
doing
much
more
like
the
backups
and
upgrades
and
everything
included
that
I
haven't
seen
that
outside
out
there.
This.
A
Yes — reading the question again, on "if you want to make the operator do more than that": I guess you will have to implement it yourself. There are operator libraries — I think you can write one in Java or other languages — but if you want something really specific that you have in mind, I don't think you'll find an off-the-shelf solution for that. So, basically, especially...
B
And, oh — I guess this morning Chris Short (that's why he's not here) did a live stream related to Ansible for developing operators, and he's tossed a link there to his Twitch; I need to see what's going on there. And I see Manny's giving us more information, so let's see — in the meantime, we will give him some time to digest some of that. Jake Walden, welcome to the show, asks: I'm running Kubernetes at the edge using Rancher k3s, running on top of Ubuntu 18.04.
B
I
have
some
pods
that
are
running
on
the
host
Network.
To
avoid
some
netting
I've
observed
that
if
the
IP
address
of
the
machine
itself
changes
while
the
machine
is
running,
the
pods
on
the
haast
host
Network
do
not
update
their
IP
addresses
with
new
host
address
any
ideas
why
they
wouldn't
do
this.
A
I mean, I think, yeah — avoid host networking as much as possible. Think about security; think about all the things that will have access to native host networking. This is generally a super bad practice for containerized environments. So yeah, basically, about that — if you think NATing is a problem due to performance, there are eBPF solutions which are super fast, like in-kernel really fast; you will avoid any kind of performance issues.
B
Not brought to you by Cilium, yeah — so yeah, just get rid of that problem.
B
I'm just curious — if you don't do the host networking, sure, you have to deal with setting some of this stuff up, but you'll probably be more in line with what everybody else is doing, right? Any other advantages — you mentioned security and whatnot — any other advantages to avoiding host mode? I'm thinking for the future, if we ever need to refer to this again.
B
That's a good point — I hadn't thought about that. So, all right, Jake, that will hopefully give you something to think about; feel free to post a follow-up question, we always do that. He says host networking was driven by the application using WebRTC to stream video, so NATing created many issues. Does that give us any more insight on possible solutions here?
F
Manny said there's no straightforward way for the operator stuff; he'll look into the Java and Python recommendations. He already has something that is based on Jsonnet, and basically he's thinking about how to distribute the way he wants the software or application to run on Kubernetes. So what came up in my mind was a Helm chart that is versioned separately from the application.
D
What I've seen from most vendors giving out their software to run on Kubernetes is that even if they use Helm charts in a very extended way, it can still get quite complex sometimes, because of dependencies and basic configuration changes between each and every client — because as much as we want it to be uniform, it is still different every time you set it up, right? There are all the different configurations: do you have PSPs installed, do you have RBAC, all these things.
B
When should I use VPA, the Vertical Pod Autoscaler? Is it when the app doesn't support horizontal scaling? You know, I've always wondered this myself — when do you go this way, and when do you go that way? I know a lot of it is application-specific, but what are the things y'all are looking for when you're deciding? Yeah.
C
As far as I know — I could be wrong; it just recently got an update and I haven't looked in a while — but I know that one of the best things it provides is that at least you can start with recommendations and see what it would actually recommend for your services from a resource-requests point of view, which can really help when you're provisioning and deploying. So yeah — that was Goldilocks; thank you, Pierre. For me...
C
That's
probably
the
de
facto
for
gay
provides
fancy
UI,
and
it
gives
you
both
first
of
all
and
guarantee
QoS
recommendations
for
your
resources
and
I
think
that's
the
biggest
thing
from
VP.
We're
gonna
actually
be
implementing
you
here
soon
at
stock,
XO
I
can
at
least
give
developers
a
pane
of
glass
into
best
practice
for
setting
the
resources
on
their
on
their
applications,
where
we're
moving
from
a
burstable
to
a
guaranteed
QoS
for
our
applications,
which
will
more
cost
upfront
because
we
reserve
more
resources.
C
But
in
the
long
run,
when
we
do
a
major
likes,
it'll
it'll
definitely
help
I,
know
VP
a
can
then
be
kind
of
enacted
to
take
action
and
actually
apply
those
those
resources
for
you
as
well,
and
those
might
be.
That
might
be
something
you
want
to
tiptoe
into
with
testing
them.
Or
so
that's
my
two
cents,
I
know
boss,
probably
wants
to
say
if
these
things
as
well
and
those.
A
So yeah — like two weeks ago I implemented VPA, and it's amazing. It's really good, it's great; I just have only positive words. Starting from what I did: basically I connected VPA to Prometheus, because VPA has two modes — one is basically checkpointing, which will contact the metrics server to get metrics and write a checkpoint.
A
So this is one big limitation. If you read the Google paper, the way they have it running is basically one service called Autopilot, and it does both horizontal and vertical autoscaling automatically. Obviously you can configure it and limit it, but it does it automatically. So I think the future is gonna be that the Vertical Pod Autoscaler somehow manages to work well together with the Horizontal Pod Autoscaler.
F
Me, for example — I'm also using KEDA, because I have pods or deployments that scale on queue size and not on the usage metrics of CPU and memory, and I'm using the VPA recommendation to basically see, hey, how many resources does this actually need so the services are fast enough to process a lot of items. I just want to know, okay, am I actually right-sizing the deployment or not. So yeah.
B
Follow-Up
question
here:
you
all
mentioned
the
you
know
for
things
not
using
it
for
things
that
uses
a
JVM
and
things
like
that.
I
know
in
the
past
containerization
and
you
know,
vm
based
languages
like
Java,
have
been
a
been
a
problem,
but
you
know
those
are
being
addressed
and
stuff
are
these
limitations
because
of
like
where's,
the
bugs
D?
Is
it
not?
Is
it
because
VP
a
doesn't
support
it
as
well,
or
is
it
still
kind
of
a
fundamental
issue
where
we
don't
know
what's
good,
you
know,
I
believe.
A
It's
a
fundamental
issue
because
Java
the
way
certain
JVM
works.
It's
you
said
how
much
memory
you
will
need,
like
X,
MX,
flag
and
XM
s,
flag
yeah.
So
basically,
you
tell
ok,
create
a
virtual
machine
with
like
2
gigabytes
of
RAM
right
and
then
Java
does
whatever
it
wants
with
it.
It
imagines.
I
have
two
gigs
dedicated
to
me
right
and
then
that
the
couple
of
the
scalar,
what
can
it
do?
It
can
only
give
you
like
best-case
scenario,
two
gigs
right,
worst-case
scenario,
also
two
gigs.
A
So
right,
it's
yeah
and
this
the
new
agey
Java
version.
Somehow
don't
actually
take
all
that
memory,
but
if
your
application
starts
to
consume
and
consume
and
consume,
get
more
requests,
get
more
memory
keeps
increasing.
It
will
still
always
hit
keep
that
limit.
So
you
need
sort
of
user
input
to
change
about
X
MX
one
and
it's
yeah.
It's
basically
not
really
I.
Think
it's
a
fundamental
issue.
Marvin.
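One mitigation for the fixed -Xmx problem described above — the "newer Java versions" point — is to let the JVM size its heap from the container's memory limit instead of a hard-coded flag (available in JDK 10+, backported to 8u191+). A sketch with illustrative values and a hypothetical image name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: java-app
spec:
  containers:
    - name: app
      image: example/java-app:1.0       # hypothetical image
      env:
        - name: JAVA_TOOL_OPTIONS
          value: "-XX:MaxRAMPercentage=75.0"   # heap = 75% of the container limit
      resources:
        requests:
          memory: "1Gi"
        limits:
          memory: "2Gi"                 # the JVM derives its max heap from this
```

This makes the heap track whatever memory limit an autoscaler sets, rather than fighting it — though it doesn't remove the fundamental mismatch the panel is describing, since the JVM still won't shrink its committed heap on demand.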
B
Okay,
so
definitely
I
the
reason
I
wanted.
You
explain
that
is,
for
you
know
those
people
that
are
looking
at
their
applications.
They
have
Java
and
things
like
that
might
not
be
aware
of
those
things.
It's
just
one
of
those
things
where
I've
been
slowly
putting
together
a
blog
post
that
we're
all
going
to
work
on
I
am
telling
you
this
now.
You
know
the
top
things
that
we've
learned
from
office
hours.
You
know-
and
this
is
just
one
of
those
things
that
you
know
the
Java
resource
issue.
B
C
Nothing,
no,
we
have
a
model
of
a
PHP
monolith
that
we've
slowly
been
container
izing
and
definitely
I.
Think
the
big
thing
there
is,
if
you're,
going
to
auto
scale
that
you
have
to
dial
in
your
readiness
and
lightness
probes,
or
else
you're
gonna
be
in
a
world
of
hurt,
and
so
we
had
instances
where,
like
we,
problems
were
like
new
instances
would
come
up
and
they
wouldn't
exactly
be
ready
and
we
didn't.
We
couldn't
dial
in
like
a
great
check
for
when
they
were
ready,
and
so
they
started
getting
traffic.
C
And
likewise,
when
instances
came
down,
we
were
doing
a
deployment
and
I
know.
A
lot
of
people
probably
encountered
this,
but
we
had
an
implement
priest
app
like
I
sleep
10
seconds
because
of
the
delay
between
the
endpoint
being
pulled
out
and
the
ingress
being
updated,
and
so
we'd
have
businesses
that
were
coming
down
that
we're
taking
traffic
with
conservative
right.
So
there's
a
lot
of
like
lifecycle
such
centric
things.
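The "sleep before shutdown" workaround just described can be sketched as a pod-spec fragment — durations, image, port, and probe path are all illustrative:

```yaml
spec:
  terminationGracePeriodSeconds: 30     # must exceed the preStop sleep
  containers:
    - name: web
      image: example/php-app:1.0        # hypothetical image
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 10"]   # delay SIGTERM so endpoint/ingress
                                                # removal propagates first
      readinessProbe:
        httpGet:
          path: /healthz                # hypothetical health endpoint
          port: 8080
        periodSeconds: 5
        failureThreshold: 3
```

The pod is removed from endpoints as soon as deletion starts, so the sleep gives load balancers and ingress controllers time to stop sending traffic before the process actually shuts down.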
C
You
really
have
to
focus
on
when
you're
gonna
do
that
and
that
that
for
auto
scaling,
that
also
applies
right
because
you're
going
up
and
down
in
an
auto
skilled
world,
so
I
we
started
very
sensitive
and
then
we
started
to
dial
in
a
lot
more
and
that
workload
was
definitely
like,
guaranteed
QoS
but
I
think
with
HPE
you're.
Never
gonna
get
right
the
first
time
you
really
have
to
it's.
It's
really
trial
and
error
for
a
lot
of
it
depending
on
your
application,
and
you
know
any
new
enhancements
to
your
application
extra
things.
C
It
does
memory,
consumption
and
things
like
that
when
resources
start
changing
because
of
the
nature
of
the
application
that
is
going
to
impact
your
HP
I
think
a
big
one
for
me
end
of
last
year.
Our
burnin
is
in
node
and
actually
runs
in
some
cluster,
and
it
took
me
like
a
good
month
plus
to
really
dial
in
great
HP
by
the
authority,
but
I
haven't
touched
it
in
literally
all
of
this
year
that
cluster
just
runs.
C
It
all
scales
both
HP,
a
and
cluster
autoscaler,
and
that's
like
our
front
end
stock
XCOM
on
your
browser
right,
like
I,
haven't
touched
it
at
all.
It
that's
a
huge
personal
accomplishment,
but
it
took
a
of
learning
tuning
to
get
in
note
is
not
the
most
forgiving
workload
on
in
containers,
let
alone
in
a
distributed.
A
system
like
uber
Nettie's,
so
that
was
a
lot
of
fun
learned
a
lot
but
yeah
some
apps.
Just
it's
it's
hard.
C
We
actually
just
had
an
issue
too,
with
DNS,
where
you
know
our
graph
QL
was
not
was
routing
a
DNS
query.
You
know
2.7
million
DNS
queries
in
a
minute
or
two,
and
you
know
using
keep
lives
and
more
intelligent
connections
and
having
redundancy
there
with
those
connections.
So
certain
policy
without
like,
like
that
all
flows
into
the
overall
health
and
you
want
you
need
those
those
primitives
and
you
know
when
you're
auto-scaling
or
else
that
you
know
there
will
be
a
cascading
effect
of
other
things,
having
issues
as
well.
So.
D
First
one
is
HPA's
I'm,
currently
still
still
working
a
bit
on,
but
the
the
choice
of
metrics
is
definitely
a
tricky
one
like
which
one
to
scale
on
when
to
scale
on
it
like
60
percent
of
56
or
65,
or
should
you
based
on
requests,
for
example,
scale
its
and
yeah
like
Mario,
was
saying
it
changes.
If
your
software
changes
so
the
third-party
software,
it
might
still
be
quickly
easier
because
I
don't
know,
nginx
doesn't
change
that
much
between
versions
like
if
you're
building
yourself,
then
you're
building
a
new
feature.
You're
scaling
my
change.
B
Good to know. All right, the last question of the session belongs to Shyam — thanks for joining us. Oh no, that was the last one. The last question is actually from Rakesh — I hope I got that right — who says: I am using Calico networking on a self-hosted Kubernetes cluster on AWS, and have this problem where all the nodes get registered to the load balancers. Is there a way to avoid it, or at least limit it to certain node groups?
A
So from my point of view, I didn't see this being a problem. Let's say you have a hundred nodes, or a thousand nodes, and imagine your load balancer is typical — say nginx or HAProxy or something. HAProxy load balancing to 2000 targets doesn't sound bad, right? It's not a huge number. You are never going to have a super huge number of nodes unless you're at a really huge scale.
D
They're using AWS load balancers — like the thread is saying, they're using a Service of type LoadBalancer and an Ingress, the AWS ALB — so I would also not worry about any of those, even with 500 nodes, which seems to be the limit they're scaling to. They're just wondering why load balance to 500 nodes if the service is running on four — whether that's creating too many hops, basically. I almost missed the ALB part.
D
The thing is, even if you load balance to all nodes and you have only four instances of your service — if one of them gets rescheduled, it might land on any one of those hundred nodes, and you don't want to have to register and deregister nodes in the load balancer all the time, because... yeah.
C
I was gonna say the same thing. When I think about it from a networking perspective — yeah, it is really weird that I'm going through a node that doesn't have the workload, and then it's going to go to that nginx ingress, and it's gonna route to the workload on another host. It seems kind of odd, but when you look at the bigger picture, it's more about the redundancy, reliability, and resiliency of anything happening in this distributed environment — I'm covered across all nodes. And the big thing, too, is, like, nodes...
C
Eight,
oh,
yes,
no
every
any
cloud
has
instances
that
just
get
wonky
stopped
working.
Won't.
You
know
networking
blocks
I
like
there
are
random
issues
you
need
to
prepare
for
failure
and
the
best
way
to
do
that
is
spread
the
thread.
The
love
with
you
will
spread
the
workload
across
nodes
so
like
it
seems
like
a
really
like
inefficient
way
of
doing
it,
but
it's
a
unless
you're,
really
having
like
a
big
issue
like
don't
put
too
much
effort
into
changing
so.
B
We're about to run out of time here, but let me just give you one more piece of information: "a combination of externalTrafficPolicy and lifecycle hooks for registering and deregistering from the ALB, to keep the registered instances in the ALBs in check?" Or do you think there are obvious failure scenarios here? And then we'll wrap it up.
D
The thing is, then they would not be reusing the Service of type LoadBalancer anymore, so they would need to basically do it themselves — and every time you do something yourself that is usually automated upstream, there is a risk of your automation breaking, and as you're the only one using it, you're most probably on your own. Whereas when upstream breaks, at least there's someone upstream fixing it.
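For reference, the externalTrafficPolicy mentioned in the question looks like this on a Service — a sketch with hypothetical names and ports. With `Local`, the cloud load balancer health-checks each node and only sends traffic to nodes that actually host a ready pod, which avoids the extra hop through nodes without the workload (at the cost of less even spreading):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # nodes without a local endpoint fail the LB health check
  selector:
    app: web                     # hypothetical label
  ports:
    - port: 80
      targetPort: 8080
```

This also preserves the client source IP, which is a common second reason people reach for `Local`.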
B
Alright,
and
with
that,
we
are
gonna,
wrap
it
up.
I
like
to
thank
our
panelists
for
doing
this.
We
are
gonna,
give
away
two
kubernetes
t-shirts.
The
way
it
works
is
if
you've
asked
a
question.
If
we
address
it
on
the
air,
you
can
have
a
t-shirt
on
us.
It
is
the
kubernetes
not
this
kubernetes
t-shirt,
but
the
one
with
that
logo
on
it,
shy
cats
and
rakesh.
The
first
and
last
person
to
ask
a
question:
you
have
won
a
kubernetes
t-shirt,
that's
just
the
way.
The
dice
roll
I
worked
out
this
time.
B
So
I
will
PM
you
after
this
and
give
you
a
a
story.
Yeah
Mario
talk
so
that
the
camera
switches
to
you
there.
This
is
what
the
t-shirt
looks
like
there.
You
go
hold
it
up,
yeah,
it's
just
fantastic
since.
B
We are going again in about twelve hours — almost exactly twelve hours from now we're gonna go again — so everyone feel free to keep hanging out in the channel, and sometimes I just give out random t-shirts to those people helping out. A big thanks to the CNCF for sponsoring the t-shirts that we give away. So with that — anything else before we wrap? I'd like to thank everybody for listening. Yeah — no, Rob, you don't get that specific shirt; that one's pretty worn!