From YouTube: Kubernetes Office Hours 20200415 (West Coast Edition)
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. Office hours are a regularly scheduled meeting where people can bring topics to discuss with the greater community. They are great for answering questions, getting feedback on how you're using Kubernetes, or just passively learning by following along.
For more info: https://github.com/kubernetes/community/blob/master/events/office-hours.md
A: All right, welcome everybody. It is the third Wednesday of the month, and that means it's time for the Kubernetes Office Hours, the YouTube livestream where we hop on the internet with a panel of esteemed Kubernetes experts, hang out in Slack, and answer as many of your user questions as possible. So welcome, everybody. This is the West Coast slot — our second session of the day — so welcome, everybody.

A: I'm going to go through a little intro to explain how everything works, but before we begin, let's start by introducing ourselves. Let's go with Beebe, Monica, Dave, Sameel, Drella, Jeremy — and Eric, since you're new, we will introduce you last to get us started. Welcome, everyone.
G: My name is Erik Osterman, and I'm the founder of Cloud Posse. We are a DevOps accelerator: we help companies own their infrastructure in record time by building it for them and showing them the ropes. If you google around for Terraform, you've probably come across some of our modules — we maintain 130 or so Terraform modules. We do a lot with EKS, a lot with Kubernetes in general, Helm, Helmfile, etc.
A: And I'm Jorge Castro. I work at VMware as a community manager, and part of my job is putting together programs like this, so that we can organize a community around useful things, like sharing knowledge. The original reason I started the show was that I had to learn Kubernetes, and I figured we could just stream it and have a good time. We've been doing this for two years. We appreciate everyone joining us from YouTube. The way it works is: if you look below, you'll see the Slack channel, #office-hours.
A: We're also streaming that channel here on the sidebar in YouTube, if you're watching. So those of you in chat, feel free to say hello — where you're from, what you work on. Before we get started, let's talk about the ground rules. First of all, this is a Kubernetes event, so the code of conduct is in effect in chat and Zoom, so please be excellent to each other. This is also a judgment-free zone, which means everyone had to start from somewhere.
A: There are no dumb questions, so all skill levels and expertise levels are definitely welcome. And while we will do our best to answer your questions, we don't have access to your cluster, so anything that's live debugging of your stuff is usually off topic. The way I like to think of it: if you need to put something in a pastebin, it's probably a little bit too verbose.
A: What we will do instead is look at what you've got and give you recommendations on where to go next to debug the issue you're having. Panelists, you're encouraged to expand on the answers with your production experiences and pro tips — that's part of the reason we have you on here, so you can share the little gotchas and nitpicks you've figured out over your years of expertise. Audience, you can help us out by pasting URLs to the official docs and blog posts.
A: We had a lot of people this morning helping answer questions for people, going into deeper threads and things like that. People love to recommend tools on the show — we love it. I think I've learned about a new tool every single session, the entire time we've been doing this. So this is a chance for us to get together and share your knowledge.
A: If you're in chat, we definitely encourage you to do that — we love it. And what I do at the end is grab all the URLs from the show and paste them into the show notes, so that even if you're listening to this on YouTube later on, you still have a reference to the cool stuff we mention. So feel free to post your questions in chat. We currently have zero questions in the queue, because we cleared it this morning, so we are open for questions there.
A: If we don't get questions, I'm just going to start asking questions of the panel, and you probably don't want that, so feel free to start queuing up your questions. Actually, Cemil, if you would toss into YouTube a link to the Kubernetes users channel, to remind the people that are here — there's usually tens of thousands of people in there looking for help — have them check it out, maybe slide on over. That would work.
A: With that, before we start, I'd like to thank VMware, American Airlines, Cloud Posse, Microsoft, and BB, who's an independent contractor, for allowing their engineers to take some time to help the community. It's always appreciated. And with that, we are now opening the floor to questions. It looks like some people are typing, so we'll go ahead and wait for our questions here. I see there's plenty of people on YouTube watching, so start typing. I want to go over some of the questions that we went over this morning.
A: There's a kubeadm one from this morning that I wanted to re-ask for this group here. Someone asked — and I thought it was an interesting question — why can't the kubeadm certificate be valid for ten or a hundred years? It's too painful: when we deliver Kubernetes to customers' production environments, we need to manually compile the kubeadm source or manually create a large number of certificates. Can't you give us an optional parameter or variable to flexibly configure this option? Now, I did point the person to the kubeadm office hours, which are happening right now.
A: But I wanted to get the panel's opinion on certificates in kubeadm, because in the back of my head I was thinking: I think they default to a year to remind you, when your certificates expire, that you should probably upgrade your cluster. So I figured I would ask the panel's opinions on this while the field gets ready to ask some questions.
G: First-hand account: literally this weekend I was tasked with repairing clusters whose certificates had expired at that one-year shelf life. The embarrassing thing is the reason why they expired. This happens to be a kops cluster, and kops automatically regenerates those certificates on every upgrade. So really, this is just saying that we haven't upgraded the cluster recently enough, and that's the actual problem. That's the root cause!
C: Yeah, I mean, a year is a long time. To me, aside from the actual upgrade — thinking of all the things that happen to your cluster in a year, how many people might come and go or have your kubeconfig, things like that — a year is a long time. You certainly don't want to leave it forever, or five years; and especially when, already at that one-year mark, this is a task that you may have forgotten how to do.
C
I,
always
like
the
you
know
the
practice
of
if
it's,
if
it's
a
hard
task,
if
it's
something
that's
painful
to
do,
you
should
do
it
more
often
to
make
it
less
painful.
So
that's
one
of
those
things.
That's
super
important.
If
you
were
to
wait
every
five
years
to
do
it,
you're
gonna
have
different
people
managing
it.
It's
gonna
be
a
different
process,
so
I
think
the
more
you
do
it
the
better
and
honestly.
It
should
just
be
a
seamless
process
right,
rotate.
Those
certs
upgrade
your
cluster.
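The check-and-rotate routine the panel describes is scriptable. Recent kubeadm releases expose `kubeadm certs check-expiration` and `kubeadm certs renew all` (older releases put these under `kubeadm alpha certs`). Since those commands need a real control-plane node, the sketch below demonstrates the underlying expiry check with plain openssl against a throwaway self-signed certificate — the path `/etc/kubernetes/pki/apiserver.crt` in the comment is where a real kubeadm cluster keeps the apiserver cert:

```shell
# On a kubeadm control-plane node you would run:
#   kubeadm certs check-expiration   # list expiry dates of all control-plane certs
#   kubeadm certs renew all          # renew them (older releases: kubeadm alpha certs ...)
#
# The same low-level expiry check with openssl, demonstrated here on a
# throwaway self-signed cert; on a real node, point at
# /etc/kubernetes/pki/apiserver.crt instead:
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 365 -subj "/CN=kube-apiserver-demo" 2>/dev/null
openssl x509 -enddate -noout -in /tmp/demo.crt
```

The last line prints a `notAfter=` date; per the discussion, a date getting close is a cue to renew the certs and probably to upgrade the cluster.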
C: I do remember having to rotate certificates before kubeadm, and that was horrible, so I'm at least happy to have that management capability there. Trying to do upgrades, for example — I can't remember what version, maybe 1.9 to 1.10, or 1.11, something like that — kubeadm was still kind of beta, and just rolling through upgrades on our own was horrible. It was such a painful process, and you could do some ugly shell scripting or some other kind of process.
A
Sure
yeah
Kuban
is
probably
my
my
favorite.
My
favorite
tool
because,
like
I,
would
have
never
been
able
to
install
a
cluster
without
it.
You
know,
I
did
the
hard
way,
but
Kelsey
hi,
Todd
I.
Think
you
want
to
do
that
thing
once
right
to
like
get
it
out
of
the
way
and
be
like
I.
Did
it
and
I
could
say
I
did
it
I
hope
I
never
have
to
do
that
again,
but
I
just
remember
the
first
time.
I
did
a
cube
admin
upgrade
to
a
cluster
and
it
actually
worked.
A
You
know
I
mean
yeah,
it's
a
home
lab
right.
It's
not
like
I.
Have
this
huge
thing,
but
it's
just
really
nice,
when
the
tooling
is
is
in
place
in
our
first
question
today
comes
from
Bray
I
hope,
I
pronounced
that
right
says:
do
you
have
any
suggestions
on
how
to
set
up
Prometheus
for
cluster
monitoring
and
to
keep
it
highly
available?
I
you,
you
put
it
in
the
same
cluster,
a
separate
cluster
or
an
external
VM,
great
question
who
wants
to
take
a
stab
at
this
one?
First.
A: Take it away, Monica.

C: Sorry — I wasn't sure if I was off mute. So this is a common question, right: how do you monitor your monitoring, ultimately? In my experience, sometimes we've had the Prometheus in the cluster, and that's monitoring the applications — maybe it'll report some of the Kubernetes metrics and information as well — but if Kubernetes goes down, so does your monitoring. So you really do have to keep that in mind.
C: You have to think about where you're going to put it. I've had situations where we're using a consolidated Prometheus: we send metrics to another Prometheus that's kind of our master, and that's either in a different cluster — with maybe a higher reliability goal as far as its availability goes — or we'll use other services, like maybe a Lambda service, to watch the Prometheus. Or maybe you just have it somewhere outside of Kubernetes. It feels like the age-old question, but that's the biggest thing: you can't just rely on it to monitor its own cluster.
C: It's going to live in the cluster, so it's great for all your apps and everything in there, but you need to do something else: either pull another Prometheus instance out, or send to another service — again, a Lambda, or something like Wavefront — so you know when your Prometheus is actually down.
E: Yeah, and adding to that — yes, that's a good point, Monica — when we are running Prometheus inside the cluster, we may run into a chicken-and-egg problem. When your cluster is having an issue, your Prometheus is down, and you don't know what's happening inside it or what caused it. So it's always better, and best practice, to have your monitoring outside your cluster and send your metrics out — running in a VM, or any enterprise-class or shared service — that's better.
G: Yeah, I echo what everyone else has said. The other thing — and this comes up all the time in our own office hours that we run — is that running Prometheus in-cluster is a memory hog. If this is your production cluster, it's not uncommon that you're going to need a pod, with the Prometheus operator, that's at like 15, 16 gigs of memory or more. So the consolidated approach allows you to run smaller Prometheus instances per cluster, and then you can have a more massive Prometheus cluster.
G: Now, there's still the chicken-and-the-egg problem that other people brought up: what monitors your monitoring cluster itself? So you're phase-shifting outages — hopefully your monitoring cluster is out of phase with outages happening in your production cluster. The other thing is your storage backend. There's a bunch of open source projects for managing that storage backend; they're pretty hairy to get up and running, and they all come with trade-offs. Our experience is with EFS, man.
D: Dave here — I was just going to say, yeah, maybe also take a look at the Thanos project, or Cortex. That speaks to the other point others made: you can aggregate multiple Prometheus instances, and you can also use multiple different backend storages for those metrics.
A: Yeah, and Bray's actual follow-up question was: do you have any resources on setting up a master Prometheus — I'm not sure — to send one Prometheus to another? Monica, you tossed a link in there to Prometheus federation, yep, and I just found the link there to Thanos. Do any of y'all have experience with federation and/or Thanos?
C: I haven't worked with Thanos specifically, but it's something I've explored a little bit. It's another way to go about it; it just depends on your use case, how big your environment is, and what you're looking to send. And, as always with metrics, you want to make sure that you're sending the right information, because we could send way too much, overload our system, and not get any value out of it. So there's always the metric tuning as well — make sure you're picking your important indicators.
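For reference, Prometheus federation is configured as an ordinary scrape job on the central Prometheus, pulling each instance's `/federate` endpoint — and the `match[]` parameters are exactly where the "pick your important indicators" tuning happens. A minimal sketch; the hostnames and match expressions are hypothetical:

```yaml
# prometheus.yml on the central ("master") Prometheus
scrape_configs:
  - job_name: federate
    scrape_interval: 60s
    honor_labels: true            # keep the source instance's labels intact
    metrics_path: /federate
    params:
      "match[]":                  # only pull what you need
        - '{job="kubernetes-nodes"}'
        - '{__name__=~"job:.*"}'  # pre-aggregated recording rules
    static_configs:
      - targets:
          - prometheus.cluster-a.example.com:9090
          - prometheus.cluster-b.example.com:9090
```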
A: And I just want to expand a little bit — since we're talking about Prometheus, we might as well keep going. How do you all typically set up and configure Prometheus? Is there a specific operator that you use? Like, if I wanted to set this up, where would I start?
C
That
Prometheus
docks
are
great
I'm.
Sorry,
I'll
speak
up
again,
so
I've
set
up
my
own
Prometheus,
like
in
my
own
kind
cluster
and
honestly,
just
following
right
through
is
super
easy
and
it's
just
its
own,
like
Prometheus
CRD,
like
it's
just
custom
resources
in
there.
So
it's
really
easy.
Prometheus
docks
are
excellent.
Yeah!
It's
anybody
I
think
anybody
can
follow
along
with
it,
and
certainly
there's
tons
of
support
to
ask
people
for
questions
and
help
is.
D: Yeah, I would also point to the Helm chart for the Prometheus operator. There's a lot of configuration you can do in there, a lot of different stuff — you can even set it up with Thanos — but it comes out of the box with a lot. You might not need all of it, but it's a really easy way to get started, and it provides a bunch of dashboards out of the box and things like that. So it's a nice way to get started, at least.
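As an illustration of the CRD-driven setup the panel describes: with the operator installed (e.g. from its Helm chart), you declare `Prometheus` and `ServiceMonitor` resources and the operator generates the scrape configuration for you. A minimal sketch — the names, namespaces, and label selectors below are hypothetical:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  replicas: 2                      # HA pair
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: platform               # pick up matching ServiceMonitors
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: monitoring
  labels:
    team: platform
spec:
  selector:
    matchLabels:
      app: example-app             # Services to scrape
  endpoints:
    - port: web                    # named Service port exposing /metrics
```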
A: Mm-hmm. Looks like we have a few links. Marco Ceppi's back — hey, Marco. Also, Josh, we'll be getting to the Kafka question here in a minute. But Marco says: we've been using Trickster — not exactly HA, but it allowed us to set up tons of smaller constituent Prometheuses as StatefulSets and aggregate those into one larger single endpoint. Great tip. And Eric, it looks like you posted a link to a blog post.
G
You're,
muted
yeah:
this
is
just
a
it's
come
up
before
you
know.
Thanos
is
a
back
end.
There's
another
one.
Victoria
metrics
somebody
brought
up
another
one.
I
wasn't
familiar
with
there's
so
many
now.
My
problem
with
a
lot
of
these
is
that
there's
so
much
infrastructure
just
to
manage
the
backend
and
ideally
I,
like
my
my
monitoring
infrastructure,
to
be
as
thin
as
possible,
not
as
tall
as
possible,
but
companies
with
massive
requirements
aren't
going
to
be
able
to
get
away
with
just
me
using
the
FS
yeah.
A
Before
before
we
do
move
on
to
Kafka,
though
I
do
I
do
want
to
to
ask
semi-related
question.
We're
specifically
are
talking
about
cluster
metrics,
not
application
metrics.
So
do
you
do
any
of
you
have
opinions
as
far
as
are
those
related?
Would
you
run
that?
Would
you
run
the
app
thing
in
cluster
or
do
you
keep
it
like
separation
concerns?
Walk
me
through
real
quick,
how
you
segment
between
the
applications
and
the
actual
cluster
itself,
or
is
it
an
issue
at
all.
C: I mean, you know — don't run your own monitoring, or, like, load testing, next to the app itself, right? Don't do that. That's where you talk about a monitoring cluster, or some other place you're going to put that. But yeah, I would think it's fine to monitor your cluster from within the cluster.
C
As
long
as
you
have
that
backup,
you
know
that
outside
if
it's
gonna
be
thinnest,
if
it's
gonna
be
a
federated
Prometheus,
so
you
have
that
higher
level
I
used
to
throk
way
way
back
when
it's
not
as
an
aggregator,
but
it's
not
supported
we're
not
growing,
but
yeah,
and
then
the
application
metrics
can
just
be
sent
over
to
that
Prometheus
wherever
you
have
it,
but
definitely
don't
run
your
monitoring
in
the
same
app
because
all
it
takes
is
for
that
app
to
explode
or
in
the
monitoring.
A
Distributed
a
syslog
spam
and
wardaddy,
here's
recommending
Prometheus
for
customer
monitoring
and
uses
in
Stata
for
app
monitoring,
I've
heard
of
that
one.
If
a
if
you
could
toss
on
a
URL
to
that
that'd,
be
really
useful
here,
cool
all
right.
Do
we
have
anything
else
regarding
Prometheus
and
cluster
monitoring
before
we
we
move
on
to
Kafka
and
I'm,
assuming
most
of
you
are
just
using
Prometheus
for
anything,
there's
an
alternatives,
but
you
could
recommend
I'm
sure,
there's
alternatives,
but
any
other
comments
on
other
other
tools
and.
G: I think the thing that hung me up is: we definitely use Prometheus, with Alertmanager, to monitor infrastructure and applications. But once you're getting into the applications — your APM, your application performance monitoring — then there's another suite of tools. Both of the SaaS solutions were mentioned, but then also, if you're doing Istio, you've got — and somebody has to correct me, I've never known how you say it — Kiali, and Jaeger.
A: It helps you visualize the traffic flows. Okay, cool — yeah, toss in a link for that when you get a chance in Slack, because that would be interesting. Awesome. All right, last call: any other Prometheus while we're at it? Bray, I hope that answers your question; feel free to ask a follow-up and we will get back to it. So, wardaddy has an interesting question here, asking about how to use Kafka more efficiently in Kubernetes, and Josh asked for a little bit more detail.
A: wardaddy, if you could just give us a little bit more information about exactly what the problem is, we might be able to help — so we'll go ahead and let them type and collect their thoughts. Moving on: Jim Angel, co-chair of SIG Docs — welcome — says: what are folks doing for inventory/tracking if running many clusters with many nodes? Any slick inventory operators or ways to track clusters? I'm interested in the panel's opinions here, because I'm sure there are probably lots of tools in this space.
F: I don't know of a tool to track the inventory. We have, like, home-rolled things that keep track of the stuff we have in AWS — what nodes are associated with what cluster groups. We run multiple clusters across different regions, so we have a pretty decent inventory of what clusters we have deployed, and then from there we match to instances in AWS. But it's custom-built for us. I would be super interested to know if there is a better tool as well, yeah.
C: I don't know of any inventory operator outside of, like, the managed products, right? If we're talking managed products — like TMC, which is what I work on — you can connect your clusters and see all the information about them, versions, health, and you can make new ones, and a bunch of other stuff. This isn't meant to be a plug; I'm just saying I know about that one. Beyond that, it depends what you want to manage. Are you just trying to figure out, you know...
C
They
get
information
like
the
version
about
your
cluster
or
the
overall
health,
or
something
like
that
to
see
that
if
you
did
like
federated
Prometheus
again
we're
back
to
army
and
you'll
be
able
to
see
all
of
your.
You
know
kind
of
cluster
metrics
from
there.
As
far
as
the
health
goes,
we
like
Jeremy,
said
we
do
have
our
own
tools
to
kind
of
flip
petit
between
different
clusters.
C
As
far
as
like
as
an
operator,
I
do
like
to
use
KTX
for
just
a
switch
between
my
different
coop
configs
and
and
get
to
work
with
that.
But
my
question
would
also
be
I
guess
why
do
you
care?
Why
does
it
matter
what
nodes
are
in
there
like?
If
it's?
What
do
you
need
to
know
about
it
right?
It
should
be
set
up
in
a
way
that
you
don't
have
to
go
onto
the
nodes.
You
know
if
a
node
fails
like.
C
Are
you
handling
that
automatically
or
something
so
I
guess
I
would
just
kind
of
want
to
know
what
what
things
you're
you're
worried
about
to
monitor,
specifically
because
there's
some
things
you
know
set
it
up
and
and
let
it
go,
and
it
should
be
good,
like
you
want
to
make
sure
you
have
your
automation,
handling
things
for
you,
yeah.
F: ...we have all this crazy scanning that's looking at the infrastructure itself, and we get all these alerts, and it's like: oh well, these nodes are no longer there, so we don't have to worry about going to triage this, because it's been taken care of. But we do have to report back. So for us, knowing what EC2 instances are actually running which cluster is useful — like, oh, this is in that cluster, we don't have to go deal with that right now.
A: And I went ahead and tossed that URL in there. I was also thinking: maybe for regulatory reasons, that kind of tracking might be important. It's not a field I'm familiar with, but I'm just thinking of reasons why you would want to keep track of this stuff.
C
Trying
to
think
of
I
guess
reasons,
you
would
confuse
your
instances,
I
guess:
I,
don't
like
it's!
If
you,
if
you're
you
know
going
through
things
and
paying
attention
to
you,
know,
regions,
and
you
know
privileges
and
role
escalation.
Things
like
that,
it
should
be
I
would
think
you
know
pretty
clear
what
belongs
to
what
I
guess:
mm-hmm
I
guess
it
just
depends
how
things
are
set
up.
Yep.
A
And
then
sue
J
Pillai,
sorry,
if
I
mispronounced,
that
linked
up
this
infra,
a
p--
which
is
like
a
little
cool,
desktop
app
that
lets
you
that's
showing
you
a
bunch
of
stuff.
We
looked
at
a
similar
tool
like
this
earlier,
so
awesome
to
see
all
these
little
cool
little
dashboards
and
things
like
that
proliferating
everywhere.
Okay,
hopefully
that
answers
your
question
Jim
feel
free
to
ask
another
question:
there,
Saloni
I,
hope
I
pronounced
that
right,
ass,
hello.
G: An exception tracking tool is typically used by application developers to catch their exceptions. Well, there's actually a Kubernetes event forwarder for Sentry, so you can forward all your event logs from Kubernetes to Sentry, and then, using filters, have those assigned to groups and things like that. So that's just another way of doing it that we found effective.

A: Oh sure, that's interesting!
A: Zach Curry has some recommendations here on Kafka: the same rules that apply to any stateful data apply to Kafka — it is a database, after all. Isolate the deployment within its own node pool, with at least four replicas, and ensure the backend disk is relatively high I/O if possible. Ensure that your topics have enough replicas to match the number of brokers. That seems like a reasonable set of recommendations to me. Then he mentions: be very sure your topics are initialized correctly, with the correct number of replicas — four brokers does little for you if you have one replica of your topic, just saying.
A: Thanks for that tip, Zach. Aaron asks an interesting question: what resources are people using to help developers and junior SREs get up the steep learning curve of Kubernetes, coming from a mix of backgrounds using ECS, Compose, Heroku, Terraform, Puppet, and bare metal? We have some passionate Kubernetes advocates and some who don't know where to begin.
A
This
really
hits
close
to
me
because
I
hang
out
with
a
ton
of
kubernetes
experts
and
there
are
always
new
people
coming
in,
and
it
seems
some
days
that
the
learning
curve
is
very
much
like
vim
except
worse
I'm,
so
I'm
really
interested
in
the
tips
that
you
all
have
like.
You
know,
let's
say:
I'm
a
junior
developer,
I'm
just
getting
started.
I
know
how
to
make
a
container,
where
do
I
start
kind
of
think
so,
I'm
interested
in
how
you
all
tackle
this
I.
D: I mean, the answer for me is: you buy the Kubernetes Best Practices book that I co-wrote — that's your best way. No, but it's a typical question people have when trying to get their teams up to speed with Kubernetes, because your teams are going to have different experiences. Kubernetes is very operationally heavy; some may have more of a developer background. But I think the only way you can get people up to speed is getting their hands on it. There's a lot of good material.
D: I think Jorge might have mentioned it earlier — Kubernetes the Hard Way. Even if you're using a managed Kubernetes or some other commercialized Kubernetes, I think it's a great thing to go through, because you really start to understand how Kubernetes operates and all the pieces that go into it. Other than that, for your developers you probably want a different focus compared to, say, your SREs. Developers shouldn't have to have...
D
You
know
that
level
for
her
knowledge
of
kubernetes,
or
at
least
they
shouldn't
so
a
lot
of
times,
building
there's
a
lot,
there's
so
much
material
out
there
around
kubernetes
there's.
You
know
a
hundred
ways
to
go
about
it,
but
really
getting
them
to
understand
the
process
of
what
it
is
to
deliver.
Their
application
on
top
of
kubernetes
is
where
I
would
focus
from
a
developer
side.
So.
C
I
put
a
couple
links
in
the
chat
there,
so
I
can
really
address
the
guess
more.
The
junior
sree,
because
I
think
about
this-
a
lot
because
I
do
try
to
you,
know
get
other
people
involved.
I
try
to
have
my
friends
even
that
are
in
other
areas
of
tech.
Like
hey,
come
over
and
learn
kubernetes,
so
I
think
I
do
I!
Think
the
best
way
like
I
think
Dave
said
is:
is
you
gotta
get
your
hands
on
it?
C
That
doesn't
mean
you
have
to
have
some
full
size
production
cluster
to
go
and
work
on
you
can
use.
You
can
do
clear
eyes
the
hard
way
and
it
is
hard
and
painful
and
that's
fine
and
that's
a
lot
of
people
learn
and
it
step
by
step
and
you're
really
getting
in
there.
I
started
that
and
was
I
did
I
didn't
get
their
ID
so
I'm
going
through
and
doing
things
like
mini
cube
or
kind
like
how
to
set
up
your
own
cluster
on
your
desktop.
Like
there's
a
couple,
you
know:
okay,
hey!
C
If
you
get
your
environment
ready
and
get
that
stuff
set
up,
but
then
you
get
your
own
cluster
and
doesn't
matter
if
you
break
it,
you
know
it's,
you,
don't
you're,
not
costing
any
money
in
AWS
resources,
even
because
it's
all
local,
so
you
get
to
play
with
it.
You
know
you
can
deploy
Prometheus
on
top.
You
could
go
and
put
like
a
wordpress
site
on
something
all
local
to
your
own
machine
and
learn
about
it.
And
then
you
know
delete
that
pod.
C
That
has
the
WordPress
site
and
you
know
whatever
else
it
might
be.
You
there's
also
I
put
a
link
for
cata
Kota
on.
There
has
a
couple
kubernetes
modules
you
can
go
through
so
then
you
don't
have
to
set
up
your
own
environment,
I
use,
I'm,
trying
to
think
I.
Don't
know
this.
Linux
Academy
have
criminai
these
things,
I
think
it
does,
and
I
think
it
does
have
kubernetes
stuff.
So
there's
definitely
some
online
like
elearning
things
as
well.
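For the local-cluster route mentioned above, a kind cluster is one small config file and one command (assuming kind and Docker are installed; the cluster name and node layout below are illustrative):

```yaml
# kind-config.yaml
# Create:  kind create cluster --name playground --config kind-config.yaml
# Destroy: kind delete cluster --name playground
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker        # a couple of workers makes scheduling
  - role: worker        # and node-failure experiments more realistic
```

Breaking this cluster costs nothing, which is exactly the point the panel makes — delete pods, drain nodes, and rebuild it in a minute.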
C
But
honestly,
my
favorite
thing
to
tell
people
just
go:
put
up
a
kind
cluster
on
your
desktop
and
then
trying
to
play
something
like
there's
lots
of
different
projects
there
and
then,
if
you
go
through
and
look
at
like
the
CN
CF
the
different
projects,
you
can
kind
of
read
like
well.
What
is
this?
What
is
that?
Because
there's
so
many
incubating
and
graduated
projects
and
stuff
to
learn
about
there
and
honestly
the
docs
are
great,
like
you
could
go
and
and
with
Valero,
for
example,
is
one
of
my
favorite
projects.
C
It's
a
backup/restore
like
dated
disaster
recovery
tool,
so
you
can
have
your
own
little
kind
cluster
set
up
Valero
and
then
you
could
go
through
an
exercise
where
you
just
delete
something
and
try
and
restore
it,
so
that
I
think
just
getting
your
hands
on
in
some
way
is
really
good.
So
that's
what
I
would
recommend
for
the
Sarris
at
least
and
developers
to
write
just
to
get
a
general
idea
of
what's
happening
in
a
kubernetes
cluster
yeah
yeah.
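The delete-and-restore exercise described above can be driven declaratively. Once Velero is installed and pointed at some object storage, a namespace backup is a small manifest — the app namespace name here is hypothetical:

```yaml
# A one-off backup of one namespace. After applying this, you can delete
# the namespace and bring it back with:
#   velero restore create --from-backup wordpress-backup
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: wordpress-backup
  namespace: velero            # Velero's own install namespace
spec:
  includedNamespaces:
    - wordpress                # hypothetical app namespace to protect
  ttl: 24h0m0s                 # keep the backup for a day
```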
E: Putting that all together: because of the current situation, sites like Udemy these days are all offering free courses, so look at those sites and just get started learning. Katacoda is the primary one I'd point people to, because they have interactive sessions as well — if you don't have a cluster or anywhere to work on, they have an interactive guide, an interactive setup.
H: One good thing that I think about while looking at that question — and I've seen people ask this question — is that coming from an ops background, you might be thinking about SRE concerns like eliminating toil; it's kind of a competitive environment. And what I have seen help a lot of folks...
H
Is
you
know,
first
in
and
of
clusters
being
kind,
but
at
the
same
time,
once
it's
been
a
business
clusters
going
on
top
of
the
kubernetes
github
repository
in
looking
at
how
most
of
the
configurations
are
being
codified
inside
of
inside
of
different
repo
and
then
comparing
those
behaviors
together
and
then,
once
you
can
do
that,
you
can
start
to
look
into
projects
like
cube
or
crater
or
cuter,
and
seeing
how
you
can
interact
with
those
API,
so
I
think.
That's
that's
also
something
that
could
be
considered
doing
in
process.
D
The
other
thing
I
was
gonna
suggest
too.
If
they're
you're
planning
on
using
like
a
specific
cloud
provider,
the
cloud
providers
all
have
different
workshops
like
and
after
we
have
a
KS
workshop
I
Oh
AWS
has
eks
workshop
that
I.
Oh,
those
are
really
good.
If
you
plan
on
deploying
kubernetes
to
a
specific
cloud
provider,
because
all
these
different
cloud
providers
they
have
their
different
intricacies
around
like
running
kubernetes
and
those
cloud
providers.
So
a
lot
of
times.
Those
are
good.
If
they're
going
to
be
on
a
specific
cloud
provider.
A: The first thing I did was, like, how do I install this thing and just get it started — because that's how I learn. And as a result, I found myself toiling away at dumb little configuration issues and not grasping the big picture of what Kubernetes does for you. So usually I just tell people: get on a cloud provider, or something in a free tier, deploy something like a blog, you know, and then kill a pod and watch it fail over.
A
You
know
and
do
all
that
kind
of
cool,
kubernetes
things,
and
so
that
you
realize
what
the
concepts
are
and
then
I
should
have
gone
back
and
like
apt-get
install.
You
know,
like
all
this
other
kind
of
stuff,
because
I
found
myself
struggling
just
getting
it
to
work
that
I
failed
to
grasp
the
basic
concepts
until
later
on
and
that
ended
up
hindering
my
ability
to
learn.
So
that's
just
a
tip
from
me.
There
anything
else
on
learning
and
thanks
a
lot
Sammy
for
that
list
of
URLs
we'll
definitely
get
those
in
the
show
notes.
A
So,
let's
see
next
question
comes
from
Christian
Roy
welcome,
says:
I
am
currently
setting
up
a
batch
group
to
set
up
our
dev
staging
gke
cluster,
so
we
can
shut
them
down
at
night
and
restart
them
on
mornings
at
will,
but
I'm
wondering
if
there's
any
solution
that
exists,
that
would
allow
me
to
bundle
up
all
the
resources
needed,
the
cluster,
the
DNS,
the
various
public,
helm,
charts
red
us,
etc.
What
custom
config
and
our
own
applications
happens
to
also
be
custom,
help
chart
rather
than
a
custom,
bash
script.
D
I'd
say:
there's
a
lot
of
tooling
out
there
on.
There
would
be
one
that
comes
to
mind
and
from
like
configuration
management,
you
can
also
look
at
some
of
the
get-ups
tooling
around
flux
and
argo
CD.
Those
are
also
good
ones,
but
every
cloud
provider
typically
has
their
own
type
of
way
to
deploy
as
infrastructures
code
and
do
that
Tara
forms
agnostic
to
that
I
find
lease
on
Azure.
D
A lot of our users are using Terraform just because it's agnostic, and I can bundle in even things outside of Kubernetes — if I have DNS outside of Kubernetes, I can manage things like that. They also have a Kubernetes provider and a Helm provider, things like that. I don't know if I would recommend using the Helm provider, but there are different ways you can bootstrap and contain all of your infrastructure — a full deployment of your applications and everything — within just a Git repo.
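For the shut-down-at-night half of Christian's question, one common approach on GKE — not something the panel spelled out, just a sketch — is to scale the node pools to zero on a schedule rather than deleting the cluster. The cluster, pool, and zone names below are placeholders:

```shell
# Evening: scale the dev cluster's node pool to zero.
# Workloads stop, but the control plane and all objects stay.
gcloud container clusters resize dev-cluster \
  --node-pool default-pool --zone us-central1-a \
  --num-nodes 0 --quiet

# Morning: scale it back up and the scheduler redeploys everything.
gcloud container clusters resize dev-cluster \
  --node-pool default-pool --zone us-central1-a \
  --num-nodes 3 --quiet
```

These two commands can be driven by Cloud Scheduler or a plain cron job, which keeps the bash script tiny even if the rest of the stack moves to Terraform.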
G
Yeah, obviously I'm biased here — we provide a lot of Terraform modules for EKS and use it. My recommendation would be: if you're just getting started, though, you want the immediate gratification of outcomes, and if Terraform isn't already the language you're using everywhere, maybe it's not the easiest path. Then tools like the others have been suggesting — I'm not sure if eksctl was brought up — make it pretty easy to get up and running. And by far, in my experience, the easiest cluster to bring up is a DigitalOcean Kubernetes cluster.
A
Any other recommendations here? All right, Christian, hope that points you in the right direction. Waleed, welcome, asks — okay, I might as well just ask it; there aren't too many questions, actually. Wow, we only have 15 minutes; we're burning through these. Earlier today I was troubleshooting an issue and noticed that in the cluster there are deployments using PVCs with a storage class `local-storage`, and others with `local-storage-local`. However, there is no SC object when I do `kubectl get sc` that represents that storage class. How did that happen? I also tried to check the node for related annotations and labels; there was one label, `local-storage=true`, but nothing corresponding in the deployment. Any ideas on this one? Also, if I've skipped your question, please feel free to re-ask it there in the chat. Any advice for Waleed on this one?
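A few commands that can help trace where a PVC's storage class came from — a diagnostic sketch, not the panel's answer; the PVC name is a placeholder:

```shell
# List the StorageClass objects that actually exist in the cluster.
kubectl get storageclass

# See which storage class a PVC requested, and any annotations on it
# (older clusters used a beta annotation instead of spec.storageClassName).
kubectl get pvc my-pvc -o yaml | grep -iA1 'storageclass'

# Check whether PVs were created with a class name that never had a
# matching StorageClass object -- that is allowed for statically
# provisioned volumes (the PV and PVC only need to match each other),
# and it is a common cause of exactly this kind of confusion.
kubectl get pv -o custom-columns=NAME:.metadata.name,CLASS:.spec.storageClassName
```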
A
Yeah, so you know what, let me go ahead and link this up, because this happens a lot: we can't answer something live, and then as soon as we move on, someone in chat is able to answer the question. So if everyone could check that one out to help Waleed out, that would help, because we've got more generic questions coming up. DevOps Day Mir asked: we are running a machine-learning model in Kubernetes pods. If my pods reach a hundred percent of the pod's resources, they are automatically restarted, but I need to add some delay.
A
So Chris — hey Chris, welcome — says: check out `terminationGracePeriodSeconds` and container lifecycle hooks; that might be able to give you what you want, but I'm not sure. That is an interesting one — I think the grace period would be a good approach. Thanks, Chris, for that one; I appreciate you joining the show.
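For reference, a sketch of where those two knobs live in a pod spec — the names, image, and durations are illustrative, not from the question:

```shell
# Apply a pod that gets 60s to shut down cleanly, plus a preStop hook
# that sleeps briefly before SIGTERM is delivered -- one way to add delay.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: slow-shutdown-demo
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: app
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 15"]
EOF
```

The preStop hook runs before the container receives SIGTERM, and the grace period bounds the total time from deletion to forced kill.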
A
Let's see, we had a really good question from Bri again. It says: do you have any — oh, this one hits really close to home — do you have any suggestions on how to organize the YAML files for Kubernetes resources? Right now our cluster was set up with a bunch of YAML files that were applied manually, which is very difficult to track. Would you use Helm charts heavily, or is something like a GitOps workflow the answer — are you recommending Kustomize (with a K) with Flux? Monica, you had some comments here; if you could summarize those, yeah.
C
I haven't worked much with Helm charts; I've found people either love them or hate them, right — there's kind of no middle ground there. So if it's something that you already have, you know, keep going with it. I don't think there's any one perfect answer; it's hard, right? YAML sprawl is an issue. I use Kustomize — you know, it works; it's nice to be able to configure the different environment variables and then pass them in, so that's really handy.
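A minimal sketch of the base-plus-overlays layout Monica is describing — the directory and file names are illustrative:

```shell
# Typical Kustomize layout:
#   base/deployment.yaml, base/kustomization.yaml
#   overlays/dev/kustomization.yaml    (patches base with dev settings)
#   overlays/prod/kustomization.yaml
#
# overlays/dev/kustomization.yaml might contain:
#   resources:
#     - ../../base
#   patches:
#     - path: replica-count.yaml

# Render one environment to stdout to inspect it:
kubectl kustomize overlays/dev

# Render and apply it in one step:
kubectl apply -k overlays/dev
```

Each overlay stays a small diff against the shared base, which is what keeps per-environment configuration tractable.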
C
As far as preventing people from deploying, I mean, that's a procedure issue — a process issue that you have to figure out, right? You have to establish your promotion process for the code: you know, whether you're going to incorporate it into CI and deploy automatically based on whatever your process is. That's a much bigger thing, I think, than just the technical specs, right? If you're looking at, you know, rogue developers, or people that are just applying whatever they want...
C
You know, you've got to look at privilege escalation and the access for roles, things like that. So that's a very big question, but if you can get more automation in and people out, I think that's always a good thing. But definitely you have to look at what your controls are: when should you deploy to production? What's the impact? Can you just roll it out? Can you do daily deployments? Everybody wants that instant commit-to-master-and-it-gets-deployed. So it's, it's —
D
Yeah, I was just going to mention, on Kustomize and Helm: if you're just getting started, I really like Kustomize, because it's a lot easier to reason about than all the templating and intricacies that go into Helm. Helm does provide value, don't get me wrong, but a lot of times when you're just starting out with Kubernetes it can add another layer of abstraction that you're going to have to learn, whereas Kustomize you can pick up pretty easily.
G
Yeah, I agree with Dave on that one. I think that Kustomize is more geared towards when you're deploying your own in-house apps, perhaps. But Helm, on the other hand, is, I think, a big reason why Kubernetes has reached the traction it has and ECS hasn't: Kubernetes has a packaging system in Helm for quickly getting up and running these crazy applications like Prometheus. Look at the chart for that — I would never want to be maintaining my own Kustomize setup for Prometheus. But once you start doing it, it's always layers on layers on layers.
G
So now you have all of these Helm charts that you're deploying, so you're moving that much faster. But at some point you reach critical mass with that: how do we manage the configuration for all of it now? And that's where other tools have come up. We use a tool called Helmfile, which makes it very easy to declaratively describe how to deploy those apps to your Kubernetes cluster. Change control — I'd love to talk about change control, but I don't know, we've exceeded our time. We're —
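A sketch of what a minimal Helmfile looks like — the chart and repository are the real public Prometheus community ones, but the release layout and values are illustrative:

```shell
cat > helmfile.yaml <<'EOF'
repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts

releases:
  - name: prometheus
    namespace: monitoring
    chart: prometheus-community/prometheus
    values:
      - server:
          retention: 7d
EOF

# Preview what would change (requires the helm-diff plugin), then
# converge all declared releases in one shot.
helmfile diff
helmfile apply
```

The whole set of charts and their values lives in one declarative file under version control, which is the configuration-management step Erik is pointing at.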
A
We have time for one more, and then we're going to give away the t-shirts. This one comes from Xavier, who asks: what's the purpose of setting a CPU limit if the container might be allowed to exceed it and won't be killed? Or is it just to override the LimitRange admission controller? How does Kubernetes know whether it will allow it or not? Does it depend on the allocatable resources of the node? And there are some comments here that I'm going to go through while you all talk about this one.
A
If it's memory, it'll get OOM-killed; if it's CPU, it will be throttled. So we're not talking about memory, we're talking about CPU. And then someone in chat says: maybe the docs are wrong, then, about the CPU? No, not really: the pod won't be killed because of CPU usage, but CPU usage will be throttled. It might be allowed to exceed the limit momentarily, but on average it will stay within the limit you set.
A
So is this one of those things where the CPU will burst and then come back down, whereas memory is kind of a hard cutoff? But that's just a guess — audience, is that right? And then Wally says: yes, killed for memory only, I assume. And then Xavier said: the behavior should be the same as memory. How does Kubernetes know whether it will allow it or not? Does it depend on the allocatable resources of the node?
G
So the limits are what's going to help it get scheduled onto nodes, and basically the container is given a time slice of CPU time to operate in, and after that, you know, the CPU goes on to the next process. That's happening many times a second, but yeah, that time slice is how it's throttled.
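Concretely, the time slice Erik describes comes from the Linux CFS bandwidth controller: a CPU limit is translated into a quota per scheduling period. A sketch of how to inspect it from inside a limited container — this assumes cgroup v1; paths differ under cgroup v2:

```shell
# A limit of cpu: 500m becomes a 50ms quota per 100ms period:
#   cfs_quota_us / cfs_period_us = 50000 / 100000 = 0.5 CPU

cat /sys/fs/cgroup/cpu/cpu.cfs_period_us   # typically 100000 (100ms)
cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us    # 50000 for a 500m limit

# How often the container has actually been throttled:
grep nr_throttled /sys/fs/cgroup/cpu/cpu.stat
```

When the quota is exhausted within a period, the container's threads are paused until the next period starts — throttled, never killed, which matches the answer above.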
G
No — you can get noisy neighbors on one box, and it's especially a problem if you're not setting CPU limits at all. But if you have the CPU limits set, then on average, like you said, it should be fine, and new pods won't be scheduled onto a node that's full. What I've also seen — I forget exactly what it's called — is you can have something so that pods don't live forever; they get rescheduled, and that's how, on average, they get redistributed across the cluster. Maybe.
D
I was just going to say, to really understand all of that, you have to look at quality of service too, and it's going to depend on what you set for requests and limits. So if you want Guaranteed QoS, then you have to set the requests and limits the same; there are also the Burstable and BestEffort classes. All of those have an impact there. I find that's one thing —
D
It seems like an easy concept, but when you start digging into resource management in Kubernetes, it's something you really need to understand. I see users run into this issue all the time — "oh, that's a different behavior than I thought." I ran into a lot of this when I first started learning Kubernetes myself, so it's an important subject: really understanding all the things that go into how the scheduler is going to schedule workloads onto the nodes based on request allocations and so on, yeah.
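A sketch of the Guaranteed case Dave mentions — requests and limits set equal for every container — and how to check which QoS class a pod actually landed in. The pod name, image, and resource values are illustrative:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 256Mi
EOF

# With requests == limits for every container, Kubernetes assigns
# the Guaranteed class; this prints it:
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'
```

Dropping the limits would make the pod Burstable, and omitting resources entirely makes it BestEffort — the first to be evicted under memory pressure.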
A
The winners today are Bri — BRE, sorry if I'm mispronouncing that — and Javier. You've both won a Kubernetes t-shirt, so I will PM you after this and give you a little code; you go to store.cncf.io and get your cool Kubernetes t-shirt, which we always forget to wear so we can show you what it looks like. Big shout-out to the CNCF for sponsoring the t-shirt giveaway. And with that, any final thoughts, panel? Eric, this is your first time — great job. Oh, thank —
A
So we're thinking we might do something like ingress next month — a dedicated session just on ingress — and then basically, you know, keep having a generic open session like this, but also have a topic session every month, as we take a tour through the Kubernetes components. Kind of like, all right, if you're really stuck on ingress, this is the one session you need to catch — and that gives us a sort of video documentation for the rest of the year.
A
So if you're interested in helping with that, definitely hang out in the office-hours channel; feel free to hang out there. You know, it's a smaller subset of Kubernetes users, so consider it your safe space — a safe place to ask questions. And with that, any last ones? Thanks, everybody, for joining us. We hope you had a good time and we hope you come back. It's always the third Wednesday of every month, so keep an eye out. Thanks, panel — panel, actually, stick around for a second — and everybody else.