From YouTube: Kubernetes Office Hours 20210217 (EU Edition)
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. Office Hours is a regularly scheduled meeting where people can bring topics to discuss with the greater community: great for answering questions, getting feedback on how you're using Kubernetes, or just passively learning by following along.
For more info: https://k8s.dev/events/office-hours
A: Welcome, everyone, to today's Kubernetes Office Hours. I'm Dan Pop. I have my esteemed panel here, whom we're going to introduce in a second, and we're going to answer your user questions live on the air with our esteemed panel of experts. You can find us in #office-hours on Slack, that's pound office-hours on Slack, and check the topic for the URL with the information; there's a pinned topic there. Before we begin, let's start by introducing ourselves around the horn.
E: Looking forward to today. Hey everybody, my name is Mario Lauria. I am a DevOps engineer at Carta, specializing in everything from Kubernetes to developer experience and everything in between. So, nice to be here today.
F: My name is Chris, I'm a customer engineer with Google Cloud on Canada's public sector team. I'm based out of Ottawa, Ontario. My background has been mostly in on-prem Kubernetes, but now I'm focusing more on the cloud space.

G: Yeah, Chris, can you check your microphone? Your mic is cutting out.

F: Before we're even off to a good start, yeah. So I'm Chris, a customer engineer for Google Cloud Canada public sector; my background is primarily in on-prem government Kubernetes.
A: Would you want to introduce yourself before we go here?
B: Yeah, and I'm Jorge Castro. I'm the former host of the show, kind of helping Dan and David figure out the ropes, and you can find me in the Kubeflow community.
A: And I'm Dan Pop. I'm Chris Carty's executive assistant. So let's kick this off. Let's go! Let's make it rain. All right, before we start, here are the ground rules. This is a Kubernetes event, so the code of conduct is in effect; please be excellent to each other. This is a judgment-free zone, and everyone had to start from somewhere. So please help out your buddy by having a supportive environment, not only in this channel but also in life.
A: While we do our best to answer your questions, the panel doesn't have access to your cluster, so live debugging is probably off topic, but we'll do our best to help you get started and get moving. Normally we provide shirts; however, the CNCF store is replenishing its inventory, so we'll give you a shout-out and our undying devotion instead. I mean, what more could you want, Mario Lauria, than us just being devoted to you?
A: That's pretty awesome, right? Panelists, you're encouraged to expand on answers with your experiences and pro tips. Audience, you can help by pasting in URLs to official docs if you hear something you know and can answer, or anything you think might be relevant to the topic at hand. Post your questions on discuss.kubernetes.io.
A: You can always help us out by tweeting, spreading the word, and paying it forward. And by the way, this panel is made entirely of volunteers. If you want to come up here and hang out with us monthly and answer some questions, let us know; we'd love to have new people rotating in and out. I mean, we've had steering committee co-chairs on this panel, for god's sakes. That's how people start. You've gotta start somewhere.
A: You know, getting involved here. So with that said, panel, are we ready for our first question? You all pumped? Let's go. All right, let's bring it. First up, Andrew: "We're writing a controller with controller-runtime and trying to use the generation/observedGeneration pattern to avoid reconciling if there's no relevant change; not using the predicate provided by controller-runtime for that purpose yet, though. My question is: how could that work with the possibility of a stale cache? When we write the observed generation to the status of our CR, it triggers another reconcile immediately."
B: Yeah, I was going to say: just hang out in the channel. Usually, if we can't answer something, we will find someone to come hop in.
A: All righty, we're ready for the next question, panel. All right, Simone. Yeah, I'm adding the Italian touch here: "I'd like to configure my small cluster as highly available, with no single master as a single point of failure, and make the best use of all the cluster resources. My current plan is to make three nodes run as masters and be able to schedule pods on the masters. From my research, the issues in doing this are security issues and pods competing, and I'm not concerned about security."
E: I was just gonna say, my initial reaction is: annotations, labels, taints and tolerations, those sorts of things. In terms of, you know, control-plane nodes are very much geared for control-plane workloads and worker nodes are very much geared for worker workloads, if you will. So you're gonna have to get through some of that, but I think it is possible. I don't know if I would recommend it, although it sounds like this is more of a testing environment, et cetera. So, go ahead, Chris.
F: Yeah, I was just thinking, I can't remember if it's on by default now or not, but you can set reserved resources for the control-plane components. So if you're worried about workloads trouncing your control-plane elements, having that set will help protect them, and it'll just be your workloads that get fudged. But again, if it's not a production environment, or if you're not running super sensitive data in there, it shouldn't be a problem.
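(For reference, the resource reservation Chris describes lives in the kubelet configuration. A minimal sketch; the field names are from the standard KubeletConfiguration API, and the sizes are placeholders to tune per node:)

```yaml
# KubeletConfiguration fragment reserving CPU/memory for OS daemons and
# Kubernetes system components, so workloads cannot starve the control plane.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:        # for OS-level daemons (sshd, journald, ...)
  cpu: 500m
  memory: 512Mi
kubeReserved:          # for kubelet, container runtime, etc.
  cpu: 500m
  memory: 1Gi
```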
D: Yeah, I think the key statement there is that it's a non-production workload, because you expose yourself to potential security issues: you know, pods that are running in potentially privileged mode, et cetera. They could escape out and get access to etcd, and that's not something you really want in that particular context. If you have three nodes that you're running as masters, it seems as though you're in a state where you don't necessarily need to have any applications running on those nodes.
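(For reference, scheduling ordinary pods onto masters usually comes down to the control-plane taint. A hedged sketch, assuming a kubeadm-style cluster; the taint key was node-role.kubernetes.io/master around this time, while newer releases use node-role.kubernetes.io/control-plane:)

```yaml
# Pod spec fragment: tolerate the control-plane taint so the scheduler
# may place this pod on master nodes.
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
```

Alternatively the taint can be removed from the nodes entirely, at the cost of every pod in the cluster becoming schedulable there.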
B: The next question. Wait, can I add a follow-up while we're in here? Can we add in just a question for the panel: what about etcd? When do you all decide whether to run it along with all the other control-plane elements, or when to split it off into its own cluster? Is that really high-end, or is there a rule of thumb?
F: ...has the resources to handle that.
E: I think, long term, it depends on your scale; I think you have to grow into it. I know at StockX we started with etcd in-cluster, managed for us, and then we eventually had to grow out of it. You can tell when it's just time. I don't think there's a hundred-node sort of threshold where you can just say "when you hit this number". You have to think about it long term as well: if you want to plan for scale, it's something you should do up front.
B: I'm gonna take it. Jesper asks: "Is it possible to limit a service account to only have rights to create, edit and delete custom resource definitions that are related to a certain namespace? Since CRDs are not namespaced, I only see the option to give my service account rights to create, edit and delete all CRDs on the cluster. For the system we are creating, we do not see this as secure and would like to know if there's a way to limit our service account." Great question.
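(A hedged sketch of how far RBAC can get here: CRD objects themselves are cluster-scoped, so the closest you can come is a ClusterRole pinned to specific CRD names, while access to the custom resources themselves can be granted per namespace. The group and resource names below are placeholders:)

```yaml
# ClusterRole limited to specific CRDs by name. CRDs are cluster-scoped,
# so this cannot be narrowed to a namespace, only to named objects.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manage-widget-crd
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  resourceNames: ["widgets.example.com"]
  verbs: ["get", "update", "patch", "delete"]
---
# Namespaced Role for the custom resources (instances) themselves.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: manage-widgets
  namespace: team-a
rules:
- apiGroups: ["example.com"]
  resources: ["widgets"]
  verbs: ["create", "get", "list", "update", "patch", "delete"]
```

One caveat: resourceNames cannot constrain create (the object's name does not exist yet at admission), so create rights on CRDs themselves remain effectively all-or-nothing.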
A: Awesome. All right, next question, from Naven: "I'm planning to install Falco on our AWS nodes. What's the best way to bring them up, using the autoscaling, or adding new nodes to the pool?" So, it can be installed as a DaemonSet, right? Basically, from that perspective, it would be installed on every node via the DaemonSet, so that could be a method. Or, I've seen some customers...
A: Again, I see the question: whether it's plain vanilla, just EC2 hosts running it, or EKS. I've seen customers also just embed it in some type of AMI, so then that would be what they'd have there, because you can also install it as a systemd daemon process. So that could be a method for doing it. From an EKS perspective, again, with the Helm chart, any time you spin something up it will just be available there. So that's my unabashed answer, but I would ask you to also join the Falco channel if you have more questions.
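(For reference, the DaemonSet shape being described, stripped down to a sketch. The Falco Helm chart installs roughly this; the mounts and image tag here are illustrative, not the chart's exact output:)

```yaml
# Minimal DaemonSet shape for a node agent like Falco: one pod per node,
# and new autoscaled nodes receive a copy automatically.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: falco
  namespace: falco
spec:
  selector:
    matchLabels:
      app: falco
  template:
    metadata:
      labels:
        app: falco
    spec:
      tolerations:
      - operator: Exists            # run on every node, tainted or not
      containers:
      - name: falco
        image: falcosecurity/falco:latest   # pin a real tag in practice
        securityContext:
          privileged: true          # needed for the kernel-module driver
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
          readOnly: true
      volumes:
      - name: dev
        hostPath:
          path: /dev
      - name: proc
        hostPath:
          path: /proc
```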
C: I guess the tricky bit there is that Falco can be run using a kernel module, and you have to bake the kernel module into the AMI or do something in a user-data script on EKS; that will get a little bit difficult. I think Falco has a way to run without that driver. I'm sure Pop can expand on that.
E: I just wanted to add really quick, because I have done autoscaling on EKS with multiple DaemonSets: you don't have to do anything extra. Deploy the DaemonSet, and new nodes, when they come up, will get the workload. It's that easy. NodeLocal DNS, Datadog, NGINX: we had multiple DaemonSets, and there was some prioritization that we did, a little bit here and there, in terms of what happens when the node is coming into compute constraints and things like that.
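(The prioritization mentioned here is usually expressed with a PriorityClass; a sketch, with an illustrative name and value. Kubernetes also ships built-in classes such as system-node-critical for exactly this purpose:)

```yaml
# PriorityClass so per-node agents are scheduled (and evicted) ahead of
# ordinary workloads when a node is under resource pressure.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: node-agents
value: 1000000
globalDefault: false
description: "For per-node agents (logging, DNS cache, security)."
```

A DaemonSet then opts in by setting priorityClassName: node-agents in its pod template spec.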
A: A ton. So I'm in there, and one of the things I'll say is, on best practices: there's also a group that Tabitha Sable and Ian Coldwater run, in terms of just overall Kubernetes security. Things that are going on with, like, the deprecation of PSPs, all those types of things. It's a great group to join, to understand exactly...
A: ...what's going on there. They're coming up with what the methodologies are for, for instance, processes for rootless and all these other fun things that we have going down the pike. So there's a ton going on. I would absolutely recommend you join that Kubernetes security group, and I'll find a link to the channel as well.
C: All right, anyone got anything else they want to add to that?
F: Okay, I was trying to find the links to the two enhancement proposals for PSPs that have been put forth.
A: Yeah, that's another thing we talked about in the beginning, and if you join that channel, it's still being discussed: how that's going to be messaged out, what the actual methodology is going to be, and all that. I think that's a very, very good discussion, so making sure that anybody that's involved has a kind of say and understands what's going on is why that group exists.
A: And that's the other part of this, too: as a CNCF project, as Kubernetes is, there are certain things, like vulnerability assessments, that are done, and all of that. If you're joining the contribution side, I know that there was a release discussion on certain things where certain dependencies might be vulnerable, those types of things, and so they were working out how those things would be addressed.
A: It's a perfect thing for somebody that is looking to expand on their security practice within Kubernetes; if they're a pen tester or something like this, it might be a useful place for them to join, to help make sure that Kubernetes stays secure.
F: ...studying for it. I think Borka, who hangs out on the channel, just got his. Or did I just put you on the spot, buddy?
A: All righty, what do we got next here?
C: I got a question over Twitter DMs. This is kind of related to the last one, but I'm not sure if we answered it in passing or not, so I'll just ask it explicitly and we'll see if there's anything we want to drop in there. Balor reached out and just asked: "With PSPs being deprecated, what are my options?"
C: Yeah, I think those are the two most popular options: Open Policy Agent and Kyverno. Now, I'll plug myself here while I'm at it: I actually have a live stream with two members of the Kyverno team tomorrow night, 5 p.m. GMT. That's at...
A: Rawkode.live. All right, should I get to the Ankit question? There we go, and Ankit has a question.
A: I kind of lost the channel here, I'm sorry, so, so... Basically, you mentioned some folks passed it: Pavel passed it, Barko passed it, Naven passed it, Rawkode's too scared to take it. We have the book in the channel for the CKS; the scenarios environment's a bit buggy, as Barko says. And then it looks like maybe he wrote some notes, and they're talking about OPA and Gatekeeper. Shout out to Waleed, absolutely a great resource.
A: He's great, I mean, just in general. And not even just for the CKS; I mean the CKA and CKAD too.
A: Yeah, and then Borko just put up the comparisons between OPA Gatekeeper versus Kyverno. This is, again, really cool tech, and those are things I think are also getting discussed in that Kubernetes security group that I talked about, as options for the deprecation of PSP. Alrighty, I think we're through the backlog.
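(As a flavor of what those PSP replacements look like, here is a minimal Kyverno-style validation policy. This follows the shape of Kyverno's published sample policies; the policy and rule names are illustrative:)

```yaml
# Kyverno ClusterPolicy rejecting privileged containers, the kind of
# rule a PodSecurityPolicy used to express.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
spec:
  validationFailureAction: enforce   # "audit" only reports violations
  rules:
  - name: no-privileged-containers
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Privileged containers are not allowed."
      pattern:
        spec:
          containers:
          # =() anchors mean: if the field is present, it must match.
          - =(securityContext):
              =(privileged): "false"
```

Gatekeeper expresses the same constraint as a Rego ConstraintTemplate instead of a YAML pattern.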
Let's get through some more questions here. So Ankit asks: "When I delete PVCs with retention policy Retain, PVs remain in the system, which is expected."
A: "When I delete that PV, I expect the underlying volume to be deleted on the cloud provider, but that doesn't happen. So does that mean that one has to manually go and clean up the volumes in the cloud? The behavior otherwise is to increase unnecessary costs. I wish to create a feature to enable deletion of the underlying volume on deletion of the PV. What are your thoughts on this? Am I missing something here? I'm on EKS."
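(Worth noting for the question itself: the delete-the-backing-volume behavior already exists as the Delete reclaim policy; Retain is precisely the mode that leaves the cloud volume behind. A sketch, with an illustrative StorageClass name:)

```yaml
# StorageClass whose dynamically provisioned PVs delete the backing EBS
# volume when the PV is released. reclaimPolicy defaults to Delete for
# dynamic provisioning; Retain is the mode that leaves the volume behind.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-delete
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
parameters:
  type: gp2
```

An already-existing PV can also be flipped by patching its spec.persistentVolumeReclaimPolicy field.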
A: This is an interesting thing. I just want to throw something out there: I think cloud providers almost want to fail safe, because it's a block storage method, like an EBS block or a Google block or something like this, where maybe they want to retain that.
A: The customer may want to retain that data, so there might be less liability if something gets deleted. I don't know; this is literally conjecture at this point. But if that's something you aim to do, maybe there's a thought process there. If anybody in the channel is from a cloud provider and has thoughts on that...
A: I think that's actually a very legitimate thing, because sometimes, if I delete my GKE cluster, there are remnants there, if I've chosen a specific set of storage or something like this. So that might be a feature versus a limitation; I don't know.
A: Yeah, the retention policies. Even for Google, think of both sides of that as well. It could be, you know, whether this is your personal account, or a corporate account where you basically only have a set amount of roles, able to create and not delete; I've seen that as well. So yeah, it's an awesome, awesome response there.
A: Yes, sir. All right: "I'm completely new to this and investigating if it's worthwhile for my company to move to Kubernetes. We run several smaller websites; some of them get a lot of traffic after a social media post. My concern is abstracting the manual provisioning of VMs, because it's burdensome to document the configuration, and that would be easier with containers. Any general advice on whether it's worth moving to K8s?"
A: Can I add one quick thing? Giant Swarm did an awesome, awesome thing you might want to check out: a talk on how to basically convince your VP on why you should use Kubernetes. I thought it was very good, so shout out to Giant Swarm; I'll see if I can find a link to that.
E: It's so cool. I just want to mention, working in e-commerce, I think using Kubernetes for us was pivotal to how we handled scale. We're talking about a marketing push to millions of people in the scope of five minutes; we're talking about really learning HPA, having cluster-autoscaler as well, and even building our own proactive autoscaler that would consume a marketing feed of when these sorts of pushes were planned, and then scale.
E: ...based on that. I mean, I left a few months ago, so I'm not sure what they've done since then, but I can say a lot of it was very mechanical and tactical. It's getting a lot better: there are tools like KEDA out there, in the CNCF, that are providing a lot more facilities for autoscaling. If you're using virtual machines right now, and you're in AWS, you can do...
E
You
know
auto
scaling
groups
and
get
pretty
far,
but
I
think
in
terms
of
the
life
cycle
and
overall
management
of
things,
kubernetes
is
going
to
be
a
lot
easier.
A
lot
more
lower
level
in
terms
of
you're
running
containers,
you're,
not
worried
about
vms
the
application
itself
spawning
instances
scaling
instances,
readiness,
probes,
right,
dialing.
A
lot
of
those
things
in
can
help
verify
that.
It's
not
just
that.
You
have
an
instance,
but
that
instance
actually
can
do
its
job
properly.
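(The HPA plus cluster-autoscaler combination described above can be sketched as a single manifest; the target names and thresholds here are placeholders. The API was autoscaling/v2beta2 at the time of this stream and is autoscaling/v2 today:)

```yaml
# HorizontalPodAutoscaler scaling a Deployment on CPU utilization.
# When pending pods no longer fit, cluster-autoscaler adds nodes.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 5
  maxReplicas: 75
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
```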
E: A hundred percent: and service a request, and return a 200, et cetera. Also, load balancing plays into this as well; you have a lot more control and options. I'm gonna actually link something called Keptn. I don't know exactly how it's said, but if I was doing it again... A lot of what we did, we used Datadog, but a lot of what we didn't get was the loop-back. So, what happens after the event has occurred and scale has been increased, and we now have 75 instances instead of 45 instances? How are we looking? What are our SLAs, right? And so, what are those SLIs that we're going to keep looking at, keep maintaining and reporting upstream, and how do we view those in a cohesive way? So, great question; I could talk about this all day, so thanks.
A: One other company I'm going to mention, in terms of SLOs, service level objectives, is Nobl9. I saw their demo when they were in stealth, and I thought it was just a fantastic thing, and you've got Alex Hidalgo, really heavy hitters in terms of understanding SRE functions and all of that. So not only Keptn, but another one. It's exactly what you said: it's not so much looking at raw metrics, yeah.
D: I would say, if your application is already in a Docker container, moving to Kubernetes is going to be a little easier for you. It allows you to get better utilization of your VMs by managing memory and resources better, so moving to that environment is a good thing. However, you need in-house skills, or someone to manage it.
D
If
you're
going
to
try
to
manage
the
full
control
plane
or
is
just
a
managed
service,
but
it
sounds
like
this
is
in-house,
so
I
would
think
through
it
and
you
know
make
sure
that
you
know
this
is
what
you
want
to
do
and
then
once
you
get
there,
you
know
you
put
yourself
in
a
position
to
where
you
can
put
pretty
much
run
in
any
kubernetes
environment.
You
know,
as
your
digital
ocean,
et
cetera,
et
cetera,.
D: I'd also say Kubernetes kind of leads to not having a lot of vendor lock-in going forward once you adopt it. There are differences in each cloud provider's API, but you can still run pretty much anywhere.
B: I'd just like to add one thing: take into consideration the amount of state in the app, and sometimes what it's written in is important, right? If you have an older Java app that doesn't behave well in containers and is very stateful, that is a totally different way of approaching it than if you had some PHP web front ends or something like that. So, just something to take into account.
B: I know in the show in the past, especially with older Java versions, there were issues with the OOM killer and containers and stuff, and it got a little hairy. All that stuff's getting better, but just something to remember: memory consumption, what the platform is, all of that good stuff.
D: If you're not aware of it, either: you can do kubectl get roles or rolebindings with -A, just to get a list of everything that's in each namespace, etc.
A: You're welcome. Alrighty, next up, do we have... let me see in the channel. Is there anybody? We're still talking through some of the SLO functions here. All right, and there's...
A: ...a caller. This is the last one queued up, so if anybody has new questions, now's the chance to add them while we get to this next one.
D: Now, if it's multiple replicas, they start up in order: zero is going to start before one, and complete before one does, et cetera, so you know that pattern there. But as far as a single pod goes: if it has an init container, that will get triggered and start first, and then all the other containers within it.
C: Yeah, I think that's a good point. It's also something to be careful with on StatefulSets: if you've got the ordering guarantee set to ordered, then you can't actually start three if one is down, so they have to go zero, one, two and so forth. And then, on init containers, it's worth noting that if any init container fails, the normal containers will never start, and if any init container is failing, no subsequent init containers will start. But once you get to the containers section, it's completely unordered.
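(The two ordering guarantees just described can be seen side by side in one manifest; a sketch with placeholder names and images:)

```yaml
# Ordering in one place: initContainers run sequentially and must all
# succeed before the (unordered) main containers start; a StatefulSet
# with the default OrderedReady policy starts pod 0 before pod 1.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  podManagementPolicy: OrderedReady   # Parallel drops the 0,1,2 ordering
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      initContainers:
      - name: init-config             # runs to completion first
        image: busybox
        command: ["sh", "-c", "echo preparing config"]
      containers:
      - name: db                      # starts only after init succeeds
        image: example/db:1.0         # placeholder image
```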
A: Yeah, absolutely. "How can I use a GPU from multiple pods, i.e., request a fraction of a GPU from a pod?" Alice responded in the channel with a couple of links, for a GPU-share device plugin as well as a GPU-share scheduler extender, and also explained how, on nodes with nvidia-docker2 and NVIDIA drivers, you can install the scheduler, which allows you to configure GPU sharing in your YAMLs.
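(For flavor, with that GPU-share device plugin and scheduler extender installed, the pod spec requests a slice of GPU memory through the plugin's extended resource instead of a whole device; the resource name below comes from that plugin, and the image is a placeholder. Plain nvidia.com/gpu only supports whole-GPU requests:)

```yaml
# Pod requesting a fraction of a GPU (by memory) via the gpushare plugin.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-share-demo
spec:
  containers:
  - name: worker
    image: example/cuda-app:1.0   # placeholder image
    resources:
      limits:
        aliyun.com/gpu-mem: 4     # GiB of GPU memory on a shared device
```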
A: It's very cool. We have another question here: "Is there any tutorial where I can learn Kubernetes from scratch?" Another shameless plug: Chris Short just put together this awesome thing, basically a README for that; we can put that link up as well. It basically puts all of the books out there, like the illustrated children's guide to Kubernetes, Kubernetes the Hard Way, Kubernetes: Up and Running...
A: ...all of those in one central place, and I just love that he did that. For somebody starting out and jumping through this, I always had to provide all those links one by one; now I can just point to one place and say "go". Thoughts, panel?
E: Yeah, and then he has a DevOps README as well that he just did. I also want to highlight the awesome-kubernetes list, which I just linked in the chat as well. That was kind of my go-to in terms of "all right, I'm doing this thing, what's available to me, what's out there?" But it also has great resources for learning and doing other things as well: exams, whatever it might be.
A
It
looks
like
yeah
so
mars.
You
put
your
awesome
resources
there.
Somebody
for
yogi,
put
cube
academy
again,
really
awesome,
really
good
work.
There.
D: I guess I posted it in the channel: Kubernetes the Hard Way. I think that starting there, or starting with that pattern, building the compute and all of the components, gives one very good insight into how Kubernetes is put together, as well as putting you in a good position to learn how to debug, because you've kind of pieced together the infrastructure yourself.
F: When I was studying for my CKA, I went through the tutorial on building a single cluster on the Kubernetes website, and that was super helpful: just getting started with kubeadm, bootstrapping a cluster. And then, I'm going to find a link, they also had a companion doc of Kubernetes tasks, so it walks you through debugging, how to fix things, deploy things. Those are really good. And Kubernetes the Hard Way, if you want to really dig deep and do that Linux-from-scratch approach.
D: Awesome. Also, don't forget the concepts and tutorials on kubernetes.io; those are very good, for sure.
F: Shout out to the Kubernetes docs team; they're amazing.
A: The Hard Way, and kubeadm for spinning up a cluster. I think these days it's even more approachable: you can spin up k3s, minikube, kind, MicroK8s and get things up. But again, what Chauncey suggested, where you understand those bits through the Hard Way: hey, take a node, install kube-proxy and the kube-apiserver, put the certs in place, so you understand how all of these pieces interact.
C: Yeah, I think it's also important to recognize that there are two different types of people that are going to want to learn Kubernetes: people that want to be operating Kubernetes, and people that just want to consume Kubernetes, and we have resources for both. I always reference Katacoda for Kubernetes; I like it.
A
Looks
like
there's
some
suggestions
in
the
channel
as
well.
I
mean
obviously,
like
you
know,
chris,
you
put
some.
You
know
things
on
like
ed
jacks.
It
looks
like
pavel
kubernetes
in
action
by
marco.
It's
a
manning
book.
You
also
put
another
thing
in
terms
of
just
the
kubernetes
docs.
Chris
rocco
talked
about
the
cat
coda
and
then
you
know
I
put
the
link
for
for
chris's
readme.
B: Yeah, I tell you what, doing it the hard way was important, once. If you are a developer, you don't need to know a lot of the stuff in there, so don't let that scare you; I just wanted to put that out there. The tooling since the Hard Way came out has just improved so much.
B
You
might
find
yourself
like
if
you're
old,
you
used
to
like
compile
your
own
linux
kernels
and
then
at
some
point
you
just
use
your
distro
kernel
and
you
haven't
had
to
care
about
kernels
in
a
long
time.
It's
similar
to
that.
Usually,
unless
you
really
really
need
to
go
in
there,
then
having
those
skills-
and,
let's
say
you're,
you
know
your
cluster
administrator.
B: ...absolutely, you should know those things. But for most of you, if you're just consuming: do the Hard Way once, and you'll appreciate how much the tooling has gotten better.
E: I was just gonna say, the Hard Way is fantastic; the fundamentals are really important. If you're coming from a sysadmin background, an operations background, a DevOps background, do that. I actually found this Kubernetes Patterns book, which just came out relatively recently, is actually good for developers. At StockX, a lot of developers were asking: what are the flows, what are the scenarios?
E: What are some of the key patterns that I need to know when I'm debugging my application, or when I'm deploying something, or for the long term, or scaling, or whatever it is? And Patterns gives a really good way of thinking about approaching some of these problems, and gets into how to handle them. It could be with Deployments, right, or how ReplicaSets are actually working behind the scenes. It doesn't get too dense, but it also gives a good high-level view of "here's...".
E: So these are both relatively recent books that I would definitely suggest. I have not actually gone through Best Practices yet, but if you've been using Kubernetes long enough, you probably don't need it. And of course, Kubernetes in Action; that's the original version, and there's a new one he's working on right now, I think coming out later this year. We can put a link to that in the chat as well; you can actually get an early copy right now. So, books: they're great, yeah.
D: Yeah, and I just found the Kubernetes Patterns book that was referenced in the Q&A, and pasted the PDF into the channel.
A: Definitely. Hey, just because you're new in: do you want to share any thoughts on this? Somebody asked about tutorials where they can learn Kubernetes from scratch. Anything that helped you originally, and things you would suggest now?
H: Yeah, definitely. I think the Hard Way is really good if you need to manage Kubernetes yourself, even if you're just in charge of, let's say, starting a cluster and then keeping it up, because if something goes wrong, you need to know what it's about.
H
If
you're,
on
the
user
side,
there
is
definitely
better
use
of
your
time,
maybe
even
doing
the
ckid
or
learning
some
of
these
basics.
However,
you
want,
I
mean,
if
you're
a
video
person,
maybe
the
udacity
courses
are
good.
Katakota
was
awesome.
I
think,
because,
like
hands-on
having
actual
exercises
and
doing
them,
but
I
would
also
employ
everyone
to
go
really
deep
on
each
of
the
concepts
like
look
at
what
is
a
pot?
Actually,
why
is
it
there?
H
Why
is
it
not
just
a
container
like
understanding
these
concepts
or
why
there
is
a
service
and
what
you
can
do
with
it?
That
gives
you
a
better
state
of
mind
of
using
them
like
knowing
that
the
service
is
an
abstraction
and
they
can
use
it
in
different
ways
like
there
was.
There
was
things
that
helped
me
a
lot
in
the
beginning,
actually
understanding
it,
especially
if
you
come
from
something
like
that
is
not
kubernetes.
That
is
maybe
plain
docker
or
non-docker.
Let's.
A
Say
all
righty,
hey,
hey
panel?
Is
there
anything
else,
that's
new
and
hot,
and
do
you
wanna
you
wanna
talk
about
like
because
we're
still
waiting?
We
have
some
no
questions
at
the
moment,
so
it'd
be
cool
to
maybe
just
talk
about
things
that
you're
excited
about
from
technology
and
kubernetes
perspective.
D: We need to recreate lower environments from that, and importing it via just a plain mysql import took 20-plus minutes. Then, once we had upgraded to Kubernetes 1.17-something, volume snapshots became available, and now we've reduced that down to less than five minutes. So volume snapshots, and potentially volume cloning, are two to look at if you're in a developer environment where they're building out their environments from code pushes, etc.
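(The snapshot-then-restore flow described here looks roughly like this with any CSI driver that supports it. The class names are cluster-specific placeholders; at the time of this stream the API was still snapshot.storage.k8s.io/v1beta1:)

```yaml
# CSI VolumeSnapshot of an existing PVC, then a new PVC restored from
# it: the pattern that cut environment rebuilds from 20+ to ~5 minutes.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-snap
spec:
  volumeSnapshotClassName: csi-snapclass   # cluster-specific
  source:
    persistentVolumeClaimName: mysql-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-restored
spec:
  storageClassName: csi-sc                 # cluster-specific
  dataSource:
    name: mysql-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```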
C: Yeah, my favorite feature of the 1.20 release. In fact, I'll describe the scenario first, right?
C: You're running some random application in your cluster, it's in a CrashLoopBackOff, and you want to work out what's going wrong. You try to exec into the container with a shell, and there's no shell, right? We've all been there. As of 1.20, I think kubectl debug went into beta, giving you the tools to either attach an ephemeral container with tooling, like a shell that you can do stuff with, or create a copy of that pod so that you can get in and do stuff as well.
C: So it just simplifies that whole "I need to get into it, and you're gonna have to modify it, change the image, whatever". Now it's just a lot easier. So that's a new feature people should be checking out.
H: That's so important, especially because you should not have a shell in your container. It's training you, and enabling you, to actually not have the shell in there and not have the excuse of "but I need to debug". That's always the excuse, like "I need sshd in there, because I need to debug" or something.
A
C
C
D
Actually, I feel like that depends on the environment. In the environment I work on, where developers need to debug their code et cetera, we allow them to kubectl exec into their pods. We just disable the service account token so that it can't be mounted, which removes their Kubernetes access as far as interacting with the API, and we just don't run it in privileged mode. But you know, there are times when a developer needs to get in and, you know, tail the logs for the Drupal app or something like that.
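The lockdown described there, allowing exec while removing API access and privilege, corresponds to pod settings along these lines. This is a sketch under the speaker's description, not their actual config; the names and image are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: drupal-app            # illustrative name
spec:
  # No API token is mounted, so users who exec in
  # cannot interact with the Kubernetes API.
  automountServiceAccountToken: false
  containers:
    - name: app
      image: drupal:9         # illustrative image
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
```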
H
You have lightweight containers, you can test very quickly, and you can still, with just a debug command, go in and do all your tails, do all the things that you need, but you don't leave things open. And I mean, if I look at dev environments, most of them are open to the internet. They're not completely locked down, and I'm pretty scared of CVEs in dev environments, and I wouldn't.
A
I wouldn't leave them too far open. Before this feature, how would you have done it in the past? You'd have to recreate the thing and strace or dtrace it after the fact, right? You know what I'm saying, so doing it inline with this method is really cool, so definitely props on that one. We have another question, y'all, unless anybody has any other thoughts on kubectl debug: how to manage DNS, i.e., create and destroy subdomains in Route 53, for public-facing applications running on EKS?
A
And it looks like Yogi is jumping in and just smashing it with some amazing links here. He put Kubernetes external-dns, and he uses a simpler technique: he creates an ingress controller, gives it an LB (load balancer) address, and then creates two routes. Again, that is one I've seen be effective every single time. So Yogi, if we had t-shirts to give, eventually you'd get our love and eventually we'd send you one. It's amazing.
H
Yeah, I think both. Actually, I think we do both most of the time, or at least we give both options to people, and then it depends a lot on how strict your organization is about giving people access to DNS zones, and whether you're okay with having an IAM account in your external-dns that is allowed to create the DNS entries.
H
external-dns is really, really nice and quick, but sometimes people don't want to give that much access, or they might split things up between Route 53 and Cloudflare, and then you want to have a bit more process involved there.
A
Shout out to the graduated project OPA, again shout out to you all, a very, very awesome project that, I'll let you know, a lot of people have adopted quickly because of the ease of use, and exactly for this scenario. It can hit API-level scenarios, Kubernetes-level policies; it's really kind of a Swiss Army knife of policy. So it's great, yeah.
A
And it looks like the discussion is continuing again after you, Chris, gave the OPA suggestions in that channel. But it looks like, yeah, kube2iam to use annotations, and then there's another question, so they're still going through the motions there.
A
But let's ask the panel: in terms of, you know, best practices and any kind of suggestions from an OPA perspective, I'd love to hear something that you found recently with OPA Gatekeeper that you'd like to share with folks.
F
A big one for me is that they actually have a repo of common policy libraries, so you don't have to write everything from scratch. Oops, wrong link, that's too far down.
F
So one of the hardest parts of OPA is writing your policies. These cover a good 80 to 90 percent of the common use cases, and then you can go in there and modify things. Yes, thanks David for the play-by-play. OPA is awesome.
F
Conftest is also really good for testing those policies, so you can drop it into a CI pipeline and start applying policies to your YAML before it even hits your cluster, whether you're using GitHub Actions or GitLab CI or whatever CI tool. Those are really awesome.
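A minimal version of that CI step might look like this; the directory layout is hypothetical, and it assumes the conftest CLI is available in the pipeline image.

```shell
# Evaluate the Rego policies in ./policy against rendered manifests
# before anything is applied to a cluster; a failing rule fails the job.
conftest test deploy/*.yaml --policy policy/
```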
A
They give examples, you know, like hello world. They have examples of different levels of policy, and I'll tell you quite frankly, it's something we're going to emulate with Falco without a doubt, because I think it's a fantastic way for somebody to kind of kick the tires and be able to figure out, okay, will this policy work? I don't need to understand the machinations of it, because it's a modular thing, right?
A
It's like, okay, you know, what are the things that I'm writing this policy for? Shout out to Torin and team.
H
And they're great, and a lot of people use them, and that's super useful. But on the mutation side I see very little beyond kind of "add this label if it comes from a team." Has anyone seen interesting mutation use cases or examples out there? Because it's really hard to implement mutation, with JSON patches and all these things.
F
Yeah, I'm finding a video for you right now. I'm blanking on his name, but he did a talk at the last KubeCon about how they're using mutation. They're using the older version of Gatekeeper, because the new one doesn't support mutation yet, and it runs through some of the pitfalls of doing mutation.
F
It is tricky. I remember doing that with the first version of Gatekeeper. They're using a ConfigMap for the policies, right, and it would just take time to sync, and then you'd get these weird false positives, and it required a lot of tinkering. Even if the unit tests for your Rego worked, it wouldn't necessarily... yeah, it's weird.
A
Oh, Brian Rocco put the link to the stream he's doing this week with Kyverno. Brian came back with: mutate the repos for different clusters, mutate security contexts, mutate volumes. Bala put a link on admission controller webhooks, and it looks like, again, Allison put the Rego Playground as well as another link to Open Policy Agent for Kubernetes on their GitHub page.
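To give one concrete flavor of the mutation examples being linked, a Kyverno mutate rule that injects a default security context might look like this. This is a hedged sketch, not taken from Brian's stream; the policy name and the field being set are illustrative.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-securitycontext
spec:
  rules:
    - name: set-runasnonroot
      match:
        resources:
          kinds: ["Pod"]
      mutate:
        # Strategic-merge patch applied at admission time,
        # instead of hand-writing JSON patches.
        patchStrategicMerge:
          spec:
            securityContext:
              runAsNonRoot: true
```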
A
All righty, we're looking at four minutes left here. Do we have time for one more question, do you think, or is that it?
B
We do. Anyone have an easy one? Since we don't have t-shirts to give away, we've saved a lot of time.
A
Yeah, I got it, yep. So you want to do it? Yeah, you've got one more.
A
Okay, is there... oh, I'm sorry. Do you want to do the outro? Because it looks like there's one more question.
A
Service discovery between EKS and bare metal, currently running Consul. We have found some undesirable behavior with health checking while using catalog sync. Do we have a preferred native solution that is simple, well, no more complicated than Consul, that doesn't involve using headless services and then losing load balancing, like CoreDNS?
E
I mean, I'm interested whether this person tried Consul, because that was kind of going to be my go-to. I'm wondering if something like, what's the Anthos equivalent on AWS, can do this natively a little bit as well, but yeah, I don't know.
H
What Google is trying is getting the Google DNS into your clusters and then, through that, having your service discovery natively through DNS. Otherwise, I mean, Consul is what I've seen most in those use cases, but they seem to have already hit some health check problems there, and...
H
Maybe the full-on service mesh is the answer to these things; you would mesh the whole networks.
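For context on the headless-service approach the question rules out: a Service becomes headless by setting `clusterIP: None`, at which point cluster DNS returns the individual pod IPs directly and the client, not kube-proxy, does the balancing, which is the trade-off the questioner wants to avoid. Names and port are illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  clusterIP: None      # headless: no virtual IP, no kube-proxy load balancing;
                       # DNS resolves directly to the selected pods' IPs
  selector:
    app: backend
  ports:
    - port: 8080
```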
A
Thanks to the following companies for supporting the community with developer volunteers: we have Giant Swarm, Phase 2, Weaveworks, VMware, Red Hat, Equinix, Google, Microsoft, SysEleven, and Utility Warehouse. Special thanks to the CNCF for sponsoring the t-shirt giveaway, which will be back; we will have t-shirts soon. So thank you, and thank you to all the panelists. Lastly, feel free to hang out in the office hours channel afterwards.