From YouTube: Kubernetes Office Hours 20210421 (EU Edition)
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. Office hours are a regularly scheduled meeting where people can bring topics to discuss with the greater community. They are great for answering questions, getting feedback on how you’re using Kubernetes, or to just passively learn by following along.
For more info: https://k8s.dev/events/office-hours
B
Welcome, everyone, to today's Kubernetes Office Hours, where we answer your user questions live on the air with our esteemed panel of experts. You can find us in #office-hours on Slack; check the topic for the URL with more information. Before we start, let's introduce ourselves. We'll go around the horn. Who wants to go first?
C
Yes, I am Mario Loria. I currently work at a company called Carta. I previously worked at StockX, building out the Kubernetes infrastructure, bringing in cloud native and really defining our cloud strategy. Excited to talk to all of you fine people today; bring your questions, we are ready.
D
I'm Chauncey Thorn, a Certified Kubernetes Application Developer and Administrator, just here to try to answer as many questions as possible. Thank you.
F
Hey everyone, my name is Rich Lander. This is my first time doing Kubernetes Office Hours, so I'm glad to be here. I currently work at VMware as a platform field engineer; I previously worked at Heptio and CoreOS as a field engineer, primarily helping enterprises adopt Kubernetes and cloud native.
G
Hey everyone, I'm Rachel Leakin. I'm also at VMware; I'm a solutions architect over there, now focused on app modernization onto Kubernetes platforms or other cloud-native platforms. I'm also going to be working with Rich, so I'm glad that he's here. He taught me everything I know.
H
Hi everyone, Archy here from Canada. It's my second time; welcome to Yogi and Rich for their first Kubernetes Office Hours. I'm working at Google as a customer engineer, but I'm also spending a lot of time helping the community with Kubernetes: I'm organizing meetups in Canada, and I'm a CNCF Ambassador over here. Happy to be here again and to share knowledge with the community.
J
Coming in from Kent as well, first time participating here, learning along with everybody else and hoping to help out others.
B
And my name is Dan Papandrea. I'm from Sysdig; I'm the Director of Open Source, Community and Ecosystem for Sysdig, and I work on the Falco project, which recently was PR'd for adoption, or, excuse me, as a graduating project within the CNCF. So, moving forward, let's talk about the ground rules. The ground rules are: this is a Kubernetes event, so we follow the code of conduct; it's in effect. Please be excellent to one another. This is a judgment-free zone; everyone had to start from somewhere.
B
While we do our best to answer your questions, the panel doesn't have access to your cluster to be able to do live debugging. There's a show called Clustered, where Rawkode does that, if you want to see some live debugging. So please bear with us; we will do our best to keep this moving forward. Normally, we provide t-shirts; however, the CNCF store is replenishing inventory, so instead we will give a shout-out and just undying devotion and appreciation.
B
Panelists, you're encouraged to expand on answers with your experience and pro tips. Audience, you can help by pasting in URLs to official talks or anything that'll be relevant to the topic. We want this to be interactive, so please do so. You can also post your questions on discuss.kubernetes.io.
B
You can also help us out by tweeting, spreading the word, and paying it forward. We totally appreciate it. We have some new volunteers this week, and this panel is made entirely of volunteers: if you want to rotate in, please let us know; we love to have folks rotate in and out. Chauncey's a great example, and Rich and Yogi. Having everybody join is, I believe, what makes this community amazing: everybody contributing as we do. So it's awesome.
B
So without further ado, let's see, we have the first question. The question is: "I want to change the MTU used by my pods, following 'Configure MTU to maximize network performance.' I am able to do that by changing veth_mtu; however, I found that the vxlan.calico interface on the bare-metal machine that hosts the pod is still 1410, which hurts performance. I also tried to edit the MTU field using kubectl edit cm calico-config -n kube-system, but to no avail. Any suggestions?"
F
The first thing that comes to mind for me is: when you're editing that config map, are you sure that the altered configuration is being picked up? I think calico-node is going to need to pick up that config and then change it on the node, change that interface. The other thing I'm wondering about is: have you seen anything useful in the logs on calico-node that may provide a clue as to why it's not updating that value?
D
I'm just going to say, in reference to the config map: if you make an edit to it, it takes time for the pod to refresh it. So maybe restarting the StatefulSet or DaemonSet might solve that particular problem.
I
Yeah, I've seen this before as well, not with Calico but with Cilium: you do have to roll out those pods in order to get the changes. They usually don't pick them up live.
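What the panel describes can be sketched roughly like this. The host MTU of 1500 is an assumption (use your NIC's actual MTU), the 50-byte figure is Calico's documented VXLAN overhead, and the kubectl lines are commented because they need access to the cluster:

```shell
# Hypothetical values: adjust HOST_MTU to your bare-metal NIC's MTU.
# Calico's VXLAN encapsulation adds 50 bytes of overhead, so the pod
# and vxlan.calico interfaces should use the host MTU minus 50.
HOST_MTU=1500
VXLAN_OVERHEAD=50
POD_MTU=$((HOST_MTU - VXLAN_OVERHEAD))
echo "veth_mtu=${POD_MTU}"

# Patch the ConfigMap, then restart calico-node; as the panel notes,
# the DaemonSet does not pick the value up live (needs cluster access):
# kubectl -n kube-system patch configmap calico-config \
#   --type merge -p "{\"data\":{\"veth_mtu\":\"${POD_MTU}\"}}"
# kubectl -n kube-system rollout restart daemonset calico-node
```

The key point from the discussion is the restart at the end: editing calico-config alone leaves the running pods, and their interfaces, on the old value.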
B
All right, so as we're waiting for questions, let's talk about the latest release that came out, 1.22. Am I accurate on that one? No, 1.21. Yeah, I'm sorry, I'm getting ahead of time; I'm so excited about 1.21 that I just skipped ahead. I'm like, this is too awesome. Anybody have any thoughts on the new release, what they like?
C
Yeah, so I think graceful node shutdown actually went GA; I could be wrong on that. This is huge. I remember running clusters back in the day, and nodes would come and go, which is expected behavior, and having those pods not be gracefully shut down can really hurt the overall capacity and performance, and really the customer experience of the people that are tapping into your platform. I think that's a huge feature.
C
I think this just shows us that we are continuing to focus on optimizations that are making Kubernetes even better, more reliable, more resilient, and the number one choice for most cloud-native, microservice-based environments, and hopefully in the future serverless as well. So that's a really exciting feature to see. Lots of other great stuff, though, and I think Sysdig did a great breakdown of what's new with 1.21, above and beyond even the de facto blog post.
B
What I like about this is the Kubernetes security team, Tabitha Sable, really coming to the forefront. I'm part of the SIG community around Kubernetes security, and taking on Pod Security Policy was a herculean task; there were a lot of things to take into consideration for Pod Security Policy.
B
They took feedback from the community, and there's a link I'm going to put in the channel as well about the past, present and future of it. People are scared, but it doesn't mean that this is going away; it's more about embracing policy management technologies like OPA and Kyverno, those types of things. So I think it's very, very cool. Shout-out to you, Tabby, and team.
B
Anything else? Oh, there's a question in the channel, y'all. This is: "How do I configure Kubernetes to release/evacuate a PV and PVC once a pod has been faulted/errored/terminated for X amount of time? The problem I'm having is when a worker crashes, the pod will go down and redeploy after 30 seconds to another worker (pod-eviction-timeout equals 30 seconds), but the PV/PVC will not move until the dead worker is back online."
H
Yeah, because this problem will probably not happen in a cloud provider, where it's going to be controlled by backing cloud storage. So as soon as your node goes down and the pod starts on a different node, the PVC, the volume, will follow the pod; or actually the opposite: the volume will first be attached to the node, and then the pod will start. So that probably shouldn't happen on the cloud provider side, but on-prem...
E
Could it be because of the CSI driver as well, tuning in the CSI driver? Because, like, if you're using, say, the vSphere CSI driver, and your actual storage is a datastore or something like that?
B
I was going to suggest the same thing. I don't know if they're using something like Kubestr, or something like Portworx, or something like that. If it's iSCSI, that's the other part of this: there are a lot of things, like LUN issues, that could be attributed to this.
B
This could be a litany of different issues, versus having some type of CSI provider in the middle of this to help dig through it; that, I think, would probably help address this. But this is where you take a look at your storage paradigms and make sure they're at a place where something can be tuned, as the panel has discussed.
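As a rough sketch of where the cleanup usually happens when a volume stays pinned to a dead on-prem node: the pod stuck in Terminating on the lost worker often needs a force delete, and on CSI clusters the stale VolumeAttachment may need removing before the volume can attach elsewhere. All names below are placeholders, and the cluster-touching commands are commented since they need access (and care):

```shell
# Placeholders: substitute your stuck pod, its namespace, and the dead node.
POD=my-app-0 ; NS=my-namespace ; NODE=dead-worker-1
echo "cleaning up $POD on $NODE"

# 1. Force-delete the pod stuck in Terminating on the lost worker:
# kubectl -n "$NS" delete pod "$POD" --force --grace-period=0

# 2. Find the VolumeAttachment still bound to the dead node and delete
#    the stale one so the volume can attach to the new node (CSI only):
# kubectl get volumeattachments -o wide | grep "$NODE"
# kubectl delete volumeattachment <name-from-previous-command>
```

This is a manual workaround, not a fix for the underlying storage setup the panel is pointing at.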
B
All righty, I think we're going to move on to the next question. And Matt, we'd definitely love to go more deeply on that one at some point. All right, so: "How do I migrate one service at a time?" Thank you; this is from Yogs. "I'm running Rancher and want to migrate one service at a time onto EKS. Do I need to do some sort of ALB? So I have service xyz and want to forward traffic to EKS service xyz."
F
The other thing you'll have to consider, if this is in play, is any persistent storage; that can be the trickiest part to migrate over when you're live-migrating services from one cluster to another. The ALB sounds like the Application Load Balancer in AWS, but I'm not sure if that's what you mean. Some global traffic-routing proxy that you can run traffic through will usually do the trick, at least for that purpose.
C
It's definitely possible to do this in DNS as well, right? If you had two different load balancers going to your Rancher and your new EKS system, you could actually put both of those load balancers as responses in the DNS queries for your application. However, I really agree with Rich: I think you should let the load balancer be the core manager of that, and you have full control over what it's doing. So I would definitely go that route.
C
I've been at companies that have had a layer outside, even close to the edge, you know, Cloudflare in front of everything coming in, managing those domains. Cloudflare and many other CDN-type load balancers and DDoS-mitigation providers provide tooling for doing this sort of thing, including weighting based on how you want to do it, geolocation, and other great features, so definitely consider those as well.
H
I would propose a more complex solution, but probably fun to work on. You can also set up a service mesh, like Istio, and configure it in a way that it will span your clusters in a hybrid mode, where one part will be running on Rancher and another part on EKS, and you'll be able to gradually shift the traffic to the EKS side.
H
That, of course, will require some learning curve on your side, learning Istio and how to configure it. But, as the gentlemen mentioned, in this case, if you have stateless pods, you can actually just redeploy them on your EKS side right away and run them there. Then, for whatever you have that's stateful, you can potentially cut it over to the EKS side at a slow time, maybe at nighttime when there's low traffic: dump your database, restore it on the EKS side, and then switch traffic. That could be an easy path to go without re-engineering things just for a migration.
E
Yeah, so I was actually going to go to the EKS thing. I ran into this problem where we were migrating from one Kubernetes cluster to another, and yeah, the assumption was that all these services are exposed by a load balancer, but we had 15 services and three of them were internal services. That was the part that was actually missed initially and then came in much later. So do know your dependencies.
E
So if you have any internal services, where you're actually using the internal DNS routes, make sure that they actually get moved along with their dependent services. Otherwise you will have to go crisscross between clusters or use some stopgap solution.
B
All right, the follow-up question is: is there any documentation available? So if there's anything we can give as pointers over to Yogs, that'd be great, just to give them a breadcrumb trail so they know where to go. Alrighty. So the next question is from Keshav. This is a long question; he's putting the debug output in, so you all can see it in the channel, but I'll try to paraphrase it.
B
That's okay. So, Keshav has been using k3s, air-gapped, version 1.17.2, using the default ingress and his own certs: "Now I've updated the certificate secret. It's not picking up the new certificate secret, but still serving the old one. I also confirmed that my application pods are picking up the new certificates."
B
"Can anyone help me here, or let me know if more information is required?" You all can see the logs. Again, I want to preface that this is definitely something that, if we had the ability to live debug, would probably be a little bit easier, right? So this isn't Clustered; shout-out, Rawkode.
E
My immediate sort of go-to thing for this one is: have you tried turning it off and on?
B
Do we want to maybe give a link to just some helpful stuff on cert-manager for Keshav?
F
In the Traefik logs there, Traefik has acknowledged that there is a new certificate, in that it's recognized that the secret has changed, but it explicitly says "skipping addition of certificate" for that particular domain, as it already exists. At that point I'd be a little bit upset with Traefik, and I'd bounce the Traefik pod.
F
There could be a good reason that Traefik is saying "I don't want to switch this certificate out"; for example, if the new certificate... no, actually, I take that back. I was going to say maybe Traefik has recognized a problem with the new certificate, but it's not explicitly saying that in the log, so I can't think of a reason why it would not want to change it.
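The "bounce the Traefik pod" suggestion can be sketched like this on k3s, along with a way to confirm which certificate is actually being served afterward. The deployment name matches k3s's default Traefik install, the hostname is a placeholder, and the commands are commented because they need cluster and network access:

```shell
HOST=xyz.example.com   # placeholder: your ingress hostname
echo "checking certificate for $HOST"

# Restart the bundled Traefik so it re-reads the TLS secret:
# kubectl -n kube-system rollout restart deployment traefik

# Then confirm which certificate is actually served on the ingress:
# echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null \
#   | openssl x509 -noout -dates -subject
```

If the openssl check still shows the old dates/subject after the restart, the problem is upstream of Traefik (for example, the secret itself).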
B
Yep, yep. And again, Archy, awesome; thanks for giving that really awesome answer, with a couple of options with Velero, and the stateless aspects. Thank you, Archy. The next one is from Mohit Patil: "Can we have multi-factor authentication in between any of the k8s authentication plugins (OIDC, x509, webhook)?"
H
Is that for the authentication portion, like when you're trying to get into the Kubernetes API, where you might have multi-factor authentication, or is it more for the workloads? It would probably be good to clarify, I think, because we have authentication for applications, authentication for the Kubernetes API, and maybe some other things, so it would be good to clarify exactly where he wants to have MFA configured.
B
A bit more detail, yep, yep. All right: "I've been running Windows workloads in my Kubernetes cluster as Windows Docker images; the pod starting time is high. I've been thinking of creating custom images for my Windows worker nodes and baking my base Docker images in there. Is there any other suggestion to overcome this?" Before we get started there, can I ask the question: how large are these images? A couple of gigs, usually? Yeah, I'd definitely like to see that.
E
Yeah, I think there was a similar one, right? It was for the ML use case where the images are huge, and somebody mentioned that we could actually use a cache warmer through a DaemonSet. So baking into the image, probably not.
D
An option would be: if the contents of the image can be put on a persistent volume claim or something, and you just use the pod to mount that persistent volume, that would allow you to do it. Now, the quirk in that particular context is: if you get ReadWriteOnce, that means only one pod, so you need to get ReadWriteMany so that multiple pods can attach to that volume, and so on.
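That idea can be sketched as a ReadWriteMany claim that multiple pods mount instead of baking gigabytes into every image. The storage class name is a placeholder, RWX support depends entirely on your CSI driver (per the caveat above), and the manifest is only written to a file here; applying it needs a cluster:

```shell
cat > /tmp/shared-content-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-content
spec:
  accessModes:
    - ReadWriteMany                # many pods, not just one (RWO)
  storageClassName: my-rwx-class   # hypothetical; must support RWX
  resources:
    requests:
      storage: 10Gi
EOF
grep -q ReadWriteMany /tmp/shared-content-pvc.yaml && echo "RWX claim written"
# kubectl apply -f /tmp/shared-content-pvc.yaml   # needs a cluster
```

Pods would then mount `shared-content` for the bulky files, keeping the container image itself small.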
H
Yeah, and probably do some research in terms of what's the most optimal base image, the one that has the smallest footprint. Unfortunately, I haven't looked at this for a while on the Windows side, but if anyone's running Windows workloads, maybe you can share in the Slack channel what base images you recommend in this case, so at least the initial image size will be smaller. Obviously, moving to .NET Core would help a lot, because then you can run on Linux.
B
All right, I think we're going to go back to the MFA question, because Mohit responded back: he's considering it for the k8s dashboard only for now. So it looks like MFA for the Kubernetes dashboard.
B
I was thinking the same thing: there would be a front-end thing, like an nginx or something like that, that would be the handler for the MFA, and then it would funnel to the dashboard or something else. And what if your applications don't only have it from the dashboard perspective? What if you want to expand beyond the dashboard? Maybe you want it in an ingress, for whatever it might be, right? You'd probably want to front it with something, so yeah, I agree with that.
H
Yeah, and generally there was a lot of conversation in terms of how the Kubernetes dashboard is maybe not the safest solution sometimes. Obviously, if you're running on a cloud provider, or if you're running maybe some platform, they usually tackle this problem by adding their own safe Kubernetes dashboard. I'm sure there are a lot of projects that provide a better experience as well with Kubernetes.
H
I remember we covered a few examples of the open-source Kubernetes dashboards that are available as well. But yeah, running the Kubernetes dashboard sometimes could be a security risk, and I guess that's why he's trying to configure this.
H
Yeah, in this case it looks like it's like any front end, right? In this case the front end is running the Kubernetes dashboard, but you can run pretty much any other application. One solution recently has been added, I think, in Istio 1.9: they have external authorizer integration right now, so you can potentially connect to your MFA provider. This external authentication mechanism in Istio kind of simplifies things; previously it was kind of baked in, and now it's more separated, so it's supposed to be easier.
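One common way to front the dashboard, or any app, with an auth layer, as the panel describes, is ingress-nginx's external-auth annotations pointed at something like oauth2-proxy backed by an IdP that enforces MFA. This is only a sketch: the hostnames, service names, and auth URLs are all assumptions, and the manifest is just written to a file; applying it needs a cluster running ingress-nginx:

```shell
cat > /tmp/dashboard-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
  annotations:
    # Delegate every request to an external authenticator (e.g. oauth2-proxy
    # backed by an IdP that enforces MFA); URLs are illustrative.
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$request_uri"
spec:
  rules:
    - host: dashboard.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
EOF
echo "auth annotations: $(grep -c 'auth-' /tmp/dashboard-ingress.yaml)"
# kubectl apply -f /tmp/dashboard-ingress.yaml   # needs ingress-nginx
```

The same pattern extends beyond the dashboard to any ingress, which matches the "what if you want to expand beyond the dashboard" point above.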
B
Awesome. All right, I think we're going to move forward to the next question. You all, the panel, are attacking the questions; I've been looking at the streams, and the replies are going up, so, awesome. So the question here is from Tyson Moyes; I'm sorry if I got your name incorrect. "Hi all, I've been working on RBAC in a cluster. I've been trying to create a ClusterRole that has full access to the entire cluster, except for the kube-system namespace. From what I've found, a ClusterRole cannot exclude namespaces."
B
"My theory right now is I would have to use OPA to enforce this type of policy. Curious if there are any other ideas in the community." And I also see Borko; hey, Borko. It looks like Borko has also responded there, but do you want to try to work through the questions with him there, live?
J
Well, I'm just trying to figure out whether the cluster role that he's looking for needs access to cluster-level resources, or if he's just trying to say "I need some role that has access to all of the namespaces, but not kube-system," because there could be different solutions depending on which one he's exactly looking for.
J
Generally, I think if you just want access to a bunch of namespaces, but you don't want access to cluster-level resources, you probably don't want to be using ClusterRoles, from a security point of view. In that case, you kind of want to automate creating roles in the namespaces you want access for.
J
If you want a ClusterRole that doesn't have access to certain namespaces or resources, I guess you'd have to create some sort of custom cluster role. I'm trying to think that one through, but yeah, I don't know; I don't have a good answer yet.
B
I'm going to do a shameless plug and say: once you do figure out the roles and all of that, you can also use Falco from an audit-logs perspective and see who's actually been creating namespaces, because we tap into the audit-log capability there to see if they've created a namespace outside kube-system and all of that. But it looks like Chauncey, you chimed in there as well.
D
Yes, one pattern would be to apply a role to a namespace, similar to what the Kubernetes project does. If you look at that namespace-user role that I just pasted into the Slack, they provide a role based on a namespace. So you could maybe have an operator that applied that role to each of your namespaces, or to specific namespaces with an annotation or a label or something on them, and that can help you control access to your infrastructure.
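A minimal sketch of that per-namespace pattern: a namespaced Role plus RoleBinding that an operator, or even a simple loop, could stamp into every namespace except kube-system. The namespace, group name, and wide-open verbs are all illustrative, and the manifest is only written to a file here:

```shell
cat > /tmp/ns-admin.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ns-admin
  namespace: team-a        # stamped per namespace by the operator/loop
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ns-admin
  namespace: team-a
subjects:
  - kind: Group
    name: developers       # hypothetical group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: ns-admin
  apiGroup: rbac.authorization.k8s.io
EOF
# A simple loop to stamp it into every namespace except kube-system
# (needs cluster access):
# for ns in $(kubectl get ns -o name | cut -d/ -f2); do
#   [ "$ns" = "kube-system" ] && continue
#   sed "s/team-a/$ns/" /tmp/ns-admin.yaml | kubectl apply -f -
# done
echo "documents: $(grep -c '^kind:' /tmp/ns-admin.yaml)"
```

Because these are namespaced Roles rather than one ClusterRole, kube-system is excluded simply by never receiving the binding.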
H
Yeah, I just want to add to you and Chauncey: there is a new project in the Kubernetes community called hierarchical namespaces that potentially can simplify this. If you have a complex enterprise environment, maybe where you have many different teams, then with hierarchical namespaces you can actually create this more elegantly, and it will propagate your RBAC configuration down to your teams, for example. So it could be a place to look as well.
F
Before we move on to the next one, I actually just wanted to talk through the idea that Tyson floated there about using OPA to enforce this sort of policy, and I was thinking through how that might work.
F
It sounds like this role is kind of a cluster-admin-lite, whereby they have full access to the cluster but read-only to kube-system, and I can think of a way that might work using OPA: if you made a new role that had everything cluster-admin has, all the permissions, but then used a validating policy in OPA to say "if it's that role, don't allow any changes to kube-system," that conceivably would work. So that could be a workaround, in lieu of something more elegant.
B
All righty, moving forward to the next question: "How do I access an EKS cluster in a CI/CD pipeline without having to use a kubeconfig file?" Look, this question is getting attacked by the panel, so I don't know who wants to take that one first; it looks like Mario and Chauncey are there. Whoever wants it, take it first.
C
Yeah, really quick: you need something to authenticate to the cluster. Maybe it's not the kubeconfig; maybe there are other things there and you're using other plugins or something like that. But I think this person's just trying to figure out in their head: if I have an EKS cluster, how do I get what I need to be able to contact that EKS cluster from other tooling, right? And I think that's maybe a little bit tricky at first.
C
I know tools like eksctl, and obviously the aws command, and there's documentation for this, as well as for how to get a kubeconfig that should work. In this case, this person's doing it from their CI/CD pipeline, so it's some sort of environment that needs to contact Kubernetes, probably for CD or something like that. So I think we're sharing some guides here on how they can do that and understand the impact of IAM as it relates to accessing your cluster.
C
So I think that's just what this is. I would consider this one probably pretty close to solved, and yeah, EKS can be a little bit hard to understand at first: that relationship, and how that works in other environments versus just something on your laptop.
C
But hopefully this helps, and Yogs can move forward and get that pipeline dialed in.
B
One thing I'll add here is a big shout-out to James Strong in here. He wrote some really good training from the AWS side; shout-out to you, James Strong, thanks for jumping into the channel and answering. Maybe you should be on the panel someday, bud. Anyway, there's another question; should we move on to this? Unless, Chauncey, you want to add any comments to this one; are you good?
D
We're just trying to clarify: is the desire to not distribute a kubeconfig, or is it trying to limit it? Because if you want to interact with your cluster in a CI/CD pipeline without necessarily having a kubeconfig that exposes it, the GitOps pattern is like the perfect solution for that. If you have an image that's built from your CI/CD pipeline, Flux, or a later version of Argo CD, can detect that image change, pull it into your cluster, and then update your repo.
D
Doesn't OPA have a rule that prevents deploying to the default namespace? It's an example that they have out there. So this seems to be the indication that you don't put anything there, because if you accidentally deploy a cluster role and apply it to, say, the service account, every pod that you create in that default namespace now has cluster capabilities, or cluster-admin capabilities. So you want to be careful.
H
I think it really depends also on your company structure, how you want to tackle this problem. Some companies have an operations team, or DevOps, or however you may call it in your org; they will be more focused on the Kubernetes cluster.
H
Configuration
and
policy
deployment,
so
they
will
kind
of
prepare
your
environment
for
deployment,
and
here
you
can
actually
also
use
hierarchical.
Namespaces
that
the
you
know,
operation
teams
will
will
actually
prepare
for
you
and
then
you
can
actually
create
your
own
namespaces.
That
has
already
specific
our
back
and
policy
applied
on
them,
and
you
know
you
can
have
a
less
strict
environment
to
deploy
your
stuff.
So
it's
really,
I
think,
depends
on
how
your
team
operates
and
like
if
it's,
if
you
alone,
then
you
can
do
whatever
you
want.
H
If
you
have,
you
know
different
teams
that
are
responsible
for
different
things,
and
then
you
might
need
to
look
into
get
ops
for
something
you
know
config
configuration
and
policy
management.
Maybe
it
could
be
a
different
team.
So
it's
it's
a
very
complex
question
and
it
really
needs
to
down
to
your
use
case
and
probably
have
a
discussion,
and
you
know
understanding
what
is
your
environment
and
what
you're
trying
to
tackle
there.
G
Okay, so I think overall, though, regardless of whether it's the operator creating the namespace for your team or you're doing it, I think the sentiment is just: don't use default if you don't have to; create your own, right? Either way, you're still not going to use default.
C
Really quick: I don't think there's shame in using the default namespace. A couple of companies ago, we used the default namespace while we were getting into Kubernetes. At the end of the day, we need our applications to run; we need them to service other applications; we need them to service customers' incoming requests. It's more or less just the organization, the teams, the service owners, and kind of the architecture of what you want to do.
C
Sometimes, when you're in innovation, startup mode, you don't have time to think about problems like what namespace things are going in, policies, other things like that. You just need a place where your applications can run, and run reliably, and knowing that Kubernetes can do that, you just have to move forward. So it's not like "oh, we're running in the default namespace, this is the number one priority, we should absolutely stop everything we're doing and break things out."
C
I do think, when you do get bigger, when you start to incur massive growth, when you start to have lines of separation and boundaries of ownership, you need to start thinking about how you're going to do this. I think hierarchical namespaces really helps make that easier. I think tools like service meshes and CNIs help you implement network policies and then set things up.
C
You know, quotas per namespace; you can really start to build out when you start to focus on security a little bit more. When you get to that phase, the tooling is there, and you should very much talk to people like us and get an idea of what you should do. But there is no shame in launching a new Kubernetes cluster and moving forward; there are just things you should definitely think about.
B
Following up on that: I definitely agree for a sandbox or a development instance, but for production, putting on the security hat that I have, from not only just runtime security but all the rest of it, I absolutely believe namespaces should be attributed. Also, thinking back to what just happened in the releases in terms of PSP deprecation, and PSPs in general, whatever the replacement is: how are you going to apply these things to just the default namespace? That, to me, is asinine. So, as Mario said, as you start going into more of the production-level services and having to segregate and make sure your environments are compliant to a certain degree, you cannot do it in a single namespace. It just goes against any security paradigm that's out there.
F
In almost every use case I've seen, when a namespace is provisioned for a cluster tenant, there are other resources that go along with it, like roles, resource quotas, things like that. There's no technical reason why you couldn't apply those to the default namespace, but it's also about being more deliberate: creating a namespace for a thing is a little more explicit and descriptive, and as teams develop their sophistication in managing these things, it just makes sense.
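The namespace-plus-companions idea above can be sketched as one manifest per tenant: a namespace, a ResourceQuota, and a label that policy tooling can select on. Names and limits are illustrative, and the manifest is only written to a file here; applying it needs a cluster:

```shell
cat > /tmp/tenant-a.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    team: tenant-a        # useful for network policies / OPA selectors
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"     # illustrative limits
    requests.memory: 8Gi
    pods: "20"
EOF
echo "documents: $(grep -c '^kind:' /tmp/tenant-a.yaml)"
# kubectl apply -f /tmp/tenant-a.yaml   # needs a cluster
```

Adding the tenant's Role/RoleBinding to the same manifest gives each namespace the full set of companions in one deliberate unit.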
E
This was something I saw with one of the customers: they are actually using multiple Kubernetes clusters, and they are not deploying all the applications on a single cluster.
E
In some cases they have multiple applications running on a single cluster, but they are maintaining globally unique namespaces. So, for example, an application deployed on one cluster uses its namespace; if it were to actually get moved to another cluster, it would continue to use that same namespace. For that application, the namespace remains the same across any cluster, and also when they move from the lower environments (development, UAT) to production, the namespaces go along.
B
Alrighty, it looks like we're light on questions, so maybe we can go back to the release notes for 1.21. Anything you all are excited to talk about? Or, you know what, let's also open up the space: is there anything in Kubernetes now, KubeCon-wise (KubeCon's coming in a couple of weeks), that you all are excited about?
C
I just want to say: we've got KubeCon EU coming in early May, and it's virtual, which is not gonna be the funnest thing in the world, but I have a talk that I'm really excited about. Everyone has pre-recorded their talks, and myself and my co-presenter will be there for answering questions and things like that. But I do want to say, I logged in recently to edit my registration and there are tons of day-zero events. Like, tons. If there isn't something there that moderately interests you, I don't know what universe you come from, because there are so many cool things going on, a lot of which are free, which I did not know.
C
So please go jump into your registration, and if you haven't registered, go and check it out. There's also free content, like the keynotes and other things. Please go check it out, and get your company to pay for it; they absolutely should. I think virtual will definitely be awesome to see; there will be some amazing talks, there's still a lot going on, so tune in.
C
I know North America is coming up in October, planned to be in LA, and I will be there physically, with one or more masks on. It's gonna be a really good time. I'm looking forward to partying, socially, with all these fellow awesome people and all of you that are tuned in right now.
C
Exactly, exactly, yeah. Maybe even a face shield, you know, just go all out. It's gonna be fun, though. I just think it's gonna be so great to see everybody in person, and I'm really looking forward to it. So stay tuned.
B
So before we get to North America, in terms of EMEA, I'm definitely looking forward to seeing a lot of sessions. As Mario said, the day-zero events are great. Also, shameless plug: we're doing a KubeCon end-of-the-day wrap-up recap as well; it would be really cool if you all want to take a look at that. That's pretty much it on that. Anybody have any other thoughts?
D
I'd like to give a shout-out to a book that was released in March of 2021: Production Kubernetes. It's a very informative book, and I recommend people read it and learn. Thanks.
F
B
You know, Rich could get paid on this or something. Is that something we could do? Sorry, George. Yeah, I gotta keep the lights on, I guess. Awesome. Let's see, looks like some questions are coming in here. Oh no, you got this, go. All right, all right, all right.
B
Oh, this is a good one here from Mohit. That is a very relevant question, because guess what: everybody on this panel was in the same place you were in terms of trying to contribute. We love when people come in. We want to nurture that, we want to mentor you, we want you to come in and feel involved with this amazing community. So the question is: "I love K8s and want to contribute to the k8s repo." Well, we love you too.
D
G
Yeah, just to go off of that, Chauncey: I kind of just lurked, right? When I joined, when I met Rich and Stephen Augustus, they kind of walked me through how to get involved. I kind of just watched for a couple of months, I think it was a really long time, and then, like I said, joining and watching the SIG, seeing what's going on, and then eventually you'll see an issue and you're like, hey.
G
I think I can do this. And, like you said, Chauncey, it doesn't have to be code. It could be, "hey, can you update this document?", right? You could start there and eventually start getting into the flow of things. So I suggest finding whatever your strength is. If it's not coding, Google that route; if it is coding, just look and see if any issues pop up, and by joining meetings you'll see what little things people need help with as well.
I
Yeah, I'd also recommend SIG Contributor Experience. If you don't know which SIG or which aspect of Kubernetes you want to get involved with up front, then get in and join Contributor Experience. There are, I think, bi-weekly mentoring meetings where you can go along and actually meet mentors who can help guide you on your journey as well. So that's a really good way to get started.
B
That's one thing, again, we always talk about this, but I say it all the time: everybody in the community has been so warm and welcoming. We want to make sure that everybody's involved in this, that they feel they're part of it, because all of us were in the same boat here, learning Kubernetes, trying to contribute, trying to be part of this, and everybody else helps foster us going forward.
B
So we would love to see that, Mohit, and we love the fact that you're wanting to get involved. All right, Vorko responded here. This looks interesting to me: do you want to talk about the secret logging static analysis?
J
I was just browsing and it looked interesting. I think it's going to beta this release. It's essentially a feature that prevents logging of sensitive information and secrets, so that if you're collecting logs, you don't get sensitive information and passwords and things like that in your logs. I think that's a pretty cool thing, because a lot of enterprises will have policies against having sensitive information in logs.
B
All righty. And alumni Chris Carty put a link in the office hours channel to the release team, so there's a link there as well. That's definitely a place to go. Going back to my question of how to get involved, which we kind of skipped over, Archie put the Kubernetes Podcast. Awesome, I totally agree.
B
Navarone was incredible on that. Navarone is one of the release leads, and I think this release was the first one led by somebody internationally, versus domestically in the US, which I thought was incredible. It shows the worldwide community coming together and putting out a release. Do you know of any other open source project that has this level of international, global presence? I think that's just phenomenal. Cool.
H
I probably want to share the excitement that the Kubernetes 1.21 release added support for IPv4/IPv6 dual-stack. It's funny, because this is always coming up: on every platform people are building, like OpenStack, IPv4/IPv6 always pops up as a feature request. But I think it's coming at the right moment, because a lot of telcos are building 5G networks, and they have millions of devices that need to connect, and obviously Kubernetes is probably the best platform to run
H
the applications at the moment. And, you know, having Kubernetes support IPv4/IPv6 will probably also help a lot of telcos move from their existing infrastructure to more Kubernetes-native, cloud-native landscapes, so I'm very excited about it.
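For a sense of what dual-stack looks like at the API level, here is a hedged sketch, assuming a 1.21+ cluster with dual-stack networking enabled; the Service name and selector are placeholders:

```yaml
# A Service opting into both address families via ipFamilyPolicy.
apiVersion: v1
kind: Service
metadata:
  name: my-app                        # hypothetical service name
spec:
  ipFamilyPolicy: PreferDualStack     # request both families if the cluster supports them
  ipFamilies:                         # preferred ordering of families
    - IPv4
    - IPv6
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

With PreferDualStack, the Service gets cluster IPs from both families where available, and falls back gracefully on single-stack clusters.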
B
Excellent. We're going on, I think, another five to six more minutes. Any other thoughts while we're waiting for some more questions? Let me ask you all this: in terms of this release, and you folks have been through different releases, how do you all test new releases? What are you using? Are you using MicroK8s, are you using k3s?
F
Outside of formal testing, I have one suggestion, which is: release new versions of Kubernetes, and all of the platform components that you're running on it, into your development environments as soon as possible. Run the software that you'll be running in production in development on the new versions, and you'll organically start to see any issues come up. That's not a very structured, formal way of testing, but it's quite effective nonetheless.
D
And kind allows you to do that better. kind allows you to check out the code, create a feature branch, modify your code, and then do a node build, and it will build a node image from the code it's currently looking at, and you can test it in a cluster, et cetera. So use kind for a lot of your testing. Also, for just learning Kubernetes, it provisions using kubeadm, which gives you insight into how that works. So it's a very cool tool.
C
Yeah, I love kind because of its declarative model for how you want the cluster to be formed and what you want your options to be, like, for instance, how many masters you want. It can help to test certain things, and it really just uses Docker on your machine, which you probably have already. Also keep in mind there's minikube, there's the Kubernetes variant that Docker Desktop provides; there are other ways to do this.
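That declarative model looks roughly like this; a sketch of a kind cluster config, where the node counts are just an example:

```yaml
# kind-config.yaml: a multi-node cluster, declared up front
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane     # repeat for an HA control plane
  - role: control-plane
  - role: control-plane
  - role: worker
  - role: worker
```

You'd create it with something like `kind create cluster --config kind-config.yaml --image kindest/node:v1.21.1`, where the image tag (an example here) pins the Kubernetes version you're kicking the tires on.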
C
You should figure out what fits best for you. There's also a new project, eks-d, that helps you run EKS on your laptop. So we're seeing this idea of: how do I actually get the same environment locally that I'm running in the cloud provider for my larger, bigger clusters? It's becoming easier to do that and stay ahead with the latest version of Kubernetes, so absolutely check those projects out. I'll put some links as well.
B
All right, and it looks like we have some links to kind. Again, k3s is another useful one, MicroK8s as well, to be able to kick the tires. But going back to what Rich said, I think you were saying at a macro level, right, versus on your personal laptop: if an organization wants to really get up to speed on a release, it's about putting it in your development environment, saying, look, these are the repercussions if something happens, and being able to address those before putting it in production.
F
Yeah, and just to elaborate on that a little bit: kind is fantastic for local development, when developers are putting together individual components. But a lot of us are using more distributed, service-oriented architectures, which means that oftentimes you will need a development cluster that teams share, to deploy their components to for integration testing and things like that. That shared remote development cluster is what I was driving at.
G
No
yeah,
I
actually
have
a
question
if
I
wasn't
on
the
call,
I
wouldn't
ask
it
speaking
of
like
new
releases
and
things
like
that.
How
do
you
all
deal
with
your
customers,
who
are
like,
maybe
n,
minus
five
or
either
four
on
some
of
these
releases?
How
do
you
recommend
that
a
they
keep
up
today
like,
for
example,
I've
been
so
out
of
it
that
I
didn't
even
know
they
were
having
a
release
today
right?
B
Can I take the first stab at this? Again, putting the security hat on: you do not want to be five releases back on anything, from a security perspective, for the components and also the dependencies on the underlying architecture and all of that. You know, you see that misconfiguration of security costs the industry almost five trillion dollars, and that's even on a cloud provider.
B
What
have
you
so
if
you're
going
back
and
saying
I'm
arresting
my
laurels,
going
back
five
revisions
back,
there's
talk,
you
know
tons
of
technology
to
keep
you
up
to
date.
It's
exactly
the
things
that
rich
is
mentioning
here.
It's
where
hey
you
want
to
see
the
repercussions
here.
You
do
it
in
your
dev
instance
of
upgrading
here.
That
just
tells
me
that
they
don't
they're
kind
of
don't
have
a
paradigm
for
rolling
up
up
releases
there.
So
that's
kind
of
my
security
level
opinion
there
on
that.
I
This was also, I guess not coincidentally but related: this was the last release of the four-releases-per-year cadence. I believe we're now lowering the release cadence of the project to three releases per year, just to help tackle this problem, so organizations don't have to be doing quite so many major upgrades every year.
C
Yeah, I wanted to mention too, I think there's WG LTS, the long-term support working group, as well; there have been many discussions there. I actually don't know the status of that, so maybe someone can give an update. I think when you are a tiny DevOps team, maybe in a startup environment or a company that's just getting going, this is incredibly difficult.
B
Everyone on this panel, just stars. Thanks to the following companies for supporting the community with developer volunteers: VMware, Google, Carta, Sysdig, Equinix Metal. Anybody else I forgot?
B
No,
I
think
we
got
everybody
and
also,
lastly,
feel
free
to
hang
out
in
the
office
hours
channel
afterwards,
if
there's
other
channels
that
are
too
busy
or
if
the
other
channels
are
too
busy
for
you,
you're
looking
for
a
friendly
home
you're,
more
than
welcome
to
pull
up
a
chair
and
hang
out
we're
back
at
the
same
time
next
month
until
then
have
a
great
one.
We
thank
you
so
much
for
joining
us
and
thank
the
panel
again
bye.
Everyone.