From YouTube: KubeLinter - Taking Control of your K8s Configurations
Description
StackRox recently announced its first open-source tool, KubeLinter. KubeLinter is a static analysis tool that identifies misconfigurations in Helm charts or YAML files. With 19 standard built-in checks and the opportunity to configure your own, KubeLinter provides flexibility, repeatability, and portability to your security checks and pipelines.
In this Office Hours, we will discuss with co-developers Viswajith Venugopal and Koki Yoshida about:
- KubeLinter
- The Open-source community
- Their developer journey
Mike: Awesome, and we're live. Hello everyone, thanks for joining. This is the February edition of Office Hours. I'm Mike Foster, I'm the cloud native advocate at StackRox, and I'm moderating the chat. Today I have Vishwa and Koki with me; they're software engineers at StackRox and KubeLinter maintainers.

Before I start, I just want to go over some rules of engagement. The lines are muted, so the only people talking will be us three. This is an on-demand recording, and there will be a post-event email for everybody who registered; thanks again for joining. As we go through, feel free to drop questions in the chat: ask about open source, KubeLinter, different policy engines, anything you're thinking of that you want to ask our software engineers and maintainers, and we'd be happy to throw those questions their way. So again, Q&A in the chat. If you have any trouble with the webinar console, it helps to just refresh the browser; that seems to fix it most of the time, it's usually just the network connection or something like that. But without further ado, I want to get started. I'm really excited to talk about KubeLinter; I've been excited since we first announced it at KubeCon. Vishwa, you spearheaded the project at the beginning. Can you tell us a little bit about what that process was like, creating KubeLinter?
Mike: Oh, all right. Well, let's just move on, since I think Vishwa needs to refresh his browser now too. I guess, Koki, you've taken over some of Vishwa's duties as a maintainer.
Koki: Sure, yeah. First of all, let me just explain what KubeLinter is a little bit. It is essentially a pre-deployment-time tool that you can run on your Kubernetes YAML files to see all the security improvements you can make to your YAML deployment manifests. I joined the project as a maintainer later on, so I can speak a little bit to the future direction of the project.
Koki: We are planning to add some more checks. For example, one individual check that comes to mind is identifying pods that are not selected by any network policy. By default, if you don't specify network policies, traffic to and from those pods is wide open, so it is considered a security best practice to have a network policy that selects each of your Kubernetes pods. That check isn't currently supported, and we would like to add it.
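To make the check Koki describes concrete, here is a minimal sketch of a NetworkPolicy that selects pods by label, so their traffic is no longer wide open. All names and labels here are illustrative, not from the talk.

```yaml
# Hypothetical example: a NetworkPolicy that selects pods labeled
# app: web and only admits ingress from frontend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web            # pods selected (and thus restricted) by this policy
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend   # only pods with this label may connect
```

Pods not matched by any policy's `podSelector` keep the default open traffic, which is exactly what the proposed check would flag.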
Mike: No, it's all good. Koki was getting into policies, which we will definitely get into, but before we go on I wanted to ask Vishwa what it was like creating KubeLinter, what the original motivation was, and what it was like watching it get released as KubeCon took off last year.
Vishwa: Yeah, thanks Michael, and sorry again about my internet; it picked exactly the right time to disconnect. So the cool thing about KubeLinter is that it was actually born out of a pain point that we ourselves had within StackRox. In our commercial product we're able to flag issues for bad configurations while people's applications are running live on the cluster; we can tell them, hey, this thing is running as privileged but it doesn't need to, or hey, this container is running as the root user, things like that. But in our specific case, we don't run our own product.

Most of the people running our product are not us; we ship our product to our customers and they run it. So we needed a way to find out about these misconfigurations in our own product, in an automated way, before we even send it out to our customers. And the way to do that is by checking the YAML files, which are ultimately what we send our customers and how they install our product.

So that was the pain point KubeLinter was born out of. Given that Kubernetes configurations are managed in YAML files, which are treated and stored as code, can we just write rules that check those YAML files against security policies? Then you're ensuring those policies are followed right from the very beginning of the application development life cycle. That was the seed of the idea.
Vishwa: We looked at a few tools that already existed, but none of them did quite what we wanted; there are a few that come close or do related things. So that's when we decided to go ahead and build one of our own. We brainstormed on what would make sense for us as the initial target user, and then we recruited a few people who work with Kubernetes at other places and asked them to give us feedback on the tool, bounced ideas off them, and heard what they thought.

A lot of people were very generous with their time and thoughts, and that allowed us to iterate quickly and get to a v0.1, where we finally officially announced the tool, just before KubeCon. Since then we've been pretty happy about the reception. We're now at over 1k stars in three months, which is above what we would have thought, the kind of traction we were hoping to see. That shows us we've built something that addresses a pain point people actually have. Now we're getting a lot of feature requests and just trying to keep on top of them and make sure that people are supported in their use of the tool.
Mike: Yeah, that's awesome. And speaking as a onetime newbie in Kubernetes, those manifests and the misconfigurations you can have in them can be a real pain, especially when you're trying to create policy across different teams, so it makes sense. When I got to take an in-depth look at the tool, it really stands out in how it can be applied. It's obviously a CLI, which can be implemented as part of a pipeline, whether that's a GitHub Action or a GitLab runner or something like that.
Mike: What was the design consideration for using Go and an architecture similar to kubectl? Was the thought just to keep the knowledge gap low, to make it easy to implement? You do see other policy tools that have a higher learning curve for adoption, so I'm just curious what the thought process behind that architecture was.
Vishwa: Yeah, it's a few different things, I would say. One is that Kubernetes itself is written in Go, so we could take advantage of many of the Kubernetes libraries that they make available, whereas in a different language we would have had to re-implement some of that logic. A fun fact, which I guess not everyone is aware of, is that the source of truth for the Kubernetes object definitions is actually Go code. Kubernetes Deployments and things like that are defined as Go structs, and all the fields are member variables of the struct. All the other specifications, like the OpenAPI specification, the Swagger specification, and the protocol buffer specification, are auto-generated from the Go code with tooling; the Go code is the source of truth and the one that people modify. They've built a lot of fancy tooling, for example to auto-generate all those artifacts and to automatically use comments in the Go code as the API descriptions. So from the very beginning we had very powerful, direct access to the Kubernetes libraries. I think that's a big one.
Vishwa: Another use of the tool is as a library. We haven't had too many people use it as a library yet, although some people have requested it, which gives us some sign that there is interest. We wanted it to be possible for people to use KubeLinter as a library in other applications, if they just want some of the functionality and want to call the functions directly. Typically most of the users in our ecosystem work in Go, so implementing it in Go meant we would probably have maximum utility as a library for end users. If we had implemented it in a different language, they would not be able to use it as a library; they would have to run it as a subprocess and parse the output and things like that, which is what they have to do today if they use a language other than Go. So this kind of optimized for the most common use case there.
Mike: Yeah, that makes sense, thanks; that's a great answer. This one I'll leave to Vishwa first, because you were sort of the spearhead of the project: what were some of the challenges, whether technical, architectural, or people-related, with creating an open source tool and maintaining it?
Vishwa: I would say it's something we didn't know a whole lot about before we did this project, at least based on the experience we had on the team. So some of it was just figuring out the basics: how do you get feedback from users, how do you build a community around it, how do you make it so you can quickly get and act on feedback? Figuring out the nuts and bolts of it was one challenge. In terms of writing the code itself, I think that was actually among the easier parts; figuring out some of these other things has been a bit of a challenge. Another is balancing the work we do as engineers day-to-day on our product code with the time spent maintaining KubeLinter.

And the third thing, I would say, is that to some degree we've had to be driven by users. We follow our issues very closely, and a lot of what we implement depends on the issues that people file, as well as the pull requests that people send. That's a different mental model from how some of our internal tooling or product code gets developed. In a way it's better, because we're hearing more directly from our users, but in a way it's also less predictable.
Mike: Yeah, I feel like there can be a lot of static, open issues that might not be relevant, or that would take the product in a different direction or carry a lot of technical debt, so you have to sift through those. Speaking of policies: KubeLinter ships with 19 policies by default, and Koki, I know you've been interacting with the community about what policies should be added. So I'm just curious what you think some of the best-practice policies to enforce are, and what policies you think should be enforced in the future.
Koki: Yeah, I can probably start with a couple of examples of what we do today. For example, we have a built-in check that flags containers running as root, with root privileges, because by default containers should not be running with root privileges unless you really need that. So we flag those. Another thing the built-in checks cover is, for example, containers that don't have a CPU or memory limit set, because as a security best practice you should always give your containers some amount of memory and CPU limits. Another built-in check looks for a read-only root filesystem, because by default you should configure your container's filesystem to be read-only, things like that. We have a full list of built-in checks on our GitHub page for people to check out if you're interested, and we're planning to add more.

For example, one thing we would like to have, which I talked about a little at the beginning, is flagging pods that are not selected by any network policy, because as a security best practice every pod should be matched by some network policy; otherwise, by default, a pod's traffic is wide open, which we should avoid. As for other individual checks, another one I can think of is making sure each pod has its own security context set, which we don't really support today. I could talk about more checks, but I think these two are probably the immediate individual checks we can add to the KubeLinter project.
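A sketch of a Deployment pod spec that would satisfy the built-in checks Koki lists (run as non-root, resource limits, read-only root filesystem). The field names are standard Kubernetes; the workload names are illustrative.

```yaml
# Hypothetical Deployment fragment showing the fields these checks inspect.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 2
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      containers:
        - name: app
          image: example.com/app:1.0
          securityContext:
            runAsNonRoot: true            # don't run as the root user
            readOnlyRootFilesystem: true  # read-only root filesystem
          resources:
            requests: {cpu: 100m, memory: 128Mi}
            limits: {cpu: 500m, memory: 256Mi}  # CPU/memory limits set
```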
Koki: Another, bigger thing we could add is schema validation. That's not a check, but it's something we can add to the existing KubeLinter project, because as of now KubeLinter only checks for security best practices, not for, say, Kubernetes schema validation. So that's not a check, but it's also something we can work on.
Koki: Exactly, yeah. As of now we don't really check that, at least for pods. There are some things we do with the container security context, like checking whether certain Linux capabilities have been dropped or not, but at the pod level we don't really do anything with the pod security context, so we would like to add some checks toward that.
Mike: Yeah, awesome. So we've mentioned all these checks and policies, but it's also about how the tool can be configured and made reproducible, especially across teams. How does KubeLinter go about creating that reproducibility, looking at it in a policy-as-code way where you can version-control these things?
Koki: Right. In terms of version-controlling the checks themselves, I don't think we currently support that. The flexibility of checks is still on this dimension: of course we have the list of all the default checks, and you can always enable or disable some. There is a config file you can pass to KubeLinter to specify all these things. And those are all just the default checks; you can also write your own custom checks.
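A sketch of what such a config file can look like. The structure (`checks` with `include`/`exclude`) follows KubeLinter's documented config format, but the specific check names here are examples; verify them against the current list in the docs.

```yaml
# .kube-linter.yaml (illustrative; see the KubeLinter docs for the full schema)
checks:
  # Start from the default check set, then adjust it.
  exclude:
    - "unset-cpu-requirements"     # turn off a default check
  include:
    - "required-label-owner"       # turn on a non-default check
```

Checking a file like this into the repo is what lets teams share one linting configuration.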
Koki: You write custom checks utilizing one of the check templates that we built into KubeLinter. For more details, you can check out the GitHub page or the docs page for KubeLinter for how you actually write those config files, but you can definitely write your own custom checks using the check templates. Also, for example, like I mentioned before, if containers are running in privileged mode,
Koki: you should know about that, and if you do want them running in privileged mode, you can ignore the privileged-mode check for specific objects or specific containers by specifying our reserved built-in annotation on that specific object, so that KubeLinter just stops reporting on it. So the flexibility is still on this dimension. In terms of merging configs, we don't really do that yet, but I think that's something we can think about down the road.
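The annotation mechanism Koki describes looks roughly like this, using the project's reserved `ignore-check.kube-linter.io/` annotation prefix (the check name and object here are illustrative):

```yaml
# Hypothetical fragment: suppress one check for one object, with a reason.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: needs-privilege
  annotations:
    ignore-check.kube-linter.io/privileged-container: "required for device access"
# (rest of the Deployment spec omitted)
```

Because the annotation lives in the manifest itself, the exception is reviewed and versioned along with the rest of the YAML.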
Vishwa: And yeah, the way we support people with that is: we envision KubeLinter being used in the context of a version control system like Git. By storing the KubeLinter config itself as code, and by storing things like the ignore annotations Koki mentioned as part of the YAML file itself, we ensure that at least those things are treated as code.
Mike: Yeah, so it's not so much that the internal policies are versioned, but how you apply the CLI through the config file is versioned, right? All of your annotations, all of your ignored checks, it's all documented with descriptors. And you mentioned those custom checks; you can add descriptions and remediations to those custom checks too, right? So if you're pushing it out to your developers, that's kind of the way you would recommend using it, especially if you were using it across teams.
Koki: Yes, yes. Even for the built-in checks we have a name for the check and also a description of the check, and if the checks are applied and some violations are flagged, that description is displayed along with a remediation. The same goes for writing custom checks.
Mike: One sort of application is: say you're a small team and you're just beginning to use it. You work on the CLI, or you have some intern come in; you can just drop this into the CI process, set a couple of checks, and maybe not enforce it. So obviously you get a non-zero exit code, but it doesn't necessarily have to break the pipeline if it's in, say, a dev branch. It still gives feedback, especially to the developers: hey, watch out, these things are coming. And then you can enforce it before you get to the testing stage. I found that really cool. I also wanted to give a quick shout-out, because you mentioned the GitHub repo a couple of times: there are resources in the ON24 console, so you'll see the GitHub repo, and there's a blog on StackRox that goes in depth on KubeLinter if you want to learn more. And you have your own Slack channel too, right? KubeLinter's got its own Slack channel.
Mike: Ernest asked "what about versioning"; he missed the question. We were just talking about how the policies themselves aren't versioned, but the configuration of how you apply KubeLinter, and the annotations themselves as part of your Kubernetes objects, whether that's ignoring checks or something like that, are going to be versioned using your VCS. So it's basically using existing functionality in Kubernetes and building a tool you can strap on a little closer to the developer and enforce in the pipeline. That was the context of what we were talking about, but thanks for the question. Speaking of CI workflows, do you have any recommendations?
Vishwa: Yeah, so this is one of those things where we have some thoughts, but we're also very interested in learning how other people do it and what kinds of use cases people have in general.

We built the tool to be fairly flexible and simple to configure, but the high-level vision is that the CI workflow involves downloading the KubeLinter binary from a well-known location, like our GitHub release artifacts, and then running it on whichever files you want. Typically we would just say run `kube-linter lint` at the top of your directory structure, and then we automatically walk all the files in all the subdirectories, look for anything that looks like a Kubernetes manifest, and run it through our linter. That way, as new files are added, they get linted automatically. Then we recommend checking the KubeLinter configuration file into your repo with the rest of your code, so that your custom checks, or your list of enabled and disabled checks, all live in code and get modified along with the rest of your code.

That's the high-level recommendation, and we do have a GitHub Action that makes this very easy. I think it's about three lines: you just add the config file and add the GitHub Action, and it'll just work.
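The GitHub Action setup Vishwa mentions is roughly the following. The action name and inputs match the project's published action at the time of the talk, but treat the version tag and paths as illustrative and check the repo for the current usage.

```yaml
# .github/workflows/kube-linter.yml (a sketch)
name: kube-linter
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Scan manifests with kube-linter
        uses: stackrox/kube-linter-action@v1
        with:
          directory: deploy            # where your Kubernetes manifests live
          config: .kube-linter.yaml    # checked-in config, as recommended above
```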
Mike: Yeah, awesome. Staying on use cases: are there any policies you find more in demand, or any you really want to implement? I know most of the objects KubeLinter looks at are Deployments and Services. Is there a push for other objects to be validated, and what are some of the challenges with validating certain objects over others? With the focus being on Services and Deployments, what about moving on to network policies and things like that? Just curious, Vishwa, how you look to implement that and what the next steps are.
Vishwa: Yeah, so in terms of complexity, it's not super complex to do; it's just a question of us going and doing it. In the beginning we focused on Deployments, or really anything that runs a pod, because that's the fundamental unit of how things happen on Kubernetes, so we focused on that to start with. But we do want to expand our focus.

I think one of the challenges is just that there's a big surface of things, and this is especially true when you include custom resources, because so many things in Kubernetes are now defined as custom resources, and people are coming up with some pretty interesting requests, like: can you help us lint Prometheus configuration files, which have their own schema, and things like that? So there's a couple of things. We could implement these as one-offs when it comes to custom resources, but I think we'd want to make it flexible enough that if you have a new custom resource you want to run KubeLinter against, you can point the tool at a schema file for that custom resource and have it lint the resource according to that, instead of making the tool aware of each custom resource the way it's aware of each built-in Kubernetes resource.

So those are some architectural decisions we have to make, and it's just a question of going out there and doing it. A lot of our focus in the last few months has been on usability enhancements: we've been adding new command-line flags, and, for example, people wanted JSON output, so we added that, and things like that. That's a big one that people have asked for, and someone at StackRox actually started working on that as a hackathon project, so hopefully we get it through to completion and can release it to the world soon.

So I think we just need to go ahead and implement some of these things, and, as with everything, we're also very much looking for help from the community. If anyone is interested, we'd be interested both in hearing what other custom resources or built-in resources you would want to lint, and in you sending a pull request, getting your hands dirty and writing some code. We want to be supportive of that, and if you join the Slack channel and ping us, we're super happy to talk to you and talk through how you can contribute, because there's no shortage of things to do.
Mike: Those are good problems to have, right. We had a question about KubeLinter supporting compliance policies. StackRox, the product, does HIPAA and NIST 800-53 compliance, so Aaron, that's the paid version; KubeLinter is mostly just configuration help upstream. Anyway, moving on to KubeLinter's flexibility and integration: we've obviously talked about CI pipelines and the config files. What makes KubeLinter so easy to install and get going? Because we've talked about how one of the issues with security is that most organizations have a security team that kind of crams policies down on the developers, and really we want to take advantage of containers and Kubernetes. So what makes KubeLinter so different and so easy to adopt versus other products?
Vishwa: Okay, I can go. I think we've tried to make it as lightweight as possible to use and install. The big one is that it's written in Go, and the nice thing about Go is that it produces these little self-contained binaries. There's no dependency whatsoever on your system: you just download the file and it'll work, without you having to install a bunch of other things or add something to your path and whatnot.

So I think that's the big one, and beyond that we've generally erred on the side of making the tool simple, with fewer options, and then slowly added more flexibility and functionality as people request it, but thinking carefully before each addition. When it comes to configuring a tool, there's always a big trade-off between flexibility and complexity, and you want to make sure you're giving people the flexibility they need without making it too complex, erring on the side of keeping it simple to use.
Vishwa: The other thing I'd say has helped make it easy to use is that we've tried to make the out-of-the-box experience good. You just download the tool and run it; we're kind of opinionated, we say these are the default policies we think make sense, and you run it and get useful output immediately, and then you can tweak stuff from there.

That's another thing I think is pretty important: give people some opinions and don't make them figure out everything. At one extreme we could enable no checks and force people to go through the list and add stuff themselves; at the other, we could enable every check, have them drown in noise, and leave them thinking they're never going to get through the output of this tool.
Mike: Yeah, you mentioned default checks, and I think that was effective feedback, because I've always found with tools that just being swarmed with information doesn't necessarily mean you're actually observing what the system looks like, or the state of security in your system. So I know there are 19 total checks but the default is 13, and I'm assuming that's for more observability and actionable intel, because not all the checks are weighted equally.
Vishwa: Yeah, pretty much, and some of them are not even things you would want everyone to enable; some of them are almost examples of custom checks that people may want if they're enforcing a certain kind of policy. For example, a required annotation: we have a policy at StackRox where we say each deployment should have an email annotation, because we think that makes it very clear how you can get in touch with whoever deployed it.

We think that's a good thing for people to do, but not everyone may have the same annotation; some people may have a different way to specify who owns a deployment, or may just want a different annotation key. So in that case we'd expect people not to use this built-in check by default but to slightly modify it; the built-in check is really just an example of what you can do with the tool, to give people ideas. And so we would not want to enable that by default. So there are a few different cases of which things make sense.
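A custom check like the email-annotation policy Vishwa describes can be sketched using a KubeLinter check template. The `required-annotation` template and `key` parameter names follow the project's docs, but treat this fragment as illustrative and confirm the exact template schema there.

```yaml
# Fragment of .kube-linter.yaml: a custom check built from a template.
customChecks:
  - name: required-annotation-email
    description: "Deployments should declare who owns them"
    remediation: "Add an email annotation so the owner can be contacted"
    template: required-annotation
    params:
      key: email   # flag objects missing this annotation key
```

Teams that use a different ownership convention would only change `name` and `params.key`.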
Vishwa: Another one we're conscious about is pod anti-affinity. Pod anti-affinity is a way to say: for this deployment, if it's running multiple pods, run the pods, as far as possible, on different nodes. That's generally what you want, because the whole point of having multiple pods is usually to increase availability, and one of the events you're trying to guard against is a node going offline. If all the pods for the same deployment are on the same node and that node goes offline, then you haven't really benefited from having multiple pods.

So pod anti-affinity is how you tell Kubernetes: do your best to place these pods away from each other. We have that as a check, but not everyone will enable it, so by default we only apply it if your deployment is configured to have two or more replicas. We made the judgment call that for deployments that are just going to run as one pod, we don't want to force you to do that. Those are some examples, and you can always write your own check to, say, enforce pod anti-affinity on everything; you just add that to your config file and it will work. But we chose a default we thought made sense.
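The pod anti-affinity Vishwa describes is expressed in the Deployment's pod template. A common "preferred" (best-effort) form, with illustrative labels, looks like this:

```yaml
# Fragment of a Deployment pod template: spread replicas across nodes.
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname   # one node per topology unit
            labelSelector:
              matchLabels:
                app: demo   # pods of this deployment repel each other
```

Using `preferred...` rather than `required...` matches the "do your best" behavior Vishwa mentions: the scheduler spreads pods when it can, but still schedules them if it can't.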
Mike: Yeah, and Koki, you touched on CPU time, setting limits and requests. That's what I always see people gloss over, but it's really important, especially when you start using autoscaling rules, not just from a security standpoint but, if you're in the cloud, from a bank-account standpoint, if you're autoscaling on the wrong metrics. So those checks are really useful, and it's interesting to see how security is woven into those day-to-day exercises.

When you're talking about scaling and managing pods, misconfigurations become more costly as you get into clusters at scale, so setting those rules, especially upstream, is very important. A quick question: does KubeLinter go hand in hand with Kubernetes versions? How do you structure it around the API? I believe there's a policy flagging any Deployment that isn't, I think, apps/v1.
Koki: Yeah, for sure. API versioning is actually one thing we want to do but haven't done yet. That goes along a similar line to schema validation, because on a different Kubernetes API version you have different objects, or the fields differ.
B
C
Yeah,
but
one
one
general
thing
in
this
that
I
want
to
add
is
you
know
we
want
to
make
it
so
that
you
don't
need
a
different
version
of
kube
linter
for
different
kubernetes
versions.
So
exactly.
C
When new Kubernetes versions come out, you'll need to upgrade KubeLinter to support the things that are new in them, but we want to make it possible for you to just install KubeLinter once and use it on all your clusters, irrespective of the version they're on. Right now, all the checks we have are broadly applicable across all supported versions of Kubernetes. But, like Koki said, we will need to do schema validation and things like that.
C
When we add that, then we can give people more fine-grained support, where they can say: this is the exact version that I have, am I following the best practices for that?
A
Yeah, awesome. James, I'm assuming you're asking about CIS benchmarks, and I'm assuming the question was pointed at the StackRox application, because StackRox does help you with and supports managing CIS benchmarks as well. I don't believe KubeLinter does, right?
C
We don't, and I think to some degree it's a slightly different domain, because a lot of the checks in the CIS benchmarks, or at least the ones for Kubernetes, which we do support in StackRox, are things related to the configuration of the cluster rather than of individual applications on it.
C
So, for example, the way we do it is we go to each node in the cluster and look at the files on the node to see how the node is configured. But the node configuration is not related to the application configurations in the YAML files, so to some degree it's not a thing that KubeLinter can do.
C
I think there are maybe some checks that are related to application configurations, but those are typically not stored in your code; those are the kube-system kind of pods that come up, and more often than not they're just going to be whatever you get with whatever tool you use to install Kubernetes.
A
Yeah, and CIS benchmarks are tough, especially once you get onto a cloud platform, because really the benchmarks need to be applied to the control-plane nodes too, during cluster setup. So there are some things there that I think are a little bit out of KubeLinter's scope. Funny question: is KubeLinter going to be tuned specifically for OpenShift, now that there is an OpenShift CIS benchmark?
C
Yeah, again, we do want to support OpenShift in KubeLinter, and to some degree we do, because many of the kube resources are just the same. But we want to support DeploymentConfigs and BuildConfigs and security context constraints and other things that exist only in OpenShift and not in regular Kubernetes, and we see this roughly the same way we see the general question of how we support more kinds of resources in Kubernetes.
C
So yes, we do want to support those. With respect to the CIS benchmark: we do support the OpenShift CIS benchmark in our product, and we've taken a look as part of that, and I think the same comments I had earlier apply, which is that these are not exactly within the domain of something you can lint in the Kubernetes application YAML files; they're something you just have to check in the cluster.
A
Yeah, and there are already open-source tools that check these things, and paid tools that check them, so KubeLinter staying within the domain of checking misconfigurations within the cluster makes sense, especially because that's what developers are going to work with. If it's a tool that's supposed to be shifted left for developers, that's its domain, versus some other tools for CIS benchmarks.
A
Those are going to be more for your cluster admins and your dedicated security people. So it's a great question, and it really touches on a lot of the challenges with security tools: how do you develop a tool, and who do you develop it for? Like StackRox: who's going to use it? Developers, maybe admins, most likely the security people, for compliance issues. So how does that communication happen? And I really like how KubeLinter shifted the focus towards the...
A
I guess you could say developers are now developer-slash-DevOps engineers; there's always that weird fuzzy line, and I'm sure you've experienced it in your software engineering journeys. So KubeLinter really does a good job of fitting into that niche. One quick question, which was touched on earlier: what objects are covered by KubeLinter? I know you said you have custom checks.
C
Yeah, so we support all objects for some checks. For required labels, for example: all objects have labels, so you can require those on any object. And then there are some generic checks we have, like deprecated API versions, which are broadly applicable, so you can say: I want to disallow these API versions.
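A sketch of how those two kinds of generic checks might be expressed as custom checks; the template names and parameter keys below are assumptions for illustration, not necessarily KubeLinter's exact built-in names:

```yaml
# Illustrative custom checks; template and parameter names are assumed,
# check the KubeLinter docs for the actual built-in templates.
customChecks:
  - name: require-owner-label
    description: "Every object should declare an owning team."
    remediation: "Add an 'owner' label to the object's metadata."
    template: required-label
    params:
      key: owner
  - name: no-extensions-v1beta1
    description: "Disallow a deprecated API group/version."
    remediation: "Move the object to a supported apiVersion such as apps/v1."
    template: disallowed-api-obj
    params:
      group: extensions
      version: v1beta1
```

Because labels and apiVersions exist on every Kubernetes object, checks like these can apply to any kind of resource, which is the point being made above.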
C
So there are a few things where we support everything: network policies, role bindings, whatever it is. But I will say that the checks that go deeper into the specific configurations of different objects are currently mostly focused on things that create pods, so deployments, daemon sets, or maybe just pods directly. That's where we've gone deeper, because those are some of the highest-priority things, but like Koki said earlier, we do want to support things like network policies over time.
C
But at this time, we support them for some of the global checks; we don't yet have deeper checks that understand those objects.
A
Yeah, James said this tool is more for developers rather than ops or security folks. In terms of day-to-day use, I think James touches on it, but really it's the ops and security folks who are going to look and say: hey, we don't want any privileged pods running, and then they're going to implement the tool into, say, the CI pipeline, so that they get a notification and the developer gets feedback. So I wouldn't say it's more for developers.
A
It's a tool that's so flexible it can fit into many different teams, however they're organized. If you're just starting with Kubernetes and you want to lint your files for best practices, I think it's a great start. Or if you're an operations team and you have, you know, five teams, or maybe a data science team that's in their own cluster and wants everything privileged, because, you know, they're the data science team...
A
Well, you can use KubeLinter with a more lenient, let's say, policy configuration as part of their CI pipeline. So there are a couple of different ways you can apply it, correct me if I'm wrong, guys. But I also see it as an education tool.
A
I can't really show the output here, but again, if you head over to the GitHub repo there's a bunch of documentation, as well as example outputs. When you have a misconfiguration, the output includes the Kubernetes documentation as to why the policy exists, and then the remediation steps too, which I thought was awesome that you put in there. And especially with the custom checks, you can actually create your own remediation steps.
A
So it's kind of a catch-all tool that I think is really easy to implement. Is that a fair assessment?
A
If nobody else has any other questions, I don't have any other questions for you guys. Is there anything that you're looking forward to with KubeLinter this year, or any open-source tools? I know it's a general question; I just kind of wanted to pick the engineers' brains as to what they're excited about.
B
One thing I've been thinking about working on is supporting line numbers on the violations, and also automatic rewrites, which is what most other kinds of lint tools do. For example, if it's an easy check, like a container that has a non-read-only root filesystem, that's a relatively straightforward check and a more mechanical thing to fix.
B
You essentially just have to insert a readOnlyRootFilesystem key with a value of true under your container's securityContext.
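The fix being described would look something like this in a pod spec; the container name and image are placeholders:

```yaml
# Illustrative container spec fragment; the securityContext entry is
# the line an automatic rewrite would insert.
containers:
  - name: app
    image: example/app:1.0
    securityContext:
      readOnlyRootFilesystem: true
```

Because the fix is a single well-defined key under a well-defined path, it's a natural candidate for the kind of auto-rewrite feature discussed here.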
B
These kinds of things, I think, would be cool for KubeLinter to support: inline rewrites, rather than just reporting the violation and waiting for users to fix it. But yeah, this is again going to be some amount of work, and we'll probably have to plan for it alongside the schema validation we've been repeatedly talking about. So that's definitely one thing that I think KubeLinter can potentially work on in the future, or maybe sometime next year, we'll see.
B
Yeah, mostly right now it's just a CLI tool, obviously. Also, actually, one thing: KubeLinter has a Docker image that's under StackRox's Docker Hub. So if anyone doesn't want to use the GitHub Action, which is probably the easier way to do it, that's also another option there.
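A sketch of how the Docker image might be wired into a CI pipeline; the image name, tag, and the `deploy/` path are assumptions for illustration, so check Docker Hub and the repo docs for the current ones:

```yaml
# Illustrative GitLab CI job using the KubeLinter Docker image.
kube-lint:
  image:
    name: stackrox/kube-linter:latest
    entrypoint: [""]   # override the image entrypoint so `script` runs as-is
  script:
    - kube-linter lint deploy/   # lint all manifests under deploy/
```

A non-zero exit code on violations is what lets the job fail the pipeline and surface feedback to the developer, which is the workflow discussed earlier.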
A
Yeah, and there's a good question about KubeLinter support for things like Slack or webhooks. The only way you'd really get a lot of use out of that is if you were to build KubeLinter into your pipelines. Obviously, GitHub, GitLab, Jenkins, CI pipelines of that sort, they have those integrations; they have webhooks and support for notifications.
A
So you'd put it in, probably say enforce a policy, and then you'd get a notification through that to Slack that somebody violated the KubeLinter policy. But if you want to contact the maintainers on Slack, you can ping them and let them know that you want more functionality. Sorry, I'm signing you guys up for a lot now: we've got UI, Visual Studio Code integration...
A
What else? We've got a lot more issues for you guys to tackle now. So I appreciate you signing up for this chat. Other than that, I think that's all we have for today, all the questions. Thank you to everybody who joined; I appreciate the questions. If you want to learn more, there are a bunch of resources, and again, feel free to ping us on the Slack channel; Koki and Vishwa would love to get back to you.
C
And thanks, everyone, for joining, and again, feel free to check out our GitHub and join our Slack and hit any of us up; we're more than happy to talk to you.