From YouTube: StackRox Community Meeting #13 - 2023-04-11
Description
The StackRox community meetings are held on the second Tuesday of every month. We use this time to get together and discuss gaps in the product and how best to move forward. Contributors are rewarded with StackRox gear as the RoxStar of the month.
- If you want to learn more about the project, head to StackRox.io
- The project's code repository can be found at https://github.com/stackrox/stackrox
A
Yes, hello and welcome. Actually, first: we just finished a year. This is the 13th meeting, so our first meeting was last April, when we open sourced StackRox for the first time. For those following along, thanks for making it through; we appreciate it. Along with a year of open source StackRox, we also have the 4.0 release coming out, so we'll be chatting about that. I'm one of your co-chairs, Michael Foster, and I'm joined by my other co-chair, Matthias Medinger. We are looking forward to, hopefully, an insightful meeting.
B
We are planning to release a little later this month, but we'll update you with a specific date when we have a bit more clarity on how and when that's going to happen. With that update, the main and biggest change to expect is that we are changing the default database that Central runs on. And not only Central: a lot of the data keeping of our platform currently runs on an internal database, which we are switching to Postgres. With 4.0, we will switch to Postgres.
B
You will see a new database container, and what that also means is that we will deprecate the current internal database, so be mindful of that. I don't know off the top of my head, but I don't think we have announced the deprecation of the current database just yet. But once we announce it, from that point on we usually give it two to three releases, so roughly six to nine months, to actually phase it out. And this change to Postgres is a big upgrade.
B
Because of that, we actually also decided to switch our default channels for the operator. So if you use ACS with the operator, you will not get an automatic upgrade to 4.0; you will actually have to manually go in and change the default channel from latest to stable. Currently, the ACS operator always updates on the latest channel; from 4.0 on, we will start using the stable channel.
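For operator users, the channel switch described here boils down to editing the `channel` field of the operator's OLM Subscription. A minimal sketch follows; the Subscription name and namespace are assumptions, not from the meeting, so check `oc get subscriptions -A` for the actual values in your cluster.

```shell
# Write the channel patch to a local file. Applying it is left to the
# reader (names below are assumptions):
#   oc -n rhacs-operator patch subscription rhacs-operator \
#     --type merge --patch-file channel-patch.json
cat > channel-patch.json <<'EOF'
{"spec": {"channel": "stable"}}
EOF
```

Pinning the channel to stable means 4.0 only arrives when you consciously move the channel forward, which is exactly the behavior described above.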
B
This is mainly to prevent auto upgrades to 4.0. So if you want to upgrade and you're running on the operator, make sure to actually check our upgrade documentation and switch the default channel. With that, we also have more exciting features to introduce in 4.0. For the OpenShift folks, we will introduce RHCOS node scanning, which is a feature that basically gives you vulnerability insight into the actual nodes.
B
Currently
we
are
only
or
we
are
primarily
looking
at
the
docker
and
kubernetes
runtime,
as
well
as
some
other
basic
information
about
the
node.
With
4.0,
we
actually
started
also
looking
at
package
vulnerabilities,
so
installed
packages,
and
maybe
and
and
basically
all
the
language
components
all
these
system
components
that
we
can
find
through
the
through
the
package
database
also
with.
A
I was going to say real quick, Matthias, because we have Boaz on the line: we're actually looking for some clarification on Network Graph 2.0. I heard a lot of great feedback; visually, you know, a tremendous improvement. Just curious what the plan is for Network Graph 2.0 in the newest release. Do you have any updates on that one, Boaz? And although I appreciate you coming on, we've got somewhat of...
C
A
packed
house
yeah
thanks
guys,
hey
great
to
be
here
so
yeah
thanks.
We
did
have
great
feedback.
We
are
announcing
the
deprecation
of
network
graph
1.0.
C
That
said,
it
will
still
be
there
for
the
duration
of
two
releases
and
the
biggest
change,
and
the
reason
why
we
actually
take
bothering
like
we
don't
need
permission
to
change
a
feature,
but
the
reason
we're
doing
this
is
because
we're
taking
away
some
functionality,
specifically
the
ability
to
apply
a
network
policy
from
within
the
network
graph.
We
are
not
doing
that
anymore
and
the
reason
is
that
we
believe
people
should
apply
Network
policies
outside
and
in
their
during
their
automation.
This
feature
was
was
here
for
historical
reasons.
C
A
Awesome. So for a couple of releases it will be just 2.0 as the default, and I'm assuming the "2.0" is going to be dropped and it's just going to be "network graph" when that change finally happens. So that's awesome. Matthias, I know that along with the Postgres change, that also means the collections feature is going to be permanent. Boaz, you were also working on this in 3.70: the collections feature really helps a lot for, let's say, policy scaling and management, and you will only get that feature with the database upgrade. Any thoughts?
C
I
mean
thanks
for
the
opportunity.
It
was
with
very
limited
exposure
because
obviously
only
people
who
had
postgres,
so
it's
really-
we
just
expect
people
to
start
seeing
it
now
with
postgres
becoming
widely
used.
Anyone
on
forto
will
get
to
to
just
experience
the
collections
and
as
we've
messaged
in
the
past
and
I
guess,
it'll
be
good
to
just
talk
about
what
collections
are
good
for.
What
are
we
planning
to
do
with
them?
The
their
limited,
usefulness
and
4.0
out
of
the
out
of
the
gate?
C
We
are
taking
a
phased
approach,
it's
a
pretty
significant
change,
and
so
we
want
to
learn
as
we
go
before.
We
make
really
huge
changes
and
because
collections
are
references
that
are
resolved
in
runtime,
they're
extremely
powerful
and
they
they
can
create
all
kinds
of
weird
situations
where
you
want
to.
You
can
Define
things
in
different
ways
and
you
can
create
conflict,
so
we
without
going
into
a
deeper
conversation
which
I'd
love
to.
C
If
you
want
to
set
up
time
and
I'd
love
to
just
talk
about
that
right
now,
you
can
use
collections
in
order
to
create
your
infrastructure
for
reporting,
All
Foreign
abilities.
It's
a
really
good
exercise
to
start
with.
Pretty
soon
we
want
to
see
collections
used
as
filters
So,
eventually
filter.
Anything
using
a
collection
is
very
powerful,
so
you
don't
have
to
go
and
do
that
one
by
one
this
cluster
or
those
deployments,
those
conditions
you'll
be
able
to
do
that.
C
So I might have a cluster for dev and a cluster for QA. Within the dev cluster, Matthias might be on a privileged team, a mature team that is allowed to do anything they want; I'm not going to really want any violations reported for them. But then there's Mike, and, I don't know, Mike's team is just a beginner, and we want to make sure you do the right thing. Just teasing.
C
All
is
really
provides
a
language
for
you
to
just
Define
that,
once
you
can
do
that
today,
with
labels
right
I
mean
you
can
say:
what's
the
big
deal,
I
have
I,
can
add
a
label,
and
that
then
automatically
my
my
application
is
already
defined
by
kubernetes.
The
fact
is
that
security
organizations
actually
don't
like
that,
a
lot
because
they
don't
have
control.
So
this
gives
the
organization
the
tools
to
work
across
between
security
and
Dev
teams
to
agree
on
those
terms,
and
so
security
can
say
you
know
what
okay
go
ahead.
C
You
define
that
label.
Any
application
comes
up
with
that
label
will
treat
it.
As
you
know,
Mike's
team
you
can
do
whatever
we
already
agreed
that
you
can't
the
security
also
can
enforce
it
in
different
ways.
So
that's
in
a
nutshell,
the
the
idea,
Beyond
collections,
I,
hope
that
wasn't
too
confusing
no.
A
That
wasn't
and
I
think
we
should
definitely
do
a
little
bit
more
of
a
deep
dive
in
the
future,
once
the
release
happens
and
once
I
get
to
play
around
with
the
feature
a
little
bit
more
just
moving
on.
In
the
sake
of
time,
thanks
for
that
Boaz
Matthias,
so
we
changing
some
of
the
default
resources
in
the
sensor.
You
want
to
expand
on
that
a
little
yes.
B
So
we
have
in
the
in
the
past,
we
have
seen
that
actually
sense,
or
especially
in
larger
deployments,
needs
a
little
bit
more
of
a
bump
and
to
accommodate
for
that
so
that
everyone
is
aware
we
are
removing
the
requests
of
or
the
resource
requests
of
sensor
up
a
little
bit
as
well
as
the
limits.
So
keep
that
in
mind,
if
you
are
running
in
a
rather
constrained
environment,
that
the
request
for
the
for
Ram
and
CPU
course
for
sensor
will
change
to
two
cores
and
four
gigabytes
of
RAM.
Now.
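As a sketch of what tuning those defaults back down might look like in a constrained cluster, here is a hypothetical Helm values override. The `sensor.resources` key layout is an assumption about the secured-cluster-services chart, so verify it against `helm show values` for your chart version.

```shell
# Write a values override that lowers the new sensor defaults
# (2 cores / 4 GiB requests). Key names are assumptions -- verify with:
#   helm show values rhacs/secured-cluster-services
cat > sensor-resources.yaml <<'EOF'
sensor:
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 4Gi
EOF
# Apply with something like:
#   helm upgrade -n stackrox stackrox-secured-cluster-services \
#     rhacs/secured-cluster-services -f sensor-resources.yaml
```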
A
Just
be
clear,
those
are
the
defaults
if
you're
running
in
a
resource
constraint,
smaller
environment,
you
can
obviously
change
that
if
you
understand
your
resources
that
are
that
are
necessary.
This
is
just
so
that
by
default,
especially
in
bigger
environments,
that
everything
will
work
smoothly
and
it's
a
lot
easier
to
shift
down
when
you
know
your
resource
restrictions
and
requirements,
correct,
correct.
B
So
you
can
always
go
in
and
obviously
adjust
these
be
it
in
the
helm,
values
or
even
I
mean
we
I.
We
would
recommend
to
do
that
in
the
helm,
values
or
configuration,
but
you
could
all,
but
you
can
as
always,
do
it.
What
works
for
your
or
do
what
fits
your
workflow
best
and
how
you
actually
deploy
ACS
and
with
that
said
additionally,
to
that,
if
you
are
using
RH
course
node
scanning,
there
will
be
an
additional
container
in
The
Collector
Diamond
set.
A
B
The additional node scanning container will only be there if you actually choose to use it: if you run on OpenShift and you actually want to make use of the feature. And yes, there will be an additional environment variable, not a feature flag, that you can set to switch it off, even if you're running on OpenShift and you don't want that feature.
B
So
there
is
an
additional
point,
which
is
we
have
permissions
or
I?
Think
we
yes
permission
sets
we
call
them.
So
we
have
the
analyst
permission,
and
this
one
is
a
rather
old
permission
and
it
will
change
with
4.0.
We
had
to
adjust
it
a
little
bit.
It
used
to
be
able
to
access
Administration
resources,
which
it
will
not
be
able
to
anymore.
B
There
is
a
little
bit
more
information
on
this
in
our
changelog
and
also
in
our
documentation.
Obviously,
but
this
is
the
most
important
change
of
this
role.
So
if
you
use
an
automation
or
an
access
token,
that
is
using
the
analyst
permission
set
to
do
something
in
administrative
resources
of
ACS.
It
might
be
a
good
idea
to
proactively
create
a
new
permission
set
that
explicitly
is
allowed
to
do
to
access
administrative
resources
and
use
that
one.
Instead
of
the
analyst
permission
that
you
might
be
using
today,
A
Gotcha. Silence is okay. Again, if you have any questions, feel free to leave them in the Slack or in the documents; we just try to collect everything and then talk about it every month. The last question about the release, asked by Neil, is: is there a beta test? Now, I know 3.74 comes with some of the features, like Network Graph 2.0 and Postgres, though not as the default database, so that could be somewhat considered the beta. But Matthias, I'm just curious what your thoughts are on upgrading. I don't think there's an official beta build for people to test for 4.0, but having those features in 3.74 also allows people to test them out.
B
We
I
I
think
it
is
unprecedented.
So
far
as
at
least
since
I
joined
stack,
rocks,
I,
don't
think
we
ever
had
a
beta,
so
I
would
argue.
We
don't
really
do
that
in
the
traditional
sense.
What
you
can
do
well,
Foster
actually
already
hinted
at
is
74,
so
the
current
release
came
with
the
option
to
enable
host
gray
as
a
database,
which
also
brought
with
it.
B
The
collections
feature
the
network
graph
2.0,
as
well
as
some
other
under
the
hood
import
improvements,
which
means,
if
you
want
to
actually
test
this,
you
could
roll
and
upgrade
to
the
274
with
the
database
enabled,
but
the
the
difference
between
migrating
to
74
with
the
database
flag,
enabled
and
upgrading
to
4.0
is
actually
not
that
big.
So
technically
we
or
we
have
invested
a
lot
of
time
and
testing
into
the
documentation
and
testing
all
the
different
steps,
including
already
writing
and
testing
rollback
documentation.
B
So,
as
always,
we
make
very
sure
that
you
can
always
go
back,
but
that
said,
as
usual
with
upgrades,
we
would
recommend
to
test
this
for
your
individual
environment
because,
as
it
goes
with
kubernetes
deployments,
these
are
highly
specialized
and
there
is
no
way
that
we
could
cover
every
possible
scenario
so
to
be
on
the
safe
side.
I
would
recommend
to
always
test
in
a
staging
or
Dev
environment
before
upgrading,
prod.
C
It was in the release notes, and we encourage people to test it out, but we also worked actively with a few customers to make sure we tested it in as realistic an environment as we can. Again, to Matthias's point: I cannot overemphasize how important it is to test this; run it in your staging environment. That is part of the reason this is a major release.
C
It's
a
4.0
so
that,
as
Matia
said,
the
channel
is
updating
you're
not
going
to
automatically
update
to
4.0.
You
have
to
make
a
conscious
decision,
so
it's
all
you
know
supporting
our
customers
to
to
take
the
right
steps
in
doing
so.
C
That
said
again,
just
repeating
what
Matthias
has
already
told
you,
we,
we
have
tested
this
deeply
and
we
are
confident
that
this
is
GA
level.
That's
why
we're
announcing
it
GA.
So
you
are
supported
in
production.
It's
not
that
we're
we're
not
telling
you!
This
wasn't
tested,
we
don't
know
we're,
not
sure
we
are
sure
it
is
tested.
We
encourage
you
to
move
to
4.0.
A
lot
of
features
depend
on
postgres,
as
you've
already
heard.
Some
three
74
was
a
stepping
stone
and
roxdb
is
going
away.
There's.
C
No, it is going away, like, quickly, so 4.0 is the way forward. It is fully GA, and we expect customers to use it. And, Matthias, I don't know if you already said this, but we already run it in ACS as a service. We are using this technology already, so we have a lot of run hours with it, and we're very confident with it.
B
I've
actually
not
mentioned
that,
but
yes
you're,
absolutely
correct.
We
are
also
already
running
the
postgrad
version
in
our
managed
Service
already,
so
we
were
able
to
collect
a
little
bit
of
or
actually
a
lot
of
lots
of
runtime
hours
with
it
and
are
sure
reasonably
sure
that
things
work
as
we
designed
them.
C
The
point
the
point
is-
and
this
is
I-
guess
the
difference
between
open
source
and
and
and
red
hat
so
like
like
I,
said
we
are
confident.
Of
course
things
can
go
wrong,
but
we
are
confident
it
is
fully
GA
fully
supported.
So
we
are
absolutely
guiding
people
to
move
over
to
4.0.
It's
not
there's
not
a
a
lack
of
confidence
or
or
a
debate.
There.
A
Agreed
so
in
in
the
interest
of
time
as
well,
should
we
get
into
the
user
issues,
talk
about
comments,
certain
ones
we've
answered
in
the
chat
or,
for
example,
you
know
we
we
identify
a
gap.
Can
you
open
up
a
GitHub
issue,
something
small
you'll,
see
I,
put
it
in
bold
and
answered
in
the
doc,
but
I'm
just
going
to
talk
about
anything
that
we
haven't
got
to
in
the
in
the
chat
and
Dane.
Thanks
for
the
the
questions,
I
will
definitely
get
to
those
as
well
at
the
bottom.
A
B
Dane basically gave the answer already. I have checked in with our scanner folks, and the scanner database by itself is something that can go away. It is, oh God, I forgot the term for it, but basically the DB can go away.
B
But the question is how much sense that would make. Unfortunately, without knowing the exact use case that Victor has, it is harder to answer. So the technically correct answer to the question is: you can change the connection string in the scanner config map, but that will most likely be overwritten by the next Helm upgrade or operator upgrade. It is not designed to stay permanently.
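For the record, the non-persistent workaround mentioned here might look like the following. The ConfigMap name `scanner-config`, the `stackrox` namespace, and the connection-string format are all assumptions, and, as noted, the next Helm or operator upgrade will likely revert the change.

```shell
# Stand-in for a dump of the scanner ConfigMap; in a real cluster:
#   kubectl -n stackrox get configmap scanner-config -o yaml > scanner-config.yaml
printf 'connection: host=scanner-db.stackrox port=5432\n' > scanner-config.yaml
# Rewrite the database host in the connection string (hypothetical values):
sed -i 's/host=scanner-db.stackrox/host=external-db.example.com/' scanner-config.yaml
# Re-apply with: kubectl -n stackrox apply -f scanner-config.yaml
```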
A
Makes
sense
and
Victor's
comment
is
in
the
slack
Channel
Victory.
We
have
you're
watching
this
and
you
have
anything
else.
Just
add
to
the
thread:
Boaz
is
in
there.
Matthias
and
I
are
in
there.
So
if
you
want
to
clarify
happy
to
help
out
next
question,
are
there
any
plans
to
release
the
helm,
charts
that
support
declarative
Ops?
The
current
model
is
that
you
have
to
install
one
and
then
make
some
calls
to
feed
into
the
next
Helm
chart,
preventing
it
from
being
declarative.
A
This is always a challenge with Helm charts, especially with the init bundle and the values that you need to connect to Central: where you store those, and how you do upgrades, is also a security risk. If you do declarative ops and you have a CI/CD process for upgrading ACS, then you have to store those files, the secrets and init files, somewhere, whether it be in Helm or a vault or a secrets integration, and then: how do you call that? It does introduce all these various issues.
A
So
I
don't
really
see
a
scenario
where
you
can
just
inject
secrets
into
your
Helm
charts
unless
it's
pulling
from
your
own.
You
know:
secret
storage,
where
you
have
Helm
charts,
you're
gonna,
go
pull
the
secrets
of
the
init
files
and
then
do
an
upgrade,
but
Matthias
I'm,
not
sure.
If
you
want
to
shed
any
light
on
that
I,
don't
see
a
path
forward
where
you
have
like
the
secrets
in
the
helm
chart
to
make
it
all
just
declarative.
B
As
far
as
I'm
aware
of
yes,
I
know
that
there
were
some
discussions
or
architectural
design
discussions
about
what
we
could
change
about
the
process.
But
honestly
I
do
not
know
from
the
top
of
my
head
the
status
of
these,
because
it
is
still,
it
is
actually
a
surprisingly
hard
problem
to
solve
because,
as
you
said
there,
there
is
I
mean
there
is
a
multi-stage
or
step
process
that
we
need
to
follow
for
our
platform
to
be
up
and
running
there
could
be.
B
Obviously
there
is
always
options
to
change
these,
but
the
question
is
we
we
usually
try
to,
or
we
usually
strive
to
improve
the
experience
and
for
the
user
or,
alternatively,
improve
the
the
ease
of
use
for
the
whole
platform.
So
the
question
would
be:
how
can
we
construct
a
scenario
where
it
actually
improves
the
situation
well
versus
where
we
are
today
so
I
act?
So
long
story,
short
I,
don't
know
where
we
are
at
that
I.
B
I remember that we had some discussions about this, but I don't think I have spotted it in any roadmap of ours just yet. But if we get enough feedback from the community and the users, that is, as always, subject to change.
A
I
might
take
this
as
a
to-do,
because
I
think
a
use
case
showing
maybe
best
practice,
might
be
an
order
regardless
of
if
the
product
or
you
know
how
we
want
to
do
that.
It
might
just
be
hey.
You
know
this
is
where
you
recommend,
storing
these
files
and
and
upgrading
using
Helm
charts
moving
forward
until
last.
Well,
there's
a
couple
more
questions,
but
Mike
says
hello:
we
use
ACS
3.74.1.
Is
there
a
way
to
generate
Network
policy
to
include
egress
traffic?
C
Because, so, the baseline, the network baseline, by design only creates ingress policies; that's the whole network baseline, in the build and in the runtime. But let me remind everyone of the feature that Matthias and I collaborated on, which we shipped as tech preview quite a few releases ago: it allows you to generate network policies at build time.
C
We
actually
think
that
is
the
potentially
the
better
path
to
take.
This
is
the
shift
left
approach.
You
can
generate
those
Network
policies
based
on
your
yaml
code.
That's
all
it
needs.
If,
if
you're,
you
don't
need
to
do
anything
that
you
don't
already
do
if
you're
working
with
best
practices,
you
would
have
all
your
yaml
files
that
you're
about
to
deploy.
C
B
Go ahead, yeah. So what we actually do, and what Boaz so fittingly described, is that we currently have two different mechanisms. One is: you deploy something in a monitored cluster, and ACS will then generate a baseline out of it; this one doesn't generate egress rules today. The other is what you can do instead of looking at deploy time or runtime.
B
You
can
actually
do
a
static
analysis
of
your
deployment
files,
which
also,
which
includes
Ingress
and
egress
rules,
and
these
tend
to
be
in
our
so
at
least
in
in
our
testing.
These
were
way
tighter
because,
as
we
are
doing
static
analysis,
we
don't
need
to
wait
for
it
for
a
crown
job
to
run
or
anything
else.
So
this
you
might
miss
in
a
in
a
one
hour
Baseline
which
is
at
runtime.
B
Again, even that may be subject to change; we might progress further with the shift-left approach and actually move that into build time, onto the developers, to generate the network policies before a deployment. And, if I remember correctly, we might pick up on these features again very, very soon, right?
C
Yeah, a timeline is always a beast. So what happens with build time in the future, I think, depends on you guys, on customer feedback. We really encourage people to go ahead and use the shift-left approach. We've already had some really exciting feedback from people using it, and there's a really exciting demo, which Mike can maybe share when it's going to be published, that Rodrigo and Mike are working on, right? It really shows people how to bake this into their pipeline, so end-to-end automated creation.
C
That's a really interesting one. I think that's part of what Matthias was referring to: when we looked at this technology, especially when Matthias and I were just spending brainstorm time, we came up with all sorts of interesting directions we could go, so we're definitely open to user feedback. But right now, what is clear is that people want this solid, automated approach; they don't want to mess with it.
C
So, in fact, if you think about this problem: who wants to generate network policies? Raise your hands, right? It's something that people just want to get done for them, and that's what this technology does for you; it just generates it automatically. You can then debug it, just like you debug code. It should be part of the process: you debug code, you debug security, you debug network policy as part of the process, and then you ship it.
C
So there shouldn't be too much of a need for a UI, but that's going to be interesting as people pick it up; we'll see if people have different perspectives. Yeah.
A
You know, he's asking: why am I only getting ingress here and not egress? So either we need to make it clear that netpol generate is the way forward, or add the ability to generate egress in the UI. I think both of those are fair; look for an RFE or an issue getting created in the future, and then, hopefully, if people want more of the netpol generate, and maybe the feature in the UI, that's something we could put on the roadmap. But yeah, stay tuned for a demo on netpol generate; Mike, I will @ you when it comes out. All right, on to the last two questions. Dane, thanks for patiently waiting and putting them in at the end, too; you could have put them up at the beginning and taken off, but we really appreciate it. Any info on the new GitHub Actions?
D
Actually, I'm not sure. I saw some PRs where they were just talking about GitHub Actions that were being created. Maybe I got that totally wrong and nothing happened; I'm not sure.
A
I'm not sure either, but I definitely think that, going on the marketplace, it would be pretty useful to have some default GitHub Actions for StackRox and ACS, although that has its own issues. I think with the cloud service, and with the ability to query automatically with a simple API token and string, that might be an option, but yeah, we have to think about that a little bit. Matthias?
B
I
think
the
So.
Currently
what
we
do
is
we
recently
got
new
members
in
our
infrastructure
team,
and
these
folks
have
been
at
work
to
actually
migrate
parts
of
our
internal
build
automation
onto
GitHub
actions.
So
that
might
be
the
reason
why
you
have
seen
a
good
discussions
about
GitHub
actions
here
and
there
recently
in
in
the
in
stack
rocks
in
the
repositories
on
GitHub.
B
Yeah
for
that
one
I've
been
already
coming
through
our
upgrade
documentation
and
I.
Remember
that
we
have
so
we
are
right
now
we
are
in
even
though
we
have
code
freeze.
We
don't
have
documentation
freeze
yet
so
the
documentation
is
still
in
the
process
of
being
written.
As
far
as
I
remember,
there
was
discussions
about
migration,
so
I
remember
that
we
definitely
have
migration
instructions
so
that
data
is
moving
over
it
or
is
migrated
into
postgrad,
but
I'm
at
the
moment,
unable
to
find
them.
B
So
if
I
find
them
or
come
across
them,
I
will
definitely
let
let
you
know
I'm
very
sure
that
we
have
them,
but
I,
don't
know
where
I
mean
Bose
post
looks.
C
Like
yeah,
no
you're
you're
right
because
we're
we're
you
know
we're
racing
against
the
clock
here,
documentation
isn't
isn't
published
yet,
but
of
course
yes,
it
is
we.
The
migration
is
automatic,
in
fact
fun
fact,
and
then
the
I
was,
as
I
was
just
reviewing
the
the
operator
based
instructions
and
in
in
the
do
you
use
operator,
they
know.
Are
you
just
using
helm.
C
So
it'll
just
be
an
indication,
though,
because
with
the
operator
there's
actually
a
field
in
374
you
change
one
field
and
the
operator's
back
and
after
30
seconds
you
have
you
have
an
environment
running
with
postgres.
All
the
postgres
features
were
enabled
in
the
UI.
It's
like
magic,
it's
I,
I
was
I,
was
blown
away
on
how
well
engineering
executed
on
that
one.
You
literally
changed
a
single
word
in
openshift.
Save
boom
you're
done
so
like
Matthias
likes
to
say,
life
is
going
to
be
a
little
bit
more
complex.
C
I'm
sure
for
some
of
the
folks,
but
migration
is
is,
is
what
we've
been
working
on,
so
the
the
it
should
be
seamless.
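The "single field" change being described might look roughly like the patch below. The exact field name in the Central custom resource is an assumption here, so check the 3.74 operator CRD or documentation before relying on it.

```shell
# Hypothetical merge patch for the Central CR opting in to Postgres
# (field name is an assumption, not confirmed in the meeting).
cat > central-db-patch.yaml <<'EOF'
spec:
  central:
    db:
      isEnabled: Enabled
EOF
# Apply with something like:
#   oc -n stackrox patch central stackrox-central-services \
#     --type merge --patch-file central-db-patch.yaml
```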
B
To
add
to
that,
the
operator
is
currently
a
helm-based
operator,
so
our
openshift
operator
is
based
on
the
helm
charts,
which
means
we
already
are
also
building
that
into
the
helm
chart.
That
is
not
an
operator
thing
with
the
auto
upgrade
as
far
as
I'm
aware
of
but
I
will
follow
up
on
this
and
let
you
know
as
soon
as
we
have
the
some
version
of
documentation
on
that,
or
at
least
I
got
confirmation
from
our
database.
Folks.
A
Sounds
good,
let's
see
at
the
end
of
all
the
questions.
If
anybody
on
the
call
has
anything
else
speak
now
or
forever
hold
your
peace
just
kidding,
you
can
always
just
post
them
in
the
slack
channel,
Matthias
any
fun
facts,
anything
fun
coming
up
other
than
the
release
or
we
just
wrapping
it
up
here.
B
So,
as
far
as
I'm
aware
of
nothing
to
report
here,
it's
been
it's
been
quiet
days
after
the
code.
Freeze,
it's
it's
the
it's
the
usual
development
limbo.
Is
it
right
right
after
you,
you
wrap
up
one
code
freeze
and
you
just
wait
to
start
another
another
iteration
and
make
things
better.
A
There
you
go
yeah
and
thanks
for
everyone
for
the
questions,
you
are
definitely
helping
make
things
better.
Any
issues
that
pop
up
as
well
looking
for
feedback,
4.0,
Network
graph
2.0
collections,
feature
core
OS
scanning.
So
please
let
us
know
in
the
slack
Channel
and
thanks
for
one
year
of
Open
Source
boys,
you
were
a
little
late
to
the
party,
but
yeah.
It's
officially
been
one
year,
so
it's
yeah.
A
It's
really
really
cool
I
know
that
there's
some
people
that
have
been
following
since
the
beginning
so
really
appreciate
all
the
help,
and
until
next
month,
May
on
the
second
Tuesday
I'll
post,
the
slack
Channel
but
again
Matthias
meninger,
co-chair
of
Stack
rocks
and
myself
Michael
Foster.
Thanks
for
joining
and
we'll
see
you
next
month,.