From YouTube: Apr 27, 2023 - Ortelius Architecture Meeting
Description
A review of MegaLinter, the organization of Helm Charts and other technical topics are included in this meeting as the team gets ready for microservice contributions.
B
Welcome, everybody, to the April 27th Ortelius architecture meeting. Let me go ahead and share my screen and give you a rundown of what we've got going on. All right.
B
Okay, so I made some progress on the common code. One of the things that I've been doing across all the repositories, and I'll show you how this works, is adding MegaLinter. I don't know if MegaLinter is a direct fork of Super-Linter, but it is a little more flexible and easier to configure than Super-Linter, and basically, what it does is:
B
It goes through and checks your whole repository to make sure it passes certain conformance tests and things like that. As part of that, I've gone in and reworked some of the code and added things to make sure it passes. For example, one of the things you need is a package description, and in Go, anything that starts with a capital letter is exported.
B
Anything lowercase is not exported, so all of the exported structures, variables, and functions have to be commented at that level. That is all in place now; some of them are pretty basic comments, just to get past the linter.
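The export rule just described looks like this in practice. The `Domain` and `NewDomain` names below are illustrative, not the actual Ortelius code; the point is that every capitalized (exported) identifier carries a doc comment starting with its name, which is what the linter checks.

```go
package main

import "fmt"

// Domain represents a single domain record. Exported (capitalized)
// identifiers like this must carry a doc comment or the linter flags them.
type Domain struct {
	// Name is the fully qualified domain name.
	Name string
}

// NewDomain returns a Domain with the given name. Lowercase identifiers,
// like the parameter here, are unexported and need no comment.
func NewDomain(name string) Domain {
	return Domain{Name: name}
}

func main() {
	d := NewDomain("app.example.io")
	fmt.Println(d.Name)
}
```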
At this point I was just looking at OpenAPI. There is basically a plugin for Go to generate the Swagger code. Since this runs as a microservice, when it starts up it'll basically publish a Swagger page for us to look at. So I think there are going to be additional tags that we need to put in the comments, for things like the structures and the fields in the structures, for the OpenAPI/Swagger piece.
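The "additional tags in the comments" are the declarative annotations that the swag tool (the usual Go plugin for generating Swagger/OpenAPI docs) scans. A minimal sketch, with a hypothetical `/msapi/domain` endpoint; the annotation keywords follow swag's documented comment format, but treat the route and payload as assumptions:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// Domain is the response payload; swag reads the struct to build the schema.
type Domain struct {
	Name string `json:"name"`
}

// DomainFromQuery builds the response payload from the query value.
func DomainFromQuery(name string) Domain { return Domain{Name: name} }

// GetDomain godoc
// @Summary      Retrieve a domain
// @Description  Returns a single domain by name
// @Param        name  query  string  true  "Domain name"
// @Success      200  {object}  Domain
// @Router       /msapi/domain [get]
func GetDomain(w http.ResponseWriter, r *http.Request) {
	// The swag tool scans the annotation block above and generates the
	// OpenAPI document that the service publishes at startup.
	json.NewEncoder(w).Encode(DomainFromQuery(r.URL.Query().Get("name")))
}

func main() {
	http.HandleFunc("/msapi/domain", GetDomain)
	// http.ListenAndServe(":8080", nil) would serve it; omitted so the sketch exits.
}
```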
B
So that's out there. I also did a base database layer. It still needs a lot of work, but basically it will connect up to a local ArangoDB that's running, or wherever we decide to host it. This one is just running locally. What this does is go through and create the database. Like I said, I've got to change up some of the default stuff I have in here that I've been using for testing. It creates the logins, then creates some collections, and gives us a basic starting point for managing that database abstraction layer.
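The real layer uses the official ArangoDB Go driver, but the create-database step can be sketched against ArangoDB's documented HTTP API (POST `/_api/database`) using only the standard library. Authentication is omitted for brevity, the database name is illustrative, and `NewFakeArango` stands in for a local server (ArangoDB normally listens on 8529):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// CreateDatabase asks ArangoDB to create a database via its REST API.
// 201 means created; 409 means it already exists, which is fine for
// startup code that runs on every boot.
func CreateDatabase(baseURL, name string) error {
	body, _ := json.Marshal(map[string]string{"name": name})
	resp, err := http.Post(baseURL+"/_api/database", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated && resp.StatusCode != http.StatusConflict {
		return fmt.Errorf("create database failed: %s", resp.Status)
	}
	return nil
}

// NewFakeArango stands in for a locally running ArangoDB so the
// sketch runs without a real server.
func NewFakeArango() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusCreated)
	}))
}

func main() {
	srv := NewFakeArango()
	defer srv.Close()
	fmt.Println(CreateDatabase(srv.URL, "ortelius") == nil)
}
```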
B
So if we look at another one: these are all the security comments, and all of this has been pushed out to that repository. The way Go works is that everything is based on a tag, so I have a v0.1.0 tag out there, and you'll be able to download this version of the library. Whenever we make a change to this common code, we have to bump the version number, so the next one will be v0.1.1, and that's what's needed in order to use it.
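The tag-and-bump flow described above can be sketched as follows. The patch-bump computation runs as shown; the `git`/`go get` commands are left as comments, and the module path is hypothetical:

```shell
# Current tag of the common-code module (from the meeting: v0.1.0).
CURRENT="v0.1.0"

# Bump the patch number for the next release: v0.1.0 -> v0.1.1.
NEXT=$(echo "$CURRENT" | awk -F. '{printf "%s.%s.%d", $1, $2, $3 + 1}')
echo "$NEXT"

# Publishing then looks roughly like (module path is hypothetical):
#   git tag "$NEXT" && git push origin "$NEXT"
#   go get github.com/ortelius/commonutils@"$NEXT"
```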
B
This is another repository I haven't pushed yet, but basically this will be the template that we use for the microservices. In Go there's a package called Fiber; that's the web framework that is used for the RESTful APIs.
B
So at the bottom here you can see where we set up our routes for the incoming transactions and how they get routed over to a function. For example, a NewDomain function is one of the ones that I put in there. Right now I just have this hard-coded, and I need to change it so it accepts the payload correctly. From there we go ahead and take that object, we do our NFT stuff, and then we add it to the collection at that level.
B
So it's pretty basic. The other functions haven't been implemented yet; they're just placeholders, but this is where we'll keep on expanding out the microservices. All of this has to be commented for the Swagger documentation and things like that, and this is where we connect up to the database at that level.
B
Now, one of the things we had on our last architecture meeting was trying to decide whether the NFT storage would be part of the database abstraction layer or whether it would be a microservice. I think we should make it a microservice that all of the other microservices talk to, and the reason I was thinking that is:
B
It'll allow us to swap out, based on the Helm charts that somebody implements, whether they want to use NFT storage or whether they want to use the OCI registry. That will let us swap things out without having to recompile or have different versions at that level. So that's my thought on that. Like I said, we'll need another new microservice for handling the NFT storage, for example.
B
So from a logic standpoint, after we go ahead and create the document inside of ArangoDB, the next step after it comes back successful would be going in and sending a message.
B
So that would be one of the next things that we would do. When we're talking about persisting into NFT storage, because it's so slow, this will actually have to happen in the background; then this microservice can return back to the client, and we can persist everything in the NFT storage at that level.
B
If it doesn't exist in the ArangoDB database, then we have to send a transaction over to the database abstraction layer microservice, go get the data, and then do the extra work on this side. So that's how things are coming together; it's going pretty well on that front. Like I said, I have to keep building this out the rest of this week and into next week, with things like what we need to do for Swagger.
B
Just so, when I'm able to pass off this repository to Arvin to start replicating, this microservice will be the starting point that gets replicated, I think, maybe 18 or 20 times for the different microservices that we'll have. Everything will have its own repository. As part of that... well, this one isn't in Git yet, so let me go back to the other one.
B
So when we publish this, we'll be good to go for making sure everything's up to date and we're passing all the vulnerability scanning and things like that.
B
I'd have to see. I'd like to get the template out earlier rather than later, but I'll see if we can do the Helm chart at the same time; I'll just see how complicated it is.
B
Okay, if not, we'll just have to come along with another pass and add the Helm chart directory to all the repositories. So it's not the end of the world if we have to do it in two steps.
B
So we'll do it the same way that Ortelius is currently laid out. Each microservice will have its own chart, and then we'll have a parent chart that includes the child charts, and we have that automation in place to manage the parent chart automatically. So it'll be easy to pull together.
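The parent/child chart layout described above is expressed through the parent chart's dependency list. A sketch of what that `Chart.yaml` could look like; chart names and paths are illustrative, not the actual Ortelius layout:

```yaml
# Parent chart's Chart.yaml (sketch); each microservice keeps its own chart.
apiVersion: v2
name: parent-chart
version: 0.1.0
dependencies:
  - name: ms-domain              # one child chart per microservice (names illustrative)
    version: 0.1.0
    repository: file://../ms-domain/chart
  - name: ms-nftstorage
    version: 0.1.0
    repository: file://../ms-nftstorage/chart
```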
D
And right now you are making a call to your sample ArangoDB, but later that call will be replaced by the abstraction layer, right?
B
I'm not sure; I was going back and forth on that. So if I go... which repository?
B
So right now this is going off to the initialization function that's in the common database code, basically the reusable common code.
B
So one thing that I was thinking of: you could technically, instead of calling the ArangoDB create-document directly, have this actually call the abstraction layer function that, in turn, calls the create-document.
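That indirection is just an interface in Go: handlers program against a data-store interface, and one implementation wraps the ArangoDB driver's create-document call. A sketch with hypothetical names and an in-memory stand-in implementation:

```go
package main

import "fmt"

// DataStore is the database abstraction layer: microservices call these
// methods instead of the ArangoDB driver directly.
type DataStore interface {
	CreateDocument(collection string, doc map[string]any) error
	GetDocument(collection, key string) (map[string]any, bool)
}

// MemStore is a stand-in implementation; the real one would wrap the
// ArangoDB driver's create-document call.
type MemStore struct {
	data map[string]map[string]any
}

// NewMemStore returns an empty in-memory store.
func NewMemStore() *MemStore {
	return &MemStore{data: map[string]map[string]any{}}
}

// CreateDocument stores doc under collection/key.
func (m *MemStore) CreateDocument(collection string, doc map[string]any) error {
	key, _ := doc["key"].(string)
	m.data[collection+"/"+key] = doc
	return nil
}

// GetDocument fetches a previously created document.
func (m *MemStore) GetDocument(collection, key string) (map[string]any, bool) {
	doc, ok := m.data[collection+"/"+key]
	return doc, ok
}

func main() {
	var store DataStore = NewMemStore()
	store.CreateDocument("domains", map[string]any{"key": "d1", "name": "example"})
	doc, _ := store.GetDocument("domains", "d1")
	fmt.Println(doc["name"])
}
```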
B
I
was
I'm
up
in
the
air
on
whether
we
need
to
totally
hide
the
database
from
the
microservices
or
let
them
just
go
ahead
and
interact
with
it
directly
and
the
reason
why
I'm
kind
of
hesitant
is
let's
say
you-
we
have
a
more
complicated
microservice
that
needs
to
do
a
couple
things
where
you
get
into
actually
running
queries
and
doing
like
joins
and
stuff
like
that
that
pushing
that
over
to
the
abstraction
layer,
maybe
just
too
much
work
than
it's
worth,
you
know
I
mean
to
handle
the
edge
cases.
B
I'm kind of up in the air on that, so we'll have to see how much we want to hide from the microservices. Some of the basic stuff, like connecting to the database and figuring all that out, makes sense to be in a common place so everybody's doing it the same way; but actually running a query...
D
So it depends, right? What are we abstracting? My thought was that we are abstracting the database layer altogether. Right now the technology that we are using is NFT storage plus ArangoDB, so I was thinking our objective is to abstract these two. But it looks like you are making this ArangoDB layer sit a bit earlier than the abstraction, so ArangoDB would be calling the abstraction layer, and the NFT storage would be abstracted out.
B
Right. It's because ArangoDB is kind of the fast database and the NFT storage is the slow one. Interacting directly with the fast database seems, at this point, to make sense, and then doing the long-term persistence back into the NFT storage would be something we need to do in the background. That's where I kind of broke it apart and thought that we should do it as a separate transaction to a separate microservice.
B
So that's the way I've been thinking of it: ArangoDB will always be there, whereas the NFT storage or the OCI registry can get swapped out.
D
Do you think the OCI registry would also process the data in a similar way?
B
The OCI registry is going to be slow as well; it's probably going to be the same speed as the NFT storage. The reason being is, depending on your OCI registry implementation, they could be distributed registries backed by IPFS, like NFT storage. There are just so many different implementations of the OCI registry.
B
I haven't played with the Docker APIs directly to see how fast they work, but I'm just making the assumption that there's a potential for it to be slower than we want, and that slowness could hang up the front end, basically.
B
What will have to happen, that's on the get side. So in here we'll basically go get the document to see if it exists in ArangoDB, and if it doesn't, then go off to NFT storage to fetch it. In that case we may need to implement something on the front end to send a message back saying that you're dealing with a long, slow-running transaction.
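That read path is a classic read-through lookup: try the fast store (ArangoDB), fall back to the slow store (NFT storage) on a miss, and warm the fast store for next time. A sketch with the two stores passed in as plain functions; all names are hypothetical:

```go
package main

import "fmt"

// GetDocument tries the fast store first and only on a miss goes to the
// slow store, then populates the fast store so the next read is cheap.
func GetDocument(
	key string,
	fastGet func(string) (string, bool),
	slowGet func(string) (string, bool),
	fastPut func(string, string),
) (string, bool) {
	if doc, ok := fastGet(key); ok {
		return doc, true // fast path: found in ArangoDB
	}
	doc, ok := slowGet(key) // slow path: fetch from NFT storage
	if !ok {
		return "", false
	}
	fastPut(key, doc) // warm the fast store for next time
	return doc, true
}

func main() {
	fast := map[string]string{}
	slow := map[string]string{"d1": "doc-one"}
	doc, ok := GetDocument("d1",
		func(k string) (string, bool) { v, o := fast[k]; return v, o },
		func(k string) (string, bool) { v, o := slow[k]; return v, o },
		func(k, v string) { fast[k] = v },
	)
	fmt.Println(doc, ok, fast["d1"])
}
```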
B
Not yet; I'll probably publish it tomorrow as a starting point. It still needs a lot of work. So if you look at...
B
So there's this reusable code... and I went to the wrong place.
B
So this is a sample program. We have to go through and add all the tags, and add a little bit of code in here to enable the Swagger piece of it. It's not hard to do. Then this is another endpoint that we have to add the tags for, and it gets even more complicated when you get into the individual structures.
B
There's a way to annotate all of it. This is annotating the main part: all the tags to be added, like the typical license URL, those things that OpenAPI and Swagger are looking for. And then, if you have individual parameters, there are all the parameter attributes and things like that. So I will get this out there, but it's going to be one of those work-in-progress pieces.
B
We need to get some of that in place so that when we let Arvin go and replicate this to all the other repositories, most of the work is done and we don't have to tell people to go add this as a second pass.
B
That way people kind of know what's happening. I have also been working on our security stance and how things are being checked on our existing repositories.
B
I've added it on some of them; I haven't rolled it out to all of them. You'll see that we have our workflows here, including the MegaLinter workflow. Let's see what it runs on: I think on a push to main, and it will run on any pull request. So basically, when you create a pull request, it'll go out there and run the checks. Look at this one I did a couple of days ago.
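The trigger setup just described (push to main, plus every pull request) lives in the workflow file. A sketch of what that could look like; the action name comes from MegaLinter's published GitHub Action, but treat the pinned versions and filename as assumptions:

```yaml
# .github/workflows/mega-linter.yml (sketch)
name: MegaLinter
on:
  push:
    branches: [main]   # run on pushes to main
  pull_request: {}     # and on every pull request
jobs:
  megalinter:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0        # full history so secret scanning can check it
      - uses: oxsecurity/megalinter@v6
```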
B
So the MegaLinter is one of the pull request checks that has to pass, and basically the image has to build cleanly before you can go ahead and enable the merge. What ends up happening with that is, if we look at our code scanning, we can see that we have some high issues that have to be addressed at that level, and this is coming from the OpenSSF Scorecard step as well.
B
Okay, I don't have anything else on that. You can install it locally; there's basically a shell script. I can't remember if I did a brew install, but basically you install it locally, and what it does is actually run through a Docker image. I happen to have the image locally; if you don't, it's going to bring down something like a seven-gig image to work through.
B
Yeah, there's a YAML file that is used to configure what you want it to do, like where the config file is for Python's Black, and what to exclude. One of the things that it'll trip up on is the Helm templates.
B
The YAML linter doesn't know how to deal with Helm templates as YAML files, so we have to do some exclusions, things like that.
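The exclusions and tuning described here go in MegaLinter's config file. A sketch of a `.mega-linter.yml`; the variable names follow MegaLinter's documented configuration keys, but the specific paths and linter choices are assumptions:

```yaml
# .mega-linter.yml (sketch)
APPLY_FIXES: all                              # let MegaLinter auto-fix formatting
FILTER_REGEX_EXCLUDE: "(chart/templates/)"    # Helm templates aren't plain YAML
DISABLE_LINTERS:
  - SPELL_CSPELL                              # spell checker trips on technical terms
PYTHON_BLACK_CONFIG_FILE: pyproject.toml      # where Black finds its config
```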
The spell checker just falls over and goes crazy on some of the technical words that we use and stuff like that.
B
So you have this config file. When it gets going, it says what it's going to do, and one of the things it can do is actually fix up the formatting: trailing spaces, newline characters, spaces in between words, and things like that. In Markdown and JSON, this linter can actually go ahead and fix that for you. What ends up happening is it actually fixes the code, and then you have to recommit that fixed-up code.
B
So in this one, because it's a Go repo, we're running the two Go linters. If we go over to the Python one, it'll run the Python linters.
B
Go, when you're inside of Visual Studio Code, has a Go plugin that formats automatically every time you hit save, so that's usually handled pretty well. This will catch files that aren't formatted correctly, and then you just have to go format them manually.
B
I think there's a gofmt program you can use as well. So you can see where it's going through running all the different linters; right now it's going through the repository history and checking for any secrets that are in the repo history.
B
So this is one where I ran it manually, just by running the MegaLinter command. You can also set it up as a pre-commit hook in your Git repo, so on the pre-commit it will go ahead and run MegaLinter to make sure everything's linted and looking correct before your commit gets pushed. So we can actually see here...
B
We actually failed one of our linters, and it looks like it was just a timeout, so I may have to go adjust the timeout at this level. It may be because I have Zoom running and it's just out of resources.
B
But to fix this, there's a timeout option. In the MegaLinter config there'll be a keyword, something like a golangci-lint pre-command, or commands, or args. It's all documented, and you just pass the timeout.
B
So that's one thing that's been coming along with all the repositories; I'm trying to roll that out. All the new repositories should have MegaLinter in place. One of the things with the scorecard from the OpenSSF is that it counts whether you have a linter in place or not, so you get so many points for having linting in place at that level. So let's actually look at this repository; this is the repository I play with a lot.
B
So here, as part of the readme, I've added the additional tags: the current release number (Sasha, you were able to figure that one out), there's a patch, the license tag, our current build is passing, MegaLinter passed, CodeQL passed, our score is a 7.6, and on Discord we've got 20 folks online.
E
Yeah,
that's
so
cool,
that's
so
cool.
B
Yeah, the stuff that I've been learning is really, really nice: getting these things in and making sure that everything is clean. It's amazing what you learn when you run the linters and they say, "Oh, your coding style, you've done X and you should be doing Y." It really helps you understand, oh yeah, that is a better way to do stuff.
D
Is
this
adding
time
to
the
build
of
the
pipeline.
B
A little bit, but what ends up happening is that MegaLinter is separate; all of these are separate workflows. There is a hard check on MegaLinter completing successfully before you can do a PR merge.
B
So,
even
though
these
all
the
build
push
the
code
ql,
the
Mega
linter,
the
scorecards,
will
all
run
in
parallel.
You
have
to
wait
for
the
required
ones
to
finish
before
you
go
and
do
your
merge.
B
And then, if you look at the build-push one here, let me find one off of main.
B
And then, like this one, I figured out how to basically set the environment variables once and pass them along to all the different steps. Some of the things that we do, like an image tag: we determine what the image tag is once. It's one of those weird things; GitHub doesn't have the seven-character short commit.
B
There's
no
variable
for
that,
so
you
actually
have
to
derive
it,
and
we
use
that
a
lot.
So
that's
like
what
this
set
EnV
does
is
derive
that
variable,
and
then
we
do
the
build
and
then
once
the
build's
done,
then
we
do
the
trivia,
the
helm
and
the
s-bomb
all
in
parallel,
and
you
can
see
that
the
s-bomb
is
where
we
actually
hook
in
artillius
deploy
Hub
CLI
to
record
this
often
to
the
deployup
team
database,
so
we're
actually
taking
the
s-bomb
and
uploading
it
at
that
point.
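Deriving the missing seven-character short commit can be sketched as below. The full SHA is hardcoded here for illustration (in Actions it arrives as `GITHUB_SHA`), and the `GITHUB_ENV` line shows how a real step would share it with later steps:

```shell
# GitHub exposes the full commit SHA but no 7-character short form,
# so the set-env step derives it once and shares it with later steps.
GITHUB_SHA="3f8a9c2d4e5b6a7f8091a2b3c4d5e6f708192a3b"   # supplied by Actions; hardcoded here
SHORT_SHA=$(printf '%s' "$GITHUB_SHA" | cut -c1-7)
echo "$SHORT_SHA"
# In a real workflow step: echo "IMAGE_TAG=${SHORT_SHA}" >> "$GITHUB_ENV"
```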
B
So that's what's kind of happening on that front. It adds a little bit of time, but I've seen builds run longer. You can see CodeQL seems to be one of the longer ones at just under five minutes, but I didn't enable the checks to say CodeQL has to pass before you can move on and do your PR merge.
B
And part of that is because of an issue with the crypto and SSL at the operating-system level.
B
So
the
fix
for
this
is
be
is
bumping
the
base
image
of
Wolfie
to
the
next
version
and
the
docker
file
and
then
it'll
it'll
clean
it
up.
B
But
there
there's
a
bunch
of
stuff
I'm
gonna
take
care
of
all
at
once.
So
if
you
look
over
to.
B
Basically, just... oh, and then I can go through and approve them. If you pass in the approve parameter, it does use the GitHub command line, so check whether you have gh installed.
B
Yeah, and I'll drop this script in there. So that's kind of what we've got going on. I will be going in, hopefully next week, and adding bounties to all the new issues; I have a better feel now for how long things are going to take, so expect that to get in place. We will need help, Arvin and Sasha, on the Helm charts and getting all the repos rolled out with the template once I get it done.
B
Yeah, and then we'll probably need another Terraform for the current one versus the new one, yeah.
B
We'll figure out some co-coding times, get that out there, and get folks paired up.
B
I haven't talked to Brad in a long time, so I will have to. I don't know if he's going to be in; I don't think he's going to be in Vancouver.
B
Yeah, I saw him open an issue to rebuild them, and part of that, I think, was that there's a new version of Kubernetes.
B
Well, put it this way: I think you can now get 1.26 or 1.27, and the version that we're probably on is going to be obsoleted pretty soon, so yeah, I think a rebuild would be pretty good to do.
B
Yeah, that'll be one of the things that we need to figure out: how to do testing as part of the workflow.
B
But we should create an issue around that, on adding some test cases or test runs to the workflows.
B
So as part of that, we need to figure out how we want to notify people: whether we want a Slack channel, whether we want Discord, or whether we want email.
B
Pretty much, if you have a CVE that comes through and a new release has a fix, you want to be able to notify them within a couple of minutes, yes.
B
And we have the data inside of Ortelius, so we should be able to do something with that. We don't have to solve it now, but just think about it; you know, poke around and see what other people have done.
E
You have to ask yourself the question of whether you have an auto-patching mode for the very high ones, right? There are different levels of security issues.
B
So think about it, and we'll see what we come up with, and we can continue the discussion on Discord.