From YouTube: March 17, 2022 - Ortelius Architecture Meeting
A
Perfect. Okay, so welcome, everybody, to the March 17th, St. Patrick's Day, architecture meeting. For this one we slated a little more time to go over some of the co-coding, to get through some of the issues that we need to tick off so we can get our release out.
A
So for those of you that are new, the way we have our repositories set up is that all of our issues are under the main Ortelius issues repo. I had a meeting with a new person yesterday, I believe it's Samil; he's in Armenia and was interested in doing some additional DevOps pieces. So I created some new issues. Right now they're all assigned to him, but if we want, we can split these out into different folks taking each one. One of the challenges with our architecture (and this is just in general with microservices) is this:
A
If you have the same change you need to make across all the services, you have to go visit a lot of repos. Which is fine; the changes are easier to make, but you have to make them in more places. We basically have our monolith, which is our Java Tomcat server that runs the main part of Ortelius, and then from there anything new we've been writing in Python, using FastAPI, as a microservice container.
A
The scanner came out of Anchore. Anchore has a commercial side where they're kind of like a Black Duck: they'll go ahead and do some scanning of your code, figure out what the packages are, CVEs. I can't remember if they do CVEs or not; I don't think Syft does, I know Anchore does, and it gives you a list of everything that you need to fix.
A
They
will
also
go
into
like
licensed
compliance
with
their
main
commercial
product,
but
the
part
that
we're
going
to
be
using
is
the
open
source
sift.
So
outcast
is
very
similar
to
what
you
wrote
in
go
but
they've
taken
it
to
the
next
level,
where
it
will
support
a
number
of
operating
systems.
A
This
one
will
go
with
not
only
the
packages
so
initially
when
we
when
bootcamp
was
writing
his.
We
were
looking
just
at
the
python
libraries
running
those,
but
this
one
has
gone
to
the
next
level
where
it's
looking
at
the
the
individual
packages
at
the
os
level.
A
If
there's
any
ruby
gems,
the
python
pieces,
which,
what's
what
we
are
going
to
be
needing
out
of
our
microservices,
mpms
and
yarn,
and
then
also
the
java
dependencies
as
well
go
in
php,
the
one
it
doesn't
address
at
this
time
is
rust
is
one
of
the
ones
that
I'm
sure
is
in
their
pipeline,
but
isn't
out
there
yet
so
the
way
we
run
this
is,
we
would
have
do
the
install
and
then
go
ahead
and
basically
run
a
just
a
command
line
and
tell
it
what
we're
looking
for
and
what
we're
going
to
do
is
output
it
as
a
cyclone
dx
json
file.
A
So what we need to do is figure out the exact command line for Syft and start adding this to our microservices.
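A rough sketch of what that extra Cloud Build step could look like (the install script URL and the `-o cyclonedx-json` flag follow Syft's documented usage, but the step name, builder image, substitution variables, and output filename here are placeholders that would still need verifying):

```yaml
# Hypothetical Cloud Build step: install Syft, then scan the image we just
# built and write a CycloneDX JSON SBOM into the shared /workspace.
- name: gcr.io/cloud-builders/docker
  id: generate-sbom
  entrypoint: bash
  args:
    - -c
    - |
      curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
      syft ${_IMAGE_REPO}:${_IMAGE_TAG} -o cyclonedx-json > /workspace/cyclonedx.json
```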
A
So this is the workspace; for which one, I picked the ms-compitem-crud service. In here we have our cloudbuild.yaml file. The cloudbuild.yaml file is what we use to run the builds of our services, to create our Docker images.
A
So this is like the definition of how we want to represent our service inside of Ortelius. We have the name of the component, we're going to give it a variant, which is the branch name, and then we're going to build up a version number at that level.
A
The
there's
some
other
things
that
we're
going
to
do
we're
going
to
capture
like
the
service
owner
the
commits
the
builds,
the
actions,
those
type
of
things,
the
the
build
url.
All
this
service
catalog
data
is
some
of
the
information
that
we're
going
to
capture
on
the
fly
like
the
license
and
readme.
A
I
don't
know
if
this
one
has
a
swagger
file,
that's
on
disk
the
way
our
our
microservices
work.
We
have
a
built-in
swagger
engine
as
part
of
them,
so
if
you're
running
the
the
the
microservice
you
could
use,
I
think
it's
slash
doc
and
it'll
return.
The
swagger
file
for
you
I'd
have
to
double
check
on
that
ooh
kirsh.
Do
you
remember
how
to
get
to
the
the
swagger
and
the
microservice?
Is
it
in
the
slash
dock
url.
B
Exactly, I can't remember either; it should be /doc. Let me check.
A
Yeah,
I
can't
remember
either.
I
know
we
built
it
in
so
when
we
built
our
services.
Here's
the
example
of
our
service.
We
try
to
keep
our
services
to
I
like
to
under
500
lines.
I
think
this
one
is
a
total
of
yeah
we're
right
around
500
lines.
It
is
we're
522
as
part
of
that,
so.
A
Okay, perfect. So what ends up happening is we fill out information about our structures: what are we going to be passing back and forth. We create a class for the JSON structure, and then we have a list, which is a class that contains a list of the items, and then we expose our endpoint and what's expected from the endpoint relationship. In this case we're expecting the list of the models, and then we get into the actual coding of the microservice itself, and you can see the actual Java... I mean, the SQL queries that are happening.
A
So we have our coding, our SQL statements, and then finally, one of the things we do in this case is just build up the result, which is just an ordered dictionary in our case, for this example. We return that, and then FastAPI takes over for us, which makes it really easy to code these microservices.
A
What
ends
up
happening
is
because
this
is
python.
We
need
to
grab
at
the
container
level,
all
the
python
dependencies
that
are
good
and
all
the
os
dependent
dependencies
to
create
our
s-bom.
A
So that's where we're going to add on to our Cloud Build the next step of doing the Syft execution. So again, to walk through the Cloud Build: this will pretty much translate over to a Jenkins pipeline, or, if you're doing Azure Pipelines, it's going to pretty much translate there, or to a GitHub Action.
A
The
steps
are
going
to
change
are
going
to
I
mean
the
steps
are
going
to
be
identical,
but
the
some
of
the
nuances
of
the
build
tool
are
going
to
you
have
to
kind
of
adjust
for
so
in
this
case,
with
with
google
cloud
build
the
way
you
pass
information
from
one
step
to
the
next
step,
is
you
actually
have
to
pass
it
in
files?
A
So one of the things, in order to do that: we have to fetch all of the branches and all the versions, all the commits that are out there. That's what the fetch --unshallow does: it turns a shallow repo into a full repo.
A
Then we go ahead and count all the commits out there; that's going to be a unique number that we can use for our build number. Next, we're going to derive the repo name and the repo URL. One of the weird things with Google Cloud Build is they don't give you that in a nice environment variable, so we're going to derive that as part of our process.
A
So the next step: because we use Quay, we're going to log into Quay. We'll see at the bottom where these encrypted user ID and password are coming from. The next step is we're actually going to do the build and push. One of the things I missed right here is we're actually sourcing in (and this works) the shell script that we created in the previous step.
A
So this is how we're passing environment variables along from step to step. The source command here is a bash command that allows you to run a shell script and expose all its environment variables to the parent shell. So this is giving us all the variables that we derived and making them visible to the execution.
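The mechanics are plain bash, so here's a minimal sketch (the file name cloudbuild.env and the variable names are made up for illustration; in Cloud Build the file would live under the shared /workspace):

```shell
# Step 1: a derivation step writes its results into a file in the shared workspace.
echo 'export GIT_BRANCH=main' >  cloudbuild.env
echo 'export BUILD_NUM=42'    >> cloudbuild.env

# Step 2: a later step sources that file, which exports the variables
# into the current shell so the rest of the step can use them.
source ./cloudbuild.env
echo "building ${GIT_BRANCH} as build ${BUILD_NUM}"
```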
A
So we do our login, we do our build, and then we have one of the weird things with Docker: we actually don't have a Docker digest. A digest of the manifest doesn't exist until you do a push to a repo.
A
So after we do the push, we can go and inspect and query the information about the tag and get our digest number. This is a quick little one-liner, and here we'll see we're appending it to the end of our shell script. So we're appending the export of the digest into that shell script, and again...
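Combining the two ideas above, the append-and-source trick looks roughly like this (the docker inspect one-liner is shown as a comment because it needs a pushed image; the digest value and the cloudbuild.env file name below are stand-ins for illustration):

```shell
# In the real pipeline the digest comes from the pushed image, e.g.:
#   DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' "$IMAGE" | cut -d@ -f2)
# Here a placeholder value shows the append-and-source mechanics.
DIGEST="sha256:0123456789abcdef"

# Append the export to the env file written by the earlier derivation step...
echo "export DIGEST=${DIGEST}" >> cloudbuild.env

# ...so any later step that sources the file sees it.
source ./cloudbuild.env
echo "digest is ${DIGEST}"
```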
A
So at this level we're doing our updatecomp, and we're passing in that TOML file, and also, again, we're sourcing in the Cloud Build environment. Now, one of the things we're doing here, because we're executing in different contexts, is just making sure that the DeployHub CLI has already been installed. Sometimes these steps will come up in new containers and not reuse the existing one, so we have to make sure that the CLI has been installed.
A
So at this point we're going to take this whole TOML file, and all these variables will be resolved through our derivation. Here's the Git URL that we derive; the Git repo short SHA happens to be one that is exposed from Google Cloud Build. Here's our digest that we derive; the image tag is another one that we derive. This one was actually kind of a two-step process.
A
We
derived
it
from
variant
and
version,
and
these
so
you
can
do
like
this
recursive
resolution
of
the
variables
so
that
that
gives
us
our
image
tag
and
then
everything
is
is
then
pushed
over
to
as
a
new
component
version
with
that
name
that
we
have.
In
this
case,
we
have
the
deploy,
env
is,
is
uncommented,
so
we'll
actually
go
and
do
a
deployment
after
we
do.
The
creation
of
the
new
version
we'll
also
create
a
new
version
of
the
application
if
we
needed.
A
So what we're going to be doing with Syft is adding it in after the Docker build. We're going to need to run a step to create that CycloneDX JSON file. It doesn't really matter exactly where; it'll have to be after the Docker push but before the updatecomp, because what's going to happen is we're going to have one more parameter here.
A
And
in
in
the
google
cloud
build
world,
the
workspace
is
the
directory
where
all
files
get
to
be
persisted
between
the
the
steps.
A
I think I cleaned them up. Well, anyway, basically it's just a simple JSON file that has the packages. Oh, I don't know what that is.
A
So
this
is
the
what
it's
going
to
look
like
and
there's
a
lot
of
a
lot
more
data
that
is
in
here
that
we're
not
grabbing
or
not
utilizing
at
this
point
in
time.
A
So,
let's
see
here
is
a
jar
file,
so
we
have
our
our
spring
boot
jar
file,
the
demo
jar
file.
Let's
see,
I
think
this
one
may
even
have
log
for
j
in
it.
A
Yeah, here's one of the Log4j components. So this is the information that we're gathering, and we're utilizing it mainly, right now, by looking at the name and the version to pick that information up and push it into Ortelius. Also, I think we take the location as well; I'd have to look at that. The licenses are another important part that we grab. One of the things we do is organize by license, to show what's consuming what at the license level.
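For context, a single component entry in a CycloneDX JSON SBOM looks roughly like this; the fields shown (name, version, purl location, licenses) are the ones being discussed, and the values are illustrative:

```json
{
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
      "licenses": [
        { "license": { "id": "Apache-2.0" } }
      ]
    }
  ]
}
```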
A
So
this
is
where
right
now
we're
grabbing
a
minimal
part
of
it.
One
of
the
things
I
want
to
do
for
long
term
is
to
take
these
json
files
and
instead
of
just
picking
and
choosing
what
we
want
out
of
them,
that
we'll
go
ahead
and
store
this
information
into
a
graph
database.
A
It would already be loaded for us. So if we needed to grab, say, the hashes and display those, or use a hash for something down the road, we'd have those available in our database. So that's kind of where we're at and where we're headed with our coding changes.
B
Yeah, I think I'll take one of the microservices to investigate.
A
Yeah, so when we go to... it's this. So the ms dep package and the dep package item crud are the ones handling, on our server side, the uploading of the SBOMs and the reading of the SBOMs from the database.
A
So that's part of the stuff that we need to work on. Also, we're going to be wrapping up the additional changes to the other services. We have our demo world out there, which is our hipster store. Last week or two weeks ago, Arvin and Sasha and myself worked on getting the component TOML and Docker files rolled out and in place, so I think most of them are in place.
A
I
have
to
check
a
couple
of
the
pull
requests
and
then
also
the
the
ones
for
this.
Our
services
are
well
are
pretty
far
along
at
that
level.
So
I
think
we're
gonna
be
good
to
go
ahead
and
grab
these.
A
Okay,
so
I
think
we're
going
to
be
running
this
one.
Can
everybody
see
that
or
do
I
need
to
increase
the
font?
Let
me
go
a
little
bigger
there.
We
go.
A
At the Alpine level: it figured out that we're running on a particular Alpine distribution, which is pretty cool.
A
I have never seen this line before, and this may be of interest to us down the road: pulled dependencies, where it looks like it's pulling in some shared libraries at the OS level.
A
Yeah, the buffer overruns on the decoder are a great exploitation. So there's PyJWT.
A
So
it
doesn't,
look
like
like
our
our
requirements
are
up
to
date
or
they're
must
be
coming
from
a
separate
layer.
Look
at
the.
A
Yeah, so, like Utkarsh was saying: we've created a base image that does a multi-stage build, because one of the things we needed to do was to get Python SQLAlchemy and a couple of other libraries, Python modules, in place, and the only way you can install them is by running a compile. So you had to run the gcc compile.
A
So
what
we
did
was
we
did
a
multi-stage,
build
to
be
able
to
to
throw
away
the
the
build
layer
and
not
have
the
the
gcc
compiler
as
part
of
our
our
our
layer.
Let
me
see
if
this
is
going
to
be
python.
Ms
base.
A
Yeah, so this is where we're actually adding on the additional packages and some of the development libraries that are needed, because when we do this pip install of SQLAlchemy and the JWT and the cryptography packages, they need things from the OS level in order to compile.
A
But
one
of
the
nice
things
is,
we
can
throw
it
away
and
only
copy
over
the
pieces
that
we
needed
so
from
there
we're
going
to
take
the
what
was
built
as
part
of
the
install
and
push
them
over
into
user
local
at
that
level.
So
when
we
get
to
this
level,
we're
inheriting
that
and
then
we're
good
to
go.
It
just
simplifies
this
docker
file
would
have
without
having
to
worry
about
what
we
have
going
on
at
the
lower
layout.
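As a sketch, the multi-stage pattern being described looks something like this (the base image tag, package list, and install prefix here are illustrative, not the actual Ortelius base Dockerfile):

```dockerfile
# Build stage: has gcc and the dev headers needed to compile the wheels.
FROM python:3.9-alpine AS builder
RUN apk add --no-cache gcc musl-dev libffi-dev openssl-dev
RUN pip install --prefix=/install sqlalchemy pyjwt cryptography

# Final stage: copy only the compiled packages; gcc never ships in the image.
FROM python:3.9-alpine
COPY --from=builder /install /usr/local
```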
A
So when we have a CVE in cryptography, we actually go and fix the base as one of our steps, and then we'll go ahead and bump the version, so it'll be 1.2 that rolls out at that level. So there's our SQLAlchemy that it found; we should have, like, a TOML.
A
Interesting
that
it
did
not
see
what
it's
doing
is
it's
also
picking
up
so
pi
pi
is
the
location.
So
that's
the
the
python
registry
for
where
our
dependencies
are
coming
from.
A
Click
click
is
a
weird
one.
Click
gets
brought
in,
even
though
we're
not
using
it
in
our
code.
A
I
don't
think
we
import
click,
but
even
though
we're
not
importing
click
in
our
our
main
program
where
click
is
coming
from
is
for
the
pip
installer,
so
pip
itself
uses
click
and
we'll
get
a
vulnerability
showing
up
and
click
through
the
use
of
pip.
So
one
of
the
things
that
one
of
the
tricks
that
we
can
do
is
to
remove
pip
from
our
final
image.
A
So
when
we're
at
this
level,
you
can
see
that
we're
uninstalling,
pip
or
trying
to
to
get
rid
of
the
dependency
on
click,
and
that
may
be
some
of
the
other
things
that
we
need
may
need
to
do
for
cves
is
to
clean
up
stuff
that
we
that's
just
not
needed.
A
There's
the
our
fast
api
engine-
oh,
I
just
read
it
wrong,
there's
our
jwt
that
we
depend
upon
also
we
do
some
yamo
pieces,
so
it's
actually
finding
dependencies
that
aren't
in
our
our
requirements,
file,
which
is
great
that
we're
bringing
all
that
information
in.
A
Okay,
so
what
we're
going
to
need
to
do
is
we
will
put
in
a
new
step
to
go
ahead
and
run
first,
we
have
to
install,
because
I
don't
believe
the
the
the
google
base
image
is
gonna
have
sift
installed,
so
we'll
have
to
do
an
install
and
then
run
the
collection
of
the
of
the
data.
A
So what we're going to do is, if you want... has anybody taken a look at the install process of Syft yet?
A
Yeah, the one thing that we...
A
I
can't
remember
for
google
cloud
build
if
we
have
access
to
user
local
bin
I'll
have
to
play
around
that.
With
that
to
see
what
happens,
we
may
need
to
add
in
the
sudo
in
order
to
give
us
permissions
to
write
into
that
directory.
A
So who wants to tackle this first? This is a co-coding event; it's not for me to do everything.
A
What is your... this is a fork, right, that you have?
A
No, just... your branch doesn't have... I mean, your fork doesn't have it.
A
Now, yeah, switch over to... oh, you've got to get one more into it. So try... yeah.
A
Try git checkout deploy.
A
Yeah, it's weird that it doesn't. So do a git branch -a.
A
Okay, now do a git remote -v.
A
Oh, okay. So on the switch, Utkarsh, just drop the remote origin, so we just do git switch deploy.
A
It still doesn't... oh, that's your old repo. Close that workspace.
A
Yeah, because we're missing... so go back to... yeah, do an ls there.
A
Yeah, so I think we're missing the TOML file; the component.toml should be in there. Let me double check.
A
Yeah, so that's actually a bad one to choose, because I haven't gotten to updating it yet. Try the comp item one; see if you can clone that one.
A
Yeah, that one looks better. So, comp item crud.
A
Yeah, so do this: just go back to the parent repo.
A
And
just
clone
that
one
and
we
could
actually
work
on
this
stuff
and
then
we
can
worry
about
a
branch
later.
A
Cool
yep
now
change
the
branch
over
to
deploy.
B
Like, I saw this parameter, --scope all-layers; I think this will give us more coverage.
A
And then the other parameter we had is -o cyclonedx-json. You could name the output bom.json or cyclone.json, either one.
A
Because the workspace is open to all steps, we should be good. I think you need, on the previous line after /usr/local/bin, a semicolon.
A
So go down to your... so scroll down to the last step, or near the last step.
A
Yeah, right at line 93.
A
Yeah, we're going to do space, dash dash dep.
B
All
without
any
spaces
right.
A
Yeah, and then a single tick, cyclonedx... that looks like a CycloneDX.
A
Yeah, so that should upload it correctly. So, what we've done on the CLI side: dep package means we're going to hit the RESTful microservice for the dep package crud endpoint, and one of the first things we do is tell it what type of file we're giving it; that's where the cyclonedx@ prefix is telling us the type of file.
A
So
there's
a
couple
s-bomb
files,
one
of
them,
is
the
cyclone
dx
s-bomb
and
then
there's
one
that's
coming
out
of
the
linux
foundation
called
spdx
right
now
we,
when
we
initially
created
our
endpoint,
we
were
just
focused
on
a
cyclone,
so
we
actually,
if
anybody's
interested
in
any
python
coding,
let
me
know
and
we'll
get
you
a
sign
to
enable
us
to
upload
spdx
format
as
well.
I
can't
remember
it's
python
coding
or
java
coding,
one
or
the
other.
I
I'll
check
here
real,
quick.
A
Let
me
go
over
to
the
package:
let's
try,
try
committing
that
to
kirsh
and
then
to
see
if
it'll
it'll,
let
you
upload.
A
Okay, cool. I will take a look. GitHub's acting up today, so I'll have to go into the PR, hopefully when GitHub gets fixed.
A
We're having problems just doing a simple push.
A
Yeah, but that's basically it. What we've done is going to be the steps that we need to do for the other five microservice repositories.
A
I just tried assigning you to that issue, and it doesn't look like it worked.
A
So, Hamid: I just generically assigned Muhammad a bunch of the SBOM issues yesterday, and we can break those out now, since everybody has an idea of what to do. It was issue 480 that I was trying to assign to you, Utkarsh, but it doesn't look like it's going to work.
A
Now
he
he's
trying
to
commit
to
the
microservice
repo,
but
github
is
totally
down.
A
This
is
the
the
depth
package
and
we
can
see
we
have
the
cyclone
dx
as
one
of
them
and
then
there's
gonna
be
a
safety,
so
safety
is
another
scanning
tool
that
is
used
for
cves
for
python
packages.
A
What ends up happening is it gets routed over to this endpoint, to take the CycloneDX JSON format and pick through the pieces that we want. So we'll go through and grab the payload and the list of components from that specific JSON format, and we'll grab the package name and package version, try to find the licenses, and then put the information together as part of the call to save the component data.
A
So
it's
kind
of
like
a
little
intermediate
wrapper
that
we
have
that
the
endpoint
is
doing
before
it
goes
and
calls
the
common
routine
at
that
level.
So
we
will
need
to
create
another
endpoint
for
sds
pdx
format.
So,
instead
of
that
package,
cyclone
dx
we
have
debt
package,
spdx
will
be
a
new
endpoint.
So
if
anybody's
interested
in
tackling
that,
let
me
know
I
would
love
to
get
out
there
and
create
an
issue
and
assign
people
to
it,
but
with
github
being
down
we're
kind
of
struggling
today
with
on
that
front.
A
And one of the reasons why we're doing this is that the supply chain piece of the whole DevOps puzzle is coming into play. Folks want to be able to answer: what are all the microservices, and all the versions of the application, that are running Log4j, for example? With Ortelius we'll be able to give them that answer in a few clicks, instead of them going around looking through all the different web pages, trying to find who's running Log4j in the different locations. Let me go ahead and pull that up.
A
So
we
can
see
that
it's
going
to
be
the
the
data's
be
similar,
but
they're
just
going
to
have
different
tags.
External
arrests
versus
you
know
other
one
dependencies.
I
think
it
was
called
they're
going
to
have
a
different
locators.
Now
this
one's
actually
interesting.
So
this
reference
locator
is
actually
pretty
interesting,
because
you
can
use
this
string
to
query
the
public
cve
database
to
find
out
what
cves
are
exposed
for
that
particular.
A
In
this
case,
it's
called
python
run.
Depths
is
the
package
name
and
then
all
these
little
stars
at
the
end
are
indicating,
which
versions
that
you
want
to
query
for
cves.
It's
basically
giving
a
range.
A
So
the
information
that
we're
gonna
need
out
of
it
is
gonna,
be
like
the
the
license
declared
the
package
name,
those
type
of
things
so
we'll
have
to
do
a
little.
A
So, like I said, if somebody's interested: as soon as GitHub comes back, I will go ahead and create an issue for us to support SPDX.
A
So let's just do a quick review. Utkarsh, why don't you go ahead and share your screen again?
A
Or
we'll
make
who
cares
walk
through
what
he
did
today.
B
Okay, so basically the objective was to include the steps for the security scanning tool, which is Syft. So what we did: we added an additional step for that in cloudbuild.yaml. cloudbuild.yaml is basically a YAML spec that is used by GCP, Google Cloud Platform, to build the application. And it's a simple step: we have a name for it, we have an id for it, and we're executing a couple of bash commands here.
A
So that dh updatecomp: dh is the CLI program, and it takes an action called updatecomp. It's a weird carryover with the name; I initially wrote it for DeployHub, but it actually lives in the Ortelius repo.
A
It's
just
a
somewhere
down
the
line.
We
got
to
figure
out
how
to
do
a
like
an
alias
or
or
get
a
new
new
cli
name
out
there
for
this.
But
if
you're
interested
in
it's
going
to
be
under
ortelius
comp
update
is
going
to
be
the
repo
name
that
you'll
you'll
see
where
the
cli
lives.
A
So
what
we're
going
to
be
doing
is,
like
I
said
we
have,
I
think,
five
micro
services
that
we'll
need
to
make
the
the
updates
to
and
there's
a
couple
that
are
there
behind,
like
the
initial
one
that
we
grabbed,
which
was
a
text
file
that
one's
slightly
behind.
So
if
you
do
go
into
find
a
repository
there,
kersh
in
the
search
bar
under
or
go
down
a
little
bit,
yeah
right,
if
you
do,
if
you
do
a
filter
on
ortulius,
ms.
A
Yeah
right
there
in
that
no
go
to
the
entry
field
on
the
left,
yeah
just
type
ortilius.
A
Yeah
so
there's
some
of
them
we
haven't
been
working
with
lately,
but
like
the
comp
item,
the
depth
package
text
file
app
history
is
a
new
one.
That
karsh
is
working
on
and
validate
user.
Those
are
the
main
ones.
These
other
ones
like
the
pi
util,
the
report
ones
the
report
ones.
We
haven't,
we
initially
started
with,
but
we
haven't
worked
on
those
in
a
while,
so
the
main
ones
we
have.
A
We
took
a
slight
shift
in
how
we're
doing
some
of
our
services,
so
the
main
ones
are
going
to
be
at
the
top
there
that
we
need
to
roll
out
and
if
you
go
to
the
up
to
the
ortillius
artelius
repo.
A
Yeah
so
we'll
go
ahead
and
oh,
it
looks
like.
A
I will go ahead and, if you're interested, go ahead and grab one of the issues. Ahmed is going to be working on one of them, but just make sure that we leave him as one of the coders, and everybody else can grab one of the other issues. So just go in, grab the issue, and change the assignee to yourself at that level. Okay, see if you can do your push again, now that things are slowly coming alive.
A
No
not
yet
yeah
so
give
it
some
time,
and
you
could
use
utash's
example
the
comp
item
as
an
example
to
to
work
from
now.
Typically,
these
will
be
worked
off
of
a
fork
with
a
with
a
pr,
so
you'll
be
able
to
go
in
and
fork
the
microservice
repo
and
then
go
ahead
and
make
your
changes.
A
Do
your
pr
and
we'll
be
able
to
see
where
they're
running
now,
one
of
the
things
that
we
can't
quite
see
are
the
results
but
kirsch.
If
you
go
to
your
discord
to
our
channel
I'll,
show
you
how
how
you
can
get
to
the
results
if
you
go
down
to
the
build
bots
you'll
see
where
I've
been
doing
playing
with
doing,
updates
and
stuff
like
that.
So
if
you
go
to
any
one
of
those
where
it
says
build
blog,
you
can
it'll.
A
So
after
you
do
your
changes
and
you
want
to
see
what's
happening,
you
can
go
in
and
look
at
the
the
results.
A
So this is the trigger that I've set up; yeah, you won't be able to change it. Basically, what's happening is, on any push to a branch, we're going to go ahead and... oh, I'm sorry, we're looking at main and deploy right now.
Those are the filters, yeah, exactly. Those are the two filters that we're going to trigger off of, and if you scroll down, we'll see that we're doing some additional filters for ignoring stuff; this has to do with our GitOps.
A
We're
going
to
be
using
a
google
cloud,
build
it's
going
to
come
from
the
repository
and
it's
in
the
cloud
build
directory.
I
think
that's
most
of
it.
We
don't
get
into
any
any
additional
variables.
We
could
define
any
variables
that
we
wanted
at
the
trigger
level,
but
it's
just
too
hard
to
maintain
it's
actually
easier
to
maintain
variables
down
in
the
the
cloud
build
yaml
file.
A
So
that's:
what's
that's
what's
happening
when
we,
when
we
do
the
check-in?
That's
what
actually
is
kicking
off
now.
We
also
do
have
some
github
actions
that
brad,
mccoy
and
ben
are
working
with
from
the
get
ops
process.
So
if
you
go
to
our
our
actions,
there.
A
So
the
the
page
build
one
is
just
an
annoying
one
that
google
puts
into
play
by
default.
This
one
here
is
actually
taking
our
helm,
charts
that
we
have
and
publishing
them
to
a
certain
format
under
the
github
pages
branch
in
the
repo
and
that's
all
used
for
our
artifact
hub.
So
we
have
a
lot
of
moving
pieces,
and
today
we
just
focused
on
one
little
piece
of
the
puzzle
and
gathering
our
s-bombs
now
down
the
road.
A
We will be adding on additional information around CVEs, like I said, and also signing, or gathering signed information about those packages. What's happening in the supply chain world is they're realizing that they need to sign the packages that they create out of a build.
A
So
they
know
who
did
what
and
when
and
that's
going
to
be
coming
down.
The
road
there'll
be
additional
information
that
we'll
want
to
gather
from
a
server's
catalog
perspective
who
signed
the
package
that
we're
using.
So
we
can
go
if
it
breaks
or
has
a
vulnerability.
We
know
who
did
it
at
that
level.
So
it's
just
bringing
another
level
of
trust
to
the
to
our
whole
world.
Out
here
and
software,
so
we're
just
coming
up
at
the
the
end
of
the
half
hour
hour.
A
Does anybody have any questions? Does this kind of make sense? We ran into a little bit of a roadblock today with GitHub, but do the changes that we've been working on today make sense? Give me a thumbs up or thumbs down.
B
Yeah, I think that was helpful to understand the, you know, end-to-end flow, and you literally went to a very basic level of the coding.
A
Yeah, and like I said at the beginning, because we have the poly repos, you know, divide and conquer for rolling out these changes is definitely a plus. It may seem like a trivial thing to do, but as we expand into more and more services and need to roll something out, this type of help is definitely needed, and we're in a big place right now where we're trying to do some cleanup of our repos and get everything consistent.
A
So
that's
where
we're
at
so
thank
you,
everybody
for
coming
today
and
as
soon
as
github
comes
back
alive,
I
will
go
ahead
and
create
that
issue
for
you,
joseph
for
the
coding
change,
and
then
I
will
ping
everybody
on
discord
for
them
to
go
ahead
and
grab
an
issue
to
work
on
in
arvin.
I
will
look
at
your
prs
that
you
did
as
well.
A
Okay
sounds
good
all
right.
Well,
thank
you,
everybody
and
we'll
be
in
touch.
If
anybody
figures
out
that
github
comes
back
alive,
please
send
out
a
message
on
discord.