From YouTube: June 24, 2021 - Ortelius Architecture Meeting
A: Okay, so today is the June 24th Ortelius architecture meeting. I put the link in the main thing.
A: As part of that, we may need to reassign that to somebody to do the research, or reach out to Jesse and see where he's at on it. I updated the issue yesterday asking him where he was, but that's one of the things we need to nail down sooner rather than later on the runner and the Swagger piece.
A: There are going to be Python Flask programs that store the README file from the Git repo for a component up into our database, so we can version it, and it just helps with navigation.
A
When
we
go
to
get
data
instead
of
going
out
to
a
git
repo
that
we
may
not
have
access
to
at
visualization
time
and
also
the
second
one
was
to
post
the
swagger,
json
or
yaml
up
to
the
database,
a
second
microsoft,
so
those
two
are
going
to
be
pretty
paired,
pretty
close.
So
if
you
knock
one
out,
the
other
one
should
be
pretty
quick.
The
table
definitions
are
out
there
in
the
issues.
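A minimal sketch of what one of these Flask microservices might look like. The endpoint path and the in-memory dictionary are illustrative assumptions; the real services will use the table definitions from the issues rather than this stand-in store:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Stand-in for the real database; the actual table
# definitions live in the project issues.
README_STORE = {}

@app.route("/msapi/readme/<compid>", methods=["POST"])
def store_readme(compid):
    # Persist the README content for a component version so it can
    # be rendered later without access to the Git repo.
    data = request.get_json()
    README_STORE[compid] = data.get("content", "")
    return jsonify({"compid": compid, "stored": True})

@app.route("/msapi/readme/<compid>", methods=["GET"])
def get_readme(compid):
    # Serve the stored README back at visualization time.
    return jsonify({"compid": compid, "content": README_STORE.get(compid, "")})
```

The Swagger microservice would follow the same shape, which is why knocking out one makes the other quick.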
A: I did not create repos for these yet for the code base, but once somebody picks them up we'll be able to get that taken care of. So if you look under.
A: So these are the two new ones as part of that, and then on the front end side we obviously have the visualization of the README. And then this is the one that Jesse was working on, around the JavaScript that we need to use to integrate into the UI for rendering that. A couple of other things, if you're interested.
A: There is going to be some, I think it will probably be a Golang program, that we basically need as a wrapper around a couple of things: CycloneDX and, like, the Python safety commands, to run the scanning tools and figure out the CVEs and licenses in a container at that level.

A: So if somebody's interested in that, Golang would probably be the way to go, to make installation into the pipeline process the easiest.
C: The wrapper program is basically the script for us, right? So the Golang program would be the script for us, maybe like run these commands and then remember the files.
A: Yeah, it's more like, you can actually do it with a shell script; the commands aren't that complicated. What you need to do is basically start up, you run the image, and once you get it running you can copy over the CycloneDX tools into the container, make sure that safety is installed on the container for, like, a Python program, and then you run those tools and copy the output back out.
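The injection flow described above can be sketched as a sequence of docker commands. This is a rough outline under assumptions: the container name, tool path, and output file name are hypothetical, and the target image is assumed to have a `sleep` binary to keep it alive. The helper only builds the command list, so the steps stay explicit and testable:

```python
def build_scan_commands(image, container="scan-target",
                        tool_path="./cyclonedx-py", out_file="bom.json"):
    """Build the docker command sequence for injecting scanners
    into a running image and copying the results back out."""
    return [
        # Start the target image so we can work inside it
        # (assumes the image has a sleep binary).
        ["docker", "run", "-d", "--name", container, image, "sleep", "infinity"],
        # Copy the CycloneDX tool into the container.
        ["docker", "cp", tool_path, f"{container}:/tmp/cyclonedx"],
        # Make sure safety is available for Python scanning.
        ["docker", "exec", container, "pip", "install", "safety"],
        # Run the scanner inside the container.
        ["docker", "exec", container, "/tmp/cyclonedx", "-o", f"/tmp/{out_file}"],
        # Copy the resulting SBOM back out, then clean up.
        ["docker", "cp", f"{container}:/tmp/{out_file}", out_file],
        ["docker", "rm", "-f", container],
    ]
```

Each list entry would be handed to `subprocess.run` in the real wrapper, whether that ends up as shell, Python, or Golang.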
A: It's a little kludgy, but it's a way that we can inject programs into an image that we don't own. You know, somebody's running it in their pipeline and they want to go ahead and run CycloneDX to grab the CVE information and the package information, but they don't want to load CycloneDX in as part of their build process.
A: So this is like a non-invasive way: we go in, inject, gather the information for that image, and then from there we have the other microservice that's going to load the resulting JSON file that we get out from CycloneDX and post it up for that component version. So we've got a couple of moving pieces here, but that's kind of what's happening on that front.
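Loading the resulting CycloneDX JSON might look roughly like this. The BOM below is a minimal subset of the CycloneDX format (real BOMs carry far more fields), and how the packages get posted against a component version is left out:

```python
import json

def extract_packages(bom_json):
    """Pull package name/version/license entries out of a CycloneDX BOM
    so they can be posted up against a component version."""
    bom = json.loads(bom_json)
    packages = []
    for comp in bom.get("components", []):
        # CycloneDX nests each license under a "license" object.
        licenses = [lic.get("license", {}).get("id", "")
                    for lic in comp.get("licenses", [])]
        packages.append({
            "name": comp.get("name"),
            "version": comp.get("version"),
            "licenses": licenses,
        })
    return packages
```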
A
So
if
there's
other
ways
to
do
like
inject
into
the
image
and
gather
the
information
that
we
need,
that's
I'm
totally
open
to
that
and
that
this
is
the
one
way
that
seems
to
be
working
pretty
easy
for
now
and
as
part
of
this
we're
not
the
goal
is
not
to
have
customers
like
rewrite
their
pipeline
or
their
docker
bills,
or
anything
like
that.
You
know
if
they're,
if
they
are
putting
cyclone
scanning
into
their
process
or
you
they're,
using
trivi
as
part
of
their
process.
A
We
want
to
gather
that
information
and
then
load
it
into
the
artelius
around
the
component
version
that
the
scan
was
done
against
some
of
the
reasons
why
we
can't
just
go
with
one
one
tool
like
is
you
get
different
results
for
different
language
sets
that
are
inside
of
the
container
image
it
trivia
works
great
on.
I
think
golang
was
the
one.
I
was
testing
it
on.
It
does
a
good
scanning
at
that
level,
but
python
modules
it
just
blows
off
and
doesn't
record
any
of
them.
A
So
there's
a
lot
of
and
some
of
them
look
at
operating
system
packages.
You
know
what
what's
your
open
ssl
package
that
you
have
installed.
A
What's
your
glib
c
package
that
you've
installed
in
the
image
and
other
ones
we'll
look
at
the
actual
application
code,
while
other
ones
will
look
at,
you
know,
license
information
on
the
spdx
front,
so
it's
gonna
be
kind
of
a
mix
of
what
we
need
to
gather
and
that's
why
I
like
this
little
wrapper
that
we'll
need
to
build
up
is
something
that
we
need
to
look
at.
A: And I figure Golang is an easy install process. It's nothing fancy to install a Golang program into your pipeline: you basically do a wget, copy it down into the pipeline, and run it against the image, so we're just trying to simplify things. Those are the main things that were coming up on this front.
A: I did get an update from Sergio, who has added the PyBreaker code into, I think, this code base; I've got to look and see where he put it. But basically the PyBreaker code allows us, at the Python Flask level, to recover if we lose a database connection. It'll do a graceful recovery, you know, do an exponential timeout on the retry process. It'll also help us when the microservice starts up prior to the database being available.
A: It'll do some retries along that process as well. So I have to look at his code and review it to see what he did, but the goal there was to help make the Python Flask microservices a little more resilient and able to recover on their own.
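The recovery pattern being described, exponential timeout on the retry process, looks roughly like this. This is a pure-Python sketch of the idea only; the actual code uses the pybreaker library, whose circuit-breaker API is different:

```python
import time

def retry_with_backoff(func, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Call func, retrying on ConnectionError with exponentially
    growing waits: base_delay, 2*base_delay, 4*base_delay, ..."""
    for attempt in range(max_attempts):
        try:
            return func()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the failure
            # Double the wait each time so a database that is
            # still starting up is not hammered with reconnects.
            sleep(base_delay * (2 ** attempt))
```

The same loop covers both cases mentioned: a lost connection mid-flight and a microservice starting before the database is up.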
A: He was going to, I don't know if he got to this, but he was going to do the PyBreaker entry point for a health check, and I have to look to see where he was going to implement that as the Kubernetes health check endpoint as part of the process. So we can actually get a status around how the microservices are running, whether they're up, and things like that.
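A Kubernetes health check endpoint in Flask might look roughly like this; the route name and the readiness check are assumptions for illustration, not the actual implementation:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def db_is_reachable():
    # Stand-in for a real database connectivity check.
    return True

@app.route("/health")
def health():
    # Kubernetes liveness/readiness probes hit this endpoint;
    # returning 503 when the database is unreachable takes the
    # pod out of rotation until it recovers.
    if db_is_reachable():
        return jsonify({"status": "ok"}), 200
    return jsonify({"status": "unavailable"}), 503
```

In the Deployment spec this would be wired up as an HTTP GET `readinessProbe`/`livenessProbe` against `/health`.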
A
I
think
that
was
the
main
thing
on
the
australian
group,
the
artillious
australian
working
group.
They
are
working
on
the
get
ops
piece
and
brad
mccoy
has
pushed
put
a
document
out
there
I'll
find
the
link
to
it
and
add
it
to
our
notes.
I
think
it
may
be
in
there
already.
A
No
I'll
add
it.
I've
been
adding
the
notes
from
those
discussions
into
our
general
meeting
notes.
So
you'll
see
the
australian
time
zone
discussion
happening
as
well.
A
The
main
thing
that
they're
doing
right
now
is
they're
just
trying
to
get
the
infrastructure
in
place.
At
the
azure
level.
We
got
another
25
000
from
microsoft
to
run
our
cluster
for
another
year
and
to
do
that
we
had
this
changes
billing
subscription,
which
is
a
major
pain
because
they
have
all
these
options.
That
says,
move
the
you
know,
change
the
subscription
id
for
your
your
resources
and
you
go
click
on
it
and
you
wait
15
minutes
and
it
says.
Oh
sorry,
you
can't
do
that
for
that
resource.
A
So
it's
a
typical
microsoft
thing
it's
like.
If
I
don't
even
give
me
the
option,
if
I
can't
do
it,
but
that
and
sasha
is
working
with
some
git
ops
at
his
work
and
he
said
he's
he's
up
to
10
10
repos
that
he's
trying
to
work
with
on
the
get
outside.
He
said
he's
just
getting
getting
a
lot
of
sprawl.
A
So
I
asked
him
to
put
that
use
case
into
the
get
ops
architecture,
the
get
ops
document
about
the
solution
around
artelia.
So
we
can
capture
that
make
sure
we're.
We
got
some
real
life
situations
that
we
need
to.
You
know
some
good
use
cases
that
we
get
to
work
around.
A
So
that's
kind
of
where
we're
at
and
of
course
we
have
cd
con.
We
have
crumbs
coming
up
here
in
a
couple
hours.
Well,
that's
maybe
what
yours
is
later
this
afternoon.
A: So the general game plan around Swagger is: when you create a component, we'll need to know what either the Swagger YAML file or the JSON file for Swagger is that is part of the Git repo. So if they have persisted the Swagger to the Git repo, we'll load it in as a parameter at that time, and that's where we'll take the actual file and push it up into the database.
A: If we have everything rendered inside of Ortelius it'll be a little fancier, but functionality-wise we could go the other route if we want to.
A: Okay, so, Swagger had, was it the Swagger UI website? You can post a YAML file or a JSON file to it and it'll render it, yeah.
A: Yeah, in that case, and this came up last time at the architecture meeting: do we want to start doing a bunch of iframes inside of Ortelius, where we would do an iframe of swagger.io for rendering out the Swagger piece?
A: So one of the things that came out from last time was being able to collapse the rows on the component side. We're going to rearrange them some, and then also we'll make them collapsible. So we're still going to stick with a single page and not multiple tabs on that page.
A
But
instead
of
doing
multiple
tabs
we're
just
going
to
collapse
everything
so
people
can
see
you
know,
get
things
out
of
the
way
by
collapsing
the
row
that
was
kind
of
like
the
ui
decision
that
the
folks
made
last
week
or
two
weeks
ago.
A
One
of
the
things
that
we'll
do
is
like
what
I
did
with
like
the
readme.
We
have
a
a
python
program
that
you
insert
into
your
pipeline
that
will
it's
called
dh.
It's
a
bad
name,
even
though
it's
part
of
the
the
artelius
repo,
it's
called
dh.
A
I
I
gotta
do
a
rename
on
that,
but
basically
it's
a
command
line
program
that
will,
if
you
pass
in,
like
the
git
repo
as
one
of
the
attributes
to
record
for
service
catalog
data
it'll
go
and
see
if
there's
a
readme
file
in
that
repo
in
a
well-known
location,
same
thing
like
with
licenses
we'll
go
pull
the
license.
That's
there,
usually
that's
in
a
well-known
location.
A: So if there's a way we can automate it, I'd love to be able to gather it; the more automation we can put into the process, the better.
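The well-known-location lookup could be sketched like this. The candidate file names are assumptions about what counts as "well known", not what dh actually probes:

```python
# Conventional top-level locations where repos keep these files.
README_CANDIDATES = ["README.md", "README.rst", "README.txt", "README"]
LICENSE_CANDIDATES = ["LICENSE", "LICENSE.md", "LICENSE.txt", "COPYING"]

def find_well_known(repo_files, candidates):
    """Return the first candidate present in the repo's top-level
    file listing, or None if none are found."""
    present = set(repo_files)
    for name in candidates:
        if name in present:
            return name
    return None
```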
A: Yeah, and that's where, you know, we may just look at something like Terraform to start with. Inside of the container images, the infrastructure code that's inside of a container image, like the OpenSSL package, is pretty standardized for the most part, based on the package manager, so we don't have to deal with, like, a cloud provider at that level.
A
So
that's
going
to
be
we'll
be
able
to
suck
that
in
pretty
easily
the
infrastructure
piece
with,
like
the
different
cloud
providers
that
one
gets
gets
tricky
on.
How
we
want
to
reference
that-
and
I
don't
know
if
we
should,
just
just
you
know,
interact
with
terraform
to
start
with,
or
you
know,
go
down
the
different
cloud
provider
routes.
A: Exactly, yeah. You could take the same microservice, and in development they may be using, you know, Google, and then for testing they may end up at AWS, and then in production they're going to run on-prem in OpenShift: the same service. Because we're sitting high enough up, we can span over all those different clusters and cloud providers.
A: So if you change your ConfigMap, your key-value pairs for a cluster, that would create a new version of the environment. So that may be where we can attach that infrastructure cloud information: at the environment level. And if someone makes a change, then we go ahead and record that as a new version of the environment.
A: We'll have to think about that one; that's the hard part. When you're small like us, integrations end up being, you know, trying to integrate with everybody becomes a major project, sometimes even if you're doing a plug-in type of model.
A: Yeah, well, that's where, like you said, things like OpenTelemetry come in. If we go and interact with OpenTelemetry, that's going to give us the entry point into all these other tools.
A: You know, so we just start listening to events and doing fewer hard-coded integrations to every single tool out there.
C: And I had a bit of thinking, so I was pointing out, like, a whatsapp.
C: We can also include OpenTelemetry, and basically it can provide real-time stats. But what actually, like you and Tracy were saying, should Ortelius be? Because currently you have a proactive kind of system, but do you want to go into real time, or do you want.
A: Yeah, so we really want to take it from the proactive view and be able to determine the impact of a change prior to rolling it out. So, you know, like Tracy says, the blast radius: I go and change this, I have a breaking microservice change, who's it going to affect? That type of blast radius implementation.
A: So that's where the OpenTelemetry comes in, and like Prometheus and some of the other tools out there, through Grafana, to be able to bring in some basic health of a service. Because one of the things we have is the application, the logical application view, and if we know that a service is unhealthy, we can roll that up to the application layer at that level. So right now we're being proactive, but I can see the benefit of adding reactive information on top of our proactive relationships.
A: So what we would be doing is, in the service catalog data, one of the things I kind of specced out and designed was, for the logs for a service: if the service is deployed to, let's say, dev, test, and prod, we're going to have a log URL to get to the logs for dev.
A
We're
gonna
have
another
url
for
for
tasks,
another
one
for
prod
for
that
service,
so
on
an
environment
and
probably
within
an
environment,
we'll
have
the
the
endpoint
or
clusters
because
you
may
have
in
production.
You
have
15
clusters
that
this
service
is
running
in
that
we'll
have
to
have
the
log
pointers
and
it's
just
going
to
be
a
url
pointer
to
where
the
real
logs
are
so
we're
not
going
to
try
to
repeat
dynatrace,
but
if
we
can
get
the
information
of
where
dynatrace
is
monitoring
this
service.
C
Good,
and
so
I
was
looking
at
we'll,
say
telemetry,
so
we
can
basically,
basically
they
have
provided
a
collector
and
the
format
of
data
the.
What
is
the
standard
is
otlp,
the
the
format
that
data
is
coming
about
coming
in,
okay,
so
a
good
way
basically
would
be
to
basically
we
are
going
to
be
what
is
the
back
end
or
vendor
who
is
going
to
analyze
the
data?
C: Basically, there are agents which help us, you can say, process; the collector basically processes the data. If you want to do something, say we have some sensitive information you want to remove from the data, that's one thing; another is if you want to add more attributes, you want to basically process and transform the data.
C: The collector takes care of that, and it also allows the possibility of transporting the data to multiple back ends, or exporters. So we can use OTLP as the format; that's the standard being followed. There are also other standards like W3C and B3, but OTLP is, like, the open delivery format they are using.
A: Perfect. So on the data that we need to collect, and I'm sure it's there somewhere: we need to know which container the data is for. So if it's a transaction going to a specific container, or if it's an error, whatever we're looking at, we need to associate it to a container, and then we'll look up that container ID. Well, it'll be two pieces: it'll be the container and the cluster, and basically the environment. So we have to know where this thing's running and what is running, you know; so we're running container A in cluster five in the dev environment. With that information we can map that telemetry data back to a component version in Ortelius.
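A sketch of that mapping, assuming the telemetry arrives with OpenTelemetry-style resource attributes and that a lookup table keyed by (container image, cluster, environment) exists. The attribute names follow the OpenTelemetry semantic conventions, but the index structure is illustrative, not the actual Ortelius schema:

```python
# Hypothetical index: (container image, cluster, environment)
# -> Ortelius component version.
COMPONENT_INDEX = {
    ("myorg/payments:1.4.2", "cluster-5", "dev"): "payments;v1.4.2",
}

def map_telemetry_to_component(resource_attrs):
    """Resolve a telemetry resource-attribute dict to a component
    version, or None when the three keys don't match anything."""
    key = (
        resource_attrs.get("container.image.name"),
        resource_attrs.get("k8s.cluster.name"),
        resource_attrs.get("deployment.environment"),
    )
    return COMPONENT_INDEX.get(key)
```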
A: Yeah, and with those three things, that'll get us exactly to the right version of the component and which environment it belongs to. Because, like I said, you have the same components spread across multiple clusters and environments, so we have to know which one is reporting in, which one we're getting the data stream from.
A
But
I
think
the
data's
there,
it's
just
a
matter
of
trying
to
grab
it,
even
if
it's
like,
so
we
store
like
the
the
docker
tag
and
the
repo
and
also
the
digest
of
the
image.
A
A
It
only
stores
the
the
tag.
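Splitting an image reference into the repo, tag, and digest pieces mentioned above can be sketched like this. It is a simplified parser for illustration, not a full OCI reference grammar:

```python
def parse_image_ref(ref):
    """Split 'repo[:tag][@digest]' into its parts.
    Simplified: ignores registry ports, which also contain ':'."""
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    repo, _, tag = ref.partition(":")
    # Docker treats a missing tag as "latest".
    return {"repo": repo, "tag": tag or "latest", "digest": digest}
```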
C: There was one more thing: this collector can collect the data from the Kubernetes tagger, so that will provide the information about the pod or something. In fact, I haven't been reading in depth, but that was something that came up when I was reading. Okay, cool, so in the data itself you can get which pod or, what you say, instance the data is coming from.
A: And really basic metrics, you know, a thumbs up or thumbs down that this thing's running or not running, you know, this service. We don't have to get into metric trends or anything like that; just the basic health of a service.
C: Right, right. We are not the analysis back-end platform; we are just trying to be the source where everybody can look up and see what services we are running.
A: Yeah, and then, as part of that, let's say we have a health metric that says this service is healthy. Right side by side will be, like we were talking about, the URL to go to Dynatrace or Datadog to go get the exact details about what's happening with the service now. The other thing on the OpenTelemetry side that we didn't talk about is the transaction routing and mapping out the relationships at that level: that microservice A sent a transaction to another microservice.
A
That
would
be
something
of
interest
as
well
to
overlay
the
component
to
component
relationships
from
a
runtime
perspective.
A
Then
that
one
I
gotta
look
at
we'll
have
to
do
some
work
on
the
visualization
around
that
and
see
the
best
way
to.
A
Look
at
that
that
result,
because
I
think
your
idea
that
you
had
about
a
visualization
around
a
domain
instead
of
a
component
or
that
may
be
the
way
to
kind
of
do.
The
drill
down
is
to
look
at
the
security
domain
and
we
see
the
security
domain
talking
to
the
front
end
domain
and
we
see
the
transactions
going
back
and
forth
now.
If
we
want
to
drill
down
into
one
of
them,
we
can
get
into
more
specifics
about
you
know
which
part
of
the
security.
A
Is
it
the
the
login
process,
or
is
it
the
logout
process?
You
know
part
of
the
transactions
and
then
drill
down
into
that
a
little
bit
lower
and
then
also
the
idea
of
the
component
sets
of
tightly
coupling
components
together.
I
think,
will
fall
onto
that
world
as
well,
but
component
sets
will
need
to
span
across
a
domain
driven
design
because
you
could
tightly.
A
You
may
want
to
tightly
couple
like
the
front
end
to
a
specific
backend
service
that
you
want
to
tightly
couple
to
make
sure
that
they
move
through
the
pipeline
together,
so
components
as
I
envision
will
be
cross-domain,
but
on
the
visualization
level
that
we
may
be
able
to
use
our
domain
driven
design
hierarchy
to
help
navigate.
That.
C: Right, so I guess, long story short: we need to do some research on OpenTelemetry in depth, and once we have some foundation we can then basically connect with the OpenTelemetry group. They are under the CNCF, so basically they should be easier to interact with and get in touch with. Okay, perfect.
A
Yeah,
let's
do
a
little
more
research
on
that
and
then
what
we'll
do
is
we'll
open
up
another
working
group
specifically
around
bringing
that
information
in
we'll
probably
do
it
around
the
component
sets
and
probably
the
open
telemetry
at
the
same
time,
because
I
don't
want
to
miss
out
on
mapping,
you
know
designing
like
components.
That's
one
way
and
then
the
data
we're
getting
from
open
celebratory
just
doesn't
fit.
A
So
right
now
the
the
two
main
projects
that
we
got
going
on
are
the
the
service
catalog
and
the
get
ops
integration
and
then
I
think,
later
early
fall,
we'll
look
at
the
or
late
summer.
Look
at
the
open,
telemetry
component
sets.
A
But
see
what
you
can
dig
up
from
you
know
it
if
the
what
what
we
can
gather
from
open
telemetry
in
that
level
and
also
like
a
like
we're
talking,
the
events
protocol
is
coming
about
as
well.
I
don't
think
it
will
be
finished
before
the
end
of
the
year,
but
early
next
year.
I
think
we'll
have
some
things
that
we
can
work
around
on
the
event
side
as
well.
A
It's
it's
interesting
now
that
it
just
it
seemed
like
in
the
last
six
to
eight
months,
a
lot
more
people
are
embracing,
kubernetes
and
microservices,
and
they're
they're,
really
getting
out
of
the
monolith
and
they're
really
trying
to
start
tackling
rewriting
their
applications.
A
Exactly
all
right,
so
if
anybody
one
last
closing
note
here,
we
ran
over
a
little
bit
today.
If
anybody's
interested
in
coding,
we
have
some
python
and
goaling
coding
that
we
need
to
do
if
you're,
just
starting
out
and
you're
new
to
new
to
programming.
That's
fine!
If,
as
long
as
you
have
some
some
basic
stuff,
we
can
help
you
out
and
get
you
working
with
one
of
the
microservices
and
that's
one
of
the
nice
things
of
the
microservice
architecture.
Is
it's
a
small
isolated?
A
You
know
your
use
case
of
what
the
problem
you're
trying
to
solve
is
is
very,
very
contained.
So
it's
not
like
our
monolith,
where
the
the
java
class
that
goes
and
interacts
with
our
database
is
40
000
lines
long.
You
know
these
mic.
These
python
micro
services
are
looking
right
around
two
to
three
hundred
lines
of
code.
A
So
if
you're
interested
reach
out
to
me
on
on
discord
or
go
grab
an
issue
on
the
artelias
project
under
the
issues
just
assign
yourself
to
it
and
you
can
post
in
the
dev
channel,
if
you
need
any
help
and
somebody
will
reach
out
and
give
you
a
hand
if
it's
not
not
me,
there's
a
bunch
of
other
people
out
there
that
are
keeping
an
eye
on
that.
That
channel
as
well
any
last-minute
questions
or
comments.