From YouTube: May 25, 2023 - Ortelius Architecture Meeting
Description
Work on microservice Dependabot updates, #Emporous, and Killercoda are covered in this architecture meeting.
B
So just a couple of things I want to follow up on. First off: you're working with Arvin on those reusable actions.
C
Yeah, so for setup-environment I was trying that on my local project, and that's coming along pretty well. So, if you're fine with that, we'll use that reusable job, right?
C
Yeah, so I'll make the changes today; okay, I'll show you the link. Okay, so the concept remains the same, so everything will work as it is, because we were already referring to those output variables from the job, right. So everything stays the same; those sections will just be present in the other repository and be called from there.
D
Cool, let me just talk real quick.
B
Yeah, because we're going to have so many workflows that we need to manage, it would be nice to be able to just have a core set of actions that we can change in one place. And I believe Dependabot will come along overnight and bump all the versions if we make a fix to them.
B
So then it's just a matter of doing PRs against the repositories to update, you know, when we make a change to the reusable ones, which will make things easier.
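As a rough sketch of the pattern being described (the repository, workflow, and input names below are illustrative assumptions, not the actual Ortelius repos), each microservice repo would carry a small caller workflow that references the shared reusable workflow by tag; that tag is the version Dependabot can bump overnight:

```yaml
# .github/workflows/build.yml in each microservice repo (names are illustrative)
name: build
on: [push]
jobs:
  build:
    # One shared definition, maintained in one place. Dependabot's
    # github-actions ecosystem can open PRs bumping the @v1.0.0 ref
    # whenever the reusable workflow is updated.
    uses: example-org/reusable-workflows/.github/workflows/build.yml@v1.0.0
    with:
      image-name: my-microservice
```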
B
This is a... gosh, just share the link when you have it, when you're able to, over Discord.
B
So I think once we get these reusable actions sorted out, Arvin will have to go through and kind of do a copy of the YAML files from the new one into all the repositories and do update PRs for all that. So once we get one kind of sorted out, then we'll be able to replicate it to all the other ones. So we'll give you a heads up; it will probably be next week on that front.
B
And then also, Arvin, I have to update the bounty on the issues. What we'll do is use the script that you use to approve the PRs as kind of a placeholder; that will get the three reviews, so we can then follow along with the bounty payout process.
B
Pretty much you'll see the notifications come through; like, I get them through email, that Dependabot created a PR overnight, and then I'll usually either...
B
Because we have all the microservices kind of stubbed out and the Helm charts in place, we are able to create the parent Helm chart. And Sasha, I think you're going to pick that one up and start doing the Terraform around it; is that correct?
B
A Terraform to run everything in kind, correct?
F
Yeah, yeah, the repo is done; thanks for creating that. I've added all the base code; now I just have to change the Helm chart to point at the base code. Right now it's just pointing to the old stuff, so I'll just update that quickly. I'll do it today. All right.
B
Right now I'm not going to push it out to Artifact Hub; I think it'll confuse folks. So we'll just leave it off on its own repository, and once we get further along we'll publish it to Artifact Hub.
B
All right, and then on that same note, Kat: what do we need to do on the Emporous side to kind of start running everything? Well, we'll give you a little background. Sasha has been able, with a few other folks, to put together the Helm charts and Terraform and all that to run Ortelius in a kind cluster with k3s, and then also to use that as our little test environment.
B
But then those are pushed over to Azure, where we're able to run; we have an Azure cluster running on the Emporous side. What do we need to do to kind of set up that little "I want to kick the tires" environment?
G
Good question, actually. That's something I have some free time for, and I can actually work on it today and tomorrow. What I'll do is get the repositories moved over today. I can make some issues for accomplishing that, and then we can track it that way. So, good, perfect.
B
So one of the things that we do in Ortelius is that all the issues are centralized in the ortelius/ortelius repo. Instead of having issues scattered about, it just makes it a little bit easier for folks to find stuff to work on.
C
Yeah, what I was asking is: is there a way to embed this thing that we have built right in our website?
B
We should probably reach out to him to see where he's at. He's in Australia; I don't... he should be back in Australia now, unless he's on the road. But he did a lot of work on the Killercoda pieces, so we may be able to leverage what he has already done and documented. If not, then we can figure out the next steps on that. Do you want to reach out to him?
B
I think he may have done some training videos already.
C
Yeah, at least I saw what steps are required; they have it documented on the Artifact Hub side, yeah.
B
And one of the things I found with Killercoda is you have to expose... and Sasha, you ran into this with one of the other Kubernetes environments, where you have to expose the NodePort and skip using the Ingress. You're on mute, Sasha. Yeah.
B
So that's one thing I found with the firewall and Killercoda: you have to go with the NodePort port-forwarding route versus an Ingress. For some reason, I tried to get the Ingress to work, even on the k3s up on Killercoda, and something with their port mapping and IP forwarding just doesn't expose what you need to get the routing to work correctly. But doing the node...
B
The NodePorts are the way around it, and I believe that should be documented in the Helm chart, if you want to run it under Killercoda.
B
And I think once we get that sorted out, being able to do either a blog or links off of the website on how to do that walkthrough piece would be good, and we should definitely do this for both Ortelius and Emporous.
B
Those are the main things that I had. The only other thing is the microservice for the NFT storage. Ukash, you're going to handle that, right? Yes.
C
Yes, yes, okay. So actually that is pretty much ready, but do you have the ArangoDB dump, like whatever data you have?
B
That you could... I can get you set up with running the commands for it. So the microservice that I created, kind of as our starting point, is just the domain microservice, which will allow you to create a domain, retrieve a domain, and list the domains. So it's a pretty simple microservice; you can just run some curl commands to post the payload over, so I can get you...
C
Okay, okay. And the other thing was, you were also talking about adding, like, a flag or something that will tell us whether this particular value has been persisted to NFT storage or not, right?
B
Yeah, we have to figure that out; I had to look at the architecture diagram. The domain is a bad example, because the name of the domain and the value are all in one; it's not like a component or a user. So we may want to implement the user microservice, because there you have the username and then the user's profile: their email address, their phone number, stuff like that.
B
So the search key is basically the username, and then all of the user's profile is persisted in long-term storage. And then you want to be able to find it; that's where that flag, that cache flag you're talking about, comes into play.
C
So what I was thinking is, if there is some flag, then rather than using a pull mechanism (a pull mechanism in this sense being me invoking an API from the other layer to process this document to NFT storage), with those flags in place that abstraction layer itself can pull up the data, see which values are not yet in NFT storage, and take care of persisting those settings. Yeah.
B
So your microservice should... because it's a microservice, because we changed up the architecture slightly. Originally we were going to have the microservice for, like, the user object talk through the Go module directly to the NFT storage, and then we changed it up to make it more reusable.
B
So in this scenario, the user microservice will do all the normalization and denormalization of the JSON, and then it'll take the normalized JSON, package that up, and send that over to...
H
Well, it'll... it'll store the...
B
It'll store the denormalized version in ArangoDB, and it'll send a normalized version over to your microservice to be stored into NFT storage. So you'll have all the CIDs, and it will basically be an array of keys; it's basically a dictionary. You have the key and the JSON that needs to be persisted in IPFS, so that will come across. So your microservice will have a single interface, which will basically be a dictionary with the CID and the JSON, as in your list.
B
That would be posted to you, and then from there, on the retrieval side, we'll need to say: I want to go retrieve this CID and all the associated CIDs, and return that list back to me. And then from there, the calling microservice will take that normalized data, denormalize it, and store it back into ArangoDB.
C
Yeah, that makes sense. So basically you'll make two calls from the module: one will be, like, storing the denormalized data to ArangoDB, and...
B
Exactly, and that way your microservice for the long-term storage can be generic. Then, when we go to replace NFT storage with the Emporous OCI registry, we'll be able to just swap out that microservice, and so the same data will get sent across; then we just have to deal with the different persistence at that level.
C
Rather than doing two calls, can we just do, like, one call that will persist the data in ArangoDB, and there is this abstraction layer which will schedule, like, a call every 30 minutes or so? The job would be to, you know, get all the data, infer which data has been persisted, and if the data is not persisted to NFT storage, persist those and then update the flag.
B
Let's say we need to rebuild ArangoDB from the blockchain ledger and the NFT storage. We would need that job to go through and build up the cache on the ArangoDB side. But the one thing we don't want to do is mirror all the data that's in the long-term storage in ArangoDB, because it will have terabytes and terabytes of data.
B
So what we want to do is make ArangoDB really just a cache of basically the name and the CID, so we can go find the objects kind of on demand from the long-term storage. And there will be a lag there, so we'll have to do something on the front end saying, you know, "we're querying the repository for your data; we'll notify you when it's been retrieved," or something like that.
C
Yeah, yeah. So in that case we would need some classification, like some information we would need to be present, right?
C
If we implement it that way, we could avoid a lot of the problems that we would otherwise face.
C
Yeah, if you are good with it, I'll continue writing a script there itself, just, like, a normal job that will run every 30 minutes and take care of persisting data from ArangoDB to NFT storage.
B
Yeah, yeah, exactly. So on the persistence side, there will be a point in time where there's a potential we could lose data that's in ArangoDB but not in long-term storage, if ArangoDB crashes and we lose the data out of ArangoDB for some reason. But we can put ArangoDB into a high-availability cluster to minimize that type of scenario.
B
The other way to do it would be: in the transaction process, you add the data to ArangoDB, and then the next step is we go off and make the request over to the long-term-storage microservice, and then we let that run in the background and return control back to the end user. There's a background thread that's waiting for NFT storage to do all of its work, and it works its way back to mark everything in ArangoDB at that level.
C
Done, yeah; basically an asynchronous request is what you want to place. But the problem here is we don't know how long it will take. So suppose there are, like, a thousand requests, and those thousand requests came in over two or three days, and for those two or three days the NFT storage was not up or not working for some reason. So in that case, what will happen? Threads will, you know, keep on accumulating, and we will end up in a situation where heap memory is too much: an out-of-memory situation, right?
B
Right now, what we're running into is: we're going to be persisting JSON into NFT storage, and NFT storage is basically backed by IPFS. What we've run into is that it can take, if NFT storage is up and running, maybe five seconds to persist a piece of JSON into the storage.
C
I'm not sure, because currently we are using testnets, right? Testnets, and in development. Is there any, like, higher-performance network that we can leverage? I'm not sure.
B
Yeah, and I don't know if they're connected together, or if they're totally distinct networks. I think I've run into this, where, even though it's still part of NFT storage and the IPFS, there's a different entry point that's faster and that, in the background, will eventually replicate to all the other nodes out there in the IPFS world. I'll check; I think I ran into that.
B
Because I remember having to use a different, you know, IPFS URL that was slightly changed: they put the CID first, as part of the domain name, instead of having the domain, slash, and then the CID.
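That matches the two common IPFS gateway URL styles: the path form (`https://<gateway>/ipfs/<cid>`) and the subdomain form with the CID first in the hostname (`https://<cid>.ipfs.<gateway>`). A toy sketch of the two forms (the gateway host and CID below are assumptions):

```go
package main

import "fmt"

// pathGatewayURL builds the classic path-style IPFS gateway URL.
func pathGatewayURL(gateway, cid string) string {
	return fmt.Sprintf("https://%s/ipfs/%s", gateway, cid)
}

// subdomainGatewayURL puts the CID first, as part of the hostname,
// which is the slightly changed form described above.
func subdomainGatewayURL(gateway, cid string) string {
	return fmt.Sprintf("https://%s.ipfs.%s", cid, gateway)
}

func main() {
	cid := "bafyexamplecid" // hypothetical CID
	fmt.Println(pathGatewayURL("gateway.example", cid))
	fmt.Println(subdomainGatewayURL("gateway.example", cid))
}
```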
B
But do you have everything you need to kind of move forward with your piece, Ukash?
C
I just need to test my application. So what I'll do is look at that application you have created right now and see if I can create some, like, sample data in ArangoDB and corresponding data in NFT storage. Yeah, and one more thing: you previously mentioned to use CAR... sorry, CI... CID... yeah, a content archive registry or something. Right now I'm not using that; maybe in the future we'll see how we can implement that. Yes.
B
That is not a requirement to start off with. So what we're talking about is called a CAR; it's a content archive format. Basically, you can think of it as a zip file format for IPFS. What it allows you to do is create a CAR and then persist the CAR, and it takes everything in one transaction and moves it into IPFS, instead of having, like, layers at that level.
B
So it saves you on the number of calls over to the NFT storage and IPFS. So I think right now we don't have to worry about that, but I think down the road it will help us.
B
And there is a Go module that will allow you, or help you, create the CAR file, basically.
B
So what I remember reading about it was: you create all the different layers. Like in your zip file, each file would be equal to a layer, and then each one of those layers has its own CID, and then you have a parent set of all the other CIDs. So it's kind of like a hash of hashes, so everything becomes immutable.
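The hash-of-hashes idea can be sketched in a few lines; this uses plain SHA-256 as a conceptual stand-in, not the actual CAR/CID encoding:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// layerHash hashes one layer's content, standing in for its CID.
func layerHash(data []byte) string {
	return fmt.Sprintf("%x", sha256.Sum256(data))
}

// parentHash hashes the sorted child hashes, so the parent changes
// whenever any layer changes.
func parentHash(children []string) string {
	sorted := append([]string(nil), children...)
	sort.Strings(sorted)
	h := sha256.New()
	for _, c := range sorted {
		h.Write([]byte(c))
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	layers := [][]byte{[]byte("layer-one"), []byte("layer-two")}
	var hashes []string
	for _, l := range layers {
		hashes = append(hashes, layerHash(l))
	}
	fmt.Println("parent:", parentHash(hashes))
}
```

Changing any layer changes its hash and therefore the parent hash, which is what makes the whole set immutable.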
B
Oh, and that's why you need... and it's a little weird how you create it, because you create, like, a placeholder CID at the parent level, you go through and add all of your other layers, and then you have to go back and calculate what the hash of all the layers ends up being. And then you go back up and update the parent CID with the real hash and replace the placeholder.
B
I was looking at doing it in Python; it was just going to be way too complicated. We could have done it, but it would have meant writing it all from scratch.
B
We can spend our time better. Okay, cool, so you're set on CARs. Kat, do you have anything on your list that you need?
B
Right, cool. So: Sasha, take a look at the Ortelius-in-the-box for version 11; Kat works on moving some of the repos over; and maybe next week or the week after, we'll take a look at doing Emporous-in-a-box and what we need to stand up there.
B
And Arvin, we will keep on working on those reusable actions, and hopefully, early next week, we'll be able to roll all those actions out to all the new repos.
B
And go ahead and approve those PRs that are outstanding.