From YouTube: March 3, 2022 - Ortelius Architecture Meeting
B
No, it's okay. If folks jump in in 15 minutes, we'll just catch them up to speed.
B
So this is the March 3rd architecture meeting. We're going to do a little bit of coding today; it's going to be more collaborative coding. Welcome, Joseph and Arvind. Let me share my screen and we'll kind of dive in here.
C
Just so you know, guys, I'll only stay for 15 minutes; I have to catch up with some things. But I just wanted to get a view of how things are run over here.
B
So one of the things that Ortelius does is grab information from the build process. It takes that information from the build, captures it, and stores it as a component version.
B
So this is one of the things I've been working on: moving the data that we collect, all these variables and such, out of the pipeline itself. In Jenkins, for example, you have a Jenkins pipeline file; it's called a Jenkinsfile. In the past we'd put all this information into the pipeline file, and what ended up happening was the pipeline file became very unique to that microservice. So, to make the pipelines more generic, we separated it out into its own file.
B
So what we ended up doing is we have these couple of steps, and we can see here we're actually doing the Docker build at this level.
B
Now, one of the things that happens in these different online build tools is they have different ways to pass data from one step to the next. In Google Cloud Build you can pass things through the workspace file system; that file system is mounted into every single step, so you have access to it. One of the weird things in Google Cloud Build, though, is that you can't pass environment variables from one step to another step.
B
Now, every cloud build tool, whether it's GitHub Actions, Google Cloud Build, or Jenkins, works slightly differently in how you move things from one step to the next. But in general, the safest way I've seen is to put the environment variables, the data that you want to share from step to step, into a file, and then load that file in.
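The file-based hand-off described here reduces to a few lines of shell. This is a minimal sketch, not the actual Ortelius pipeline; the file name `dhenv.sh` and the variable names are illustrative assumptions:

```shell
# Step 1 writes the variables it wants to share into a file on the
# shared workspace (file name and variables here are illustrative).
cat > dhenv.sh <<'EOF'
export DHURL="https://deployhub.example.com"
export COMPONENT_VERSION="v1.0.0"
EOF

# A later step recovers them by sourcing the file.
. ./dhenv.sh
echo "shared version: $COMPONENT_VERSION"
```

Because the workspace file system is mounted into every step, any later step that sources the same file sees the same values.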
B
Another one is the short SHA; that's a variable that gets pre-populated. Things like the git repo we can derive by looking at the workspace. So one of our first steps does that for us; in other build tools you have to do it yourself.
B
What ends up happening is we check our project out from the git repo, and that ends up in our workspace, so you can think of it as having a local clone in your workspace at that level. Because we have access to our git repo there,
B
we're able to run git commands against it. So that's where, in this case, the git repo and the git URL, some of those values, we can actually derive on the fly by doing a series of git commands. If you want the URL, `git remote -v` will basically give you the repo URL, and so on.
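As a hedged sketch, the same derivation works against any clone. The throwaway repo below exists only so the commands have something to query; in the real pipeline the clone already sits in the workspace:

```shell
# Create a throwaway repo so the git queries below are runnable.
git init -q demo-repo
cd demo-repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Values a pipeline can derive on the fly from the checkout:
GIT_URL=$(git remote -v | head -n1 | awk '{print $2}')  # empty here: no remote configured
SHORT_SHA=$(git rev-parse --short HEAD)                 # short SHA of HEAD
COMMIT_TS=$(git log -1 --format=%ct)                    # timestamp of the last commit
echo "sha=$SHORT_SHA ts=$COMMIT_TS url=${GIT_URL:-<none>}"
```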
B
The dh component CLI is the bridge between, say, Jenkins and the TOML file and all the RESTful APIs that run Ortelius on the back end. If you guys want to take a look at it, it's actually out there; let me navigate to it.
B
It's basically a Python program broken into two parts. The first part is kind of the main driver program; this is the one that does the interface with all the parameters and so on, and it's based on a set of actions.
B
So what ends up happening is we run our workflow, and the workflow has our dh steps in it. One of the first ones we do is create our environment file.
B
So basically we do a pip install of the CLI, and then from there we actually go ahead and create a new script that contains all the environment variables. Now, one of the things we did was add this export section, so we didn't have to have data split between the pipeline and the TOML file.
B
The version is made up of the build number and the short SHA. Now, the build number is another variable that we calculate; I believe it's passed by Google Cloud Build as well, and the same with the short SHA. So what ends up happening is we're just consolidating all these variables into one place, and that allows us to go ahead and export details back to the shell script that we're utilizing here.
B
So that's what we're doing at this level, and then we just source it and print it out, so we can see what's happening and make sure it's working. When we do that, we go through our build, and then we get to our update-component step, which again reads the TOML file plus the extra information that was appended onto the shell script.
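As a sketch, the consolidated environment script might look like the following. The variable names and values are assumptions for illustration, not the CLI's actual output:

```shell
# Illustrative generated environment script: the version is composed
# from the build number and the short SHA, as described above.
BUILD_NUM="42"
SHORT_SHA="abc1234"
cat > dhenv.sh <<EOF
export BUILD_NUM="$BUILD_NUM"
export SHORT_SHA="$SHORT_SHA"
export COMPONENT_VERSION="v$BUILD_NUM-g$SHORT_SHA"
EOF

# Later steps source it, then print the result to verify it worked.
. ./dhenv.sh
echo "COMPONENT_VERSION=$COMPONENT_VERSION"
```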
B
Now, one of the weird side effects is that you can't get the Docker digest until the image has been pushed to a repo. So we actually go in and calculate that, and then we go ahead and append it to the file, so we have all of our variables; and that's what happens when we run that command.
B
And in that component we can see the things that got passed across: the build URL, the registry, the git commit, the repo that was used, and all the other detail that we were able to bring across, like the Slack channel. So we should see the Slack channel over in our TOML file here.
B
So that's where we keep grabbing this information and pulling the data across. Now, one of the things it will also do, if you don't supply something in the list here, is go and look for things that already exist. Because we're running in the context of the git workspace, it will pull in the README file and a Swagger file if they exist. So if we look back here, we should have our Swagger.
B
You know, where the application is stored in the domain hierarchy, whether that's in store services or payment processing. So those would be some of the things that end up changing: the Docker repo will change, the chart will change, the owner will change. There will be some things that change, but a lot of it's going to be similar. As part of that process, we need to make sure we have a component TOML file for every one of our microservice repos.
B
We need to make sure that the cloud build is updated to use the new format of calling our CLI, and then from there we'll be able to validate that everything is being populated back over on the Ortelius and DeployHub side.
B
So this is where we'll be able to gather the data. Right now we're pushing it over to the DeployHub SaaS, the free version, which is basically hosted Ortelius.
B
That's just because our Ortelius that's running on Azure isn't quite stable enough, and that's what some of this work is going to fix. So if we go back to GitHub, go to Ortelius.
B
What were they called? I think they're called store.
B
There they are. So these are the store services that we need to update. Oh, and one other thing I was running into with the store services.
B
A lot of this is the example that comes from Google, and that example is a little bit out of date, so some of it won't compile anymore; they've updated .NET Core, for example. So one of the things I did, just to get us over the hump, is the Dockerfile: I changed it over to an Alpine base image instead of actually building and compiling the code, because we don't need that for the demo; we never run the demo.
B
We never run the hipster store website; we just use it as something to build. So we'll need to change the Dockerfile for all of these git repos as well. A good one to start with should be recommendation service; let me see if it's actually been pushed.
B
Yep, that looks good. So recommendation service is going to be one of the ones we can break out; you can use it as your starting point. And then... I hate when it does this.
B
Yeah, it looks like shipping service has been completed as well; we're pretty close. One of the things I noticed here is, if we compare the TOML files, this one has, I believe, GitHub lines added and deleted and a commit timestamp that we're picking up. Now, one of the interesting things with the lines added and deleted:
B
the way we figure that out is by finding the previous version of the component, grabbing the git commit from that component, and we know the git commit for this component, and there are git commands that will give us lines added and deleted between two SHAs. So that's how we go and grab that information. If it's not specified, the CLI will go ahead and insert that data if it can determine it.
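The git side of that lookup can be sketched with `git diff --numstat` between the two commits. The toy repo below just gives the command something to diff; in the real flow the two SHAs come from the current build and the previous component version:

```shell
# Build a tiny repo with two commits so there are two SHAs to compare.
git init -q lines-demo && cd lines-demo
printf 'one\n' > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "first"
OLD_SHA=$(git rev-parse HEAD)
printf 'one\ntwo\nthree\n' > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "second"
NEW_SHA=$(git rev-parse HEAD)

# Sum lines added and deleted between the two SHAs.
ADDED=$(git diff --numstat "$OLD_SHA" "$NEW_SHA" | awk '{a += $1} END {print a+0}')
DELETED=$(git diff --numstat "$OLD_SHA" "$NEW_SHA" | awk '{d += $2} END {print d+0}')
echo "added=$ADDED deleted=$DELETED"
```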
B
So we have our six services here that we'll need to add the same TOML files and the same information to as well. But I want to give the store services a go first, because we're not going to break anything if we mess something up. So this is a really nice little playground for us to work with. With that, I'm just going to take some questions.
B
So, Arvind, does that kind of help give you an idea of where we're going to start?
A
B
Yeah, my screen blanked out there. I'll give you the user ID that we can log in with.
B
We can see... let's say you take ad service. You can go into the existing ad service and see that it's in store services as well. The name of it is going to be ad service, and I think one of the old ones will have, say, the chart name as part of that. So we'll be able to get some of this information from the old data in here. Again, it's not 100% complete, because it's missing a Swagger file, for example.
B
It's missing some other stuff as part of that process, like the lines of code; those things are missing from some of these. So here we don't have the lines of code, or the git commit timestamp. For things like the name, though, we'll be able to look at the old data and pull that from there. So let me drop the user ID in the chat.
B
What ends up happening is, if you make your own account, you get your own copy of all the store services, and for demo purposes we want to make sure everything's updated in one place. So let me just put it in the chat.
A
B
It's a real, unique user ID and password. So, who did we just have join here?
B
So, Tracy, the calendar invite is off by a half hour.
A
B
Okay, well, I'll take a look at it. What's the deadline date on that?
A
It says April 8, 2022.
B
F
Arvind, I just looked at this form; it looks pretty straightforward. You're going to have to look up your airfare.
F
Just go online and look; they should know what the hotel cost is.
F
So figure 400 US dollars a night for the hotel at minimum; that's sixteen hundred dollars for the hotel, and then you've got to figure out your travel: car and flight.
C
B
Yeah, it's when we get into the trade show season, in places like San Diego and certain towns. Like, if you go to a trade show in San Francisco, yeah.
C
E
B
Let me jump back and share my screen. We had a little mix-up on the time today, Sasha, so I kind of went over what we need to do, but I'll catch you up real quick.
E
B
So what we're going to do is some DevOps pieces to get all of our example microservices updated. We use that hipster store for all of our demo stuff.
B
So one of the things we need to do: I think there are 14 microservices that are part of the hipster store, and we need to get them converted over to the latest version of the CLI interface. What that allows us to do is gather and pass along,
B
you know, the build information; all of that is going to get passed along from the cloud build side. And one of the things we've done is actually switch away from the pipeline having all of the data in it. So let me pull up an old one.
B
Currency service; I don't think this one's been updated. Yeah, so if we look at the one from two years ago, we were gathering a whole bunch of command line parameters.
B
So we had this big, long line of all the command line parameters that we wanted to gather from the cloud build, and we basically had hard-coded in here, you know, the names of the service and things like that. What that caused us to do is have very specific data in the pipeline instead of it being generic. So we split it out into a TOML file that contains all of the variables, the unique data. So this is recommendation service's.
H
B
Nice. So what we did was change over to having the TOML file, and then basically that long line I was showing you is now just a simple one-liner, with everything coming from the file. Now, one of the weird things that we run into with the different cloud build tools is how they pass data between steps.
B
So if we have, say, these environment variables here, the user ID and password for the CLI, we expose those at this step, the login step of the build process. Now, when we get to the build-and-push step, those variables don't exist; these tools don't share environment variables.
E
B
Well, what they do is allow you to share a volume mount, basically, because in the background these steps actually end up being Docker images that are running.
E
B
Yeah, so what we end up doing is a workaround, and this works well with all of the tools, without having something like a HashiCorp tool installed, because there's a shared file system of some sort between the different steps.
B
What you do is write the variables to an sh file, and then you just source them in. So when you source...
B
Well, this right here; these are all executed in the same bash shell script.
E
B
And then there's a case where we need to calculate the digest of the image that was pushed, because you can't calculate a manifest digest until the image has actually been pushed to the repo. So you can build it, but if you only build it, you don't have the digest.
B
So what you have to do is, after the push, go ahead and calculate the digest, and then we just append it to that file. So when the next steps run, they'll source it and have that new variable exposed as well.
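A sketch of that append-and-resource step. The digest value is faked here because Docker isn't assumed to be available; in the real pipeline it would come from inspecting the image after the push:

```shell
# Earlier steps have already written shared variables to the file.
echo 'export IMAGE_TAG="v1.0.0"' > dhenv.sh

# After the push, append the newly known digest (faked here).
DIGEST="sha256:0123456789abcdef"
echo "export DOCKER_DIGEST=\"$DIGEST\"" >> dhenv.sh

# The next step sources the file and sees both old and new variables.
. ./dhenv.sh
echo "$IMAGE_TAG @ $DOCKER_DIGEST"
```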
E
B
That we're getting from the environment. So in this case, because we're under Cloud Build, the short SHA is a built-in environment variable from Cloud Build, I believe, and branch name is another one that we gather from the build system as part of that process. Some other ones come from the workspace, because the workspace is where the git clone happens: when you do your git clone, it goes into your workspace directory.
B
We can actually run git commands against that, and we can figure out the timestamp of the last commit, or the short SHA. We can go and grab the previous commit from DeployHub or Ortelius, and then we have two commits that we can compare to get lines of code added and deleted. Those types of things can happen.
B
E
Your secrets, now: are they exposed in that sourced file?
E
B
Yeah, so in this case... I'd have to double-check; I can't remember. We'll have to look at the output.
C
B
So basically, what we do here, one of the first steps, is take information from the TOML file plus information from the environment. So these are kind of global variables that we've defined, and we're combining that information together and creating the shell script. Now, what ends up happening on most of the build systems is:
B
it is true that the dh user and dh pass will be in the shell script, but when
B
you go to print it out, the build systems know to mask it, so it doesn't show up in the output.
E
B
So what we end up doing, and the reason why we have this initial step to create our starting sh file, is that we wanted to have everything in the TOML file,
B
instead of having, say, the component version defined up at this environment level, which
B
would mean I'd have to have a different cloud build YAML file for every single project,
B
instead of a generic one. What we did was put it into the TOML file here, and I have an export section here that will go through, resolve as many variables as it can, and then export these two variables into the sh file. So here we'll grab variant, which is the branch name, which in turn is being passed in from Cloud Build; and the same thing with the version, which is getting passed in from the...
G
B
E
All these months of being in Ortelius, I'm finally seeing how your head worked with this. It's bloody smart, man, Steve. You're piping variables along into the next step every time, and then gathering what was already there, instead of having to go and use, I don't know, a HashiCorp type of tool or something. That's smart.
B
Yeah, so here we set up our initial sh file, and then, like you said, Sasha, we keep sourcing it in, so we expose those variables to the next step, and that just keeps on happening. Like in this case, we source it in the cloud build step, which is then exposed to those environment variables, and now we have the Docker repo and the image tag exposed for this command.
B
So it's this multi-pass derivation of stuff that we've got going on, but it really simplifies what we're doing in the pipeline and minimizes the need for updates to that process. And, like I said, depending on the cloud build tool, they pass data across slightly differently; with GitHub Actions you have to use this cache mechanism to cache things.
C
B
We create our Docker image, and then we tell Ortelius or DeployHub everything about what we just did. So this TOML file here, plus all the information that we get from Google: we end up with all of this information, so we get the user IDs.
B
Here's, say, the name that we put in the TOML file, the build ID, the build date. The build ID is actually the Google ID, so you can reference it to the build URL, which has the project name in it and so on.
B
And then, even though we didn't specify anything about the README in the TOML file, we went and actually looked for a README, a license file, and a Swagger file in certain directories, and we loaded those in as information as well. So we grabbed the license file from the repo, and the Swagger file, at that level.
B
Here's the license for that microservice, and here we can see our additional data coming across, like the commit timestamp. We didn't get any lines added or deleted, because we didn't have a previous commit at that level. But this is the data that we're gathering in this process. So, like I was telling Arvind, what we need to do is... we have...
D
B
G
B
So that's the good starting point. Let me just double-check that it was pushed; I can tell by looking at the... yeah, it was 21 days ago. So there's that. So if we go back:
B
if you look at, let's say, payment service: payment service will still probably be old. So if we look at the cloud build...
B
Yeah, so this cloud build is using the old version of the build file, and we can see that it has...
B
I can just tell by the way the arguments are being passed. So this is the old version of the cloud build, and probably the old version of the TOML file, so payment service is one of the ones that we need to update. Yeah, here's the old version of the TOML file. So our goal is... we have, I believe, what was it, like a dozen of the store services.
B
So recommendation service will be where we start. We'll just have to call out which one you're going to work on and go ahead. And I don't know if you guys can do a direct commit to this.
B
Yeah, that could be the way to go. What we'll do is I'll get the security sorted out so that we can commit directly to the repos, without having to do a fork and a branch and all the PRs and stuff like that. Oh, and the other thing, Sasha, that I found was that some of these don't compile anymore, because .NET Core has changed and things like that; they retired one of the .NET Core base images.
B
You want to take the one that says... so, Arvind, you will work on ad service.
B
So you clone ad service as well, and in there you'll look at ad service main, for example.
B
Let me see if ad service has been done; some of these I got done, some of them I didn't. But, things like: here's the name, the domain name of the service, and the base name of it. What ends up happening is we take the domain name, store services, dot, ad service, and then you append dot ad service in lower case to come up with the name. So you'll see in the name that's where it's kind of dotted together at this level.
E
B
E
C
B
No, I don't. When I was looking at email, it was not done.
D
B
So it looks like we have four left to do, plus the six Ortelius ones.
B
Yeah, so go ahead and clone that one.
B
You can just do a direct clone, and if you can't push to it, I'll fix the security on it.
E
I've seen your skills, yeah.
A
Yeah, I wanted to ask: in email service there are a lot of images, so which one should I consider taking?
A
D
In this part, email, so we did... oh yeah, so...
A
B
So that's going to be... the name of your component will be that domain name, dot, email service, all one word, lower case. So go ahead and copy, yeah. If you want to go into any one of them, yeah, right there; so copy that.
B
No, we can go back and get it. So go to recommendation service, the TOML file, and I'll go back to your editor. You cloned recommendation service, right?
A
B
And go up to Ortelius.
A
B
And say View Raw.
B
Copy all that, the whole thing, yep. Now go back to your Visual Studio and paste it in there. Get rid of all that; perfect. Now, see, on line five we need to update that to match up with what you have in email service. So now go back over to the console; yep, copy that.
B
A
B
So do dot... no, dot email service, all lower case, smashed together.
B
Exactly, because we're defining everything for the email service microservice, and then the...
A
B
Yep, exactly. Now, for this one, go ahead and paste the whole thing in there, to get rid of that.
B
Okay, now let's go ahead and go to... we have one more file to update, which is the Dockerfile.
B
A
B
There it is. Now hit save; it's this weird thing where people basically want a new line at the end of the file. All right, save all those and go ahead and commit them.
E
But I've cloned the repo; I just need to make those updates and go over the changes I need to make for that particular service.
A
B
A
B
Let me know when you have your PR created.
E
So what did I need to do on the store currency service again, the cloud build or the TOML file? I remember you had stored all the configuration in the TOML file now. You can see my TOML here.
B
All right, cool. So bring up GitHub and go to the recommendation service.
B
E
B
Go to component dot toml.
B
On line five is the recommendation, so we need to replace that with whichever one you're on: currency, currency. So the easiest thing to do is go over to the DeployHub SaaS version.
B
Try doing the replace with that underscore, yeah.
D
B
Yeah, there's a... when it goes from http to https, it gets confused. Okay, the username will be stella99.
B
Yep. Now you can copy that, because you're going to need it on line 15: just the currency service part, the last part, yeah, that part. So you'll see the chart.
B
G
B
I like it; it works. Okay, now we need to do the cloud build file, the YAML.
B
Yep, cloudbuild.yaml.
B
And that's the nice thing when we move to this method: it really separates out what you have to do to your pipeline. So if you have a Jenkins pipeline, or whatever, we just insert our couple of little commands in there, and then there are no hard-coded variables.
A
E
B
All right, now you're ready to commit and push it.
E
B
I'm going to hijack it, I think. Okay, so we'll just take your update; you just did currency.
B
E
So what are you doing with all your environment variables? Obviously, if you're just using Google, then you could just put it all into their secrets and config manager, right: environment variables and all that. But because it has to be cloud agnostic, you have to find a way of making it dynamic no matter what environment you're in, right? Well...
B
E
B
Well, a lot of these variables are build-time variables, which are different from Kubernetes runtime variables.
B
So, like the branch name and the git repo: those are the ones that we need to gather from the build.
B
Okay, so can I kind of walk through what we have going on? Actually, let me do this; I'm going to do a side-by-side.
B
So it did the build, and the first step was installing the CLI with pip, so we can see pip being installed. We've installed...
H
E
D
B
And then, because of that, when we got down to the actual build step, these variables for the tag weren't defined, because the TOML wasn't there; and because the TOML wasn't there, we didn't derive what these new variables were.
B
D
B
So you'll see here; let me make this bigger.
B
We downloaded the CLI, and then we read in the TOML file, which is what this is doing, and it goes through replacing the variables it currently knows about. So it will grab information and replace what it can, and then it creates the new file.
B
The one that's kind of interesting, that you've got to double-check, is version. So we should have it in the list here... oh, and then we're going to export version as...
B
E
B
Not any changes that you did. So what ends up happening is... these are all the new variables. So here, Sasha, we figured out the lines of code that changed
B
between the last commit that we did and the new one; so the two thousand lines of code is going to be all the changes to the...
E
B
You have to tell it to refresh; so, currency.
D
E
B
Yeah, so there's our build ID. I don't think that project has a README in the root; I don't know, double-check.
E
No, not that I can see; there's nothing here. There's no license or README here, yeah.
B
So that's the reason why we didn't get those uploaded. And then we have all of our nice information as part of that process, like our Slack channel. I have to check; there are some things that aren't quite coming across, like the business URL isn't quite right.
G
B
A
E
That's so cool, Steve; thanks for your time.
F
B
It's called load balancer... load something.
B
E
B
And see, in yours we brought in your README; it's a real small README file, just one line in it.
B
So we brought all that information in as part of that.
A
B
You can start on the ones for our microservices. I have to run here soon, but if you search our repos for the ms- prefix, we have all of these microservices that we need to work on.
B
Actually, I'll drop the list in, because it's a little bit shorter than the one we have here; some of these I may archive.
B
Yeah, I'll drop them in Discord and we can put them in place. Now, for these I'll give you a different login, because they go to a different domain structure in Ortelius; so far we've logged on with stella, which is the user ID.
E
B
So these will go into a different domain, so the user ID is ortelius for those; I think that's what it is. Yeah, there they are.
E
Yeah, I don't mind doing it; like Arvind said, I don't mind helping out. Arvind and I can crush these for you.
B
Those will all be PRs, because you probably won't have direct access to those repos.
B
So for these the naming convention will be slightly different, but the main part will be in the Ortelius SaaS domain.
E
Yeah, I want to learn how to use this TOML process you've got going in my own stuff at work, because it's super awesome and it's so much faster, yeah.
B
The trick on this is, if you look at...
B
So this repo, Sasha, is the command line interface; it's just written in Python.
E
B
So what I've done is create some functions to make life easier, like getting a JSON file or posting a JSON file. You'll see, for example, it's going to deploy an application by application ID, and this is where we're actually making the call up to the RESTful API endpoints; it does things like figuring out whether the call was a success and cleaning up the data that it gets back.
B
So that's what this first one is: the API interface. The other one is the driver, which is in the bin directory, and this is where all the actions occur. The actions that we've been working with today are the update comp one and the env script one, this one here.
B
So this is where we actually read in the files. One of the things I do, because it just seems to work better, is I actually cat the file and pipe it through a decoder; there are other ways to do it, but this seems to work best for loading in the TOML. And then this is just a Python module, qtoml; there's another one out there just called toml, but...
B
H
B
For when we need it, I wrote just a little helper function
B
that tells me whether something's defined or not in Python. If it is, we go through and read in the existing sh file, and then we append to it. Oh, and what this is doing is saying: if I don't pass in the name of the file, I'm going to give it a default. Then we go ahead and open that up, and we pull the information together at the dictionary level. And this is where I'll have to look to see why the build number is not coming across.
B
And then what ends up happening is you loop through the keys in the dictionary, so we do a multi-pass on it, and that's where we actually end up writing out the environment variables. And there's another one...
C
B
Remember, there's a neat function in Python, a string function, that will look for this bracketed type of variable and allow you
B
to substitute in any text string for that type of format.
B
So let's see; this one's, say, the update component one, and one of the things we do is load the TOML file into the dictionary again. So we read that in, and sometimes we need to do some cleanup on it; you'll see in the code where I actually go and clean things up as part of the process.
B
But that's kind of the... I can walk you through a few of them. I try to document this pretty well, but it's just been growing.
B
E
A
B
And we had to use some things like flattening as part of that, but it comes out; it gives us a nice little solution. Like I said, I have to go figure out why the build number's not coming across.
F
B
But that's kind of what's happening, what's driving the whole process: taking the TOML file, massaging the data, pulling information from the pipeline, and then our end result is uploaded as component data. Now, other things that we're going to be capturing:
B
you know, the README, the Swagger; there's more data that we want to start capturing along with that, so we'll keep on expanding. And if you think of anything else that would be important, Sasha, especially for what you've been doing: if there's anything that the developers or the SREs want to know about a component that we should capture, let me know and we can make sure, because this attributes list can basically be pretty infinite.
E
Yeah, actually, what I still want to do is deploy Ortelius into my current environment and use it for myself, and then I can actually show people: look, I'm using Ortelius to help me with my day-to-day, and show them what I can actually do in a real environment, right? Yeah. So I'd love to try it; I want to plug it in. It would be quite cool to do a session like that, to see how you deploy Ortelius in your own environment, yeah.
B
I think with us getting the Helm charts sorted out, and doing things like one-click installs of Ortelius through Rancher or Terraform, we'll be able to make it easy to set up and get going.
E
B
All you have to do is... literally, if you don't have anything fancy: the only reason I had to derive something like the image tag from the data here is that it wasn't already available. If the image tag is already derived by Jenkins or the build system, that can be something we just use, and we can actually skip this first step of generating the sh file.
B
So literally, when you go into your pipeline, all you have to do is plop in these eight lines of code and a TOML file, and you'll start gathering all this information automatically.
E
B
You just need to add a shell command to run the dh update.
G
E
Oh my word, okay. Okay, I'll start setting it up right away, actually.
E
B
C
B
And give it a go. For those other repos, I'll drop the list later today of which ones we need to tackle; there are like six of them.
B
A
B
The Helm charts... advanced Helm charts would be, you know, nested Helm charts, or what are they called, parent and sub Helm charts; that would be the next one. We're going to be working on that with Ortelius, so you'll get a feel for it with that, but that's the one.
B
Basically, you want to be able to install, say, five different charts from different repos.
B
So if you look at the WordPress one off of Bitnami's site, it'll install WordPress itself, and then there are a couple of other containers that it uses, I think, like an nginx front end. So these more complex Helm charts are what I'd start looking at. The other one is Kustomize.
B
Kustomize is pretty cool; I like it a lot versus Helm. So look at Kustomize as well, for updating the Kubernetes files.