From YouTube: Mar 16, 2023 - Ortelius Architecture Meeting
B: Welcome everybody to the Ortelius March 16th architecture meeting. Let me share my screen and we'll get going here, so go ahead and sign in to the shared doc.
B: So let me jump over. Of course, Zoom's always annoying. The first thing I want to do is just go through the payouts that we have going on.
B: So I have one question, Utkarsh, on issue 515. We didn't assign a bounty to that one. If you could, let me know how many hours you put in on that one, and then I will get everybody paid out.
B: Just either update the issue or DM me.
B: I think everybody on the list has been taken care of in the GitHub Sponsors setup portion, so once I get that number, I'll go through and pay everybody out today. What I'm going to do is put in a pay date here.
B: So, some of the things that we have going on on that front: I did finish up the architectural document, so it's available if you want to look at it.
B: In the document I went over where we're at today and our existing architecture diagram.
B: One of the weird things with the Mermaid stuff is that you have to use your browser zoom to zoom in on the information. They don't have a zoom button available yet; they said it's a known issue.
B: So there's a diagram of the existing architecture, and then the new architecture diagram is also going to be available out there. That one I'll pull up separately so it'll be a little bit easier for us to take a look at in a second. One of the things I did was try to define our functional requirements, some of the information that we need to grab. Like I said last time, this is from that spreadsheet.
B: I've been playing around with the OpenSSF Scorecard portion, and there are probably another six things that we should add to this list from the Scorecard side of things.
B: So if you're on Mac, you can do a brew install of scorecard, and then you pass it either a local repo or a remote repository. Like I said, these are the things that we need to add: is the code maintained, token permissions, stuff like that. Let me actually run it without the show details.
B: So it gives you a score, and this is the information that we want to collect. Branch protection here we don't have in our list, and that's one of the things I'll need to do another PR to update: branch protection, contributors from different organizations.
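The Scorecard checks mentioned above come back in the tool's JSON output, which makes it easy to pick out the weak spots programmatically. A minimal sketch, assuming the `score`/`checks` field names from the Scorecard documentation; the sample values here are made up:

```python
import json

# Hypothetical sample shaped like `scorecard --repo=<url> --format json`
# output; the field names ("score", "checks", "name") follow the Scorecard
# docs, but treat this exact shape as an assumption.
sample = json.loads("""
{
  "score": 6.5,
  "checks": [
    {"name": "Maintained", "score": 10},
    {"name": "Token-Permissions", "score": 0},
    {"name": "Branch-Protection", "score": 3}
  ]
}
""")

def checks_below(result, threshold):
    """Return the names of checks scoring under the given threshold."""
    return [c["name"] for c in result["checks"] if c["score"] < threshold]

print(checks_below(sample, 5))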
B: So the web front end will be accessing the RESTful API endpoints, and there are parts of how we deal with the pull/push-through cache. The CLI will use the same RESTful APIs for the most part, so whatever the front end's doing, the CLI will be there as well to populate the data into the database.
B: Some of the technical decisions that we've gone over the last, pretty much, a year: we're going to move from Postgres over to a graph database. The reason we're going to use a graph database is the dependency relationships, which graph databases can navigate much quicker, and we can have a simplified data structure in the graph database following those dependencies.
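To make the "following those dependencies" point concrete, here is a minimal sketch of a transitive dependency walk over graph-shaped data. The component names are invented for illustration; a graph database would run this kind of traversal natively instead of via joins:

```python
# Dependency edges stored directly as a graph (adjacency lists).
# All names here are made up for illustration.
deps = {
    "app-v1": ["comp-a", "comp-b"],
    "comp-a": ["lib-x"],
    "comp-b": ["lib-x", "lib-y"],
    "lib-x": [],
    "lib-y": [],
}

def transitive_deps(graph, node, seen=None):
    """Walk the dependency edges depth-first, collecting every node reached."""
    seen = set() if seen is None else seen
    for child in graph.get(node, []):
        if child not in seen:
            seen.add(child)
            transitive_deps(graph, child, seen)
    return seen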
B: So we have to go through the normalization process to eliminate the redundant data, only store data once, and then point to it at that level.
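A minimal sketch of that normalization step, pulling a repeated sub-object out so it's stored once and referenced by key. The field names are invented for illustration:

```python
# Sketch of "store data once and point to it": the repeated "owner"
# sub-object is extracted into a shared table and replaced by its key.
# Field names are hypothetical.
def normalize(records):
    """Split records into unique shared objects plus records that reference them."""
    shared, slim = {}, []
    for rec in records:
        owner = rec["owner"]                # the redundant sub-object
        key = owner["name"]
        shared.setdefault(key, owner)       # stored exactly once
        slim.append({**rec, "owner": key})  # record now holds a pointer
    return shared, slim

records = [
    {"component": "a", "owner": {"name": "ops", "email": "ops@example.com"}},
    {"component": "b", "owner": {"name": "ops", "email": "ops@example.com"}},
]
shared, slim = normalize(records)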
Why did we choose Arango over any of the other graph databases?
B: One of the things when I was looking at the Arango database: it acts as a document store as well as a graph database, so you don't need a predefined schema, and that's one of the nice things. When you look at other graph databases, you have to create predefined schemas and create the indexing and stuff like that, and Arango, from what I can tell so far, just handles that natively without us having to do extra work.
B: Why are we using blockchain? We pretty much all know that we want to give an immutable history of the transactions, and one of the things when you look at the XRPL blockchain over some of the other implementations is that it uses proof of consensus versus proof of stake or, there's...
B: I can't think of the term off the top of my head. One of the things with proof of consensus, especially in the XRPL implementation, is that the current state of each block is maintained. So, like, with Bitcoin, if you want to go through and add something to the blockchain, you have to actually start at the beginning of time and walk your way all the way through every single block.
B: This is a big one: why are we using NFT storage?
B: It's about four terabytes a month, compared to something like AWS, which is twenty dollars for a terabyte per month, and that stuff will add up very quickly. The reason NFT storage is so cheap is that they actually use Filecoin behind it to get contracts with storage companies; they bid to become part of the network and get paid on that part.
B: Some of the things that we're going to be doing: we're going to retire the monolith and replace it with a new front end. Right now I'm leaning towards Riot.js. Docker Hub uses Riot.js, so we have a working model that we can copy from. If you look at the Docker Hub UI repo, you can see their layout for Riot.js. There's one other thing I'm going to write up for the UI part.
B: That's another phase two of the architecture. Riot is component based, so you can actually have web components. Like in our component, we have the different boxes of information; each one of those can be a component. What I'm looking at is, we may even split out each web component into its own repo, so we actually have microservices at the UI level as well. They won't be running in individual containers, because they actually get compiled into a runtime web front end through Node.js, but from an architectural standpoint.
B: We could actually have the web component repos broken up, so it makes it easier for us to allow people to work on different pieces without stepping on top of each other. The jury's still out on that; I'll have to do some more research, but I'll write up a document on that front.
B: I don't think we're going to need an interim Postgres database, but we'll have to see. That's going to be one of those things where we may have a split world to start with. And then here's all the microservices that we're going to need, our JSON data, how the push-through cache works, and the transaction flow. What I'm going to do is actually pull up some of these diagrams locally in my browser, so you can see them a lot easier.
B: Okay, so we have our front end. Like I said, it'll more likely be Riot.js. It will still interact with the nginx reverse proxy, and then we'll go off to each one of the objects. So each object will have its own microservice; we're not going to split it down any smaller. We could actually create an application-version read microservice and then another microservice for the create, update, delete, but I don't think we need to go to that level.
B: I think we can keep everything a relatively small code base with the application version microservice. When we go through and create a new application version, it will go through a validation process; the validation process says yes, that's a valid user, and then that microservice will go ahead and actually read and write to the blockchain ledger. So each microservice is going to be doing its own reads and writes to the ledger.
B: If we look at the transaction flow next, we'll see how that's implemented. This is not a different microservice; this is actually just a function that we'll call to deal with that, so it makes it easier.
B: I did look at other ways where you could do, like, a pub/sub and put all this stuff on a topic and really break it down fine-grained, so you could split the application version into multiple microservices and do routing through pub/sub and stuff like that. I think it just became overly complicated for what we needed to do, so we're kind of skipping that level and trying to keep it simple for what we need there.
B: So this is kind of the state of things, how things move across. Like I said, we're going to create it; in this case I just did a component version, so we go from the UI to the reverse proxy to the component version microservice.
B: Thanks to Utkarsh, we have that normalize function that will normalize the JSON, and then we'll have the persist. This is the next thing we'll need to write: a database abstraction function that will persist to Arango, then persist to NFT storage, and then finally over to IPFS. I don't know.
B
What's
behind
that,
I
can't
tell
yeah
ipfs
and
then
that
that
comes
back
and
from
there
once
we
get
the
ipfs
Sid,
then
we'll
go
ahead
and
actually
add
it
to
the
blockchain
at
that
level,
and
then
that
ends
up
returning
all
the
way
back
when
we
do
a
add
something
to
the
database.
So
this
is
kind
of
like
the
the
save
step.
If
we
look
at
how
to
read.
B: Read works very similarly, where we come through this microservice. One of the things on the read will be a certain key that we're looking for; it'll basically say, I want to find a component version that's named X. That will look into the Arango database, and if the Arango database has it, because it's been cached, it will return the full JSON file back over to the microservice, and the microservice will denormalize it and return it back over to the UI.
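The read-through-cache behavior just described can be sketched as a check-the-cache-first lookup. The names and the `cold_store` stand-in (for a fetch from IPFS via the ledger CID) are invented for illustration:

```python
# Sketch of the read path: prefer the Arango stand-in, and on a miss fall
# back to the slower content-addressed store and backfill the cache.
cache = {"comp-v1": {"name": "comp-v1", "type": "componentversion"}}
cold_store = {"comp-v2": {"name": "comp-v2", "type": "componentversion"}}

def read(name):
    """Return the full document, preferring the cache; backfill on a miss."""
    doc = cache.get(name)
    if doc is None:
        doc = cold_store[name]  # e.g. fetch from IPFS via the ledger CID
        cache[name] = doc       # cache it for the next read
    return doc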
B: This right here may change slightly, because we actually will go through a normalized-to-denormalized step here. I'll have to look at that when we start implementing it, but we will have that database abstraction layer, and hopefully it's generic enough that we can use it for all the microservices. This Python normalize/denormalize function is pretty generic, and I'm hoping that the abstraction layer will be the same. Now, when we get into the OCI registry...
B: It would be, because once we normalize the JSON through normalize/denormalize, at that point the abstraction layer is just talking JSON files; it really doesn't need to know their contents.
B: That's the beauty of the Arango and the IPFS: we're really just storing the JSON as it exists, so the abstraction layer doesn't really need to know what it's dealing with. At the highest level, it will need to know the type of the object, the name of the object, and probably the domain, and that's part of what will be added to the blockchain as well.
B: So the only thing that really needs to know the type is the microservice itself, and even there it's kind of generic.
B: The only reason we put this microservice level in place is if we need to combine data from multiple types. Like, an application is made up of components, and we kind of need to pull that information together at the microservice level. It's theoretically possible we could make one totally generic microservice, but I think we'll run into issues down the road when we try to combine things and add extra data to it.
C: Yeah, that makes sense. I have something in my mind. What I was thinking: maybe we can create one generic method signature, no definition, and have multiple implementations, and based on some decorators or some annotation we can tell the API whether you want to use NFT storage or OCI.
B: Yeah, exactly. I'm thinking at this level this function would look at maybe an environment variable or something like a database connection string.
B: That would tell us what we're talking to. This abstraction function will always talk to Arango, but it may or may not talk to IPFS, or it may talk to OCI. So I think, based on the connection string, we can have an if statement in this abstraction function saying, I need to go talk to OCI and perform these OCI queries to push and pull data from OCI, versus talking to XRPL and IPFS.
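That connection-string dispatch could look something like the sketch below. The scheme names and the tuple return values are assumptions for illustration; the real function would call actual backend clients instead:

```python
from urllib.parse import urlparse

# Sketch of the if-statement described above: the abstraction function
# inspects a connection string and routes to the matching backend.
def store(conn_string, doc):
    """Route a document to a backend based on the connection string scheme."""
    scheme = urlparse(conn_string).scheme
    if scheme == "oci":
        return ("oci", doc)          # would push to an OCI registry
    if scheme in ("http", "https"):
        return ("arango+ipfs", doc)  # would write Arango, then XRPL/IPFS
    raise ValueError(f"unsupported backend: {scheme}")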
B: Yes. I think for the next steps, I'm going to go ahead and create more issues laying out the functionality that we need to start pulling together. This one's basically done, so we'll need to start this one, and then when we do this one, we can kind of do it incrementally.
B: So the database abstraction would just talk to Arango, and we wouldn't worry about IPFS to start with. Then, once we're happy that we're able to send something from the CLI, where we're going to add a new component version through the CLI, it would go through this whole process; we make sure it's in Arango, and then we can query the Arango database outside of our implementation to make sure the data is getting stored correctly.
B: Once we're happy with that, we can go ahead and do the next step, where we hook in the UI to pull the information from Arango, and at the same time we can add the back-end storage, the blockchain and IPFS.
B: And Sasha, I'm going to go ahead and create an issue for Helm charts and standing up Arango in our cluster.
B: I think we can start with just something basic, like we did with Postgres, either a StatefulSet or a pod, I can't remember how it's implemented, running Arango locally in the pod, I mean in the cluster itself.
F: As a starting point, yeah, that'll be great. We can add that to the mini environment on your local machine.
B: So we have the Ortelius Helm chart; I think we should create a second whole set of Helm charts.
B: Maybe we could call it something like ortelius-v10 or v20, or ortelius-ledger or something like that. Tracy, what was it, are we using evidence store or evidence catalog?
B: So I may create a new Helm chart called ortelius-catalog or something like that, which would be our new version versus the existing version, so we kind of have two different playgrounds and we don't step on top of each other.
B: Yeah, because if you look at, if we go back to... I lost it.
B: So if we go back to the architecture diagram, we have about, you know, two dozen at the most, or like 18, microservices that we'll need to work with.
F: Steve, are you okay if I put a high-level diagram of what you were showing us now into that dev environment setup? Just so that when developers go there, or anybody new, like Ian for example, we send him there, and somebody can see straight away: oh okay, so this is what's going on.
B: Yeah, what we'll do is I'll work with you on that, because what we'll probably need to do is go through and stub out the microservices, so at minimum they just have, like, a health check, or you hit the endpoint and it always returns true or something like that.
B: Just so we can stub out everything, and then you can pull all the Helm charts together and build up the logical application. That would be cool. And speaking of that, one of the things, this kind of goes...
B: One last thing is on our workflows. I'd like to figure out how we could add a simple test where, when we build the Docker image, before we push it, we actually run a quick test to have it connect to a database and just run a simple transaction, just so we can make sure that the database connection is working.
B: If one of those dependencies messes up the database connection, you know, like the FastAPI version gets changed so that it's no longer compatible. So I'm just wondering, any ideas on that? Whether we should have, like, a simple database out in Azure that the workflows can connect to, or if we should try to stand one up on the fly, or have a Docker image that has a database in it that we talk to.
G: Do we have to have something downloaded locally to do that, or is it something we can add to Codespaces?
B: Yeah, because there's things that, yeah, we definitely need, like, a driver application that would go ahead and, well, for us even a simple curl command would work to make sure that we're connecting to a database and returning correctly, because...
B: Yeah, so, like, one of the things: I can't remember who found it, I know their GitHub handle, but they found that we had a bad cast in one of the objects. It was expecting a list with a lowercase l and we had a List with a capital L. So when we went to hit the endpoint, it would throw this Python error because the cast wasn't quite correct.
B: And that's just a simple string test that queries the database and returns the data.
F: Is that being done during the GitHub Action? No?
B: No, right now the health API endpoint is being used by the Kubernetes clusters as a health check as part of the pod.
B: ...the database server, to run that against the image that we just built. So we have to do a...
F: A docker run, so you want to run that? Okay.
B: And we could have, like, a predefined... you know, we have our Postgres database running out in Azure. We could connect to that, so we can pass in the credentials to that docker run to have it go connect to our Postgres database in Azure, and then once it's running, we can throw a curl command at that endpoint.
B: The other thing we could do would be to change the Python code so we could actually run a command without running the actual Docker image. You can do kind of like a run with a different entrypoint, basically, that would run a snippet of code. It would still connect somewhere, but at the same time we're not having to run a Docker image per se; we could execute the code directly, because the Python interpreter exists in the workflow.
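The "snippet of code" mentioned above could be as small as a single round-trip transaction the workflow's Python interpreter runs directly. A minimal sketch, with `sqlite3` standing in for the real Postgres driver so it runs anywhere:

```python
import sqlite3

# Sketch of a connectivity check runnable straight from the workflow's
# Python interpreter; sqlite3 stands in for the real Postgres driver.
def db_smoke_test(conn):
    """Run the simplest possible transaction and confirm a round trip."""
    cur = conn.execute("SELECT 1")
    return cur.fetchone()[0] == 1

conn = sqlite3.connect(":memory:")  # the real test would use a DSN from secrets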
F: Is that to mitigate that issue you had the other day with the driver change?
B: Yeah, that string test would have caught that issue. Yeah, that's...
B: That weird thing where you can't run Docker inside of Docker, yeah, because you need access to the Docker socket, which is a volume mount of the /var/run/docker.sock file, or something weird like that, that you have to expose. So, like, the Docker build subsystem on GitHub uses buildx, and buildx doesn't require a Docker daemon to be running.
B: Those dependencies, even though they resolve through the package manager, pip, may fall over each other at runtime and cause a breaking change.
B: The nice part is, we'll see, because they're so isolated and not very complicated, doing stuff like this should be, hopefully, pretty simple. It really just comes down to the build environment. You know, if we were running inside of, like, Jenkins, a Jenkins server would be a little bit easier, because you can spin up Docker, you can do docker runs, because most Jenkins servers aren't running inside of a container.
E: I want to work on something. For me, I need the UI stuff for the Ortelius.
B: We will, but I'm thinking the starting point will be from the command line. We will need a separate version of the command line that will match up with the new back end, and our initial starting point will be, like, create a component version through the command line. That will follow that transaction flow to populate the back-end database, and once we have that working, the next step will be to have the UI read that information from the database and display it on the screen.
B: That works. And, you know, initially for the UI part you can stub out the back end, so you can just create a Python FastAPI back-end part that just returns a static JSON string that the front end can render.
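That stub can be as small as one handler returning hard-coded JSON. Shown here as a plain function so it runs anywhere; in practice it would be registered as a FastAPI route, and the sample field values are made up:

```python
import json

# Sketch of the stubbed back end: hard-coded JSON so the front end can
# render before the real database exists. Sample data is hypothetical.
STATIC_COMPONENT_VERSION = {
    "name": "hello-world;v1.0.0",
    "domain": "GLOBAL",
    "components": ["hello-world"],
}

def read_component_version():
    """Pretend to fetch data without ever hitting the database."""
    return json.dumps(STATIC_COMPONENT_VERSION)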
B: Yeah, it'll pretend to fetch data, even though you don't have to hit the database.
B: Yeah, you won't need a... you just hard-code the data in the FastAPI.
B: All right, I will get some more issues out there about all the stuff that we talked about and assign the bounties to them, and we'll get going. I think we've gone over a lot of the architecture decisions and stuff like that, so I think we have everything sorted out to start doing the implementation pieces.
G: Ian, I would suggest that you try to get on Steve's calendar. I'll send you a link to his calendar, so you can maybe take a look at what the repo looks like, and maybe he can help you get started on some good first pull requests.
D: That would be great. I'm thinking tomorrow I'm going to have some time here, if somebody wanted to set me up for tomorrow; otherwise Monday or something like that would be fine too.
G: How about if we do it at nine o'clock tomorrow, which Steve does have open? Okay, that makes it, I think, around four o'clock for you.
D: Perfect, yeah. If you want to set me up with that time slot, that's great; otherwise, just send me an invite and I'll try to get on your calendar here in the next 20 minutes or something.