From YouTube: Aug 3, 2023 - Ortelius Architecture Meeting
B: Let me get a bunch of stuff out of the way first. I don't know how many of you caught this, but Tracy put together a presentation and we had a quick chat.
B: We had about a 20-minute chat with Jim Zemlin. Jim is the executive director of the whole Linux Foundation, and he reached out to us because he got wind that Ortelius is going to have the data to do AI work. So we put together a quick presentation, which I'm going to run through, and then we'll get on to our to-do list. This is just our basic mission statement.
B: You know, I like to say we're the garbage collectors of the software world: we go out and pick up a bunch of stuff and store it, and that goes across the supply chain as well, deployments and things like that. That's our typical statement about what we're collecting and what we're going to do with it: we're trying to get away from siloed data. One of the gaps that exists today when people talk about AI and threat modeling is that there's no data, and you can't do anything with AI with no data. That's where Ortelius comes into play. The top boxes here are what we're currently able to do and gather. One of the things we need to address, basically the goal, would be to start with every single open source project under the Linux Foundation, start collecting their supply chain data, and then be able to do other things with that data.
B: But the first step will be collecting the data. This has always kind of been in play with Ortelius: we would beef up our public SaaS version, so we'd actually convert our Azure cluster from a dev world into a production world.
B: That's going to take some work, because we'll have to figure out how to fund it. We're going to be storing, I envision, probably two terabytes of data on that front. Then, on the private side, somebody like USAA, which is an insurance company here in the States, could be running their own private version of Ortelius, and I've always envisioned that we'll do some sort of federation between the two.
B: We haven't really gotten into the architecture down the road on how to aggregate federated data, but I think the Emporous folks may be able to help us with that. That's the angle that we're looking at.
B: So basically, where Ortelius stands today, we kind of view it as an MVP; we're proving out what we've got. But for the AI side, we know we can get the data, and we need to be able to get large amounts of data to be able to do the modeling.
B: So the first AI model that we were thinking of is basically trying to answer the question: are the project pipelines compliant with security, and if they're not, how do we fix that? For example, we're able to capture that the pipelines that are running don't generate an SBOM, and we know that the pipeline is a GitHub Action. The AI models, just like with ChatGPT, should be able to generate updates to that GitHub Action, for example, and add in the SBOM generation and collection. Other things too, like linting: being able to add linting into your pipelines and do this automatically so developers don't have to. Think of it as the Dependabot for security things missing in a pipeline.
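As a rough illustration of the compliance check described above, here is a minimal sketch that scans a parsed GitHub Actions workflow for an SBOM-generation step. The matching heuristic and the action names listed are assumptions for illustration, not Ortelius code.

```python
# Hypothetical action IDs; real SBOM actions vary.
KNOWN_SBOM_ACTIONS = ("anchore/sbom-action", "CycloneDX/gh-gomod-generate-sbom")

def has_sbom_step(workflow: dict) -> bool:
    """Return True if any job step looks like it produces an SBOM."""
    for job in workflow.get("jobs", {}).values():
        for step in job.get("steps", []):
            uses = step.get("uses", "")
            name = str(step.get("name", "")).lower()
            if "sbom" in name or any(uses.startswith(a) for a in KNOWN_SBOM_ACTIONS):
                return True
    return False

wf = {"jobs": {"build": {"steps": [
    {"name": "Checkout", "uses": "actions/checkout@v3"},
    {"uses": "anchore/sbom-action@v0"},
]}}}
print(has_sbom_step(wf))  # prints True: the anchore step matches
```

A model trained on "top-performing" pipelines could then suggest the missing step rather than just flag it.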
B: So this is where you model the top-performing pipelines, then you model the ones that you currently have, and then you figure out the difference between the two of them. That's the first AI proposition.
B: The next one: how do we notify people, on both the producer and the consumer side, about a critical vulnerability that needs to be addressed right now? When you look at a lot of the vulnerabilities out there, there may be a critical vulnerability, but the odds of someone exploiting it are very low: you've got to work your way through a firewall, then you've got to have root access to the operating system. A bunch of stars have to align for you to exploit it in that case. But what we want to be able to do is build an emergency response system.
B: That would not only say there's a CVE out there, but also go in and figure out how to fix it. Do you make coding changes, or is it just a dependency package bump, for example? So that's the next one we'd be doing: generating the threat models and fixing them. This next one I've run into myself, and it has to do with dependency management, on the Pyrsia project.
B: We ran into this; I found it by doing a bunch of package verification. They're actually using two different JSON parsers: one was secure, one was insecure. So in this case it's like, all right, we need to decide. This project is consuming multiple dependencies that are doing the same thing.
B: How do we get them to use the right version? This one is more notification and decision-making versus being able to go in and make coding changes. There may be some pieces of the AI model that we can apply to pruning a dependency tree. I always use the hello world for Node.js: it brings in a couple thousand packages just to print hello to the console. That shouldn't be required.
B: We should be able to have a small dependency tree, and if we have a small dependency tree, we have a small attack surface. I think that was the last one. Oh, and this one, as Sasha was saying, is further on; we're still working on this slide deck. This is going to be Emporous: how we can leverage Emporous for threat modeling.
C: I've got a great use case already that I can use here in South Africa with IBM. They have these weird little executable jobbery things that need to be installed on AIX machines, right? So Ortelius and Emporous are perfect for this, because they could have all of their dependencies for all their little IBM thingies. Well, not little: these massive DataPower things, and MQ messaging. They can store all their dependencies and use Ortelius to have a nice view into all of that, right? Yep.
C: I'm already installing Emporous now on AWS to do this. Cool. So how can we stick Ortelius in there? Is there, at the moment, a connection between the two, or is it still being fleshed out?
B: That's still being fleshed out. At the last Pyrsia meeting last week, we happened to have Cat there, Andy was there, Alex was there.
B: So the main players were all there, and they are getting excited to get things moving again. On that front, I have not looked yet, but Alex was going to add issues into the Ortelius repo, and I think he was going to label them as Emporous issues. So if we go into the repo.
B: Yeah, so Pyrsia is the distributed build network, and one of the things it would do is build the same object in multiple locations. Once it says, yes, we all built the same thing, then it has to take that artifact and add it somewhere, and that somewhere, I'm envisioning, is going to be an OCI registry like Emporous.
C: So Pyrsia negates needing a Nexus server, something to store your dependencies for your programmers, your developers.
B: And one of the things the Pyrsia project started out doing was writing the peer-to-peer network from scratch, and I think that project's kind of stalled a little bit. But I think what we can do is take the OCI registry, which has these storage drivers, and replace the storage driver. The storage driver is like an interface that you can plug into.
B: So if you want to store everything in an S3 bucket, you can have a storage driver for S3 versus a storage driver for Google Cloud Storage. Under the covers in the OCI registry, you can have a storage driver for IPFS, and when you do that, you end up having a distributed OCI registry over IPFS.
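To make the pluggable-driver idea concrete, here is a toy sketch of a registry that talks to one small storage interface, so an S3, GCS, or IPFS backend could be swapped in behind it. This mirrors the concept only; the real OCI distribution storage-driver API is a larger Go interface, and the class names here are illustrative.

```python
class MemoryDriver:
    """Stand-in backend keeping blobs in a dict (imagine S3, GCS, or IPFS here)."""
    def __init__(self):
        self._blobs = {}
    def put(self, digest, data):
        self._blobs[digest] = data
    def get(self, digest):
        return self._blobs[digest]

class Registry:
    """The registry only depends on the driver's put/get interface."""
    def __init__(self, driver):
        self.driver = driver
    def push(self, digest, data):
        self.driver.put(digest, data)
    def pull(self, digest):
        return self.driver.get(digest)

reg = Registry(MemoryDriver())
reg.push("sha256:abc", b"layer-bytes")
print(reg.pull("sha256:abc"))  # prints b'layer-bytes'
```

Swapping `MemoryDriver` for an IPFS-backed driver is what would turn this into the distributed registry described above.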
B: So that's where I'm using the OCI registry with the IPFS driver. The IPFS driver is the peer-to-peer networking that NFT storage uses, and the BitTorrents of the world, things like that. I'll basically use the IPFS storage.
B: So you could run a node yourself, connecting the OCI registry over IPFS to other peers, and then you would be able to bring in those pieces. It gives you that peer-to-peer networking with the OCI registry.
C: So for my developers, I could have an internal kind of peer-to-peer network, yep, without using Nexus. I wouldn't need a Nexus server at all? I could just use that technology for my developers to have their dependencies pulled from, and I could control the dependencies, what versions, through Ortelius, that type of thing.
B: And the other thing: even though the OCI registry spec allows any type of artifact to be added to it, they have not enabled adding things like jar files to it yet. So even though the spec allows for it to happen, the Maven release mechanisms don't know how to talk to the OCI registry for persistence. That's one of the things Alex, Andrew, Cat, and Jenna are working on: that intermediary client, and that client does the translation between the Maven or Gradle protocol and the OCI registry. They're the man in the middle.
C: If it's okay (I hope I'm asking questions that people are learning from): going forward, could I implement, say, Emporous and Ortelius, given they're connected and there's an integration? Could I deploy Emporous and Ortelius together, with Pyrsia, as a way of dishing out dependencies? Could I configure it in a way that I could use it for my developers at my company?
B: Not at the moment, no. It's like Nexus: Nexus supports Docker images and a bunch of different package languages, same with Artifactory. But what ends up happening is that middle layer, that man in the middle, doesn't exist yet, and that's where the Emporous man in the middle for the Java and Golang protocols comes in. Those plugins have to be written.
B: So you can do that, and the long-term goal that Red Hat had for Emporous was: I need to go to this company, I need to stand up this server, and I need to bring in all these dependencies. Instead of going out and collecting all the dependencies from Maven Central, then collecting all the dependencies from Golang and PyPI, they're all in one single registry that I can back up and ship over to a customer, stand it up in a couple minutes, and they have all the dependencies ready to go. Yeah.
B: What the IPFS driver under the covers would allow you to do, if they implement the IPFS driver, is that when you stand up that company's server, IPFS would kick in and start downloading things and populating that server automatically.
C: And obviously it'd be smart, because it's got the peer-to-peer technology, like torrenting, and finds the closest peer, right? Exactly: saving your bandwidth, and the cost, because it's in the cloud, would be reduced in terms of bandwidth. Hopefully the cloud providers would have something internally then, yeah.
B: And I believe, I haven't looked at the IPFS protocol for that OCI storage driver, but from what I remember of the libp2p that Pyrsia was using, you can configure it different ways. You can go to your peer and say: hey, peer A, do you have this file? And if peer A didn't have the file, peer A would go ask peer B: do you have this file? And if peer B didn't have the file, it would go off and ask peer C. Peer C has it, and then we download it to peer A. So it would do this searching for dependencies. What ends up happening is your peer A would not have the whole world in it; it would only have what it actually consumed at that level, so it has a subset of everything. That's the whole peer-to-peer networking piece.
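The lookup just described can be sketched in a few lines: a peer asks its peers in turn, and once the file is found it caches a local copy, so each node ends up holding only the subset it actually consumed. This is illustrative only; real IPFS/libp2p routing uses a DHT rather than a linear walk, and the class below is a made-up stand-in.

```python
class Peer:
    """Toy peer: holds some files and knows some other peers."""
    def __init__(self, name, files=None, peers=None):
        self.name = name
        self.files = dict(files or {})
        self.peers = list(peers or [])

    def fetch(self, filename):
        if filename in self.files:          # already have it locally
            return self.files[filename]
        for peer in self.peers:             # otherwise ask peers in turn
            data = peer.fetch(filename)
            if data is not None:
                self.files[filename] = data # cache after download
                return data
        return None                         # nobody has it

c = Peer("C", files={"dep.tar": b"bytes"})
b = Peer("B", peers=[c])
a = Peer("A", peers=[b])
print(a.fetch("dep.tar"))     # prints b'bytes', found via B then C
print("dep.tar" in a.files)   # prints True: A now caches a copy
```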
B: So, on that slide deck, we did ask Jim about funding, and he thought our number was a little bit low: the 150 for, I think, two developers, one developer for like five or six months. So we're going to go through, up that number, and rework it. And we have to do the presentation with Omkhar at the OpenSSF; I'll let them know what we're trying to do on that front.
B: Other things I found that we need to do to start gathering the data: most of the registries that are out there, like Docker Hub, Artifact Hub, I think even Maven Central, offer an RSS feed. You can subscribe to the RSS feeds of those repositories and get notified when an update comes across.
B: What that allows us to do is start monitoring the repositories, and when a new release is made, we can go grab that release and grab as much information from it as possible.
B: There may be things we need to do to hunt down the CI/CD information, like what was the git commit. The main association we need to build is the artifact to the git commit. Once we have those two, we can clone the repo, find that git commit, and do a bunch of interrogation about that repo at that level. But it's kind of like a sidecar version of monitoring repositories.
B: We have to do some more research on that to see what we can grab. You can even get into things like: if it was a GitHub Action that built this, you can get to the logs and dump the logs to pull more data out of that. But it's going to be a limited set without being inserted into the pipeline process.
D: So, Steve, that monitoring part is just for our internal microservice applications?
B: The way I'm kind of envisioning it is: take our Docker build that happens. We'll just take one of the Python ones for simplicity's sake. For that Docker build, we have the image of it, and we know that the image was dependent upon, for example, Chainguard's Python base image, and we also know that we're dependent upon some Python modules.
B: We can subscribe to the RSS feed for the Chainguard Python image, and we can subscribe to PyPI RSS feeds for all the packages we're dependent upon. If any of those change, we know we have to go ahead and rebuild and create a new Docker image on our end. So, for example, if the Chainguard Python image changes, we want to go grab the SBOM for that new Chainguard image and then do our rebuild of our images, and the same thing with all the Python modules.
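The rebuild trigger just described boils down to a reverse map from upstream dependencies (base images, PyPI packages) to the images built from them: a change notification yields the set of images to rebuild. A minimal sketch, with hypothetical dependency and image names:

```python
# Hypothetical reverse index: upstream dependency -> images built from it.
DEPENDENTS = {
    "chainguard/python": {"ms-compversion", "ms-deppkg"},
    "pypi/requests": {"ms-deppkg"},
}

def images_to_rebuild(changed_deps):
    """Union of all images affected by the changed upstream dependencies."""
    out = set()
    for dep in changed_deps:
        out |= DEPENDENTS.get(dep, set())
    return out

print(sorted(images_to_rebuild({"chainguard/python"})))
# prints ['ms-compversion', 'ms-deppkg']
```

When a new SBOM shows new packages, they would be added to `DEPENDENTS` so the watch list keeps expanding, as described next.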
B: So that's kind of how we keep on going, and then, if we get a new SBOM out that has new packages that we're depending upon, we want to add those to the list to start monitoring. What ends up happening is we keep on expanding what we're monitoring and expanding the data that we're collecting.
D: Yeah, that makes sense. I think if there is already some library or something that we can leverage, well and good, but I think I can build a solution that can do the same.
B: Yeah, we have to see if there's a Golang library for RSS feeds; we may not even need one. When you look at an RSS feed, what it is is server-side XML files. It's kind of clumsy, basically, because you have to go and poll, pull the XML files, and see if there have been new entries made in the XML file.
D: Okay, but yes, I think the basic idea for any monitoring service is: you keep on polling the data, and whenever you detect some change in that data, you notify the client, right?
B: Yeah, yep. And I'm thinking, when we find a change, we throw out a CloudEvent of some sort saying that we have a new change. What we would have is a long list of RSS feeds that we need to pull in and look at, and what we could do is get into some fun Kubernetes parallelism. Let's say we have 10,000 RSS feeds that we need to go query. We could have the same microservice, or the same job, basically a Kubernetes job, and spin up, say, a hundred of them. They each take their own slice of the URLs, go off and query, and do the event listing out. So we could get some fun parallelization out of Kubernetes, with the jobs going through and scraping, bringing things in, and then kicking out the CloudEvents for the ones that have changed.
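The fan-out idea above can be sketched as a simple sharding function: each Kubernetes job replica polls its own slice of the feed URL list. The modulo scheme, shard count, and URLs below are illustrative assumptions.

```python
def shard(urls, num_workers, worker_index):
    """Return the slice of URLs this worker replica should poll."""
    return [u for i, u in enumerate(urls) if i % num_workers == worker_index]

urls = [f"https://example.invalid/feed/{i}" for i in range(10)]
print(shard(urls, 3, 0))  # worker 0 of 3 gets feeds 0, 3, 6, 9
```

In a Kubernetes indexed Job, `worker_index` could come from the job completion index, so all shards together cover every feed exactly once.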
B: But what we need to do is think about what happens when we do this RSS feed polling and we know that something's changed: what do we do?
B: That's kind of the next step that we need to figure out: what do we do when we know there's a change? For example, on the Docker side, if we have a new base image, what do we do?
B: Do we take the SBOM, or go try to find an SBOM? Do we generate an SBOM? Do we send out notifications, those types of things? So that's kind of where my head's at on those two topics with the AI stuff. I wanted to bring you up to speed on that, Sasha; as soon as we clean up that slide deck, I'll let you know so you can pass it around.
B: There are a couple slides we're waiting on Andy and the Pyrsia side to fix up; it should be done in the next week or so. And then we have the RSS feed, and then also, from the architecture side, the XRPL stuff. Utkarsh, I did look at your code.
B: Yeah, and then there's another thing: we have to think about the CloudEvents, how we're going to pass them around and such. I don't know if we want to do a pub/sub thing.
C: There's a guy in Africa who writes an incredible tool. It's a product; unfortunately it's not open source, but it can translate messaging between any databases and any messaging systems, and it can format and spit it out the other side in whatever format you need. You can stick it in anything and it works, and it can do billions of transactions a second. I just wish they could do a piece of it that's open sourced. Yeah.
B: There was a message broker I found that's basically like RabbitMQ on steroids. I'll have to look it up and see if I can find the project, but there is one, and it sat on top of things.
B
It
was
weird
because
it
like
it
could
sit
inside
of
kubernetes
and
then
they
could
sit
outside
and
connect
multiple
kubernetes
clusters
together.
So
it
was
like
this
extra.
It
was
way
high
up
on
the
message.
Brokering
side
so
and
it
made
it
really
nice
to
be
able
to,
like
you
said,
do
the
routing
and
stuff.
C: Funneling messaging from any system: as long as it makes a message, you can use it as an adapter in the middle, really.
B: That's what I'm thinking. We've got a lot of moving pieces, and we've just got to get focused and get the good things going.
C: Steve, what is your vision for us to focus on in the immediate term, once you've had holidays and you guys have come back?
B: So the next step for me, that I'm focusing on, is taking what Utkarsh did on the abstraction layer and getting the basic flow of creating a component, adding it to the database, adding it to NFT storage, and then coming back and retrieving that path, all at the component version level. Then from there, build up a UI to render that, just the single data for a single component. Okay.
B: And then, once we have that plumbing kind of in place, we can start adding on things like application versions, domains, users, groups, those types of things.
B: They have to stand up a POC on their side, and part of it is, once we get the base component version going, then we'll be able to see if we can hook it in, whether it's worthwhile or not, to pull in additional dependencies at that level.
B: Yeah, like I said, once we figure out how to get to the point where we're starting to automatically collect data, then the fun starts.
B
Once
we
have
that
persistence
down
and
we're
able
to
to
take
that
data
and
start
working
with
it,
then
we'll
have
a
lot
of
fun
with
with
what
we
can
do
with
that
data.
B: Cool. So I will share the link to the presentation, the AI one, on Discord, and there's another spreadsheet I'm working on, with all the things that we need to collect as part of the pipeline. I'll share that spreadsheet as well on Discord, and then we'll just keep plugging along. Like I said, my next goal is to get the component created and persisted, then work our way back and get some basic UI things so we can visualize it.