From YouTube: Aug 12, 2022 - Ortelius Keptn Events Sequence Diagram WG
A
Okay, welcome everybody. Today is a working group for the Ortelius Keptn integration.

What I have in front of us is an event diagram that I laid out as a starting point, and I just want to walk through it. I think we're going to make some changes, just to make sure I got all the events going in the right places and things like that.

The way this works is it's just a regular markdown file, and the markdown file is over here on the left; I just have it in preview inside of Visual Studio Code. All you do is tag the block with "mermaid", then we declare a sequence diagram, and then we give it our from and to and what the action is. That's how this is laid out, and when I did this, it was really just to get us through the dev stage of the pipeline.
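A minimal sketch of the kind of Mermaid block being described (the participant and message names here are illustrative, not the exact diagram on screen):

```mermaid
sequenceDiagram
    GitRepo->>Keptn: commit trigger
    Keptn->>JobExecutor: docker build started
    JobExecutor->>Keptn: docker build finished
```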
A
I did put in all the events, so you'll see that we have the starting event, the running event, and the finished event. Now, one of the things I assumed was that Keptn was going to be our control plane for everything.

With Keptn being the control plane, we would be broadcasting the events out for Keptn to listen for, rather than having every single tool listening. For example, we could make Ortelius a listener for all the events, so that if something happened on the Git repo side, Ortelius could know about it and act on it; but what I decided was to make Keptn the orchestration tool at that level.

So with that, I'm going to stop there and see if anybody has any questions about how it's laid out, and if not we'll go ahead and dive in.
A
Okay, so the first thing in our use-case scenario is we're going to have a microservice developer go ahead and commit their changes. Now, the commit could be either a direct commit to main or a merge commit; it doesn't really matter for this example. What we have at this level is something happening at the microservice Git repo. What I'm assuming is that it's a polyrepo: one microservice is going to be in one repo.

So if we have, say, 50 microservices, we're going to have 50 repos, and each one of these microservice repos could be doing a commit at different times. That's versus a monorepo, where everybody is working on the same repository with different subdirectories for their microservices, and the merges and commits are going to come in a little bit differently. So this is assuming a polyrepo at that level.
A
So when we do the Git commit, Keptn would be listening for that commit to come through. Now, because I have not configured Keptn myself, I'm not sure exactly how that happens: whether it's a webhook, or how Keptn is configured to be associated with a particular Git repo. I'm assuming it can be; if somebody knows, just jump in and fill in the details here.
A
Okay, so the action is going to send a trigger to Keptn to let it know that something happened in the repo. The next thing that we need to do in our overall process is go ahead and start a Docker build, and I think this is going to be somewhat similar, in that Keptn doesn't really have a built-in Docker integration. I think they just have a generic job executor, which I believe we would use to run the Docker build. So when that generic job executor starts, it's going to broadcast a start, it's going to actually run the Docker build step, and then it's going to broadcast a finish when that happens.
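As a hedged sketch of what the generic job executor's configuration might look like for this step: the Keptn job-executor-service is driven by a YAML file along these lines, where the event name, builder image, and arguments below are illustrative assumptions, not a tested configuration:

```yaml
# job/config.yaml for the Keptn job-executor-service (illustrative sketch)
apiVersion: v2
actions:
  - name: "Docker build"
    events:
      - name: "sh.keptn.event.docker-build.triggered"  # assumed event name
    tasks:
      - name: "build image"
        # kaniko builds images without a Docker daemon, which fits an in-cluster job
        image: "gcr.io/kaniko-project/executor:latest"
        cmd: ["/kaniko/executor"]
        args:
          - "--context=git://github.com/example-org/hello-service.git"
          - "--destination=quay.io/example-org/hello-service:latest"
```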
A
The next step is going to be the Docker push, which will go ahead and push out to the Docker registry, and again I put in the running and the finished pieces at that level. The reason we have the three states, start, running, and finished, is for notifications on long-running steps: the Docker push or the Docker build may run for a while, and I've seen some Docker builds run for over 10 minutes.
A
So that's why we're laying out starting, running, and finished in the sequence diagram. Once we've finished, the next thing is we're going to go back to Keptn, and this is where I wasn't quite a hundred percent sure. Like I said, Ortelius can listen for the Docker push, and I think I need to change this one here, "start component".

Again, we're going back to Keptn being kind of the control plane: once it comes back with the Docker push finished, Keptn is going to tell Ortelius to go ahead and create the new component and application version. This is where Ortelius is going to need to be able to accept the event coming from Keptn and also be able to publish the different states, the running and finished, back. Now, Kershaw, what was the thought on this? Were we going to have Keptn call our existing APIs through a job executor, or were we going to write our own Keptn services at this point?
B
Yeah, so what we thought is we will have a Keptn Ortelius kind of service, wherein the few components that you have shown here, like docker build, tag, and docker push, and the functionality that is interacting with the Keptn control plane itself, will be encapsulated in that application.
A
Perfect, yeah. So on the Ortelius side, here's what we'll need to do, because Keptn is going to send us a CloudEvent: Ortelius does not know how to deal with that CloudEvent and its payload at this point. So we will need to define a new REST API to accept that CloudEvent and the payload, to be able to start doing the component updates and pieces like that.
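As a sketch of what the core of such a REST API might do, here is a minimal parser for a structured-mode CloudEvent; the event type and the payload field names (`image`, `gitcommit`) are placeholders, since the actual payload contract is still to be defined:

```python
import json

# Required CloudEvents v1.0 context attributes.
REQUIRED_ATTRS = {"specversion", "type", "source", "id"}

def parse_keptn_cloudevent(body: str) -> dict:
    """Validate a structured-mode CloudEvent and pull out the fields
    Ortelius would need to create a component version."""
    event = json.loads(body)
    missing = REQUIRED_ATTRS - event.keys()
    if missing:
        raise ValueError(f"not a CloudEvent, missing: {sorted(missing)}")
    data = event.get("data", {})
    # Field names below are illustrative placeholders for the payload
    # we still have to define (image reference, commit SHA, ...).
    return {
        "event_type": event["type"],
        "image": data.get("image"),
        "git_sha": data.get("gitcommit"),
    }

sample = json.dumps({
    "specversion": "1.0",
    "type": "sh.keptn.event.release.triggered",
    "source": "keptn",
    "id": "1234",
    "data": {"image": "quay.io/example/app:v1", "gitcommit": "deadbeef"},
})
print(parse_keptn_cloudevent(sample)["git_sha"])  # -> deadbeef
```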
A
Now, one of the things we're going to need: when Ortelius does this component and application version piece and starts running it, it actually does work to interrogate the Git repo. So that may be something we need to do as well on the Ortelius side: when we want to update the component and application version, we may need to clone the initial repo and branch to be able to get the information we need out of that repository, because all that data may not be visible at the event level.
A
So,
for
example,
things
that
we
pull
out
of
the
git
repo
would
be
like
the
readme
file,
the
license
file,
any
swagger
or
open
api
files,
the
s-bomb
those
type
of
things
we
would
need
to
have
access
to
the
repository,
so
we
may
actually
have
on
the
artillery
side,
have
one
more
it's
kind
of
hidden
at
right
in
here
where
we
go
ahead
and
actually
do
a
git
clone
at
that
level,
we'll
have
to
see
how
it
plays
out
and
what
information
we
have
coming
across
in
the
cloud
event.
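A small sketch of that post-clone step; the list of filenames scanned for is an assumption based on the files mentioned above (readme, license, OpenAPI/Swagger, SBOM), not the actual Ortelius implementation:

```python
import os
import tempfile

# Assumed filenames; the real scan would likely be case-insensitive and broader.
METADATA_FILES = ("README.md", "LICENSE", "swagger.yaml", "openapi.yaml", "sbom.json")

def collect_repo_metadata(checkout_dir: str) -> dict:
    """Walk a cloned working tree and record which metadata files exist."""
    found = {}
    for name in METADATA_FILES:
        path = os.path.join(checkout_dir, name)
        if os.path.isfile(path):
            found[name] = path
    return found

# Demo against a throwaway directory standing in for `git clone --depth 1 <repo> <dir>`:
demo = tempfile.mkdtemp()
open(os.path.join(demo, "README.md"), "w").close()
print(sorted(collect_repo_metadata(demo)))  # -> ['README.md']
```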
A
Okay, so once Ortelius has finished creating the component and application version, we're going to go back to Keptn and say that we're done. Now, in this example, Keptn is going to say that we want to go ahead and deploy the new version that we just created out to our dev environment.
A
If we have this happening for multiple services, we're going to have multiple components and multiple applications created. When we go to do the deployment, we know what the desired state is, what development is going to look like, because we know the desired state of the dev deployment.
A
So one of the things, like I said earlier in the week, is we do need this application Git repo to represent kind of a snapshot of all your manifest files for your deployment. In this case I just said Helm; it could be Kustomize and done that way, it doesn't really matter either way. Ortelius is going to go ahead and update the Helm charts, and there's one thing that you'll see; let me see if I can find the file real quick.
A
So
when
we
go
and
with
this
re-render,
when
we're
doing
this,
the
update
component
we're
actually
going
to
go
ahead
and
create
a
chart
at
that
level,
and
I
may
figure
out.
I
may
break
this
out
running
component
app
version
into
more
detail
in
another
diagram.
Just
so
we
can
kind
of
see
under
the
covers
what's
happening,
but
basically
what
ends
up
happening
is
at
the
application
level.
We
want
to
go
ahead
and
and
update
all
of
these
dependencies.
A
So let's say it was our text-file microservice that got updated. This is where we're going to go ahead and write out a new Chart.yaml file to the application Git repo, bump the version number to the correct version that we want to use, and then, once we do that, we actually go ahead and commit that to the Git repo. Now, as soon as we commit to the Git repo, Argo is going to pick that up and start the deployment.
A
So this is where we're not interacting with Argo directly; we're interacting with Argo through the Git repo, in a GitOps process. From there, once we do that, two things are actually going to happen pretty much simultaneously: we're going to tell Keptn when we finish deploying from the Ortelius side, and at the same time Argo is going to start its deployment process. Then Argo, and I believe Brad was working on this, will send out an event through its notification hook to allow Keptn to know that a deployment started and finished at that level.
C
Steve, yes, one question. Ortelius has the dependency information, additional information about the relationships of the components, the dependencies and stuff like that. When we update a Helm chart, is the metadata from Ortelius part of our Kubernetes manifests, or is it only in Ortelius?
A
It will get pushed into the values file. Let me bring that up: what ends up happening is that information will come across in the values. So if we look at this chart, this is one of our microservice charts.
A
This one was really simple, I didn't include everything, but we know where the repo is, we know the SHA, we know how it was tagged, and that's going to be coming from Ortelius.
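A minimal sketch of the kind of values Ortelius might write into such a chart (the key names are illustrative, not the exact Ortelius schema):

```yaml
# values.yaml fragment written by Ortelius (illustrative field names)
ortelius:
  gitRepo: https://github.com/example-org/hello-service
  gitSha: 1a2b3c4d
  gitTag: v1.4.2
image:
  repository: quay.io/example-org/hello-service
  tag: v1.4.2
```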
C
Okay,
I
was
just
thinking
that,
if
we
are
using
argus
argo
to
synchronize
everything-
and
we
need
we
have-
we
have
to
the
river-
has
to
be
like
everything,
because
it's
a
single,
a
single
source
of
true.
So
I
was
just
considering
if
we
are
like-
maybe
missing
something
that
maybe
we
can
put
it
like
on
on
manifest
metadata
or
something
like
that,
announcing
just
to
be
sure
that
everything
it
has
to
be
in
the
repo,
because
when
you
say,
like
application
repo,
it's
get
like
the
sensation.
A
That's a good point. Right now we don't push everything into the values file for all the manifests, so we could change that to go ahead and take everything that Ortelius knows about and push it into the values files.
A
So,
like
you
said
it's,
it
becomes
the
single
source
of
truth
in
the
in
the
get
repo.
A
Yeah, and then when we implement the blockchain piece, we can add that information as well: where, say, the NFT token ID is located. Those things will be persisted in the immutable ledger, and we can tie that back as well from the manifest point of view. So that's a great idea, and I'll take a look to see what we need to do.
A
And the other thing is at the actual manifest level. Let me pull up a manifest, like a Deployment: right now we're not putting any specific Ortelius information into the metadata, but we could also do that. So it's not only in the manifest; when you do a deployment, it actually ends up in the Kubernetes cluster as well.
C
Yeah, it's pretty common now to add a lot of stuff into the metadata, including things like a JSON file. You don't want a really ugly manifest, but at least some kind of information; a lot of UIs pretty much read the metadata to do all the graphics.
C
So you can put in the metadata something like what you're doing with Mermaid, and a graphical console could take it and show all the graphics about the deployment, or something like that. That's pretty much like the graphics side you have in Ortelius, but everything like that can be stored in metadata.
A
Yeah, I'll have to look at it, or maybe somebody wants to look into what the requirements are to add in custom metadata tags.
C
Yeah, usually, because at some point you're putting a lot of information in your metadata, if you use namespacing it's going to be easier. After that, you just have to pick one convention: usually it's going to be just a variable and a value, or maybe something more complex like JSON. But that's pretty easy: you just take one value and put it inside your application, and it can carry everything.
A
Yeah, if someone wants to take that task on, that would be great: to figure out what the best practices are and how we should create our tags, whether we need a namespace type of format, which I think is the correct way to do it, and then the values. So in this case, our values down the road would be something like ipfs:// followed by the CID.
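A hedged sketch of what namespaced metadata on a Deployment might look like; the `ortelius.io/` prefix, the keys, and the IPFS CID are all illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-service
  annotations:
    ortelius.io/component-version: "hello-service;v1.4.2"
    ortelius.io/git-commit: "1a2b3c4d"
    ortelius.io/deployment-log: "ipfs://bafy...examplecid"
```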
C
Like everything there: we have things like the people responsible, contact emails; we put a lot of stuff over there, and a lot of it is about the CI/CD, so everything that is part of the CI/CD adds some kind of step into the metadata. That's useful too.
A
And one of the things that I know is a big gap that Ortelius fills: because Ortelius is listening on the CI/CD side and can also be involved and listen on the deployment side, it can link together your Git commit and whatever is going to be deployed. So you could actually link this deployment, and the image that was deployed, over to the Git commit that was the source-code change. That's one of the things that I've always seen as bizarre between the CI tools and the deployment: there's no link between the source and what was deployed.
A
Ortelius has that information and can provide that metadata very easily. So yeah, I think that's a great idea: we need to expand the metadata piece and add in what we know about the components, the applications, and pieces like that.
A
Something like that would probably be more accurate. Awesome, that's a great idea; we'll definitely have to take a deeper dive into that. So, just to wrap up the process, we can see it down here.
A
So Argo finishes the deployment, gets the cluster synced to the Git repo, and then tells Keptn that it's completed. Then Keptn has a built-in feature for quality-gate checks, so we go ahead and run the quality-gate checks and finish them at that point. Now, and this is where I kind of stopped, just at the dev stage: once the quality-gate check has finished, we would actually start the deployment again, at this point to a new environment.
A
If
I
can
get
it
highlighted
correctly,
so
this
would
be
start
a
deployment
of
the
app
version
to
qa
at
that
level.
Now
one
of
the
things
that
ortilius
will
do
when
we
start
that
deployment
at
qa,
we
could
have
deployed.
You
know
20
times
to
development,
which
gives
us
drift
between
qa
and
and
and
development,
where
qa
is
20
versions
back.
A
Ortelius knows that and will be able to tell Argo: this is the new desired state of QA, this is what I want you to sync to, based on the version of the application that's being deployed.
A
So that's one of the nice things that Ortelius will do automatically: regardless of the state and how much drift you have between versions, it will help bring you to the desired state that you want for that application version. So this is where we would go into the next stage of doing the deployment again.
A
If anybody has any Mermaid insight on how to number each step, that would be great. From there, we would go ahead and do QA, and then finally we would do production the same way: we'd start a deployment to production, and we'd probably run the quality gates again. Now, that's another thing I didn't put in here: if the quality gate failed, what happens at that point, and do we start doing backouts?
A
I just wanted to keep this pretty simple. So that's kind of the process that I've thought of. Does anybody see anything that is out of whack, did I totally miss something, or do we need to tweak a few things here?
B
Thank you, Steve, that is a wonderful explanation of all of these activities. I think the last time we talked, when we had the podcast with Sergio, Andreas, Brad, and you, one thing that Andreas called out is that we want to have a build history: when we put stuff into the dev environment and we move to QA, how those things are happening.
B
Is
it
the
breaking
too
much
how
the
history
of
build
commits
is
happening
like
and
let's
say
we
do
a
quality
checks
and
we
see
a
failure
and
I
want
these
failure
to
be
listed
somewhere
in
the
application
manifest
trial.
So
I
think
they're
really.
I
think
right
now,
they're
figuring
out
a
way
to
add
this
information
into
the.
B
But
do
you
think
like
if
in
the
ortelius,
if
we
can
add
that
information
kind
of
a
quality
gate
between
in
the
ordinaries
like
when
we're
pushing
when
you're
starting
the
app
version
and
when
the
quality
is
passed?
We
can
add
that
information.
The
first
time
when
this
application
move
to
the
next
qa
environment
it
passes.
A
Yeah, that's a great point, because right now in this diagram it's kind of hard to see. I'll go from this, say, when Argo finishes the deployment at this point.
A
So that was one thing I wasn't sure about on the Keptn side: whether Ortelius needs to listen to Argo, or whether Ortelius needs to listen to Keptn and Keptn does another notification. I was kind of confused about how things move about at the control-plane level for the events: when you need multiple tools to listen to the same event, how does that happen inside of this event diagram?
A
Right, so I think at this level what we're going to do is have Keptn send out to Ortelius.
A
Oh,
I
spelled
it
wrong.
That's
why
it's
all
messed
up
all
right.
That
looks
so
I
think
that's
what
we'll
do
at
that
level
and
then
ortilius.
A
So
that
way,
like
you're
saying
syme,
that
it'll,
that
information
about
the
deployment
would
be
recorded
inside
of
artelias
and
when
we
put
the
blockchain
into
place,
we'll
actually
record
that
deployment
log
in
the
immutable
ledger.
So
we
have
that
information
persisted
permanently.
B
So
I
think
distributed.
Do
I
get
the
link
of
this
people
like
if
I
want
to
add
something
or
kind
to?
Is
it
available
on
the
github
repository.
A
Not
yet
what
I
will
do
is
if
this
all
looks
good,
I
will
get
it
added
to
our
documentation,
repo
and
from
there
people
can
pull
it
and
update
it.
A
I'd have to look at the Mermaid documentation, but what would be nice is if each one of these is numbered, because what we'll need to call out, from a design and implementation point of view, is, for example: when we do this git commit action, this is the example CloudEvent and the CloudEvent payload that we're going to be pushing around.
A
Our
next
step
is
to
really
document
which
what
is
what
is
the
data
being
passed
around
on
all
this
on
every
single
one
of
these,
so
it'd
be
great.
If
we
can
get
these
numbered
somehow,
if
anybody's
like
a
mermaid
guru,
that
knows
how
to
do
that,
we'll,
please
jump
in
and
make
those
updates
and
again,
if
everybody's
good,
with
this
as
a
starting
point,
I
will
go
ahead
and
add
it
to
the
documentation,
repo.
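On the numbering question: Mermaid sequence diagrams support an `autonumber` keyword that prefixes every message with a sequential number, which would give each step an identifier to document the CloudEvent payloads against. For example (participants are illustrative):

```mermaid
sequenceDiagram
    autonumber
    Developer->>GitRepo: git commit
    GitRepo->>Keptn: commit webhook trigger
    Keptn->>Ortelius: create component and app version
```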
A
Yeah, because this is markdown, let's say this is number one; I'll just put a "1" here, for example, and then at this point I can write "step one" and put more detail about it, and it'll be referenced below here. That's where we would lay it all out, so everything is defined in one document, to make it easy for folks to have the reference.
B
Correct
correct
and
as
we
go
further,
I
think
there
would
be
some
changes
regarding
how
like
harvard
is
communicating
with
captain
and
artelias
yeah.
A
And those will be some of the things that we'll need to call out. Like on this one, like you were saying, Kirsh: this was a GitHub Action notification; the Docker build may be the Keptn job executor; and where we get into this, this could be a native Ortelius Keptn service doing that. So we definitely need to call out how each step is implemented and who's doing it.
A
And, Sergio, do you mind creating an issue about the metadata in the manifest?
A
I
think
okay
I'll
take
care
of
it,
then
I'll
go
ahead
and
create
that
issue.
So
that's
what
we,
what
we
have
going
on.
Does
anybody
have
any
other
questions
at
this
time
or
kind
of
makes
sense.
A
Yeah,
so
what
we'll
do
is
this
will
give
us
when
we
number
each
item
we'll
have
to
see
if
it's,
if,
if
it's,
if
it's
already
existing
or
if
it's
something
that
we
need
to
do
or
if
there's
something
that
we
need
to
tweak
to
make
it
happen.
So
that
will
then
turn
into
kind
of
our
our
to-do
list
coming
up.
A
And we'll circle this back past Brad as well, so he can add on from his side; I think he's traveling today, and we'll catch up with him next week. Oh, and that does bring up a point: we'll start scheduling probably two working groups, one my time in the morning, basically this time slot, and then another one my time in the afternoon, to pick up Brad in the Australia/New Zealand time frame, their morning tomorrow.
A
So
we'll
we'll
figure
that
out
next
week
what
the
time
slots
look
like,
I
may
send
out
a
doodle,
so
we
can
get
some
good
participation.
A
All right, well, thank you, everybody. I will get this posted to the documentation repo today, and I'll send out a message on Discord to let folks know where it's at.
A
All
right
thanks,
everybody
and
I'll
make
sure
that
this
recording
gets
posted
as
well,
so
folks
that
couldn't
make
it
could
take
a
look
at
it.