From YouTube: VSMI & CDEvents Collaboration Kick-off - June 27, 2023
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
C: Good. I'm done with my heavy travel, so I'm back at work with a little bit more time to spend on all kinds of things, this being one of them.
C: Right, but we've had a couple of sessions with VSMI in the meantime, and I think we've got a couple of updates that you folks might be interested in.
A: I added some information there. Andrea initiated it with some input from earlier discussions we've had, on the SIG mostly, including the reference architecture and the charter, and then some conclusions or discussions here. But I called it "open issues"; I'm not sure if that's the correct term, but anyway, things we are discussing.
C: I could share my screen for the discussion. We've done a little bit of work.
C: It's mostly visual changes.
C: So this is what we've been putting together to establish a large-scale representation of the landscape, where we have the familiar activity layers. You'll see that we're representing slightly more scope than something like CD Events would have in its immediate purview, but you see the events being recognized on the ingress layer, primarily focused on real-time and poll services. I guess, do you folks have any incorporation of something like poll services, or is it primarily just ingestion of broadcasted events?
A: I think I get it. I mean, the event protocol itself doesn't; there are no poll services as of yet. But then, of course, the consumers of the events... I mean, the events themselves could contain references to additional data from external sources. Say there's a reference to data that's out in a repository or something like that; then the consumer of this event could go and fetch, pull, more information from that external source. And then, of course, in the CD Events ecosystem as such, there could be services that listen to CD Events and then go out to those external services to pull the information from there. So, obviously, in the CD Events landscape, yes, poll services should exist, but not as part of the protocol itself; rather, URLs, references to such services, are in scope to keep in the protocol, I think.
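A minimal sketch of that pattern: the event carries only references to external data, and a consumer dereferences them for more detail. The field names below are assumptions loosely modeled on a CloudEvents-style envelope, not the official CDEvents schema.

```python
# Illustrative sketch: a CD Events-style payload carries *references* to
# external data rather than the data itself; a consumer can then pull
# the richer record from the referenced system.
# Field names are assumptions for illustration, not the official schema.

def extract_references(event: dict) -> list[str]:
    """Collect the URIs a consumer could dereference for more detail."""
    refs = []
    content = event.get("subject", {}).get("content", {})
    for key in ("repository", "viewUrl"):
        if key in content:
            refs.append(content[key])
    return refs

change_created = {
    "context": {"type": "dev.cdevents.change.created", "source": "/scm/git"},
    "subject": {
        "id": "change-123",
        "content": {
            "repository": "https://git.example.com/org/repo",
            "viewUrl": "https://git.example.com/org/repo/change/123",
        },
    },
}

print(extract_references(change_created))
```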
C: Got it. And is it fair to characterize CD Events as a message broker?
C: We recognize historical content as well, because of the possibility of ingestion from a data store, which may end up being our primary source. But we don't know; we have very little current implementation, or ad hoc implementation, to reference. That's probably one of our next steps: to go and ask a bunch of folks what they're doing, to get more insight into how they're dealing with this historical data. But we have representation.
A: But yeah, go ahead. When you say events, though, do you mean events in the same terms as we use them in CD Events, or is it more general, any events? They could be, like, OpenTelemetry events or such things as well, or...
C: It could be. I mean, telemetry events would be less interesting in the VSM context, yeah, but really any kind of... It actually does involve events, let's say. I should probably separate these out, because what I would say...
C: These are kind of correlating to the stages of this workflow, or let's say the cycle, this cycle of events, but we typically pay closer attention to things on the right and things on the left. But I think in the workflow context, if we're talking about work items, we do care about change management and release management: how long does it take to do the security testing? When does that happen?
C: But I guess that's very similar to your context as well. But to answer your question: I hope we would be looking at events that are emitted by any of these activities, events being "this thing happened."
C: Which we would imagine are primarily stored in, like, a data lake of course, you know, something, and not together currently, right? I mean, part of this, I think, is: what does a VSMI data store look like? Which is a layer above. So basically, if you have a store of data, if you're collecting data from various sources, not all that data is really relevant to value stream management or workflow, so the broker would basically be collecting data that's defined as relevant, or imagined to be relevant.
C: And then we have the practice of actual value stream management, which would rely on this data store, let's say, and hopefully decouple the act of representing and manipulating the value stream management data, the what's-relevant, from the storage and maintenance of that underlying data. Meaning that you could use different tools on top of this, you could change tools easily, you could experiment with open source.
C: You could create your own representations of this, and this will live in various places in various ways, right, in a mature ecosystem. But the layers that we identified as relevant to this are being able to see that data and being guided by that data; so having a layer that is providing some analysis beyond just raw representation, or observation I should say, because "representation"... we're probably going to throw these words around in different contexts for a little while. And then, beyond those basic use cases, we have prediction and simulation; so, understanding.
B: Nice, yeah, this looks very interesting. I mean, we don't have anything to this level of detail in terms of the layers on the CD Events side, but I think the general architecture is similar in a way to what some of our end users are doing. So, for instance, Fidelity, which is the first CD Events adopter...
B: They have a similar approach, where they receive all events from the different tools; well, from Jenkins, that is the core bit that runs activity in their environment. They collect all these events in what they call an evidence store, which allows them to do different kinds of things. So they can evaluate things that are in this evidence store and take decisions for future activities.
B: They can use this data to build, maybe, the audit trail of an artifact; so they can analyze this data and have different business logic on top of it, basically. So yeah, and the fact that there is CD Events as a common format coming from the different parts...
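The evidence-store pattern described above can be sketched as an append-only store plus queries over it, for example an audit trail for one artifact. The event shape here is a simplified assumption, not Fidelity's actual schema.

```python
# Sketch of the "evidence store" pattern: every event is appended as-is,
# and downstream logic (e.g. an audit trail for one artifact) is just a
# query over the stored evidence. Event fields are illustrative.
from collections import defaultdict

class EvidenceStore:
    def __init__(self):
        self._by_artifact = defaultdict(list)

    def record(self, event: dict) -> None:
        self._by_artifact[event["artifact"]].append(event)

    def audit_trail(self, artifact: str) -> list[str]:
        """Ordered list of what happened to one artifact."""
        return [e["type"] for e in self._by_artifact[artifact]]

store = EvidenceStore()
for ev in [
    {"artifact": "svc:1.2.0", "type": "artifact.packaged"},
    {"artifact": "svc:1.2.0", "type": "testsuite.finished"},
    {"artifact": "svc:1.2.0", "type": "service.deployed"},
]:
    store.record(ev)

print(store.audit_trail("svc:1.2.0"))
```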
C: There are large software vendors that are acquiring solutions in each of these areas and then aggregating them into value stream management, or, you know, what they call portfolio lifecycle or whatever they want to name it, but this doesn't actually exist yet. So the need is: there's definitely a need, if you want to do this, to collect data from all these different separate tools, and then you have to do this kind of assembly and normalization.
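The assembly-and-normalization step can be sketched as per-tool adapters mapping each tool's native payload onto one common record. The input shapes and field names here are invented for illustration, not any tool's real API.

```python
# Sketch of normalization: each tool emits its own payload shape, and a
# per-tool adapter maps it onto one common record. Shapes are invented.

def from_jenkins(raw: dict) -> dict:
    return {"tool": "jenkins", "item": raw["job"], "status": raw["result"].lower()}

def from_jira(raw: dict) -> dict:
    return {"tool": "jira", "item": raw["key"], "status": raw["fields"]["status"]}

ADAPTERS = {"jenkins": from_jenkins, "jira": from_jira}

def normalize(source: str, raw: dict) -> dict:
    return ADAPTERS[source](raw)

print(normalize("jenkins", {"job": "build-app", "result": "SUCCESS"}))
print(normalize("jira", {"key": "PROJ-42", "fields": {"status": "in progress"}}))
```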
A: No, not really. I mean, from the CD Events protocol perspective we don't really care too much how the upper layers look, right, and so that's fine either way. I think this looks like a well-thought-through structure, so for the VSMI, or the VSM use case, I think this is perfectly suitable, and I'm glad to see that CD Events could play a role in this structure as well, as an information feeder from the bottom there. So yeah, looks promising.
C: In those terms we're far newer, and we are trying to bite off this huge piece of observability with this. So it's nice to be able to have a few folks as guides and second opinions.
C: Yeah, that's a good point. There's almost like a lifecycle, a lifecycle need. I don't want to show all my cards for the next piece of this puzzle, but I like that idea, because this topology doesn't tell you how it's actually valuable or going to be used. So that's...
A: That would be stating what properties in the CD Events themselves are relevant for the VSM use case, based on the use case, or VSMI; additional things that we don't have today.
C: There's no current correlation aside from, let's say, maybe a Jira tag, right, that associates it with something in Jira that's going to be notified as to the status. So there's some correlation there, but we're likely pulling directly from Jira to form that. Well, I guess the order doesn't really matter, but we would have a representation from Jira to associate. Okay.
C: Does it get broken up? Does it get re-scoped? And...
C: Historical is probably the most valuable, the biggest. Prediction and simulation is unreliable and relies on all kinds of variables. And then real-time is the least valuable, because you can't typically act on anything that's real-time in a meaningful way; you just end up overreacting. It'd be like day trading, right? We know that the best way to trade is to just put your money in a system and wait, but put your money in the right place. This is really about: where do you...
C: So the focus is different, but that's today, right? I mean, it's hard to say what the focus will be in the future, but I suspect prediction and simulation will be much higher value, and what you could do with real-time events in the value stream world is less significant.
A: I would say that's more for operations personnel of the system itself, looking into "are our servers loaded now, do we need to add more hardware, can we reduce some hardware from this part to put it there," I mean, such things. It's not really a concern of CD Events. This would more be, to me, OpenTelemetry really covering that, yeah. But CD Events is more about declaring: this specific source change, that's now our starting point. It could be a requirement instead, but I think...
A: Currently we focus on a source change being created as our starting point, and then what happens to that source change. Is it tested? Is it statically analyzed and such things? Is it built? Are the built artifacts then tested as well? Are they deployed somewhere? Are they working well in the customer environment, and those things? So that's currently the scope of CD Events, but we are, as I said, looking into also referencing issues, which...
A: Yes, for example, in this case. So there we go more to the left of this picture, from the developer box over to the design box, if the Jira tickets are there. So we are discussing introducing... we haven't talked about the naming too much, but like an "issue created" event or something like that: now we have a new Jira ticket, which will eventually be referenced in a source change down the line. But I don't think we intend to implement too many actions, or, what should I say, predicates, on that issue.
A: Notifying when it has been changed in many different ways, as you were into as well: when the scope is changed for an issue, or when it's assigned to some team member, or such things. That's maybe not too interesting from a CD Events perspective; it's rather when it has been included in the source changes that it's interesting. But I think we need to reference it.
A: We will for sure, as I said earlier, have a reference to the Jira tickets somehow. With that Jira ticket referenced in the CD Event, "now there's a new source change," the poll service on top of that, or maybe even the data lake or some system there, could then fetch more information from the Jira system based on that new Jira ticket, to build the full picture of what's happened earlier in the ticketing system, if that's interesting.
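That enrichment flow can be sketched as a consumer that takes the ticket reference from the event and looks up the richer record. The lookup below is a stub standing in for a real Jira API call; all field names are illustrative assumptions.

```python
# Sketch of enrichment: a source-change event carries only a ticket
# reference, and a poll service (or the data-lake layer) looks up the
# richer ticket record to build the fuller picture.

TICKETS = {  # stand-in for the ticketing system, not a real API
    "PROJ-7": {"summary": "Add login page", "status": "In Progress"},
}

def enrich(event: dict, ticket_db: dict) -> dict:
    """Return a copy of the event with the referenced ticket attached."""
    ticket_id = event["subject"]["content"]["ticket"]
    enriched = dict(event)
    enriched["ticket_details"] = ticket_db.get(ticket_id, {})
    return enriched

change = {
    "context": {"type": "dev.cdevents.change.created"},
    "subject": {"content": {"ticket": "PROJ-7"}},
}
print(enrich(change, TICKETS)["ticket_details"]["status"])
```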
A: Yeah, I mean, I expect that what we would call a "slow guardian" or "track guardian" in our world could sit looking at some dashboard somewhere, seeing that, well, for this product there is some new feature being developed, and now there is something delivered for this new feature, and now that feature is in this specific test loop, and it has passed this test, or it failed it but went back to development to fix it.
A: A "slow guardian," as we call it. And then I would expect that information to be notified through events, which would eventually be CD Events in our case, then leveraged, of course, with data from other sources through those poll services and such things.
C: Maybe at a higher level: do you have any kind of representation of events as related to products or portfolios, so that you could actually say "this is the performance of this family of products," or of a specific product across all of its apps? Because you have that taxonomy, that level of granularity, where one CI/CD pipeline is a single microservice, which is part of an application, which is part of a product, which is part of a portfolio, which is part of maybe a division, yeah, and upwards and upwards.
A: We have confidence levels, or maturity levels, on each of these different levels of the product structure, I would say. So we could state the confidence on the microservice itself, and then, when it is included in some application, we can state the confidence on that, depending on how much we have tested it. And then we get a confidence on those levels as well, which we currently notify through events.
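One way to picture that roll-up: each microservice carries its own confidence, and a composite (application, product, portfolio) can only be as confident as its least-tested part. Taking the minimum is one simple policy; the real aggregation rule is an assumption here.

```python
# Rough sketch of confidence roll-up across the product hierarchy.
# Taking the minimum is one possible policy, assumed for illustration.

def rolled_up_confidence(parts: dict[str, float]) -> float:
    """Confidence of a composite, as the weakest of its parts."""
    return min(parts.values())

microservices = {"auth": 0.9, "billing": 0.7, "ui": 0.8}
app_confidence = rolled_up_confidence(microservices)
print(app_confidence)
```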
B: There are two use cases that we target, and one is the more real-time one, to kind of try and map to these, which is: if you have one tool, for instance, doing certain activities...
B: You send out events, and you may have other tools reacting to those activities in real time. So you can have a build that triggers a pipeline, or you could have a pipeline that triggers a notification or some other testing or whatever. But the other use case that we have is more in the direction of the historical type of data, if you will; so having all the tools...
B: So you can see how efficient your pipelines are, where most of the time is spent, where maybe bottlenecks are and the wrong things are happening, and you can use that to decide how to optimize, where to invest in your architecture too, you know, where you have to inject work or resources. So I think it maps nicely to the two layers that you have there: the ingress and the historical type.
C: I definitely see a real-time opportunity here, because I think that if you're processing real-time events combined with a prediction and simulation layer, or even just an intelligence layer, you could say that certain things happening either put your, let's say, timeline, or your estimated delivery capacity or performance, at risk, and you'd be notified as that's happening so that you could intervene. So, something like, if we take...
C: If we take something that was planned and we start work on it, we get all the way to deployment, and then the specifications change on it, then that might be something you want to pay attention to. That could be beneficial or detrimental. Or if you, you know, delete something that's in progress, then you might want to be alerted that there's been a substantial change.
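A minimal rule of the kind just described: watch the real-time stream and raise a flag when a work item's specification changes after it has already reached deployment. The event type names are illustrative assumptions, not CDEvents types.

```python
# Minimal real-time risk rule: flag items whose spec changes *after*
# they have been deployed. Event names are invented for illustration.

DEPLOYED = "service.deployed"
SPEC_CHANGED = "spec.changed"

def risk_alerts(events: list[dict]) -> list[str]:
    deployed = set()
    alerts = []
    for ev in events:
        if ev["type"] == DEPLOYED:
            deployed.add(ev["item"])
        elif ev["type"] == SPEC_CHANGED and ev["item"] in deployed:
            alerts.append(f"{ev['item']}: spec changed after deployment")
    return alerts

stream = [
    {"item": "feature-1", "type": DEPLOYED},
    {"item": "feature-2", "type": SPEC_CHANGED},  # still in progress: no alert
    {"item": "feature-1", "type": SPEC_CHANGED},  # late change: alert
]
print(risk_alerts(stream))
```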
C: On the ontology side, this is a fairly simple representation, but I basically just took inspiration from... I don't know if you folks have heard of Palantir; they have done a bunch of work on ontology. They're kind of trying to create this master data model of a business. I was pointed to it by another member of the VSMI working group, and it essentially just splits the world into models and data, but I think that maps well to what we're aiming to represent.
C: That's a good point, so maybe we're missing a layer of examples for this. But if we go back to these areas here: entities would be work items primarily, but also systems of work, so something like Jenkins, which is producing and consuming work, the work artifacts; and then contributors, so people who are involved in the work. Then systems, being primarily the stores of data, in many cases generating that data, and sometimes acting on that data.
C: But in our world they're, you know, primarily generating, storing, and representing this data. And then metrics, being all of the calculation of what's actually happening, and the events being the source of a lot of those metrics.
C: All of these things could potentially be represented in relationships, but primarily the relationships are between entities and systems, because that has the primary job of relating what to what, what is connected to what. And then the data is really what's happening. So: models pertaining to what matters, how we are representing the system, the overall system of work; and then data being what is happening within that system.
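The models-versus-data split above can be reduced to a toy example: the "model" names the kinds of things and how they relate (entities, systems), while "data" is the concrete happenings, from which metrics are derived. The class and field names are illustrative assumptions, not a proposed VSMI schema.

```python
# Toy version of the ontology layering: model (entities, systems),
# data (events), and a metric computed from the data.
from dataclasses import dataclass, field

@dataclass
class Entity:           # e.g. a work item
    id: str
    kind: str

@dataclass
class System:           # e.g. Jenkins: generates and stores data
    name: str
    entities: list[Entity] = field(default_factory=list)

jenkins = System("jenkins")
jenkins.entities.append(Entity("build-42", "work-item"))

# "Data": what is happening within the modeled system.
events = [{"system": "jenkins", "entity": "build-42", "type": "build.finished"}]

# A metric derived from events, per the layering described above.
builds_finished = sum(1 for e in events if e["type"] == "build.finished")
print(builds_finished)
```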
A: They would reference the models in some way. So, I think, when we talk about models, what we often maybe refer to is the source of an event; so Jenkins, for example, could be the source of an event being sent, yeah. But then we also talk about people; I mean, contributors could be, as well...
B: Sorry, no. I think that's something that we have in CD Events to a certain degree, but we don't have structured models like that. So, like Emil was saying, we have sources of events, and we have one open discussion that we want to have about how to better describe how services should be defined and structured.
C: The idea is that these things are stored separately, so any given metric from the list could mean different things; they can mean different things to different people, depending on what model is representing that data, because it could be represented in multiple different ways in a system that is collecting, visualizing, and analyzing the data.
C: So this would leave it up to, perhaps, the value stream management platform to act on models, or even to create and implement their own models for representing the data however it was meaningful to them, but be able to...
C: Maybe leverage standard models, and then create custom models using whatever representation was valuable in their context.
C: It's somewhat interconnected. You would have multiple entities represented in that event; you would have multiple systems represented at some point; but immediately you would have source control as a system that is associated with that event, and that would be extracted, I think, from the original event. So let's say that event comes through as an API call to a broker, or what you folks call the protocol.
C: We know that there is an existing representation in the system already that represents source control, right? We have people who are creating code changes, and they are associated with branches and a repo and all these things. And we could think about that differently if we assigned a different model to it, if we gave that data to a different model, but primarily it has a pretty stable representation.
B: But I think, going back to your incident events: those would contain more entities than the incident itself, because an incident event really can refer to a specific environment, and that environment would be an entity as well, and it could refer to a specific software version. And I imagine a specific application with a version could be an entity as well, if I understood correctly. So you have these entities kind of associated together within this event or activity.
C: As you were saying, yeah, I would say that's true. And this is over-engineering at this point; I don't think this is something that we would really prefer to support from day one. But probably, because we have a lot of...
C: We have knowledge of representations in a value stream management context, and they're largely provided by the systems that are sending information, so we don't need to redefine their relationships. But in order to really associate these things, that's where I see the value of models, because you have this data coming from all these different places.
C: All of them have different models, and they may be representing the same thing. The data that we care about across all these systems is the work item context, which each one will have a separate native representation of, and we need to build a collective representation of that work globally across all the systems. So part of the model's goal is to basically say that all these things are the same thing; essentially, for our purposes, all these separate things are the same thing, and coming from here, it looks like this.
C: Coming from here, it looks like this; and in all those cases we care about this aspect of the thing, and we are connecting all of them by this identification criteria, which may vary quite a bit across all the systems.
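A sketch of that "all these separate things are the same thing" idea: each system has its own native record, and a per-system rule extracts the identification criteria that link records to one global work item. The matching rules and record shapes here are invented for illustration.

```python
# Sketch of identity resolution: per-system rules extract the criteria
# that link native records to one global work item. Rules are invented.

def identity_key(system: str, record: dict) -> str:
    rules = {
        "jira": lambda r: r["key"],
        "github": lambda r: r["title"].split(":")[0],  # e.g. "PROJ-9: fix bug"
        "jenkins": lambda r: r["params"]["ticket"],
    }
    return rules[system](record)

records = [
    ("jira", {"key": "PROJ-9"}),
    ("github", {"title": "PROJ-9: fix login bug"}),
    ("jenkins", {"params": {"ticket": "PROJ-9"}}),
]
keys = {identity_key(sys_, rec) for sys_, rec in records}
print(keys)  # all three native records resolve to one shared identity
```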
A: Yeah, I think it's good if we can share some common vocabulary as much as possible. Also, there is some initiative to try to set the vocabulary on the CD level, the CI/CD level, and, right, probably that cannot be conformed all the way to the VSM in all aspects, but as much as possible. If we could, I think it would be valuable to somehow sync the vocabulary, and we intend to align the CD Events vocabulary to that, the CDF Interoperability-defined reference.
C: Well, folks, we always seem to use up the full hour. Any thoughts about next steps? I think what we're aiming to do is publish what we've got at this point.
C: I will do a pass referencing the CD Events vocabulary to make sure I can align that as much as possible, and we're going to basically publish a page or two, it's probably going to be a couple of pages, on what we have so far and how things are going, just to put something out into the world. And then, personally, I like this idea of the lifecycle to demonstrate, and I like the idea of having an example of this.
B: Does this time generally work for folks? I mean, apart from... I know, Emil, you're going...
B: You know, so should I extend this into a recurring meeting? So we'd have, like, a next session where we can sync, continue working on the charter, see if there is any other follow-up for this, and define next steps. So that's good.
C: I think every third week would give us enough opportunity... actually, every three weeks might be too frequent, because I think we're only meeting monthly for the VSMI TC, okay. So our cadence is monthly, and we wouldn't have too much progress in a...
B: All right, so we could say this is the fourth Tuesday of the month. So if we follow that pattern, next time would be, like, on the 28th of July. Does that work?
D: On that question, as someone even greener than a beginner: so, for example, for the financial industry, the FINOS Foundation has a thing called CDM, a common data model for finance-related topics. So those models would be the "model" part of what you describe in your picture, that model-versus-data split, and the data would be what belongs to that model. Right?
D: Similarly, for example, in the security realm there's a thing called OSCAL; that's kind of a common reporting and compliance model that would also apply, right, for special use cases. That's not a whole industry vertical like finance, but for security that also is a model, and then the data is just whatever is related to that model. Right? Okay.
C: Unfortunately, I've got to jump to the technical committee call, yeah. But thanks for the time, everybody; good to see you.