From YouTube: SIG Events Meeting - Aug 15, 2022
Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
A
Yeah, I've been joining some of the calls, so I know they set up a call on the Europe side a little bit, but I missed it; it was a bank holiday here. And I was talking about this Fidelity use case, so I'm down to talk a little bit today about that. I have some time if there are free slots; I know Emil put it down. But yeah, it's good to see everyone.
A
Yeah, it's extremely hot in Europe at the moment. I know we had a heat wave ourselves, but it's nothing compared to what's going on in London or the wider Europe.
B
So I've met both of you, kind of. Would you like to introduce yourselves, Michael and Jamie? I don't know if you two have met.
A
Yeah, thanks Cara. I've only joined a few of these calls. I come from Fidelity Investments, and we've been very interested in CDEvents specifically. We've been working for some time on using the CloudEvents model ourselves for ingesting data from our CI/CD pipelines and other tools around our DevOps toolchain, and we got connected with Cara a long time ago, when the CDEvents initiative was getting going, and I got to meet Andrea. Then when I was in Austin back in June at cdCon, we got to connect a little bit. I think you're coming toward the first milestone, so I'm very much coming from an end-user perspective, but interested to see how we can help drive some of the use cases and get involved.
C
Cool, so yeah. I was working at Citi up until April, where I was doing a lot of stuff with their supply chain security program, and now I've founded a startup with some folks focused on supply chain security, so we've been building out a lot of different things. Citi had contributed to the OpenSSF a tool called FRSCA, which is based on Tekton, Tekton Chains, some policy tools, and a bunch of other things, to make it so that, by default, you get a high level of SLSA (if you've been hearing about that) for all of your builds. So we've been doing a bunch of different things; I co-led the CNCF's Secure Software Factory reference architecture, which FRSCA is based on.
C
I'm also a member of the SLSA steering committee, and a security lead in the security group in the CNCF, the Security TAG. So we're looking at areas where we can better collaborate with the CD Foundation, both from the perspective of some of these other groups that I'm part of, like the OpenSSF and the CNCF, and how some of that work can come together, and then for other things that we're working on, like FRSCA.
C
We want events in there as well, because we want to be able to track those events throughout the supply chain, across the life cycle of the software, and then be able to ask questions of those events, and also to better interoperate. Certain tools today that don't have, let's say, SLSA integration could, if they support CDEvents and some of these other things, almost automatically get at least some level of SLSA.
A
Hey Tracy, I was just introducing myself. I think I might have seen you on one call a long time ago. I was just explaining that I'm from Fidelity Investments and have been tracking this CDEvents project for some time now. From an end-user standpoint, I'm very interested to see how the project's evolving, and in helping provide input to some of the use cases and discussion as the initial releases of the specification come out. So that's a little bit about myself.
D
We've all been working on this for a while. I think the real core discussions for this group right now are going to be in the vocabulary meetings that we've been having, because that's where we are at this juncture for the CDF, I keep telling everybody. Excuse me here; I just went for a run.
D
And then I could barely jump on this meeting in time. We need to really solidify the vocabulary across all these different projects the CDF is working on. They're looking at building a reference architecture, and I think the very first thing we have to do is get a vocabulary that the events team can talk about, that the reference architecture team can talk about, and that we can display on the landscape, so everything is clean and consistent. So the use cases are going to be really key to understanding how to talk about it.
D
And do we know if Andrea's going to be on this call today?
D
I think August is his time off; I think he's taking the time off. I think we originally suggested maybe we cancel some of these for August; sometimes we just need to take a break. But I know he's been super busy with meetings, and I think he was going to be on vacation.
A
Honestly, I don't mind; I can defer to another time when there are more people here, if that makes sense for discussion, because I've got quite a comprehensive deck. I wanted to explain, as Michael touched on, some of the stuff around the security standpoint. Fidelity being a large financial institution, we're under a lot of rigor from a security perspective.
A
There are a few different lenses to how these events will help our security team or SRE teams. So again, I know Emil and Andrea were specifically pinging me; I missed last week and the last session, unfortunately. So it may make more sense to wait for a wider audience, to get everyone together and go through the presentation and discussion points, if that makes sense.
D
Let me just take a minute, then, and let you know where we've been and what we've been working on. On the vocabulary: there's a separate group that works on the vocabulary, and they've been working on a standard vocabulary document. I think they have quite a bit of work yet to do, and we need to start pushing that; maybe some of these conversations should be reviewing that and continuing the conversation.
D
They have a POC that they have done, and Andrea wrote most of it. We have a nice white paper that we pulled together at the last minute for cdCon. So there is a good basis for really getting moving forward. We spent most of last year actually figuring out if we were a working group or a project, and trying to come up with the project's mission.
D
You know, the logo; there was a lot of infrastructure work, I'm going to call it, that we did to begin to bring focus to CDEvents in general.
D
There is an open source project that I'm hoping we can also bring into the CDF that is an event-driven product. It's similar to Keptn, and Keptn has been a main focus of the events team up to this point. It would be nice to have a second one, so that the industry understands that this is a shift and this is where things are going. Sometimes it's hard to be the lonely furniture store at the end of a dark street, right? Nobody drives down there.
D
We need to light the street up and put some more furniture stores on it. It's called Directive; let me see if I can find it.
B
Awesome, thank you for giving us that intro to where we are, Tracy. I just got a message from Emil, actually; I'm glad two more people are joining us. I think we have two meeting invites somehow circulating.
B
So, oh, here's Emil. I think he's telling us to get on over to the other meeting, okay.
E
It has been a bit of slow progress, I would say, in the work towards the release, unfortunately, but anyway, we're still intent on it. Let's see if we can manage. So Jamie, good that you're here anyway. Emil.
A
By the time I joined, this meeting was gone, so yeah, hey. How are you?
E
Okay, anyway, without further ado, I guess we can let you, Jamie, present from the Fidelity perspective what you're used to on your CI pipelines.
A
So yeah, thanks everyone. I have quite a comprehensive deck here, but I'll try to skip through to focus on the key points. I wanted to start off by setting some context: Fidelity's been through our digital transformation now for many years, going to cloud. For those that aren't aware, we've got over 60,000 associates, around 18,000 developers, across the US, Ireland, and India, and we're broken up into around 12 different business units.
A
So it's quite a sprawling organization, and we're now introducing the next phase of our delivery platform. A lot of focus goes there, especially when you talk about CI/CD, the pipelines, audit, and security; it's critical to our business model. So I want to focus on these challenges at the start.
A
We had a lot of limited reuse opportunities in the way we were building our capabilities and extracting data from them, and we have a lot of audit and risk standards, at different levels, that we need to meet. So it's about the way we scale that process, making it less painful for teams, and making sure that we have a proper lineage from when teams commit right the way through to the asset being deployed into production.
A
So that's where we're trying to focus some of our energy, as part of the key platform. And this ties into the way we're using CloudEvents right now, which we're looking to port to CDEvents. The reason we're interested in this project is that we've been leveraging inner source within the firm, where we have gathered the knowledge across those associates into a single place and focused on inner source.
A
So when we do have a Log4j or other vulnerability, for example, it's about our ability to remediate those as quickly as possible, but also facilitating the velocity at which teams can push code out without a lot of heavy processes.
A
We created a council inside, and this is really what the events are helping us solve. This is just a very analogous pipeline at the top here, with the various stages. But we have something called an evidence store, where across our whole toolchain, whether it be in our SCM space or elsewhere, we've categorized things into these five high-level categories that the events are swarmed around. So we're very much interested in standards around source control, using the platforms appropriately, and the webhooks available to capture events at source.
A
That way we can automatically attest to things that are happening, as opposed to a trust-based model where teams may need to go through governance gates and things like that. So we've identified these high-level controls: for example, ensuring that code is from the approved system.
A
Peer reviews are occurring; the way the code's been built, linted, and scanned is being captured. And you can see, as we go forward here, there's a higher rigor of security for doing containers, for example, or the assets being built in the appropriate area, testing the quality of the assets being built out, and then production, predominantly around ensuring that we have the appropriate access controls and logical separation.
A
We want to ensure that everything meets the appropriate controls defined here. We had certain legacy items where, for example, you could push code in that wasn't tested appropriately, and it would be pushed into production. That had a risk factor we didn't want to have, so for the new strategic platforms we're building, we want to have this out of the gate.
A
It will help with the platform migration that's happening, and I think what we're showing here is pretty standard; these are the domains. So how are we realizing this?
A
If we look at this high-level context view: we've had many different types of orchestrator in Fidelity traditionally. We're focusing pretty much on Jenkins core as our core orchestrator right now; we do have, for example, uDeploy and a few different legacy types that teams are moving away from. But our pipelines are not all of the picture; we have an in-house portal where we deal with all onboarding.
A
So if you want to create a role, or you want access to an AD group, for example, anything to do with role management or with onboarding onto the systems is captured through our LMX portal. That allows us to understand an application profile picture of what roles are assigned and those transient actions as they occur, just to ensure that access couldn't be mistakenly given to one team and then revoked at another stage.
A
From the pipeline itself, through that capability model I'm going to talk about now in more detail, we can extrapolate events. And this is why what you guys have done with CDEvents is really interesting: you've got that high-level domain already captured, whether it be source control management and the predicates around repo push events and PR events, and then, as we go deeper, the specific workload definitions.
A
But the idea is that all these events come from the systems themselves. To give an example: where you're using Bitbucket or GitHub, those APIs are very different, and unfortunately they don't support CloudEvents out of the box. So we normalize that data through collectors and instantiate an event type that we publish to the data stream, in this case, and then to our pipelines, because we have that buy-in across the whole organization that this is the spine of their core pipelines.
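The collector step described here (raw SCM webhook in, normalized event out) can be sketched roughly as follows. This is a hypothetical illustration, not Fidelity's actual code: the incoming payload shape and the internal event type name are made up, and the envelope simply follows the CloudEvents 1.0 attribute names.

```python
# Sketch: normalize a Bitbucket/GitHub-style webhook payload into a
# CloudEvents-shaped envelope before publishing it to a data stream.
# The raw payload field names below are illustrative, not a real SCM API.
import json
import uuid
from datetime import datetime, timezone

def normalize_push_webhook(payload: dict, source_system: str) -> dict:
    """Map a raw SCM webhook into a CloudEvents-style envelope."""
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": source_system,              # e.g. "/scm/bitbucket"
        "type": "com.example.scm.repo.push",  # illustrative internal type
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": {
            "repository": payload.get("repo", {}).get("name"),
            "branch": payload.get("ref"),
            "commit": payload.get("sha"),
            "actor": payload.get("actor"),
        },
    }

# Example raw payload (illustrative shape only)
raw = {"repo": {"name": "payments-svc"}, "ref": "main",
       "sha": "ab12cd3", "actor": "jdoe"}
event = normalize_push_webhook(raw, "/scm/bitbucket")
print(json.dumps(event["data"], indent=2))
```

One collector per tool keeps the tool-specific mapping in one place, while everything downstream sees a single envelope shape.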
A
We have a nice base for all of the events to be published into the stream, and what we're doing is building out an evidence store and context databases that we can use to offer insights back to the development community and teams, and to audit and security alike. Please jump in if there are any questions. But this picture, I suppose, defines the value that we're going to get from this, by having these high-level assets here.
A
The goal is that we can trace from the commit, and this seems to be a good overlay to the existing specification right now, because it's very clear: you've got commit, build, and artifact-related events; you've got service deployed and service updated, for example. So this is where we see the key assets that we're using as our sinks and our sources to provide a lineage, effectively.
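The commit-to-deployment lineage described above can be sketched as follows: a flat ledger of events is walked by following shared identifiers (commit SHA, then artifact id). The event shapes and type names here are illustrative, loosely modeled on CDEvents-style subjects, not the exact specification types.

```python
# Sketch: reconstruct a commit-to-deployment chain from a flat event ledger
# by joining on shared identifiers. All field and type names are made up.
ledger = [
    {"type": "change.merged",      "commit": "ab12cd3"},
    {"type": "build.finished",     "commit": "ab12cd3", "artifact": "pkg:app@1.4.2"},
    {"type": "artifact.published", "artifact": "pkg:app@1.4.2"},
    {"type": "service.deployed",   "artifact": "pkg:app@1.4.2", "env": "prod"},
]

def trace_commit(commit: str, events: list) -> list:
    """Return the chain of events linked to a commit, directly or via artifacts."""
    chain = [e for e in events if e.get("commit") == commit]
    artifacts = {e["artifact"] for e in chain if "artifact" in e}
    chain += [e for e in events
              if "commit" not in e and e.get("artifact") in artifacts]
    return chain

lineage = trace_commit("ab12cd3", ledger)
print([e["type"] for e in lineage])
# -> ['change.merged', 'build.finished', 'artifact.published', 'service.deployed']
```

This is the "how did it get built?" question answered mechanically: given any deployed artifact or commit, the chain of evidence is a query, not a conversation.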
A
Knowing how the artifact was created will hugely help our SRE teams when incidents do happen, and help correlate some of this data. So at a high level it's really data mining: we want to use the systems and the hooks to collect these data points as they occur. We're currently focused on enabling the CI part of the equation, up to publishing to an artifact store. I won't go into this in too much detail, but that's where the Fidelity pipeline library comes in.
A
We've created these Lego blocks, for want of a better word, at a high-level domain, where a capability definition is something that does the job really well and is reusable within the pipeline context. You can see here a very simplified view of the high-level domains: people need to clone code, they need to build code, tag code; some may build containers from it, in a consistent way, but it's all about the standardization.
A
This is going to be one of the key sinks for our event sourcing. And another view on this: if we take these high-level phases (this isn't exact; it's just to demonstrate the key phases that we see every workload going through in Fidelity), you can see how the capability model fits in underneath, whether we're dealing with the various stages of taking the scans and the code analysis.
A
This is where we instantiate the event types that are coming out. Currently we're home-growing these, building those types ourselves, but we see that as where we'll be seeding the CDEvents model going forward. You can see our categories of tests are highly abstracted here right now, but we have the four tools that the FPL, or Fidelity pipeline library, would provide.
A
The adapters for change management are very stringent and cover the key areas, but you can see pretty clearly it's there to provide the overall capability model, plus the eventing. I can demonstrate that in use with a very simple artifact upload: with a simple one-liner we can abstract the context in real time and push that out. So where does this go? When we step back: for all of these events we currently have a very simple, efficient plugin, and the Fidelity pipeline library worries about the model and the tools themselves, so it's very scalable. If we have a new tool, we just roll in the adapters, but all of them are currently emitting CloudEvents. We're in the process of moving this over to the CDEvents specification that you guys have, just prototyping that, but it all comes into the data stream, where we can correlate some of this data.
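The "simple one-liner" eventing described here can be sketched as a closure that wraps a pipeline capability and emits an event with the run context automatically, so capability authors never build events by hand. This is a hypothetical sketch; the decorator name, event types, and in-memory "stream" are all made up for illustration.

```python
# Sketch: a decorator-based eventing closure. Wrapping a capability with
# @emits(...) publishes an event with the capability's result as context.
import functools

emitted = []  # stand-in for publishing to the real data stream

def emits(event_type: str):
    """Wrap a pipeline capability so each call emits an event."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            emitted.append({"type": event_type,
                            "capability": fn.__name__,
                            "data": result})
            return result
        return inner
    return wrap

@emits("artifact.uploaded")
def upload_artifact(name: str, version: str) -> dict:
    # ... the real upload to the artifact store would happen here ...
    return {"artifact": f"{name}@{version}"}

upload_artifact("app", "1.4.2")
```

The capability author writes one decorator line; the eventing, the envelope, and the publish all happen behind the scenes.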
A
For example, we have real-time gates that we want to use in our pipeline to evaluate then and there, or we have data that we want to process on the back end for more evidence-based gathering that can be used later. At the end of the day, this is served through an API tier or through a push event notification standpoint, which I can go into later on; I'm just conscious of time.
A
So I'm going to go through this at a relative pace, but it'll give you the idea of how it ties into the discussion that we're having. This is just another, deeper view of the actual SCM data points themselves. We're very interested in all the push events and the PR life cycle that happens, and we can obviously leverage the webhooks available in the tools to collect that; the number of approvers being a perfect use case.
A
We process that in our data store right now; it's just a ledger collecting these events raw. Then, based on the event types that are coming in, whether it has a high-level category of CloudEvent, we'll have a specific data type where we can hone in on it, and we use Dynamo streams in this case to handle, for example, a merge event.
A
This has just been collected behind the scenes implicitly, and this is an example of some of the webhooks; with Bitbucket, for example, you get these out of the box, and we're able to instrument what we want and capture the whole life cycle. So it provides a huge amount of data points that are very interesting.
A
Not only for our evidence-based items, but for our chapter leads, for example, for engineering excellence: they're interested in understanding much richer insights, for example around pull request pickup time, review depth, and size. So there are many different applications for this; these are just some examples. Whether we agree on them or not is another thing, but in the internal environment we're in, there's a lot of interest in how we can use this across the board. In a nutshell, I have a small example of how the events are running on the pipeline itself; there's a lot of information there.
But again, this is a sample catalog that we've written that's doing a scaffold deployment into an EKS environment, and you can see how, with the eventing that we have, we can actually dynamically aggregate data points on the fly. For example, when the push happens, we can get the SHA that was just pushed and push that into the evidence store. So not only do we get the contextual data from the pipeline run, we can aggregate it on the fly with whatever we want, and, because it's an inner-sourced capability, we can get the data point.
A
Collecting it in the ledgers and having the background processing architecture allows us to do a lot of collation across our back end. But we're very interested in tying this into what you guys are doing, to see if there are any use cases. What I've seen so far is brilliant, because one of the challenges we have is how we can instrument this into our existing SRE telemetry, for example, for some of our applications.
A
It's that richness as we go further down. Right now it's providing huge value in enabling us to gather a lot of really rich information through the custom CloudEvents model. I don't know if there are any questions or concerns about what we're doing, but I hope it gives a picture of why I'm interested in joining this call: to see how you guys are working with this and how we can leverage it in one effort.
A
I know I mentioned Jenkins a bit here; we're very much interested in forking the existing CloudEvents plugin and adapting it to CDEvents, or helping contribute to that effort, so we can get these out of the box as much as possible and leverage that capability. But I'll stop there for questions. At a high level, that's what we're focusing on in Fidelity right now.
A
It's abstracted away to the point that if we take on GitHub Actions, Tekton, whatever, it's just another seeding that we have. Right now, as part of our overall picture, we do have some more complex applications, like trading applications, that require a lot richer release orchestration, for example, but the goal is that the library will be expanded across those as well, so they're just additional sinks that are coming in.
A
But yeah, Jenkins core is our initial target for this, and it was a blank slate: for a lot of our legacy platforms to move over, this runway would be made available. Teams could just contribute to this library, and then the catalogs are there for them to migrate their workloads over. The goal is that as we evolve our ecosystem, this will adapt.
A
Yeah, we've had some challenges, obviously, with balancing things, and it's a great question. A lot of tools come and go; we've even had multiple instantiations of many different orchestrators. But the key value for us has been to have simple contribution; not everyone is the rockstar engineer. What's worked really well for us is that everyone's got involved, whether you're a systems engineer or highly programming-focused, because we've architected the capabilities to be as simple as possible, but to do the job really well.
A
If we want to port that back, the high-level verbs effectively carry over: if we do move to Actions down the line, that will just be another instantiation, or Tekton. Right now it's that workflow, and using those capabilities themselves, that allows teams to construct a good pipeline. We have a very rich variety of workflows; we're not mature enough yet to have just a single "this is what you're doing." So this is the best way that we can get everyone involved and scale it out from that standpoint. Right, right.
E
Sounds good. So my second question, I guess, is: do you have some kind of internally published library of the event types? You talk about event types, and you listen to certain types; do you have them defined somewhere that could be shared?
A
Yeah, again, it's the way we've done it with the eventing closure I was showing you, where we wrapped the function. We didn't want to have a specific contract per event; we dynamically generate that right now. So the idea, for the capability model that we have, is that if it's dealing with deployment-related vars, for example, we would use the service deployed CloudEvents type; we construct that under the hood. We provide some metadata to the event creation mechanism, which will instantiate the appropriate CD event; it's not coming directly from the tool itself. So that's the mechanism where, based on the domain it's used in, we can provide the appropriate CDEvent. At the moment it's generating a custom CloudEvent based on the event type itself.
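The mechanism described here (domain selects the event type, tool details travel in the payload) can be sketched as a simple lookup. This is a hypothetical illustration: the domain names are made up, and the type strings follow a CDEvents-style "subject.predicate" convention rather than the exact, versioned specification types.

```python
# Sketch: rather than one contract per tool, the capability's high-level
# domain picks the event type; tool-specific metadata goes in the payload.
DOMAIN_TO_TYPE = {
    "scm":        "dev.cdevents.change.merged",
    "build":      "dev.cdevents.build.finished",
    "artifact":   "dev.cdevents.artifact.published",
    "deployment": "dev.cdevents.service.deployed",
}

def make_event(domain: str, metadata: dict) -> dict:
    """Instantiate an event for a capability based on its domain."""
    try:
        event_type = DOMAIN_TO_TYPE[domain]
    except KeyError:
        raise ValueError(f"no event mapping for domain {domain!r}")
    return {"type": event_type, "data": metadata}

evt = make_event("deployment", {"env": "prod", "target": "eks-cluster-1"})
```

Retargeting from custom CloudEvents types to CDEvents then becomes a change to one table rather than to every producer.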
A
Beforehand, we didn't know this project existed, so we were very much interested in just having nearly a one-to-one mapping from the vars to the event types being created. But now we want to categorize them more appropriately to the CDEvents types, so that mechanism can be tweaked to instantiate those. That's what we've been working on since we met Andrea in Austin: adapting what I just showed you to produce CDEvents from the event publishers.
A
I think that's a good question. That was one of the questions I was asking Andrea there: how are you guys going to handle that? I think there is flexibility in the specification to provide some custom fields, I believe, or the ability to override; there's flexibility to add any additional attributes that we want. Honestly, I think it's helping us simplify our event publishing model, because if you take deployment, you could be going to ECS, native, multiple CSPs.
A
Do you care, as long as it's a deployment of some kind? The actual data payload's going to provide that context, and we're just typing it with what we want: the environment information and the target. So for us, the way you've molded it, I think on the security side and the governance side there may be some opportunities.
A
I don't know, have you covered that already? At the moment I see the specification is pretty much around SCM-related activities and their predicates, test-related information, artifact publishing, deployment, and some of the incident management ones. We're still building this out, so right now it's a good layer, but I want to be involved in this discussion just to see; the security piece is a really big one.
A
But again, I don't know if you would normalize that under a more generic category, or would you make it a first-class event, effectively?
A
So far I think the model fits, and to your point, if we need to intermix, I don't see an issue, because the way I'd explain our architecture, it's the observer that's going to worry about how it handles the event. So if we can put as much into the CloudEvents model as possible, and use custom fields where we need them, we can cater for both; it's really about how the data stream would process that event.
A
I know your wider goal, as you mentioned, is more interoperability and some of those use cases, but for us it would especially help around metric standardization, because you'd have standard types in use. It allows us to cover a lot of bases under a single umbrella, but also capture the metadata that we want to. So that's just one way that we're interpreting the use of this, as opposed to the wider interoperability; it would help us future-proof ourselves.
A
The current plan is to, again, redecorate the existing model, if I can use that term: the headers that you've defined will allow us, for example, to standardize the way we're publishing those events out in the Fidelity pipeline, and, using the evidence data that we've got, we can do look-ups.
A
I think one of the challenges (I know I was on a call with Andrea about this) is how teams are going to correlate: for example, where are they going to get the look-up for the last commit ID? Where is that being consumed from? Is that the observer's responsibility? In our case it's the evidence store that we're going to look up, so that correlation activity is obviously left to the end user to figure out how to link it. But for us.
E
Right now it is, yeah. What I was aiming at with the metrics (I will put it in the chat soon and let you in as well, sorry) was the DORA metrics that we are aiming to support out of the box from the events we provide; with, of course, the deployment events to begin with.
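Computing one of the DORA metrics (deployment frequency) directly from standardized deployment events can be sketched as follows. This is a hypothetical illustration; the event shapes are made up, and only a type and a timestamp are needed for this particular metric.

```python
# Sketch: deployment frequency (one of the DORA metrics) derived from a
# stream of deployment events. Non-deployment events are simply ignored.
from datetime import date

deploy_events = [
    {"type": "service.deployed", "day": date(2022, 8, 1)},
    {"type": "service.deployed", "day": date(2022, 8, 1)},
    {"type": "service.deployed", "day": date(2022, 8, 3)},
    {"type": "build.finished",   "day": date(2022, 8, 3)},  # ignored
]

def deployments_per_day(events: list) -> dict:
    """Count service.deployed events per calendar day."""
    counts: dict = {}
    for e in events:
        if e["type"] == "service.deployed":
            counts[e["day"]] = counts.get(e["day"], 0) + 1
    return counts

freq = deployments_per_day(deploy_events)
```

The point of a standard type is exactly this: the metric code never needs to know which orchestrator or cloud performed the deployment.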
C
Oh yes, so actually, yeah, a couple of questions. Well, one thing before I get to my questions: there's also something that just came out last week around some of the security event stuff. AWS just released a schema framework, I think, for sort of standardizing.
C
Let me see if I can just pull this up; I saw the link a couple of days ago.
C
Yeah, here we go; they did it in conjunction with a bunch of other folks. I posted it in chat here. They have a schema framework for some of those security events. I don't know exactly how it would integrate here, but it might be worthwhile to take a look. Yeah, separately.
C
Are there general security things you are thinking about when doing this, to make sure that, say, something that has been compromised can't try to falsify some of the events? Or that some other malicious actor can't say, "hey, I crafted a CDEvent and I'm sending it to one of the collectors," and then it gets accepted as a valid thing?
A
Good point. We've built security straight into the event producers, for want of a better word, to ensure, through the way we encrypt, that an event came from the source it claims, and that there's no man-in-the-middle, for example, injecting these. This is something that's still maturing.
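One minimal way to guard against forged events, along the lines described here, is for producers to sign each payload and for collectors to reject anything whose signature does not verify. The sketch below uses a shared HMAC secret purely for illustration; a real deployment would more likely use per-producer keys or asymmetric signatures, and never a hard-coded secret.

```python
# Sketch: HMAC-signed events. A collector verifies the signature before
# accepting an event, so a crafted or tampered payload is rejected.
import hashlib
import hmac
import json

SECRET = b"demo-shared-secret"  # placeholder only; never hard-code real secrets

def sign(payload: dict) -> str:
    """Signature over a canonical (sorted-key) JSON encoding of the payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str) -> bool:
    """Constant-time comparison against the recomputed signature."""
    return hmac.compare_digest(sign(payload), signature)

event = {"type": "service.deployed", "env": "prod"}
sig = sign(event)
assert verify(event, sig)              # legitimate producer: accepted
tampered = {"type": "service.deployed", "env": "dev"}
assert not verify(tampered, sig)       # altered/forged event: rejected
```

Signing at the producer, rather than trusting the transport alone, is what turns the evidence store from a trust-based record into an attestable one.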
A
When things go bad, we have a clear ability to link back through the chain on this, but the core concept I demonstrated today is to facilitate our audit and our security, and to collect a lot of what is, right now, a trust-based model on some of this stuff. We want to make sure that's just built in. The metrics are a huge benefit on the side as well, because we can collect that too, to your point.
A
All of that is how we harden it, from our ability to run it out. It's not perfect, but we're still iterating on it; it's trying to solve that. If you think about enterprise scale, where there are lots of different systems coming through, it's half a cultural change as much as anything: getting people to follow that single model and drive on it.
A
With all these data points being collected, we could do a lot with it; the CloudEvent was just the key container we were using. I think what I liked about CDEvents is that it was at least somewhat an effort to future-proof us and standardize how we look at that. And we're still learning, to be honest with you; it's about being connected in this effort to see how we can, to Emil's point,
A
see if there are any gaps in the specification where we could maybe provide some guidance, and hear or learn how other people are using it. I think you mentioned the supply chain, and that lineage back is very interesting, because there are many different dimensions to this. We're taking it pretty basic at the moment, but you can hopefully see what we're trying to achieve. It has many different security angles: not only "I know what's running," but "how did it get built?" That's an obvious question.
A
There's a lot we can get from this, but we need that evidence being collected, and then it just takes a lot of politics out of the equation: instead of someone attesting in an opinionated way, this is collected through the standard, and it allows us to move things a lot quicker. So that's the goal in a nutshell, but like you say, there are definitely going to be more iterations.
C
Yeah, no, to be clear, what you've built is really awesome already. I think on that end there's some stuff we're starting to build out on the supply chain piece, on how you can kind of have that flow through, and so I was actually curious:
C
Have you considered consuming CD events or cloud events from other external systems? So that if a third party were to say "hey, we discovered a thing," it could be either a pull or a push model, depending on, you know, security and whatever. But say a CVE gets discovered: that gets pushed through to update and change both your evidence store and potentially also be used as a thing to kick things off.
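The third-party CVE notification Michael describes could plausibly ride on the same CloudEvents envelope the rest of the pipeline uses. A rough sketch of what that might look like follows; the event type, source URL, and payload fields are all invented for illustration (CDEvents does not define a CVE event at this point).

```python
import json
from datetime import datetime, timezone

def make_cve_event(cve_id: str, artifact: str) -> dict:
    """Hypothetical CloudEvents-style envelope for a CVE discovery
    pushed by an external scanner; type and data fields are made up."""
    return {
        "specversion": "1.0",
        "type": "com.example.vuln.cve.discovered",  # illustrative type
        "source": "https://scanner.example.com",
        "id": f"{cve_id}/{artifact}",
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": {"cve": cve_id, "artifact": artifact},
    }

def handle(event: dict, evidence_store: dict) -> str:
    """Consumer side: record the evidence, then kick off remediation."""
    data = event["data"]
    evidence_store.setdefault(data["artifact"], []).append(data["cve"])
    return f"rebuild requested for {data['artifact']}"

store = {}
evt = make_cve_event("CVE-2022-0001", "registry/app:1.2.3")
print(handle(evt, store))  # evidence updated, remediation triggered
```

Whether the scanner pushes events or the collector polls for them is the same security trade-off raised in the question; the envelope shape stays identical either way.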
A
We're not doing as much detection after the fact; we want to get as much done while the developers are working in the systems themselves. I think that's another area where it would help if we had more buy-in for CD events from the vendors themselves, to get these out of the box, but the goal is to move in that direction. And Michael, absolutely, we do have a lot of homegrown stuff involved as well that allows us to follow a similar model. What's beautiful about CD events
A
is the nice abstraction around what they're actually doing. The actual action itself we can hide in the data, but it allows, even internally, a lot more standard categorization of what we're doing. And yeah, I think there are opportunities for that extensibility from external systems, to your point. But again, like you say, we've got lots of sources; it's not just the pipeline itself.
E
I was looking at that aspect, security in the events and so on: whether we could somehow verify that an event was actually sent from the source that claims to have sent it. I was looking in cloud events to see if they have something proposed there, because it could really be handled at the cloud events level, I would say, as an extension: securing that the events are not tampered with in the middle, and other things like that.
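CloudEvents does permit arbitrary extension attributes on the envelope, so one way integrity could be layered on, purely as a sketch of the idea being discussed, is a detached digest of the payload carried as an extension attribute that receivers recheck. The `payloaddigest` attribute name is invented here, not a proposed standard.

```python
import hashlib
import json

def attach_digest(event: dict) -> dict:
    """Add a hypothetical 'payloaddigest' extension attribute
    computed over the event's data payload."""
    body = json.dumps(event["data"], sort_keys=True).encode()
    event["payloaddigest"] = hashlib.sha256(body).hexdigest()
    return event

def digest_ok(event: dict) -> bool:
    """Receiver recomputes the digest before trusting the payload."""
    body = json.dumps(event["data"], sort_keys=True).encode()
    return hashlib.sha256(body).hexdigest() == event.get("payloaddigest")

evt = attach_digest({
    "specversion": "1.0",
    "type": "dev.cdevents.pipelinerun.finished",
    "source": "ci.example.com",
    "id": "42",
    "data": {"outcome": "success"},
})
assert digest_ok(evt)

evt["data"]["outcome"] = "failure"  # tampering in the middle
assert not digest_ok(evt)
```

Note that a bare digest only detects modification; proving *who* sent the event still needs a signature tied to a key the receiver trusts, which is the gap the discussion turns to next.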
A
To your point, I think some of that will be hard to distinguish. You want to make sure that the integration points from the producer to the receiver are nailed down, and that we understand the event is not coming from anything else, in effect like a certificate, for want of a better word. That was the obvious way we saw it, but if there are any additional ways it could be verified...
E
Possibly. There is also a security object within all those events which can handle those kinds of signatures, making sure that the sender is not tampered with, and those things. So maybe we can get some input from there as well into the CDF protocol eventually. I'm pretty sure we will not have it in the 0.1 version anyway; it would probably just delay it further. But it's very good to lift the discussion, and great that you have experience there from your end, Jamie.
A
And again, I know you probably deal with many end users. On one of the last calls there was a lot of interest in how the dependency chain was being done and how we're looking at that, and it's fascinating to see how this space will evolve. But at the end of the day,
A
it makes perfect sense that we're not crafting custom events, and that we can leverage this ecosystem to future-proof what we're doing. And like you say, there are many different demonstrations of the value, not only from the data interoperability but also, to your point, from the metrics and things like that we can get out of it.
A
So yeah, that's just our current view, and we'd like to stay connected to this group to see how things evolve and, as we get more experience in our migration to CD events, to feed that back. I think that's the next stage for us, along with getting more involved; for example, we could help accelerate vendor integration through some of the partnerships we already have, where we can get involved in that.
E
Great. So, great discussion and great presentation from you, Jamie. And since this was the only topic I had planned for today: I was actually planning to prepare more for this meeting, but I haven't been very well today, so that's why I haven't prepared anything else. But anyway, we're almost out of time.
B
I just want to make sure you all have the links, because the work on this is spread across a couple of places. There's a lot of activity happening, and I want to make sure you all have the links to what's being done, and maybe the SDKs, just so you're aware of the movement, and possibly where you want to be contributing or what you think is most relevant for you.
A
Yeah, one of my team is just poring through the available SDKs as they stand. So far I think we'll get more involved in any of the low-level feedback, contributions, PRs, if there's anything that we see; but so far it's just, like I mentioned, migrating off the custom events to CD events.
B
Okay, great, yeah. I was just thinking that your work probably has a lot that you can input into how it's evolving, and I just wanted to make sure that you were aware. So that's great, thank you.