From YouTube: TAG Security Supply Chain WG 2021-12-09
B
All right, we're five after, so we can probably get started. Just a reminder here.
B
So, one second here, yeah, okay. This is the supply chain working group meeting. As a reminder, this meeting is recorded and will be uploaded to YouTube shortly after it ends, and your participation in this meeting must abide by the CNCF's code of conduct.
B
Today I believe we have a presentation, and I'll have Aradhana introduce it.
C
Hi, good morning all. A few weeks ago I shared a blog post from AWS, which talked about a number of services that can be utilized to build a CI/CD pipeline and to build security into those CI/CD pipelines as well. That blog post was written by Srini Manepalli,
C
who is on the call today. I had gotten a request from this team to invite him to come and speak to us about his blog, as well as all the security services that can be natively used in AWS to build a secure CI/CD pipeline and also to secure our software supply chain. So over to you, Srini.
D
Okay, yep, sorry. I hope you all can see me now; I was trying to turn on my camera here. So yeah, again, thank you, Aradhana, thank you for involving me here; I appreciate the opportunity. I also have Das from AWS, who is interested in participating in this one and is helping me with this, so I just want to introduce him as well. Do you want me to get started with the presentation, or is there anything before that?
D
Okay, yeah, absolutely. So just a little bit about myself: I'm a senior solutions architect here at AWS. My primary focus is DevSecOps, particularly at AWS, but before joining AWS, for close to two decades, I had been working in various organizations, helping them build DevOps implementations, security implementations, and application performance monitoring, the whole nine yards of infrastructure setup. So I have a good background in DevOps and security. That's why, when Aradhana recently reached out about presenting the recent blog, I found it an exciting thing, and I'm definitely looking forward to this one. So with that, I will share my screen.
D
So, okay, I'm sharing my screen now.
D
Awesome, okay. So today, as Aradhana mentioned, I'll be talking about the recent blog, published sometime in July, which is about building an end-to-end Kubernetes-based DevSecOps software factory on AWS. That's the solution we'll be talking about, but I'm happy to answer any questions about anything beyond that as well.
D
So for this talk, this is the architecture, by the way, that we'll be walking through. We'll talk about how we implemented security and compliance in this pipeline, and then we'll jump into the pipeline itself: the flow, and how it is doing security and compliance in the pipeline.
D
So let's start with security and compliance. When implementing security and compliance in DevSecOps software factories, there are two aspects we address: security of the pipeline, and security in the pipeline. That's the same approach I took when building this solution. Just to set the stage: security of the pipeline is about protecting your pipeline itself. Security in the pipeline is basically making sure that whatever you are pushing through your pipeline meets all your security and compliance requirements in the end product.
D
So how did we implement security of the pipeline? When talking about security of the pipeline, there are many aspects to that. Some of them are authentication and authorization, encryption, patching, alerting and monitoring, then key management and configurations.
D
These are all various aspects we can enforce in the pipeline, so that the people who are doing some activity in the pipeline, whether initiating the pipeline or changing something in it, have the proper authentication and authorization.
D
Similarly, the data that you are generating in the pipeline is encrypted at rest and in transit. Coming to patching: you will use so many tools and services in the pipeline, so you have to make sure each one has all the up-to-date patches and doesn't have any vulnerabilities. Then, moving on to alerting, logging, and monitoring: as we all know, that is monitoring and observability of the key components, and auditing is also needed for some of the compliance requirements, whether you take FedRAMP or any of those.
D
We need to protect our keys: how we are producing those keys and how we are encrypting the data, so we need proper key management; that's what we are talking about here. And finally, configuration management: if you have sensitive information, like API keys, passwords, or any parameters that you are passing in this pipeline, make sure you are properly managing those configurations and not hard-coding them in the infrastructure as code or anywhere else. So these are certain aspects. So how did we implement this?
D
This solution basically uses infrastructure as code. There are many options for infrastructure as code; because we implemented this on the AWS side, CloudFormation is one of those options. So all the components from the previous slide, whether it is authentication and authorization, generating the keys, or encrypting certain components, all of those are incorporated into the CloudFormation template. That's what we use here. Then, moving on to security in the pipeline: that is making sure your code meets all the security and compliance requirements.
D
These are some of the aspects of how you can achieve that, but this is not at all a complete list; beyond this, there are many other ways you can protect security in the pipeline. We implemented secret analysis. This is a very basic thing: we make sure sensitive information is not pushed to the production workloads.
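The secret-analysis gate described above can be sketched as a small scanner in the spirit of git-secrets or TruffleHog. This is a minimal illustration, not the tool used in the solution; the patterns below are assumptions and far from exhaustive.

```python
import re

# Hypothetical minimal secret scanner: flag lines that look like AWS
# access key IDs or hard-coded passwords before they reach the repo.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                  # AWS access key ID format
    re.compile(r"(?i)password\s*=\s*['\"].+['\"]"),   # hard-coded password
]

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match any secret pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line))
    return hits
```

A real pre-commit or build-stage hook would run a check like this over the diff and fail the stage on any hit.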
D
So that's SAST. And DAST is basically looking from the end user's point of view: once you deploy, you expose that URL and make sure all the security vulnerabilities are identified from the end user's side, like SQL injection, cross-site scripting, some of those. And finally, we implemented runtime application self-protection, using some open source tools as well, which is about finding abnormal behavior or unusual patterns in your production workloads and taking some action based on that.
D
So these are some of those, but there are many other components, like IAST or others, that you can include in the pipeline security as well. If you take all those components, these are the different stages in the pipeline, particular to this solution, where we implemented them. For example, secret analysis and security review you can incorporate in the build stage, as we all know, and then move on to the test stage.
D
Once you deploy to a lower environment, like a staging or test environment, you can do the DAST analysis and identify the vulnerabilities there. Then, once you deploy to production, you can use RASP; in this case we are using RASP to identify any abnormal behavior in your production workloads. This is how those security-in-the-pipeline components from the previous slide span out across your pipeline. But to implement these,
D
we leveraged open source tools for all of these aspects, so we'll talk about which tools we used and how we achieved that. Another key point: we are talking about security and compliance, and compliance requirements are the key piece of why we are implementing security. For example, if you take the NIST standard 800-53, it has 20 control families, and each control family has many security controls. So here are the solutions and the services that we used.
D
We mapped them so that each control family is addressed by particular resources. For example, in the right-side column of each table I listed some of the CloudFormation template resources; those are the components. Similarly, if you use Terraform or anything else, you can identify those resources and help whoever is implementing to make their life easier. That's the whole purpose here.
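The control-family mapping described above can be illustrated with a small lookup table. The family codes are real NIST SP 800-53 families, but the specific service pairings here are assumptions for this sketch, not the talk's actual tables.

```python
# Illustrative (not authoritative) mapping from a few NIST SP 800-53
# control families to pipeline resources of the kind discussed in the talk.
CONTROL_FAMILY_MAP = {
    "AC (Access Control)": ["AWS IAM roles and policies"],
    "AU (Audit and Accountability)": ["AWS CloudTrail", "AWS Config"],
    "SC (System and Communications Protection)": ["AWS KMS keys"],
    "RA (Risk Assessment)": ["Anchore/Snyk scans", "OWASP ZAP"],
}

def families_for(resource: str) -> list[str]:
    """Reverse lookup: which control families does a resource help satisfy?"""
    return [family for family, resources in CONTROL_FAMILY_MAP.items()
            if resource in resources]
```

Keeping the mapping as data, whether in Python, YAML, or a spreadsheet, is what makes it reusable across CloudFormation, Terraform, or any other IaC tool.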
D
So this is how you can achieve your security and compliance requirements. Moving on now, let's talk about the pipeline itself: the flow, how we used the open source tools, and all that.
D
These are the high-level services we used. On the left side, those are open source tools; some of these vendors have commercial offerings as well. For example, Anchore and Snyk have commercial offerings, but we used the open source versions in this solution. Anchore and Snyk, for example, can do SCA and SAST; there is a little overlap there, so they are doing both SCA and SAST. Coming to DAST,
D
we used the OWASP Foundation's OWASP ZAP, the Zed Attack Proxy, to do the black-box testing from the end user's point of view. And finally, coming to RASP, runtime application self-protection, we used the CNCF's Sysdig Falco to identify abnormal behavior. So these are the open source tools we used. Coming to the right side,
D
those are the AWS services we used. For example, KMS is for generating the cryptographic keys, and Security Hub is like a single pane of glass. Basically, you have all these vulnerability scanning tools here, but we need to find a way to make life easy for the end users, the security and operations people, so that when vulnerabilities are identified, they don't need to go into each of these tools for every vulnerability to see what it is and how to remediate it.
D
So we are providing a single pane of glass with Security Hub. Amazon ECR is the container image repository, and under the hood it also does container scanning; that's why it's used in this solution. And CloudTrail: we talked about auditing, so that helps in this solution.
D
Basically, it records all the activities going on in your pipeline, and then you can take actions on top of that. Then, moving on to Config: to meet security and compliance, obviously you need to enforce certain rules and evaluate those rules continuously to make sure you are still in compliance.
D
Those parameters are the ones we are storing in Parameter Store, so that you are not hard-coding all those pieces. And finally, on the AWS IAM side, that is for enforcing authentication and authorization, and git-secrets, another open source tool from AWS, is for analyzing for secrets such as passwords; you can also identify things like AWS access keys and secret keys, and you can customize that.
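The Parameter Store usage described above might look like the following sketch. The parameter name is a placeholder, and the SSM client is injected so the helper can be exercised without AWS credentials; in real use you would pass `boto3.client("ssm")`.

```python
# Sketch of reading a sensitive value from SSM Parameter Store instead of
# hard-coding it in infrastructure as code.
def get_secret_parameter(ssm_client, name: str) -> str:
    """Fetch and decrypt a SecureString parameter."""
    resp = ssm_client.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]

# In a real pipeline step (assumes credentials and the parameter exist):
#   import boto3
#   token = get_secret_parameter(boto3.client("ssm"), "/pipeline/api-token")
```

Injecting the client is just a testability choice; the call shape matches SSM's `GetParameter` with `WithDecryption` for SecureString values.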
D
So this is the overall set of tools; I just wanted to introduce them so that we know what we are talking about in the next slides. Then, moving on: obviously you need the build tools. There are open source tools like Jenkins, and CircleCI or Bamboo; there are so many tools. Similarly, AWS has these tools, and these are the ones I used in this solution. CodeCommit is the code repo.
D
Let's say a user is pushing code: it goes to CodeCommit, which is the code repo, and that basically triggers a CloudWatch event, in this case, to initiate your pipeline. We are using CodePipeline here, and that triggers the various stages of your pipeline. For all the packaging and testing in this solution we are using CodeBuild, but you can replace it with Jenkins or any other open source tool; this is just the basic idea here. So we are using CodeBuild to do the building and testing, and the deployment aspect.
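The trigger wiring described above, where a push to CodeCommit starts the pipeline, can be sketched as an EventBridge (CloudWatch Events) rule pattern. The repo ARN, rule name, and branch are placeholders, and the wiring comments assume an IAM role that is allowed to start the pipeline.

```python
import json

# Sketch of an event pattern that fires when the given branch of a
# CodeCommit repo is updated.
def codecommit_push_pattern(repo_arn: str, branch: str = "main") -> str:
    return json.dumps({
        "source": ["aws.codecommit"],
        "detail-type": ["CodeCommit Repository State Change"],
        "resources": [repo_arn],
        "detail": {"event": ["referenceUpdated"],
                   "referenceType": ["branch"],
                   "referenceName": [branch]},
    })

# Wiring it to the pipeline (assumes an IAM role allowed to start it):
#   import boto3
#   events = boto3.client("events")
#   events.put_rule(Name="on-push",
#                   EventPattern=codecommit_push_pattern(repo_arn))
#   events.put_targets(Rule="on-push",
#                      Targets=[{"Id": "pipeline", "Arn": pipeline_arn,
#                                "RoleArn": role_arn}])
```

In the blog's solution this rule would be declared in the CloudFormation template rather than created imperatively.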
D
Obviously, if nothing is found, it moves on to the next stage, where another CodeBuild instance kicks in. Then, depending on the tool you choose, you have an option of either Anchore or Snyk; it downloads the tool and performs that scanning against your code. And if there are any vulnerabilities, let's say, then in this case what I'm using here is a Lambda function written in Python; it's a very simple Lambda function.
D
What it does is take the report from the vulnerability tool, depending on which tool you choose, aggregate it, and push it to Security Hub. In this case we are using Security Hub, but if you have any other tool that gives you a single-pane-of-glass view of all your vulnerabilities, you can push it to that tool as well.
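The Lambda aggregation step described above might be sketched like this: map one scanner result into an AWS Security Finding Format (ASFF) dict suitable for Security Hub's `BatchImportFindings`. The field values and the shape of the `vuln` input are assumptions; this is not the solution's actual Lambda code.

```python
import datetime

# Sketch: convert one scanner result into an ASFF finding for Security Hub.
# Account ID, region, and product ARN below are placeholders.
def to_asff(vuln: dict, account_id: str, region: str) -> dict:
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return {
        "SchemaVersion": "2018-10-08",
        "Id": vuln["id"],
        "ProductArn": (f"arn:aws:securityhub:{region}:{account_id}"
                       f":product/{account_id}/default"),
        "GeneratorId": vuln["scanner"],
        "AwsAccountId": account_id,
        "Types": ["Software and Configuration Checks/Vulnerabilities/CVE"],
        "CreatedAt": now,
        "UpdatedAt": now,
        "Severity": {"Label": vuln["severity"]},
        "Title": vuln["title"],
        "Description": vuln["description"],
        "Resources": [{"Type": "Container", "Id": vuln["image"]}],
    }

# Inside the Lambda handler (assumes Security Hub is enabled):
#   import boto3
#   hub = boto3.client("securityhub")
#   hub.batch_import_findings(
#       Findings=[to_asff(v, account_id, region) for v in vulns])
```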
D
Your container image goes through all the vulnerability checks. If there are any vulnerabilities, then it calls the Lambda function and the same process repeats: the Lambda function aggregates the vulnerabilities and pushes them to Security Hub, so that the operations person doesn't need to go into ECR to see what vulnerabilities are there and all that. If there are no vulnerabilities, then obviously it deploys to a staging environment. In this case I am using EKS, which is a CNCF-certified, Kubernetes-conformant distribution, but it doesn't need to be EKS.
D
It can be any open source Kubernetes environment. Once you deploy there, you are able to do the DAST analysis using the OWASP ZAP open source tool, and if there are any vulnerabilities, again, the same process repeats: it triggers a Lambda function, pushes the findings to Security Hub, and stores the report in an S3 bucket. The idea here is that the operations person doesn't need to log in to each of these tools; that's the basic idea.
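Post-processing the ZAP report in the pipeline could look like the following sketch, which counts alerts per risk level and fails the stage above a threshold. The `site` → `alerts` → `riskdesc` shape follows ZAP's traditional JSON report and should be treated as an assumption here.

```python
import json
from collections import Counter

# Sketch: summarize an OWASP ZAP JSON report and gate the pipeline stage.
def risk_counts(report_json: str) -> Counter:
    report = json.loads(report_json)
    counts = Counter()
    for site in report.get("site", []):
        for alert in site.get("alerts", []):
            level = alert["riskdesc"].split(" ")[0]  # "High (Medium)" -> "High"
            counts[level] += 1
    return counts

def should_fail(counts: Counter, max_high: int = 0) -> bool:
    """Fail the pipeline stage when High-risk alerts exceed the threshold."""
    return counts.get("High", 0) > max_high
```

A CodeBuild step could run this over the report, push the summary to the Lambda aggregator, and exit nonzero when `should_fail` is true.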
D
If there are no vulnerabilities, then, let's say you are deploying a sensitive workload and need some kind of manual approval: it triggers an email, using Amazon SNS in this case, to the approver, or whatever approver email address you have provided, and sends them a link saying everything is good, no vulnerabilities identified, can I go ahead and deploy to production? If they approve, then it goes and deploys to production.
D
If they reject, the build will fail; otherwise it deploys to production. So that's the complete flow here. The final piece is that when you set up EKS, you can deploy Falco and look for unusual activities in your Kubernetes environment.
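Triaging Falco's event stream, as described above, might be sketched as a priority filter over Falco's JSON output. The priority names follow Falco's documented levels; the event dicts here are illustrative.

```python
# Sketch: keep only Falco events at or above a chosen priority so the
# operations team is notified about the unusual activity Falco detects.
# Lower index = more severe.
FALCO_PRIORITIES = ["Emergency", "Alert", "Critical", "Error",
                    "Warning", "Notice", "Informational", "Debug"]

def filter_events(events: list[dict],
                  min_priority: str = "Warning") -> list[dict]:
    """Return events whose priority is at least min_priority."""
    cutoff = FALCO_PRIORITIES.index(min_priority)
    return [e for e in events
            if FALCO_PRIORITIES.index(e["priority"]) <= cutoff]
```

In practice this filtering would live in whatever consumes Falco's output, for example a sidecar shipping events to a log aggregator.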
D
You can send those events to any log aggregation tool, for example Splunk or any other tool, and then inform the operations teams and take remediation actions on top of that.
D
A few other components here: we talked about Parameter Store to store the sensitive information and configurations, and the other compliance requirements we are addressing using AWS IAM, AWS CloudTrail, and Config, so I'm just calling those out.
D
So that's the entire flow of this architecture, and we talked about giving a single-pane-of-glass view of the security. This is how you will see it: for example, if any vulnerabilities are identified in the container, you will see ECR, the first one; then, if any vulnerabilities are identified by the SAST tool, which is Anchore or Snyk, it reports them there.
D
Similarly, the DAST tool, which is OWASP ZAP in this case, reports here, so that we have everything in one place. And you can extend this to take automatic remediations: we can call a Lambda function from here saying, if this is the CVE that I got, this is how I want to address it, or this is how I want to patch. So you can take it to the next level and remediate automatically.
D
You can dig into those details and see what vulnerabilities there are, what information each has, and all that. So, does the solution cover the entire security that we are looking for in the software factory setup? No, because there are so many other components. Many of the projects the CNCF is working on apply here, so we can extend the solution to bring in artifact signing and verification, and enforce policy as code using OPA and Gatekeeper and all that. And based on the recent cybersecurity executive order,
D
we need to create SBOMs for government workloads, so you can integrate open source tools to create SBOMs and analyze those SBOMs. We also need to include metadata attestation and verification, as in the in-toto project, so that can also be an extension to this one, and we can integrate automatic threat detection with the other solutions out there. So these are the ways we can extend these solutions and bring in some of the goodness that is happening in the CNCF,
D
with projects like I mentioned, in-toto or OPA and all those. And also, from our side, we can bring the customer experience back as feedback to these projects, to see how to take them to the next level; that is an option there. Then, these are some of the common challenges we normally hear from customers when implementing DevSecOps.
D
Basically, if you are implementing DevSecOps for government workloads, then you need to achieve ATO, authority to operate, and stay continuously within that authority to operate. That is not an easy thing: for example, FedRAMP High, or DoD SRG impact levels four, five, or six. Those are not easy to implement, so that is the big basic challenge we see when customers are trying to do that. And again, we all know there are dozens of tools out there when implementing these factories.
D
The biggest challenge is how to stitch all these tools or frameworks together and build an end-to-end software factory that covers all the requirements of the individual organization. That's not an easy thing to do: it requires a lot of skill sets and a lot of knowledge in different areas, security, dev and operations, and many other aspects. And unfortunately, there is no easy solution where you can say: okay, this is the solution, it does the entire end-to-end DevSecOps factory with all the security and compliance.
D
Unfortunately, today we do not have anything like that as a DevSecOps community, so hopefully in the future we will get there and help the organizations who are building DevSecOps to easily build those solutions. We talked about how it requires a range of skill sets. So, finally, I just want to call out that these secure software factories are very complex, and it's not easy to implement all the security and compliance.
D
So my primary goal here is basically how to make it simple, so that even small organizations and small teams can take advantage of these software factories and produce secure software at the end. Because if it is too complex, small teams and small organizations may not be able to take advantage of it, because of the resources available to them and the size of their teams, and then those organizations end up with software which is not completely secure.
C
So, again, from an admission controller perspective: is there one point of enforcement or decision for all these controls?
D
Yeah, if you see how we mapped these security controls, each of these is addressing certain security controls. You could call it incremental for the security in the pipeline: for example, each of these different tools is mainly targeting risk mitigation, how you produce a secure software product at the end. But the security controls are also needed for the security of the pipeline, and that is not incremental; it's one shot.
C
In terms of continuous compliance, like you said, continuous ATO: have you built any solutions like that, where you can provide that assurance to government entities?
D
That is a tough question. Let's say an organization is looking for continuous authorization, a continuous ATO: we can help them, but each organization is different, and how each organization wants to implement its DevSecOps is different. Once they automate all these security controls, for example security of the pipeline and security in the pipeline, then they have a higher chance of staying in continuous ATO. That's the only way you can assure your authorizing official.
D
There is no manual intervention: whatever we are doing in this pipeline is through automation and through different security gates. If you give that confidence and the documentation to the authorizing official, I think those organizations can stay in cATO without much effort every time they go through the review. But there is no one solution for that; they need to automate to the extent possible so that they can be in cATO.
C
Yeah, I was just trying to figure out if you have reference customers who have implemented it. Conceptually, theoretically, it is possible, but practically, what the challenges are, I think that's where enterprises kind of struggle.
B
So I have a quick question: could you go into more detail about the deployment step into production? You mentioned some manual approval, and then you're using the notification service to email. So what happens after that? Is there a trigger that fires once approval comes in? Does it trigger the build? Is that a manual step or an automated step as well?
D
For example, in this case, the approval email going out to the approver is an automated process. Let's say CodeBuild has completed all the security scannings and there are no vulnerabilities: then it automatically triggers an email using SNS. The approver can log in to the console, because to approve, he needs to see certain data. You can add some information in the email saying, okay, all these checks are good, now you can approve; or say 90 percent of the checks are good but 10 percent failed, you can maybe attach the report and say, do you still want to go ahead and approve, the risk is low. That kind of information you can provide. Once they log in to the console, they can see the data and approve or reject. Approving or rejecting is a manual step, because they need to review the data, and once they click approve, then again the next step is automated.
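The approve/reject decision described above is recorded against the pipeline's manual approval action. A sketch using CodePipeline's `PutApprovalResult` might look like the following; the pipeline, stage, and action names and the token are placeholders, and in the talk's solution the reviewer does this through the console link rather than programmatically.

```python
# Sketch of recording the reviewer's decision on the manual approval action.
def build_approval(summary: str, approved: bool) -> dict:
    """Build the `result` argument for CodePipeline's PutApprovalResult."""
    return {"summary": summary,
            "status": "Approved" if approved else "Rejected"}

# Recording the decision programmatically (assumes a pending approval token):
#   import boto3
#   cp = boto3.client("codepipeline")
#   cp.put_approval_result(pipelineName="devsecops-pipeline",
#                          stageName="DeployProd",
#                          actionName="ManualApproval",
#                          result=build_approval("All scans clean", True),
#                          token=approval_token)
```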
D
So it will go ahead and deploy to production, in this case using kubectl; in this solution we used kubectl, but you can use any other way of deploying.
D
I'm sorry, yeah. CodeBuild is a managed service from AWS; it's similar to Jenkins, just to give you something to connect it to. I compared it with Jenkins, but it's a completely managed integration service. What it does is bring up a compute instance under the hood, similar to a Jenkins worker node, and then run whatever build commands you provided. In this case, let's say you are creating a container image and storing it in ECR.
B
So when you say approve: I assume there's some kind of view. I haven't used CodeBuild myself, so I'm not sure, but is there a UI or something where you just click, hey, this is where the pipeline is at right now, you click approve, and it goes on to the deployment stage?
D
Yeah, CodeBuild does the scanning and sends the email; that's the end of the story for CodeBuild in this case. Then the approval email will have a link to the AWS console where you can see the pipeline activity, so they log in to the AWS console, see the activity, and then they can review and approve or reject.
D
Yeah, happy to answer any questions. I can share these slides as well; they have references to the blog. Please hit me up if you have any questions; I'm more than happy to talk to you.
D
It is public, so this is an open source project. Is that the question you are asking? Yeah.
A
Yes, okay, cool, on the AWS Labs side. Cool, got it.
D
Similar to TruffleHog, so you can look for all of that kind of information.
D
Yeah, and the next step, what we are trying to do, is the metadata analysis. Yep, please go ahead.
A
Yeah, hey, so the only other thing I was going to ask was on the use of Security Hub. What is the ability for you to put these custom results into Security Hub? Do you happen to be able to show what that looks like?
D
Yes, so this is the Security Hub console; it's like a security posture management service. Let's say you have AWS accounts: by default, Security Hub gives you the capability to enable checks such as the PCI standard in your account. It looks at multiple data sources behind the scenes, looks for vulnerabilities, and reports them here by default if enabled. On top of that, we can do custom integrations, like I did here: I'm sending the vulnerability reports from my vulnerability scanning tools to Security Hub.
D
So that's another extension you can do, and Security Hub also gives you additional integrations with some tools; some of them are open source, some of them are partners. For example, you can go there and just integrate, saying, I want to receive all the security vulnerability findings from a given tool, and as long as you give the endpoint, Security Hub can go and fetch all the vulnerabilities and show them here.
A
So, real quick: these insights, they're not actually alerting on the fact that you have vulnerabilities, because that wouldn't... this is so that you can actually reference some sort of artifact that you've stored via Security Hub. Is that correct?
D
Yes, I mean, if you push it here, you can dig down into each of those vulnerabilities and see the details, and if you want to alert, you can send an alert. And if you want to take it to the next level and say, if this is the vulnerability I found, then I want to remediate it by calling a Lambda function or some other script, you can do that as well.
A
Could you also do this on the GuardDuty side, creating a custom finding in GuardDuty? You can create CloudWatch event rules that trigger off of GuardDuty findings, if you didn't want to set up an alerting system.
D
Yeah, this way it takes away the heavy lifting from the operations teams. If you have ten different security scanning tools, you don't need to log in to each of those tools to identify all the vulnerabilities, and then figure out how to remediate each of them. So this is one of the ways we can take the heavy lifting away from the operations teams.
B
One more question: are there plans from AWS to start integrating the signing, verification, attestation, all that kind of stuff and all those kinds of tools, into some kind of a feature in AWS?
D
AWS has a tool called AWS Signer, but it doesn't support all aspects of signing; for example, if you want to sign containers, it currently doesn't support that. I think that is on the roadmap, but I'm not sure at this point when it will be released. My plan is, if you are building solutions like this, whether it is an open source tool or an AWS tool, to use whatever fits well: for example, Sigstore we can use to sign the artifact, whatever the customers or the community feel comfortable with.
C
Right. No, this is great; I'm still absorbing all the details here.
D
I appreciate the opportunity here. Hit me up on LinkedIn or anywhere; I'm happy to answer any further questions. And we want to take this same solution to the next level by incorporating other CNCF projects, like in-toto and Sigstore, so that AWS customers can take advantage of things like metadata analysis as well. It's in the pipeline; hopefully we'll get there.
A
Yeah, and hey, Srini: is there an AWS Labs repository that you guys are going to release that would have, basically, an infrastructure-as-code way of setting this up? I mean, there's nothing preventing anybody from creating a reusable CloudFormation template to set up exactly this.
B
Cool, thanks again, thank you very much. I was just about to ask if anybody has any other final questions or anything.
B
All right. If not, thanks again. A couple of housekeeping things: I believe this is the last meeting for the year. I'm going to double-check with the leads, but I'm pretty sure this is the last meeting of the year. As far as things that are still in progress: I still haven't heard back on when the draft of the reference architecture is going out.
B
I believe the draft of the secure software factory reference architecture is supposed to go out either in the next week or two or very early next year, but I'm going to check back with Brandon and Andres on the status of that. Any other topics or anything else that folks wanted to bring up?
B
Cool. If not, since we don't have any more meetings, I want to wish everybody happy holidays and a happy new year. I'll hopefully see all of you in the new year, and you can all have 12 minutes back.