From YouTube: Developer Experience Office Hours

Description: Join OpenShift's Developer Experience experts for our regularly scheduled program filled with cloud native, Kubernetes, and OpenShift tips and tricks for developers.
C: All right, well, I guess good morning for those of you on the east coast. My name is Adam Kaplan; I'm the team lead for the OpenShift Build API team. We are in charge of the OpenShift build system, and today I'll be talking a little bit about my history of building applications, which hopefully might be entertaining for some of you on the Twitch stream or the TV stream. I'll then talk a bit about OpenShift builds and the evolution of that with this new project called Project Shipwright. Cool.
C: So I want folks to maybe think back ten years ago. How were you building your applications? What did it take for you to get your code into production?
A: That sounds very, you know, stereotypical for old-school setups. No offense if that's your current setup now.
C: Digital transformation is not easy. I remember at one point, my team was small, and there was another team that was a little bit bigger; there were about, I want to say, 15 or so engineers, and they had a code base shared with others. They stood up in the company meeting, excited to debut this new system they were using for continuous integration, called Hudson.
C: So next slide, Shoubhik. I want to flash forward then to about four years ago. I had moved on from that software firm to a startup that was building a mobile application. Our backend was Django on Python, we were deploying to Amazon EC2, and that setup worked, you know, okay. Releases, instead of building a JAR, went like this:
C: we just had a script on our AMIs that would check out the code from our production branch and then run the Django migration scripts, and then we were good to go.
C: But the CTO, who was a friend of mine and had brought me on to the company, approached us and introduced Kubernetes, which at that point was version 1.3, in part to deal with some of our scaling problems. He thought it could also help us out with actually shipping code. So the first challenge that I and our team had was to learn how to assemble this application.
C: We would then push that to a private Amazon container registry that we had set up, and then we would go ahead and deploy that onto our Kubernetes clusters; we actually had YAML templates in our Git repository. So even before GitOps was in our vernacular, we were already doing it, and yeah, it was awesome.
C: We were so proud of that system. That was good stuff, but getting it into production was actually not easy. We had several failed attempts to get it out there: we would try to deploy monthly, and it fails; a month later, it fails again. I think it took us almost six months to get it fully hardened and out there in production.
C
And
so
I
really
wish
that
we
had
known
about
openshift
when
we
did
that
so
with
openshift
as
first
off.
I
I
hope
folks
on
on
the
stream,
know
about
openshift
and
openshift
belts,
because
if
they
don't,
it
is.
D: I would just like to say that OpenShift builds are a major distinction between OpenShift and vanilla Kubernetes in some regards. Not very many Kubernetes platforms run their builds on the platform. Usually these builds are processed externally via some other system; it might be Jenkins or something else. More and more commonly you have Jenkins X and these newer-style build systems that run on Kubernetes, but then the security model can be kind of tricky, because often running...
D: ...these builds introduces root contexts and elevated privileges, and so you may not want to run builds on behalf of other users if you don't trust them. There's a whole lot of security that comes into play when you decide to run builds in a multi-tenant system, so this stuff is not just builds; it's layers of security expertise combined as well.
D: So this is really interesting stuff if you compare what's available in OpenShift versus vanilla upstream. You can do a lot of these similar things upstream, but with some added security risk.
C: And it's a good point that you bring up security, Ryan, because most things that assemble a container need at least to be running as root inside the container, which, for folks who have done work with OpenShift, can actually be a challenge: you don't get that by default.
C: Usually you have to provide some additional privilege to your workloads if you need them to run as root. OpenShift in the 3.x days, and even in the 4.x days, took care of that: it knows that builds are a sort of trusted system that can run your build workloads, and because of that the build system is fairly locked down. There are two main strategies you can employ with OpenShift builds, and I wish I had known this back then.
C: You can use the Docker strategy, where you provide the Dockerfile and OpenShift creates a container image from it; and then there's also the source strategy, where you don't need a Dockerfile. OpenShift takes care of the packaging via the source-to-image tool and the source-to-image ecosystem of builder images that Red Hat provides and maintains.
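For readers who haven't seen these, here is a minimal sketch of the two strategies as BuildConfig objects; the repository URL and image names are illustrative, not from the show:

```yaml
# Docker strategy: you supply the Dockerfile in your repo.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app-docker                             # illustrative name
spec:
  source:
    git:
      uri: https://github.com/example/my-app.git  # illustrative repo
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest
---
# Source strategy: no Dockerfile; s2i assembles the image
# on top of a maintained builder image.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app-s2i
spec:
  source:
    git:
      uri: https://github.com/example/my-app.git
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.8      # builder image, via an image stream
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest
```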
D: Yeah, and I think it's a good point to take a little deeper look into what we're actually doing in OpenShift in terms of builds. Generally, those builds may not be installing RPMs into the image. Usually there's some amount of standardization that your team lead or your security team, or someone, has done to produce a kind of base system image, and average developers are often running a build on a system image that's been provided.
D
Red
hat,
provides
a
good
stock
of
these
images,
but
your
team
lead
or
your
security
team
could
potentially
extend
those
images
in
some
way,
upload
them
back
into
your
openshift
system,
and
then
you
could
run
this
type
of
build
process
on
an
extended
image
instead,
so
there
are
still
ways
to
to
extend
these.
It
just
takes
some
extra
extra
effort,
absolutely.
D: The pattern everyone follows any time they're running an application build is that they're rarely starting from a scratch image. They're often using, at minimum, something like an Alpine image; they're picking something off the shelf to work with. So we follow that type of pattern for app builds, right?
C: And certainly, thinking back to my days at that startup, if we were going to deploy on OpenShift, we would have likely used the Python image provided by Software Collections. It's available as what's called an image stream on OpenShift, which basically gives you a shorthand pointing to those container images. They reside on Quay and Docker Hub, but when you install OpenShift they're also available on the container registry that lives in the cluster.
C
So
you
don't
even
have
to
go
out
of
the
cluster
to
get
your
base,
build
image
and
with
cluster
admins
security
teams
who
want
to
extend
those
images
or
create
their
own.
They
can
create
those
image
streams
with
those
more
hardened
or
more
specialized,
based
images
for
their
builds
and
then
share
them
through
the
openshift
image
stream.
So
it's
a
very
powerful
part
of
the
openshift
developer,
experience
and
ci
system.
If
you
will.
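As a rough sketch of how a team might share a hardened base image that way (the registry host and namespace here are hypothetical):

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: python
  namespace: shared-builders      # hypothetical shared namespace
spec:
  tags:
  - name: "3.8"
    from:
      kind: DockerImage
      name: registry.example.com/hardened/python:3.8  # hypothetical hardened image
    importPolicy:
      scheduled: true   # re-import periodically so builds pick up patched bases
```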
D: Moving on a bit: thanks for asking; we don't have anything in chat quite yet, but yeah, good content so far. This is particularly showing Java examples. I was just going to comment that ten years ago I don't even know if I really ran builds much. I mostly just checked in my source code, and the release team maybe had Hudson or Jenkins, or maybe something else, downstream from me. As a web dev, builds weren't even something I necessarily had a concept for. So yeah, major changes in the last ten years.
C: Yeah, that's a good point. When I was doing this, since we were a small company, we didn't necessarily have a release team in charge of assembling the code for us. We had a far worse case where, most of the time, we were building the code just on our laptops and then shipping whatever artifacts were on our laptops. We didn't have a trusted system to do the build for us. With OpenShift...
C
You
get
that
out
of
the
box
where
it
can
be
your
trusted
system
for
providing
the
artifacts
that
you
can
deliver
to
your
customers
or
you
can
deliver
it
to
other
teams
within
your
organization.
B
Yeah
also
I
want
to.
I
would
like
to
add
that
maybe
10
years
ago,
we
didn't
have
a
way
to
resolve
in
a
consistent
way.
Our
dependency
right.
We
had
to
code
resolve
the
dependency
locally,
then
build
the
artifact
ship,
the
artifact,
but
setting
up
the
environment
was
requiring
maybe
lots
of
time
and
the
environment
weren't
all
the
same
right,
so
it
works
on
my
computer
now
with
containers
doesn't
make
any
sense,
because
the
container
is
the
image
is
portable.
C: So I guess, do we want to move on? Yeah, so that's good. We want to fast forward now to today. The thing that's really changed is the plethora of tooling that's out there to build container images and, more importantly, to build these container images within a container itself.
C
When
openshift
3
came
out,
the
only
option
really
was
docker.
That's
what
openshift
ran
on
when
openshift
bills
ran,
they
actually
needed.
They
talked
directly
to
the
docker
socket,
which
is
risky
from
a
security
standpoint,
and
so
you
can
still
do
that
today,
but
that
is
widely
considered,
not
a
good
practice
when
you're
in
production.
C
So
today
we
now
have
two
very
good
options
for
building
container
images
within
a
container
itself,
buildup
which
red
hat
maintains
and
conoco,
which
google
maintains.
If
I'm
not
mistaken,
that's
correct.
Yes,
there
are
tools
to
assemble
code
without
the
docker
file,
so
we
have
source
to
image
and
also
cloud
native
build
packs.
A
lot
of
people
are
starting
to
use
cloud
native,
build
packs
these
days.
They
find
it
to
be
a
very
easy
system
to
do
to
get.
C: And there are many, many more; there's lots out there, and I'm sure there will be more that come out, either specialized for specific languages or taking a different approach to the security profile for building container images. And if you move on, Shoubhik, to the next slide: this is where we on the Build API team are starting to see the limitations of what we had done.
C
So
when
openshift
builds
started,
their
docker
was
the
only
option
we
provided
source
to
image
as
a
way
to
ease
development
teams
into
creating
container
images,
but
it
is
very
much
a
black
box
system.
It
is
you
only
get
what
openshift
provides
you
it's
very
difficult
to
extend
this.
There
is
an
option
in
openshift
called
the
custom
strategy,
but
there
you're
kind
of
on
your
own.
C
You
have
to
if
you're
gonna
pro
if
safe
you
wanted
to
use,
build
packs
today
on
openshift
the
only
real
way
you
can
do,
that
is
to
write
your
own
custom,
build
strategy
using
the
base
images
provided
by
whatever
build
pack
provider
you're
going
to
use
like
paquito
or
heroku.
C
You
then
have
to
get
a
deep
understanding
and
knowledge
of
how
those
work
inside
of
a
container,
what
steps
you
need
to
run
in
order
for
you
to
create
a
container
image,
it's
very
difficult
to
for
any
one
organization
to
do,
and
especially
if
this
is
not
an
organization's
like
core
competency,
that's
not
what
people
in
the
company
are
supposed
to
do.
If
you
want
to
ship
software
for
quickly
it's
it's
really
not
a
good
option.
So
yeah.
We
realized
that
we
needed
to
take
a
different
approach.
E: Yeah, I mean, users, customers: they should not be in the business of writing up tooling to do image builds with different strategies on their cluster. They're not in that business; they're trying to get stuff out for their customers, be it in telecom, be it hospitality, whatever it is. But building images, that's...
C: So with that, we decided to take a new approach, and we wanted to make this something that is open and flexible and, more importantly, can work on any Kubernetes distribution. And with that we have Project Shipwright. So, Shoubhik, do you want to take it from here?
E: Yes, thank you very much, Adam. Hey everyone, I'm Shoubhik; I work with Adam, and I'll quickly go over some of the "whats" of Project Shipwright. Adam has done a great job giving the history of OpenShift builds and why we are here today, extending the whole build framework in a way that it can run on any Kubernetes. So: Project Shipwright is effectively a framework that lets you build container images on Kubernetes with your build strategy of choice.
E
The
main
reason
that's
important
is
that
I,
as
a
user,
have
heard
that
you
know
there
is
this
new
cool,
build
strategy.
That
is
amazing
and
I
should
not
have
to
be
a
kubernetes
expert
to
be
able
to
get
that
working
because
my
company
is
not
in
that
business.
My
organization
is
not
in
that
business,
so
this
project
effectively
lets
you
first,
it
facilitates
you
to
choose
your
strategy
of
choice,
your
tool
of
choice
and
be
able
to
build
the
container
image
on
kubernetes.
E
What
that
means
is
any
kubernetes.
It
could
be
red
hat
kubernetes
or
it
could
be
any
other
vendor.
That's
totally
fine,
but
it
gives
you
that
assurance
that
this
has
to
work
on
not
just
open
shift
kubernetes,
but
on
any
kubernetes
and
interestingly,
this
is
powered
by
tecton
apis
under
the
hood.
What
that
means
is
like
a
lot
of
people
say:
hey
we've
got
tech
town
that
can
build
images.
I
said
yes,
it
can
build
images,
but
techtone
is
a
general
purpose.
E
Api.
It's
like
a
swiss
knife.
It
can
do
a
lot
of
things,
but
what
the
build
project
wants
to
do
is
it
just
wants
to
build
images
in
an
amazing
way
where
you
take
care
of
caching,
you
take
care
of.
You
know
different
layers
on
your
own.
You
you,
as
a
user,
should
not
have
to
be
exposed
to
tekton
apis
for
building
images.
That's
a
specialized
task
that
shipwright
helps
you
do
and
with
that
I'll
quickly
show
you
how
the
apis
look
like.
So
there
are
these
three
different
apis.
E: There's an API called Build, where you define what your build looks like (you see it on the left), and there's BuildRun, which means: hey, I've defined what my build looks like, and I want to execute an instance of that build; I want to run a build. That's what BuildRun is for. So these are the APIs we primarily expose; a minimal sketch of a Build and BuildRun pair follows.
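As a rough sketch, assuming the v1alpha1 Shipwright API of this era, with an illustrative repository, strategy, and output image:

```yaml
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
  name: fruit-app-build
spec:
  source:
    url: https://github.com/example/fruit-app   # illustrative repo
  strategy:
    name: buildpacks-v3                         # your strategy of choice
    kind: ClusterBuildStrategy
  output:
    image: quay.io/example/fruit-app:latest     # where the image gets pushed
    credentials:
      name: quay-push-secret                    # illustrative push secret
---
# A BuildRun executes one instance of the Build above.
apiVersion: shipwright.io/v1alpha1
kind: BuildRun
metadata:
  name: fruit-app-buildrun-1
spec:
  buildRef:
    name: fruit-app-build
```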
E: The goal is to make sure that these APIs are simple enough and, at the same time, powerful enough for you to build images with the same flexibility that we allow today, and the same flexibility that users would want when the question of different build strategies comes into effect. So yeah, with that I'll quickly jump into a live demo, where we'll see how these work in action. One quick thing before we do that, on installation.
E: Fantastic. You wouldn't be, you should not be, disappointed; let's put it that way. Okay, right! So how do you install it on OpenShift? There's an operator called Shipwright: you go to OperatorHub and install it, and it should show up like this. It's got a nice description, it tells you how to get your favorite build strategies out there (I'll show you them in a bit), and it gives you nice information about how to get stuff running.
E
So
I
quickly
get
into
the
demo
before
the
demo.
I
think,
let's
quickly
go
over,
how
can
we
build
images
using
which
strategies?
So,
if
I
go
in
here,
I
see
there
are
these
three
build
strategies
that
we
have
already
on
the
cluster.
You've
got
beta,
you've
got
belt
packs
and
you've
got
kaniko.
So
it's
a
nice
diverse,
set
of
companies
who've
invested
in
these
diverse
set
of
communities,
invest
in
these
and
let's
try
to
get
a
build
on
using
belt
packs
on
openshift,
using
shipwright.
How
about
that.
E
In
this
screen,
no
okay
yeah
so
so
so
build
is
effectively
if
a
user
goes
to
the
builder
github
project
say
hey,
I
love
builder.
How
do
I
use
pure
builder
on
openshift
without
having
to
know
that
s2i
is
a
thing?
This
is
basically
your
strategy
to
use
that
builder
and
similar
to
carnico.
Let's
say
you
used
google
container
tools,
you
love
kaneko
and
you
want
to
use.
Can
you
go?
The
strategy
exists
here.
E: Red Hat TV, for example: let's say I want to build an image and push it here, and what's special about this is that I want to use the buildpacks strategy for it. Let's see how this goes. I'm going to start by defining the Build; that doesn't actually mean it's going to run right now.
E: So what I'm going to do is go and try to run my build. Like I told you, it lets you go and create a BuildRun (and yes, we're going to build experiences around this, so let's use a form here). Let's call it, you know, "my build run," my build execution, and I'm going to use the specific Build that I created just now, which I called "fruit-app-build"; that's the Build definition we just created. And that's pretty much it.
E: Let me just remove this; see, you don't need it. So I just said I need a build execution using this Build, and I say: go ahead, let's create it. I go in here and it says it's pending, which means it's now figuring out how to get this stuff running.
E
It
went
to
running
awesome,
so
things
are
moving.
Let's
go
and
quickly
take
a
look
at
some
pod
logs.
The
build
is
running
I
can
effectively.
While
your
logs
show
up,
you
can
see
what
it
did
for
you
behind
the
scenes.
If
you
were
a
kubernetes
nerd,
it
basically
went
and
spun
up
a
bunch
of
containers
for
you
to
go
ahead
and
have
your
build
running.
E: Good enough; cool. So yeah, you can see here a bunch of container logs for each step, and it basically says: hey, I'm building, I'm adding some layers, and then I'm going to push it. Then it says: oh nice, it's completed. So I go here and see there's a successful build.
E
So
what
we
just
did
right
now
is
we
built
a
an
image
using
build
packs
on
openshift
using
shipwright
and
we
pushed
it
to
quay
using
a
very
simplistic
api
which
just
let
you
choose,
here's
what
you
want
to
build?
Here's
your
strategy
and
here's
where
you
want
to
push
to
and
that's
it
so
my
build
is
done
so
now,
I'm
going
to
show
you
a
few
more
interesting
things
right
now,
you've
successfully
run
a
build
using
build
packs.
Now
now
that
you've
heard
hey
bill,
pax
is
cool,
but
what
about
my
s2i?
E
I
love
my
s2i.
I've
been
doing
that
for
a
while,
I
hope,
you're
not
taking
that
away
from
me.
I
personally
love
s2i.
So
now
let's
go
ahead
and
do
the
usual.
E
Let's
try
things
so
what
I'm
gonna
do
is
I'm
gonna,
do
the
same
thing,
I'm
gonna
say:
hey,
let's
create
a
build,
let's
define
a
build,
let's
call
it
something
like
you
know:
let's
call
it
node
app
build
and
I'm
not
gonna
lie,
but
I
do
enjoy
pushing
things
to
the
internal
image
registry
and
openshift
because
it
just
removes
all
the
trouble
of
going
to
have
configured
an
external
registry
access,
so
I'm
gonna
say
hey
with
shipwright,
I'm
going
to
do
s2i
and
I'm
going
to
continue
pushing
to
the
internal
registry
so
that
if
you're,
a
lover
of
the
internal
registry
image
streams,
nothing
changes
for
you.
E
So
I'm
going
to
do
something
like
let's
say
push
my
image
to
the
internal
registry
and
why
not?
So
what
are
you
going
to
build?
I'm
going
to
build
a
software
collections,
node
application?
Let's
say
I'm
going
to
use
and
then,
since
I
chose
s2i
if
you're
having
deja
vu,
you
know
that
you
need
to
specify
your
builder
image.
So
I
just
go
ahead
and
choose
the
node.js
builder
image.
The
api
lets
you
do
that
this
is
not
a
docker
file.
E
Let's
get
rid
of
this
and
you
can
see
one
more
thing
which
I'm
not
going
to
demo
it,
but
you
could
potentially
say
hey
after
doing
the
build
here.
Is
my
runtime
base
image,
put
your
artifacts
in
a
lean
ubi
image.
If
you
want
to
the
api,
supports
it
out
of
the
box,
so
you
could
do
all
that,
so
we
fill
up
fill
out
a
bunch
of
forms.
Let's
look
at
the
good
old
yaml
view.
So
we've
said
that
we
want
revision
master
of
this
report
to
be
built.
E
We
want
it
to
be
pushed
to
internal
registry,
which
means
you
could
do.
Of
course
you
should.
I
showed
you
queer,
I
o,
but
I'm
going
to
show
you
internal
registry
now
and
then
this
is
what
you're
building
so
and.
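A rough YAML sketch of the Build being assembled here, again assuming the v1alpha1 API; the repository, builder image, and registry path are illustrative stand-ins for what the form filled in (the strategy is the namespaced one added a moment later):

```yaml
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
  name: node-app-build
spec:
  source:
    url: https://github.com/example/node-app   # illustrative repo
    revision: master
  strategy:
    name: source-to-image-redhat               # namespaced strategy, added below
    kind: BuildStrategy
  builder:
    image: registry.access.redhat.com/ubi8/nodejs-14   # illustrative s2i builder
  output:
    image: image-registry.openshift-image-registry.svc:5000/demo/node-app:latest
```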
E: This one is cluster-scoped; let's just call it... oh, I think I missed a step. Yeah, one thing I didn't show you while I went to do that (and you didn't even ask me; that's not fair) is that there is no s2i strategy here yet, right? So how the heck am I going to build with s2i? Let's go and add the strategy; that's not too hard.
E: If I love s2i, I could give it to the whole cluster; but let's use a Build with a build strategy that's only available in this namespace. Previously, whatever you enabled was available cluster-wide; now you can choose a specific strategy for your own namespace only. And, interestingly, if you look, this strategy uses the Red Hat-supported s2i image as well, so it works with pulling from the Red Hat registry, without having to disrupt things that should not be disrupted.
E: So, awesome: I have my build strategies. I have three ClusterBuildStrategies, and I've added a namespaced build strategy called "source-to-image-redhat", and now I'm going to continue by registering a Build using it. To quickly recap: we said we want to build this Git source, push it to the internal registry like the good old days, and use the "source-to-image-redhat" strategy based on the upstream project. That's a lot of information in a short sentence. Let's go and register it; so, I registered the Build.
E: It says: hey, your Build definition looks great, which means now it's time to go ahead and run a build. We've called this "node-app-build". I'm going to do something very simple: let's go to BuildRun, create a BuildRun, and run my node-app-build. I just give it a very basic name, I say that it refers to a specific Build definition, and that's it; I'm not going to say anything else.
E: In my YAML view I'm just going to keep it very simple, so that it's easier to convey what I'm saying. So yeah, I'm going to do a BuildRun now for this, using s2i.
A: ...able to answer them as we're going here. Does Shipwright support building from local? Does it support strategy detection, or do you need to know the strategy to use up front? What about security: who controls where and how the builds run? What about concurrency and parallelism? There's a lot of questions here; Ryan, help me.
C: We are working on a proposal right now to hammer out a command-line interface, starting simple, but one of the main use cases we want with the command line is to eventually support building from local source and pushing it into your cluster to be built.
E: Right, and to quickly add to that: you can define a build strategy which does not ask for anything privileged, and you can have a build strategy that asks for something privileged. So if you happen to be running on a Kubernetes cluster where they allow you to run privileged stuff, go do whatever you want; but you have control over your build strategy, and you can modify your build strategies: hey, I don't want privilege, or hey...
E: So we've kind of democratized this a bit, to ensure that you choose the level of security you want. Of course, when we ship OpenShift builds, we're going to ship with strict levels of security; but if a vendor wants to take Shipwright and run it with low privilege, that's the vendor's concern. There are going to be general recommendations to, of course, run as non-privileged as possible (the community is going to recommend that), but vendors can make their own choices.
C: And you can use something that doesn't even run as root; it could run as any UID. If it only requires a certain small set of SELinux capabilities, you can totally do that too.
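To make that concrete: in the v1alpha1 API, a (Cluster)BuildStrategy is essentially a list of build steps, each an ordinary container spec carrying its own securityContext, so the strategy author decides how much privilege is requested. The skeleton below is illustrative, not a real published strategy:

```yaml
apiVersion: shipwright.io/v1alpha1
kind: ClusterBuildStrategy     # use kind: BuildStrategy for namespace scope
metadata:
  name: example-unprivileged   # hypothetical strategy name
spec:
  buildSteps:
  - name: build-and-push
    image: registry.example.com/tools/builder:latest  # hypothetical builder image
    command: ["/usr/local/bin/build"]                 # hypothetical entrypoint
    securityContext:
      runAsUser: 1000      # any non-root UID
      privileged: false    # nothing privileged requested
```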
D: So you could go anywhere from fully locked down to "build anything you want, all your dreams come true," and anywhere in between. Yes, awesome. I had one other question, and we can handle it later; it was more architectural. There were a ton of screens...
D
You
had
that
were
showing
a
tab
for
like
build
run
and
a
tab
for
like
there
was
a
bunch
of
tabs
in
here
inside
shipwright,
and
I
was
just
surprised
to
see
all
of
these
tabs
all
instances
build
run,
build
build
strategy.
This
seems
like
a
lot
of
ui
for
an
operator
to
bring,
and
I
would
be
curious
to
hear
more
about
that
whenever
it
fits
into
the.
E
Yeah
sure
I
I
can
quickly
address
that
so
so
this
is
effect
so
currently
ship
pride
doesn't
have
any
openshift
specific
ui
at
all.
So
this
is
effectively
us
ensuring
that
the
stuff
the
project
does
shows
up
fine
and
open
shift,
but
there
hasn't
been
any
first
class
integration
of
ship
riot
into
openshift.
Yet
so
in.
E: Yeah, and from an OpenShift perspective I'll quickly add that we will invest in an experience that is more first-class and abstracts things even further from a UI perspective; what you saw is from an upstream project perspective.
E
Cool
awesome
right
so
quickly
doing
a
recap
of
the
build
we
just
ran,
so
we
ran
an
s2i
build
and
we
just
did
a
few
typical
things
with
that.
We
pushed
it
to
an
image
stream
called
this
and
then,
while
we
were
talking
what
we
effectively
did
is
we
went
and
deployed
that
app
using
the
image
stream
that
we
just
created.
So
we
just
built
this
image
and
we
deployed
it
using
the
openshift
import
flow
effectively.
E
You
can
use
the
image
stream
that
you
pushed
using
shipwright
and
openshift
the
usual
way.
You've
used
images
or
image
streams.
No
surprises
no
unnecessary
disruptions
there.
E
So
with
that
I'll
summarize
and
hand
it
over
to
talk
about
them
to
what's
talk
what's
next
so
to
quickly
summarize
this
build.
This
brings
in
a
few
apis,
but
the
ones
which
you
primarily
should
be
should
care
about,
are
basically
the
build
strategy,
build
and
build,
run,
build
strategy
and
testable
strategy.
This
lets
you
figure
out.
What's
in
cluster
scope,
what's
name
space
scope,
it
gives
you
that
flexibility
and
then
the
rest
are
to
get
your
bills
working
with
that.
D: ...hook in there, or anything that we can use?
E: Right, so there's one thing that we are working on right now; that's a good question. If you are a traditional Java shop and you are effectively pushing artifacts, like JAR artifacts, to Nexus, and you want to build images out of them, we're working on supporting that as well. That means you don't necessarily have to start from source code; you could start from a JAR and convert that into an image.
E
That's
a
pretty
popular
use
case
as
well,
so
you're
working
on
this
upstream
once
the
proposal
goes
in
code
should
show
up
and
support,
show
up
as
well.
E: That's right, like binary builds, in a way. If there are strategies that do not support that out of the box, that's a different thing, but it's going to give you a unified API to say: hey, I want this binary build with strategy X; and if the strategy supports it, it will just work. Nice.
E
To
add
to
that,
I
think
there
was
a
question
on
detections.
I
think
right
now
the
user
has
to
make
a
choice
on
the
build
strategy,
because
it's
more
of
a
conscious
choice,
you're
making
between
kaneko
or
build
or,
let's
say,
bill,
pax
or
s2i.
E: Yeah, I think so. Right now, from an upstream project perspective, we haven't put those knobs in place to make that configurable; but the general recommendation I give to any vendor who would be interested in adopting this (and we have a vendor who's actually adopted this in their beta site already, even though the project is pretty new) is that you can always build those knobs out on your side of the tooling when you're adopting it, and that's been done already.
E
So
it's
it's
not
something
that
we're
recommending,
but
it's
awake
thing
so
already
there
is
an
there
is
an
adopter
called
ibm
code
engine
who's
using
this
well.
D: ...rather than through this interface, particularly. I really like the simplified Build and BuildRun; it reminded me of Tekton, but even more slimmed down, I guess. So it seems like a relatively easy adoption curve for developers who would like to run builds on Kubernetes. So yeah, good. Correct.
E
So,
since
you
mentioned,
you
know
some
of
the
new
capabilities,
so
just
a
quick
shout
out
to
the
audience
here
we
have
the
shipwright
project.
This
is
on
github.
E
There
is
an
enhancement
proposal
process
which
is
effectively
tell
us
why
you
want
something
and
give
us
some
more
details
and
the
community,
which
is
a
growing
one,
will
effectively
help
you
navigate
through
that
and
get
it
accepted
and
have
somebody
work
on
it.
It
could
be
you
or
it
could
be
somebody
from
the
team.
C: Speaking of contributing, the only things I want to add: our future course includes what we mentioned before about the command-line interface. Also, Shoubhik showed you deployment by installing via OperatorHub; we have a script in the repo that you can use to install, and we're also working on making this available via a Helm chart, so folks can choose how they install Shipwright builds. There's also a docs website.
C
So
for
folks
who
are
non-coders
but
want
have
technical
writing
skills
so
want
to
help
us
out
that
website
is
in
progress
and
we
in
terms
of
feature
sets
that
we're
working
on
one
of
the
ones
that
probably
haven't
been
mentioned,
but
one
that
we'll
be
keen
on
adopting
is
integrating
with
tekton
triggers
so
folks,
you
know
when.
A: Thank you both so much. So, we've dropped the links to the GitHub repo; where else can people go to find out more info? Is it just GitHub right now, or do you have a doc site or anything?
C
The
doc
site
is
pretty
bare
right
now.
Github
is
probably
the
best
resource
we're
working
on
getting
shipwright.io
up
and
running.
I
saw
that.
A
That
yeah
yeah
yeah,
getting
the
whole
thing
up
and
running,
is
tough
for
yeah
like
open
source
projects
start
in
github,
but,
like
you
have
to
have
a
website.
Nowadays,
you've
got
to
have
contributing
guides.
You
got
to
have
all
kinds
of
structure
and
governance
and
compliance,
and
all
this
other
stuff
just
to
even
get
going.
It
feels
like
so
huge
kudos
to
y'all
and
logos,
as
brian
points
out
and
dark
mode,
as
jorge
would
mention
so
like
the
the
amount
of
effort
that
goes
into
just
starting.
A
An
open
source
project
nowadays
like
if
you
want
people
to
take
it
seriously,
is
very
high
in
my
opinion,
and
you
all
are
working
through
it
and
you're
kind
of
showing
us
right
now
live
like
what
it
takes
to
get
something
like
this
up
and
running
right.
Like
jorge's
question
about
like
do.
I
have
to
do
this
every
time
you
know
like
y'all
are
answering
that
question
in
a
future.
You
know
build
or
future
release.
Sorry
technically
a
build,
I
guess
so.
Yeah
the
the
the
the
feature
planning
looks
good.
C: If you look at our issues, one of the issues on the project is: ultimately, we want to be able to use this to build the thing itself. We want to eat our own dog food.
C
That
is
if
we
can
get
get
to
that
point,
where
yeah
just
travis
launches
a
kind
cluster
with.
C: ...Shipwright and Tekton installed, and it builds the Shipwright images and then deploys them onto that same cluster, or even a different kind cluster that's running.
E: And that should work now, to be honest; you just have to get it wired up.
C
I
think
yeah
yeah,
okay
and
it's
cool.
I
don't
know
if
folks
saw
the
some
of
the
videos
that
came
out
of
kubecon
eu.
There
was
a
lot
of
people
who
were
demonstrating
how
they
are
using
kind
and
other
tools
to
do
their
ci
cd
on
kubernetes.
C: cdCon actually starts tomorrow; that's the first one. I don't know if folks here on the stream know about the CD Foundation: it is a sister organization to the CNCF, and their first virtual conference is going to be tomorrow. It is free, and I will be there in the chats. We won't be doing any Shipwright conversation there specifically, but I personally am excited to see what other people are doing with their own CI and CD.
A
Yeah,
no
communities
for
some
reason:
twitch
is
botching
the
cd
foundation
link,
but
whatever
it's
there,
you
can
go
to
cdcon
tomorrow.
Register
is
registration's
still
open.
I
believe
yeah.
A
Yeah
so
like
donating
your
money
to
the
ticket
like
it
goes
to
a
good
cause,
you
will
learn
something
from
that
experience.
I
promise
there's
a
lot
of
friends
of
mine
that
work
in
the
cd
foundation
right,
like
I'm,
a
cncf
ambassador
cd
foundation
has
their
own
ambassador
program
like
there
are
friends
of
mine
that
have
that
overlap,
they're
very
brilliant
people.
You
will
learn
a
lot
by
going
to
cdcom
tomorrow.
I
promise
you
that.
C: So there's certainly a goal for the project to submit it to either the CNCF or the CD Foundation as a sandbox project. There's a lot of work you need to do to get to that point, as Chris had mentioned, so we're working on that, and we need to come to an agreement as to where we want to land. But ultimately we want this to be in a vendor-neutral location, with a good governance structure in place, so that it can live on as a full-throated...
A: ...open source project. And from a technical perspective, if you're dogfooding yourself, you're there; once you get to that point, you're ready. It's just dotting all the i's and crossing all the t's to figure out whether you want to go into incubating or the sandbox, or wherever you want to land inside that ecosystem, inside the CNCF, which is a process in and of itself. So yeah, if folks have questions, feel free to reach out to me about that; I can answer them offline.
D: So, Adam, Shoubhik: did y'all submit anything to KubeCon for this fall?
C: Certainly for the next one; the next KubeCon would probably be EU on the calendar, or maybe...
D
Or
try
to
keep
up
or
or
people
from
the
us
won't
be
allowed
because
we're
still
banned.
D: If you're looking to be a contributor in a new region, look us up on GitHub; we'd love to have your feedback. Thanks, all, for attending today. Thank you to Adam and Shoubhik for the excellent demo.
D
Natalie
for
setting
up
this
topic,
yeah
thanks
and
thanks
to
the
chat
everyone
in
chat
for
all
the
great
questions.
A: Thanks for coming on, yeah. So up next on OpenShift Commons briefing we've got... oh, my calendar's messed up; wrong tab. What are we doing next? I'm sorry, I always do this.
A
Talking
about
security
for
cloud
packs
and
assist
flow,
deep
dive,
the
one
and
only
kirsten
newcomer
from
red
hat
will
be
there
as
well
as
an
ibmer
kirsten
is
a
friend
of
mine
here
at
red
hat
and
she
is
a
security
genius.
So
please
stick
around
for
that.
A
So
yeah
thank
y'all,
we'll
catch
you
next
time
and
as
always
check
out
openshifttv
for
the
latest
and
greatest
subscribe
to
the
calendar,
and
you
know
what
give
openshift
a
try
if
you
haven't
yet
you
might
actually
enjoy
some
of
the
experiences
we've
put
together
for
you
all,
including
the
new
assisted
installer,
which
is
available
at
openshift.com.
J: Hello, and welcome to another OpenShift Commons briefing. Today we have Sridhar Muppidi from IBM, as well as Jose Ortiz from IBM, and Kirsten Newcomer, who is our Red Hat expert in security. They're here to talk about OpenShift security and Cloud Paks, and a deep dive into SysFlow. So take it away, Sridhar. Thank you.
L
Thanks
karina
good
afternoon
and
good
day,
everybody
thanks
for
joining
today
we're
going
to
talk
a
little
bit
about
a
the
breadth
of
what
it
takes
to
do:
security
in
a
hybrid
multi-cloud
but,
like
karina,
said
we're
going
to
focus
on
two
key
topics
to
get
to
the
level
of
detail.
L
So
thank
you
jose
and
kirsten
for
joining
me
in
this
discussion.
So
if
you
move
to
the
next
slide,
I
think
security
continues
to
be
a
key
inhibitor
for
a
number
of
our
clients
who
are
moving
to
cloud
on
some
way,
shape
or
form
right.
L
One
is
coming
around
the
fact
that
how
do
we
make
sure
that
security
is
not
an
inhibitor
in
terms
of
you
know,
running
with
the
pace
of
the
transformation,
whether
it
is
trying
to
modernize
an
application
or
lift
and
shift
or
combinations
thereof,
and
then
the
second
type
of
use
cases
are
coming
from
more
of
the
security
persona
that
is
looking
at.
You
know
how
do
I
keep
the
the
bad
guys
out?
How
do
I
demonstrate
compliance?
How
do
I
let
the
good
guys
in
in
a
manner
that
doesn't
disrupt
everybody
right?
L
So,
if
you
go
to
the
next
slide,
you
can
clearly
see
the
two
personas
that
weigh
into
security
heavily.
We
call
it.
The
shared
responsibility,
josh
and
jane,
are
our
personas
josh.
Is
our
quintessential
engineer,
developer
line
of
business
architect
type
of
persona
that
is
more
closely
aligned
with
the
transformation
wants
to
leverage
security
for
sure,
but
more
focused
on
the
rapid
business
transformation
right?
L
He
wants
to
ensure
that
he
can
get
through
security
capabilities
in
an
automated
manner
very
quickly,
so
that
the
application
he's
trying
to
go
live
is
secure
from
a
compliance
perspective,
allows
people
to
log
in
be
able
to
demonstrate
an
ongoing
compliance
as
well
as
keep
the
bad
guys
jane,
on
the
other
hand,
sits
in
the
more
predominantly
the
cso
or
the
I.t
line
of
side
of
the
business
focused
on
making
sure
that
the
entire
enterprise
is
secure,
not
just
josh's
line
of
business,
but
the
entire
enterprise
right
so
she's
more
interested
in
making
sure
that
any
application
which
is
going
out
to
public
is
also
demonstrating
compliance.
L
She
doesn't
necessarily
have
all
the
skills
in
the
team
related
to
the
application,
but
she's
trying
to
ensure
and
deal
with
the
large
volume
of
applications
that
are
going
online
and
each
one
of
those
has
to
be
appropriately
secure.
L: If you look at DevOps, or DevSecOps, there are a variety of different variations of this picture; some have a lot more detail and some have less. But the idea we're trying to show here, if you take a very simple version of the loop and infuse it with security, is that it's not that Josh develops an application and throws it across the fence for Jane to protect; it's a continuous integration that needs to work together, so that security runs throughout the life cycle of DevOps.
L
So
if
you
go
to
the
next
slide,
you'll
see
what
we
mean
by
that
right.
It
actually
starts
with
the
whole,
the
the
non-technical
side,
the
people,
the
process,
the
the
cultural
aspect
of
it
to
say
how
do
I
make
sure
that
we
have
the
the
the
discipline
to
be
able
to
think
of
security
right
from
the
get-go
right
being
able
to
shift
left,
not
just
in
tools
but
also
in
thinking
as
you're
developing
an
application
as
you're
planning
an
application?
L: In some cases you do a good job of coding the application with best practices; in some cases it's through process and configuration; in other cases it may be via controls. It's good to have those up front, so that throughout the rest of the life cycle you can start instrumenting security as needed. For example, as we go beyond the planning phase to the coding phase: how do you leverage the best practices of secure application development? You'll see some of that later today as well.
L
Okay,
providing
that
automation
for
testing
all
of
those
are
something
that
the
developer
community.
The
line
of
business
community
has
to
think
about
security
tasks
that
need
to
be
baked
in
as
a
part
of
the
devops
life
cycle
as
a
part
of
the
cicd
pipeline
versus
you
know,
trying
to
look
at
it
as
a
gate
right,
a
gate,
as
you
know,
tends
to
be
backed
up.
L
On
the
other
hand,
if
you
look
at
it
more
like
guardrails
all
along
the
way,
you
are
cruising
making
sure
that
you're
not
going
beyond
the
guardrails
but
you're
keeping
up
the
speed,
and
hence
you
know
making
your
business
happy
on
the
other
side,
if
you
look
into
jane's
world
she's
more
responsible
for
the
operations
right
once
the
application
is
running.
How
do
I
make
sure
that
the
right
person
or
the
application
is
coming
in
for
the
right
set
of
data
under
the
right
conditions?
L
How
do
you
ensure
that
you're
monitoring
everything
for
detecting
any
anomalous
activity,
anomalous
behavior
and
be
able
to
do
that
on
a
consistent
and
a
continuous
basis,
so
that
you're,
detecting
threats
accurately
right?
You
don't
have
time
for
a
lot
of
inaccuracies
and
then,
unfortunately,
it
does
if
it
if
a
threat
does
show
up.
How
do
you
investigate
that?
Very
very
you
know
quickly,
accelerate
the
investigation
and
then
automate
the
response,
so
that
you
can
then
go
take
care
of
it
quickly.
A
good
example
could
be
hey.
L
You
know
I
need
to
go
scan
a
container
and,
and
maybe
in
the
process
it
was
not
scanned
and
or
maybe
in
other
ways
the
malware
was
installed
in
a
container.
That
container
could
be
hijacking
or
could
be
hijacking
as
an
account
taker
or
container
takeover,
and
when
you
detect
it,
you
may
need
to
drop
that
right.
Those
are
some
examples
of
what
goes
on
in
at
least
the
world
of
security
people.
L
Now
that's
just
basically,
you
know
how
we
look
at
you
know:
security
from
two
different
lenses.
Now,
from
the
combined
red
hat
and
ibm
perspective,
you
go
to
the
next
slide.
How
do
we
look
at
security
in
a
hybrid
world
right?
We
look
at
client
deployments
predominantly
as
hybrid
and
multi-cloud.
L
That
means
workloads
may
be
both
on-prem
as
well
as
in
the
cloud
and
we're
not
necessarily
limited
to
one
cloud,
but
it
could
be
multiple
clouds
actually
working
with
one
of
the
clients
that
has
segregated
the
application
into
different
workloads
and
and
running
workloads
where
it
makes
most
sense,
for
example,
in
amazon,
where
the
the
compute
is
is
much
more
efficient
than
maybe
storage
that
may
be
more
efficient
for
them
to
store
on-prem
or
in
a
private
cloud.
L
And-
and
you
want
to
do
anything
with
data-
you
may
want
cloud
pack
for
data
and
they
all
work
together.
So
we
live
in
the
world
of
security.
That's
where
you
see
the
cloud
pack
for
security,
which
is
providing
that
hybrid
and
multi-cloud
security
across
all
of
these
different
cloud
environments
across
all
of
these
different
domains
and
the
services
which
is
an
important
part
of
providing
expertise,
sits
on
top
of
it.
L
So
we
focus
on
three
specific
areas.
Right,
one
is
making
sure
that
the
security
is
provided
at
the
infrastructure
layer.
These
are
your
cloud
providers.
This
is
your
in
our
red
hat
and
our
fabric
at
the
core.
That
itself
needs
to
be
secure
to
ensure
that
you
can
allow
for
the
secured
devops
it
can
allow
for
a
secure,
ci
cd.
L
You
can
allow
for
some
of
the
secure
processes
right
from
a
development
point
of
view
and
then
comes
a
set
of
cloud
packs
on
top
of
it,
where
we
provide
security
built
in
to
these
cloud
packs
infused
into
the
cloud
packs,
so
they
are
protected
at
the
application
level
and
the
data
layer
right.
This
allows
you
to
ensure
that
the
applications
themselves
are
secure,
for
example,
single
sign-on,
or
being
able
to
do
adaptive,
access
or
data
activity,
monitoring
being
able
to
provide
capabilities
for
audits
so
that
you
can
demonstrate
compliance
and
threat
management.
L
All
of
those
kind
of
fall
into
the
second
layer
and
the
third
layer
is
a
security
across
hybrid,
which
is
where
we
work
with
the
the
provider.
We
work
with
the
you
know,
different
layers,
which
encompass
a
hybrid
multi-cloud,
very
specific
client,
so
that
we
can
then
pull
all
of
those
different
capabilities
together
into
cloud
packet
security
as
a
mechanism
to
not
just
manage
your
growing
threats,
but
also
protect
your
digital
assets
like
users
and
applications
and
data
etc.
Right.
L
So
that's
a
in
a
nutshell
of
how
we
look
at
security
from
in
red
hat
and
ibm
perspective
for
hybrid
and
multi-cloud
deployments.
L
So
today,
what
we're
going
to
do
is
we
will
deep
dive
on
two
specific
topics.
So
I
have
my
colleague
kirsten
who's,
going
to
focus
on
the
the
bottom
layer
from
an
open
shift
perspective,
and
then
jose
is
going
to
talk
about
the
the
second
layer.
In
terms
of
an
example
of
how
we
infuse
or
instrument
security
into
cloud
pack
or
the
platform
layer
so
that
you're
secure
by
design
in
many
many
cases
so
with
that
here.
F: Yes, all set. So, as Sridhar said, I'm going to talk about how we build security into OpenShift itself, to enable security for the applications and workloads that you might deploy on OpenShift, as well as for the Cloud Paks that IBM provides on top of OpenShift.
F
So,
let's
just
start
for
a
minute
by
talking
about
you,
know,
kind
of
quickly
the
value
of
containers
and
kubernetes.
I
imagine
many
of
you
are
familiar
with
these,
but
I
do
want
to
hit
them
for
a
reason
for
reasons
that
you'll
see
as
we
keep
going
right.
So
containers
are
very
popular.
I
don't
know
if
you
just
saw
my
cat
go
by,
but
containers
are
have
become
very
popular
because
they
make
it
so
easy
to
deliver
and
manage
applications
by
packaging.
F
The
system
dependencies
with
the
application
code
as
well
right,
they're,
portable
across
environments,
so
they
support
hybrid
multi-cloud
and
they
really
appeal
to
developers
for
the
control
that
they
provide.
There's
also
real
value
to
the
ops
team
as
well,
because
this
simplifies
the
whole
process
of
moving
from
dev
to
test
to
production.
It
simplifies
the
deployment
process
with
the
dependencies
bundled
and
packaged
together,
but
for
managing
containers
at
scale.
You
do
need
an
orchestration
platform
and
kubernetes
is
the
orchestration
platform
of
choice
by
it.
F
You
know
these
days
it
provides
scaling,
resiliency
h.a
for
the
application
platform,
as
well
as
for
the
cluster
itself,
all
sorts
of
things
there
next
slide,
if
you
would
so
red
hat
open
shift,
is
kubernetes
for
the
enterprise
we
start
with
kubernetes
and
rel
core
os
as
our
core
set
of
delivery
capabilities,
but
we
build
on
top
of
that
because,
when
you
think
about
some
of
the
differences
that
that
model
of
the
the
dependencies
traveling
with
the
application
package,
this
means
that
you
need
to
build
your
applications,
your
containerized
apps.
F
In
some
new
ways.
It
also
means
you
need
to
patch
your
applications
differently.
Right
best
practice
is
never
to
step
into
a
running
container.
You
always
need
to
be
re.
You
know
rebuilding
and
redeploying
when
patches
are
involved,
and
so
openshift
comes
with
security
built
in
at
at
all
layers.
We're
gonna
talk
more
specifically
about
that
in
a
minute,
but
also
with
tooling
that
developers
really
need.
F
So
that
includes
jenkins,
if
you're,
using
a
jenkins
pipeline
techton
for
a
next-gen
sort
of
cloud-native
pipeline
code,
ready
tooling
for
developers
is
available
with
openshift
and
then
a
whole
bunch
of
things
that
support
sort
of
the
more
cloud-native
solutions,
so
service
mesh
k-native
for
server-less
capabilities
and
lots
of
lots
of
language
and
runtime
support
for
enabling
micro
service
based
apps,
but
also
for
traditional,
more
more
traditional
architecture.
Apps.
F
If
you're
doing
a
lift
and
shift
openshift
includes
monitoring
logging
that
can
be
leveraged
by
the
application
as
well,
so
the
prometheus
instance
the
elastic
search
fluent
dn
cabana
instance.
All
of
those
are
available
to
support
the
app
dev
team,
as
well
as
the
ops
team.
Next
slide.
Please.
F
We
think
of
those
as
one
piece
and
so
we're
looking
at
the
whole
platform
in
that
case,
and
then
we
also
see
an
ecosystem
of
security
tools
that
are
particularly
good
at
securing
containers
and
kubernetes
or
enhancing
the
built-in
security
and
jose
and
srider
will
be
talking
more
about
one
of
those
tools
in
the
ecosystem.
F
So,
as
I
said,
best
practice
is
to
rebuild
and
redeploy
any
containerized
application.
So
when
you
think
about
that,
that
really
means
that
the
cicd
pipeline
for
your
containers
requires
a
more
automation
than
you
may
be
used
to
if
you're
working
with
a
traditional
app
and
one
that
is
perhaps
not
delivered
as
frequently.
F
So
if,
if
you
know,
when
you're
building
a
containerized
image
to
get
those
a
containerized
app
to
get
those
system
dependencies
you're
going
to
include
a
base
os
image
such
as
the
rel
universal
base
image,
and
so,
if
there's
a
cve
found
in
those
system,
libraries,
it's
not
in
your
application
code,
but
it's
in
those
system.
Libraries,
you
still
need
to
rebuild
and
redeploy,
and
so
that's
why
it's
so
important
that
we
think
about
all
the
automation
that
can
be
leveraged
for
a
true
devsecops
environment
right.
F
F: You need to be sure that you've got a trusted code repo. You will, of course, be pulling content down from external sources, such as for that base image, but it's best practice to have a registry on premises (it doesn't have to be on premises, but your private registry) to ensure that you're managing all container images before they're deployed.
F
You
need
to
think
about
ideally
automating
unit
tests,
code,
quality
security,
scans,
your
integration
test
and
openshift
comes
with
a
number
of
capabilities
that
can
help
you
there
code
ready
is
a
set
of
development
tools
that
again
are
available
to
you,
leverage
with
openshift.
They
include
ide
plugins
that
can
be
used
with
jetbrains,
with
eclipse,
with
visual
studio,
to
help
give
you
information
about
dependencies
and
potential
security
issues
right
as
you're
writing.
Your
code
quay
with
claire,
is
an
enterprise
registry
and
claire's
the
vulnerability
scanner
for
scanning
container
images.
F: If there are changes to external images that you've included in your custom container: you can leverage these capabilities in dev and test, of course, but on the production side in particular it's really important to disallow or allow access to specific registries. Again, a best practice is that in your dev cluster you might allow somebody to download from outside your enterprise, but in your production cluster, really, best practice is to limit access to just your private registry.
F: Security context constraints are a native feature of OpenShift; they're a Kubernetes admission controller plug-in. They ensure that, by default, no container images run with privilege on an OpenShift cluster's worker nodes. There are a lot of great features built into OpenShift to ensure not only that the platform is secure by design, but also that the applications deployed on the platform are secure by design. Next slide, please.
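On OpenShift, the registry restriction described above can be expressed in the cluster-wide image configuration; a minimal sketch, with a hypothetical private registry host:

```yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster             # the cluster-scoped image config is named "cluster"
spec:
  registrySources:
    allowedRegistries:      # image pulls are permitted only from these sources
    - registry.example.com  # hypothetical private registry
    - quay.io
    - registry.redhat.io
```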
F: So this is just an example. Security is where we're focused today, but all of this is really about supporting business value. DevOps is about delivering applications faster, so that the business value for your customers is available sooner; DevOps is about agility. Today we're talking about DevSecOps: security is always intended to be part of that DevOps pipeline; it's just that sometimes people forget. And so when we take a look at the value this can provide...
F: So again, if we come back to that first place where we laid it out: control application security, defend the infrastructure, and extend with the appropriate tools from the security ecosystem. This is just a one-slide summary of the majority of the security capabilities available to you with OpenShift; there's so much to talk about that we really didn't have time to get into everything.
F
Today
we
touched
on
many
of
them,
but
if
you,
you
know
have
any
interest
in
following
up
on
any
of
these
capabilities,
we
do
have
earlier
recorded
talks
and
would
be
happy
to
chat
with
any
of
you
so
jose.
I
think
or
sridhar
back
to
you.
L
Thank
you
kirsten,
as
you
can
see
very
rich
set
of
capabilities,
and,
and
that's
just
one
part
of
what
constitutes
the
infrastructure
right
and
if
you
take
this
open
shift
and
lay
it
on,
let's
say,
ibm
cloud
or
azure
and
amazon.
You
need
to
think
about
some
of
the
security
capabilities
at
that
level
as
well
right.
So
that's
what
we
look
at
when
we
say
security
at
the
infrastructure
level.
L
Now
moving
up
the
stack
as
you've
heard
from
me
cloud
packs,
provide
a
mechanism
to
deliver
a
business
function
right
so
what
better
than
instrumenting
those
cloud
packs
with
security,
so
that,
from
a
developer
perspective
from
a
business
translation
perspective,
you
don't
have
to
worry
about
too
much
of
these
capabilities
but
use
those.
You
know
built-in
features
as
a
mechanism
to
secure
your
workloads,
especially
regulated
workloads.
L
Now,
there's
a
number
of
topics
around
security
data
security
gain
access
management,
you've
got
privilege,
management,
threat
management,
as
kirsten
said,
there's
not
much
time
to
go
through
each
and
every
part
of
security
over
here.
But
what
we
want
to
do
is
double
click
on
threat
management
right
in
in
a
hybrid
multi-cloud.
L
If
you
look
at
the
the
next
slide,
you'll
see
the
the
typical
landscape
that
a
administrator
or
a
sock
analyst
has
to
deal
with,
especially
in
a
in
a
hybrid
multi-cloud.
L
So
we
used
this
as
an
example
today
to
talk
about
one
project
called
sysflow,
and
the
idea
that
we
are
trying
to
build
on
over
here
is
to
say
we
want
to
accelerate
the
security
maturity
through
open
community
right.
So
this
is
an
open
source
project
with
a
view
to
be
able
to
provide
that
detailed
telemetry
across
a
number
of
different
systems,
and
jose
is
going
to
give
a
lot
more
detail
on
that
and
show
you
how
you
can
use
it
today,
jose.
M: Hi, good afternoon, good morning; thank you, Sridhar and Kirsten. So I'm going to touch a little bit on what SysFlow is. SysFlow is an open source project; when we started it, it was focused on looking at telemetry information. Basically, you can think about it the same way you would think of NetFlow, where we were looking at raw packets being captured on the network and how they would flow.
M
The
teams
are
looking
at
files
and
processes
and
network
access
into
those
files
and
processes
and
how
they
could
basically
generate
a
similar
notion
of
a
flow
by
looking
at
the
data
on
the
telemetry.
So
we
started
at
that
in
the
basic
of
the
system,
and
then
we
started
looking
at
how
we
could
apply
those
topics
or
that
model
on
the
telemetry
into
containers
and
other
kind
of
workloads.
M
We,
I
also
have
matured
the
project
and
added
additional
capabilities
that
are
all
touching
a
little
bit
around
how
we
collect
the
data
and
then
do
what
the
team
is
calling
edge
analytics.
So
the
projects
serve
as
the
telemetry
portion,
how
we
will
capture?
How
we'll
define
that
information?
M
I'm
going
to
go
into
an
extension
of
the
project,
which
is
we
started
the
project
with
this
notion
of
the
collector
that
we
could
get
information
from
different
sources.
So
the
architecture
is
there
to
do
that,
and
then
we
had
the
notion
of
an
exporter.
So
as
we
collected
the
data
we
processed
it
and
then
we
generated
a
cis
flow
telemetry
as
an
avro
format
into
s3
storage.
M
We
also
basically
have
a
json
format
that
can
go
into
a
syslog-like
endpoint,
but
the
team
added
this
additional
capability
that
we
call
the
processor
that
start
giving
us
the
ability
to
do
some
additional
enrichment
and
processing
at
the
edge
where
we
might
obtain
additional
information
about
the
system
or
start
getting
into
other
areas.
Like
other
than
looking
at
the
telemetry
and
doing
some
analytics
to
detect
the
problems
we
could
potentially
start
getting
into.
M
Some
level
of
protection
react
to
those
changes
locally
or
try
to
prevent
those
actions
in
certain
cases,
so
this
is
kind
of
a
a
new
update
that
is
going
to
be
available
fairly
soon
in
the
community
project,
probably
end
of
this
month
or
sometime
in
the
middle
of
the
month.
As
we
said,
this
is
being
done
in
the
open.
This
is
just
was
an
internal
project
that
we
decided
that
should
also
go
into
the
open
source
project
to
start
augmenting.
M
As
I
said,
the
edge
of
telemetry
and
capability
in
the
system.
We
have
some
implementations
on
the
right
hand,
side.
This
format
is
open,
so
the
data
can
be
sent
to
different
implementations
on
the
right-hand
side.
We
have
our
internal
projects
to
kind
of
monitor
this
data,
but
this
left-hand
side
project
is
totally
done
in
the
open
as
an
open
source.
M
I'm going to go into some of the things we're doing with the project. As we said, the open source project is included as an operator in OperatorHub, so it can be installed on any OpenShift environment.
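As a rough illustration, installing an OperatorHub operator declaratively is just an OLM Subscription; the package name, channel, and catalog source below are assumptions for illustration rather than the operator's confirmed metadata:

```yaml
# Sketch: subscribe to an operator from OperatorHub via OLM.
# Package name, channel, and catalog source are assumptions.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sysflow-operator            # hypothetical package name
  namespace: openshift-operators
spec:
  channel: stable                   # assumed channel
  name: sysflow-operator            # hypothetical package name
  source: community-operators
  sourceNamespace: openshift-marketplace
```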
One of the things that Sridhar mentioned we're including in our Cloud Paks is the ability to build the telemetry in, with additional rules, into the SysFlow environment that is purpose-built for each of the Cloud Paks.
M
So we can understand application behavior, codify it into rules, and provide it to the system. It's the same operator: while it's included in the common services that you see there for the Cloud Paks, we're sourcing it from the same location, so it's not a double install or anything; we'll detect it and then we'll collect the data. And then we have systems on the far right. Here we have Cloud Pak for Security consuming the SysFlow data and helping the analysts process the information and do their job.
M
It's a very lightweight operator. It has one little component that will start sending data. The way to configure it is through the operand: you just tell it the endpoint that is going to receive the information. This one is configured for syslog: the host IP address, the protocol type (TCP, UDP, or TLS), and the port. Once it's configured, it will immediately start collecting data.
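A minimal sketch of what such an operand could look like as a custom resource; the API group, kind, and field names are illustrative assumptions, not the operator's actual CRD:

```yaml
# Hypothetical operand for the collector -- kind and fields are
# illustrative, not the actual CRD schema.
apiVersion: example.sysflow.io/v1alpha1   # assumed group/version
kind: SysFlowCollector                    # assumed kind
metadata:
  name: sysflow
  namespace: sysflow
spec:
  export:
    type: syslog
    host: 192.0.2.10      # endpoint that receives the records
    port: 514
    protocol: tcp         # tcp, udp, or tls
```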
M
The data in the syslog is going to look like this JSON record. When we go into object storage, we try to compress it for space, so it's going to be in the Avro format, which is documented in addition to the JSON format, and we have systems, like QRadar, that can read the information and present it to the user.
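For a feel of what one of these flow records carries, here is an illustrative sketch, rendered as YAML for readability; the field names are assumptions based on the description above (host, container, process, file, operation counts), not the documented SysFlow schema:

```yaml
# Illustrative flow record -- field names are assumptions,
# not the documented SysFlow schema.
record:
  type: fileflow
  host: node-1.example.com
  container: payments-api
  process:
    exe: /usr/bin/python3
    pid: 4242
  file: /etc/passwd
  ops: [open, read]
  numReads: 3
  numWrites: 0
  ts: "2020-10-01T14:03:22Z"
```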
M
And it's going to show the information. In our case, this is one of our systems that processes the information; we can show the processed data and the raw data as they come through to the system. So it's fairly easy to set up: this particular tool is able to receive syslog data once you configure it, and we can do additional event analytics in these particular tools that we have here. Sridhar, back to you.
L
Yeah, thanks Jose. One of my takeaways from SysFlow, for those of you who are security minded, is that it's like a poor man's EDR: it collects a lot of telemetry from containers, which are the endpoints these days. What Jose showed you is the mechanism for how you would configure it.
L
What we've done is put it in the Cloud Paks, so that every Cloud Pak has that by default, and then on the other side it could be a Cloud Pak or QRadar or Splunk or whatever that is leveraging that. Then, using analytics, you can do attack detection and some misbehavior detection at the workload level, because one of the new things is more on the behavior side. So you can definitely look at that.
L
You can look at some of the forensics, like what happened, and get into the threat hunting side. You can do some multi-modal telemetry: it's not just saying, hey, what happened with this file that was sent out or used, but correlating that with the process, correlating that with the network, and then good things can happen. And all of this is used not just to catch suspicious activities, but also, in some cases, to demonstrate compliance and enforcement,
L
basically. So we're very excited about that, and you can learn more about it at the URLs over here, along with what Kirsten was talking about in terms of the details on OpenShift.
L
So, as you can tell, these are two examples of our overall approach to security. Very similar to how we've taken an approach to threat management, being able to instrument something like SysFlow into a Cloud Pak,
L
we've taken a similar approach to instrumenting the Cloud Paks with identity and access management, with data security, and with some of the audit capabilities, so that you're now able to go and collect this information. That goes back to my initial point of managing threats better, as well as protecting digital assets like users, data, applications, devices, etc.
L
So one of the things that I want to highlight here is that, looking at these two layers, the infrastructure as well as the application platform, we bring it all together with enterprise security. That's where IBM Security comes into the picture: to be able to look not just at, let's say, Red Hat or IBM capabilities, but also Amazon and Azure and on-prem, etc., and bring it all together to provide that integrated, automated multi-cloud security management.
L
So with that, I am going to pause here to see if there are any questions. We've got about 10-15 minutes for any questions that we can deep dive into.
J
Awesome, thank you so much. That was great, a lot of really good information, and I know that I'm going to be digesting this for a while. We do have a couple of questions. First of all, what is the difference between the logging and audit data that's captured in OpenShift versus what's available with SysFlow?
F
That's a great question. If I could start and then maybe hand off to Jose for anything he'd like to add: I think it's appropriate to take a step back and mention the differences between vanilla Kubernetes and a distribution like OpenShift. Out of the box, Kubernetes does not come with logging, and there is some audit-level capability, but perhaps not everything that folks would want; that's part of what makes an enterprise distribution.
F
So by default, OpenShift audits Kubernetes API events, and we audit logins to the platform. OpenShift includes RBAC by default and integrates with an external identity provider. We also have audit on at the host layer by default, using auditd in RHEL CoreOS, and then we include an optional logging stack. Some customers use ours; some customers choose an alternative logging stack. If you were to work with vanilla Kubernetes, which of course some of you may be doing, you would just need to add your own logging stack there.
F
In OpenShift, the logging stack will capture log data for the platform; all of that is visible to the cluster admin. Application logging data that goes to standard out or standard error is also captured, and is visible to the application owner, the individuals who have permissions to access that application. And then we have a logging pipeline, which collects all of that logging and audit data and makes it simple to push it to the SIEM of your choice.
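As a sketch of that pipeline, OpenShift's cluster logging can forward selected log categories to an external endpoint declaratively; something along these lines forwards audit logs to a syslog-based SIEM. This is based on the ClusterLogForwarder resource from the cluster logging operator; the endpoint URL is an illustrative assumption, and the exact API may vary by release:

```yaml
# Sketch: forward platform audit logs to an external SIEM over syslog.
# Based on the ClusterLogForwarder API from the cluster logging
# operator; the endpoint URL is an illustrative assumption.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: siem
      type: syslog
      url: tls://siem.example.com:6514
  pipelines:
    - name: audit-to-siem
      inputRefs:
        - audit
      outputRefs:
        - siem
```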
M
Sure. One other thing that SysFlow does is look at what process is accessing what file, instead of just the raw system calls. We're getting all the system-level events from the kernel: what file got opened, how many reads and writes, and which process did what. It looks at all that data and simplifies it by saying this process had access to this file, or this network request triggered this process to open this file.
M
It's going to look at that telemetry data, which goes a bit deeper at the system level, and it's going to tell you on which host, in what container, what process, and it's going to give you that pedigree. Whereas, as Kirsten was saying, the audit and the logs at the Kubernetes level tell you what API was requested and who did what, when; that's typically what the audit and the logs are going to give you.
L
Good example, Jose, on file logging versus SysFlow; the actions and the behavior are an important aspect. As security folks, we are always hungry for more data and hungry for context. The more data we have, the more granular it is, and the more context we have, the better we can make that correlation and the better we can provide that level of accuracy. That's a key thing, so there are always going to be nuances around that.
J
And we have a quick follow-up; somebody's asking in chat: you mentioned that the SysFlow operator would be available for all six Cloud Paks. Can you advise when it will be in the GA version of the Cloud Paks?
M
So we're working on including it. There's a common layer for the Cloud Paks that we have built that we call common services. We're targeting our fourth-quarter delivery to include the SysFlow operator there, and then every Cloud Pak that consumes that common layer will have access to the SysFlow operator. So it would be in that 3.5 release; if anything gets delayed, then it's going to be in 1Q. But right now we're targeting and working towards a December-type time frame to have the operator included in the Cloud Paks.
M
Yes. The thing about the project is that it is going to be our upstream project, so we'll have one version that we'll put in OperatorHub for the community, and then we'll have a version of it that we associate with the Cloud Paks, which is included through these common services.
M
From a registry perspective, it will come down, and then that one, which has been vetted as a downstream version, will also be included in a way that you can get it directly into OpenShift. So we're going to have the downstream version available both straight into OpenShift and in the Cloud Paks, and then the community one is our upstream, which we will continue to update and which typically goes to OperatorHub.
J
Terrific, awesome, thanks. Switching gears a little bit, there's a question around SCCs and PSPs. For those of you who aren't sure what those are: SCCs are security context constraints in OpenShift, and PSPs are pod security policies.
F
Yeah, and just for anybody who needs a little bit more background: both of these are Kubernetes admission control plugins. OpenShift has had security context constraints since 3.0, which maps to Kubernetes 1.0, simply because our customers needed ways to really tighten security and manage, in a very declarative way, the security of pods that are deployed on the cluster. Any pod can request certain privileges as part of what's called a security context.
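For instance, a security context is just a declarative block in the pod spec, which an SCC or PSP then admits or rejects; a minimal sketch with illustrative names:

```yaml
# Minimal pod requesting privileges via its security context.
# The image and names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  securityContext:
    runAsNonRoot: true          # pod-level: refuse to run as root
  containers:
    - name: app
      image: registry.example.com/app:latest
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]         # drop all Linux capabilities
        readOnlyRootFilesystem: true
```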
F
So
so
again,
these
are
both
cube
plug-ins
to
the
kubernetes
admission
controller.
If
you're
familiar
at
all
with
the
center
for
internet
security,
kubernetes
benchmarks,
the
cis
benchmark,
recommends
turning
on
psps
since
that's
what's
available
upstream
sccs
have
a
few
more
capabilities
than
psps
do
upstream.
F
Psps
are
still
beta
in
upstream,
and
the
upstream
community
is
still
debating
whether
that's
the
implementation.
They
want
to
proceed
with,
or
or
really,
there's
starting
to
be,
some
momentum,
building
around
opa
gate
gatekeeper
as
an
alternative
approach
to
do
that
same
sort
of
thing
to
gate,
admission
based
on
policies
that
are
managed
by
the
administrator.
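For comparison, a restrictive upstream PodSecurityPolicy along those lines might look like the sketch below (a beta API at the time, and since removed in Kubernetes 1.25; values are illustrative):

```yaml
# Sketch of a restrictive upstream PodSecurityPolicy (beta API,
# later removed in Kubernetes 1.25). Values are illustrative.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities: ["ALL"]
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges: [{min: 1, max: 65535}]
  fsGroup:
    rule: MustRunAs
    ranges: [{min: 1, max: 65535}]
  volumes: ["configMap", "secret", "emptyDir", "persistentVolumeClaim"]
```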
J
Thank you, Kirsten. So, Sridhar and Jose, how are you using SCCs and PSPs in Cloud Pak for Security, or across all the Cloud Paks? I don't know if you want to take that.
M
That's a good question. As part of the IBM process, when we build our container software, one of the things that we have as a process is something we call security by design. We go through the same DevOps process that Sridhar was talking about, our planning and design and so forth, and one of the areas that we include as part of looking at our containers is how we are going to run them in Kubernetes.
M
We recommend that people use the restricted SCC, as provided by Red Hat, because that's going to give us the broadest deployment capability across customers. Then, based on that, we start looking at what capabilities might need to be defined within a particular container deployment and what additional security context needs to be defined for that deployment. So, as a practice, we look at the end-to-end aspects of the life cycle of the container and what we're going to be allowed to do, and by practice we try to stay restricted.
M
The next one that we look at is anyuid. Those are the typical ones, unless we start talking about components that do monitoring, or databases that need further access into the system. Those are going to go into some of the other SCCs or security policies that allow us to look at IPC calls or other things like that. But for the most part, most of our containers are restricted or anyuid.
M
We document that for end users, especially in the database case, because when our customers are deploying and managing the environment, they have certain practices where they themselves, as administrators, decide what kinds of security contexts are configured. Databases and things like that need more access, so administrators need to know, so they can prepare those namespaces for that.
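Preparing a namespace that way typically means granting the workload's service account the right to use the appropriate SCC via RBAC; a minimal sketch with illustrative names:

```yaml
# Sketch: allow a database service account to use the anyuid SCC.
# Namespace, role, and service account names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-anyuid
rules:
  - apiGroups: ["security.openshift.io"]
    resources: ["securitycontextconstraints"]
    resourceNames: ["anyuid"]
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: db-anyuid
  namespace: my-db
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: use-anyuid
subjects:
  - kind: ServiceAccount
    name: db-sa
    namespace: my-db
```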
F
And Jose is exactly right that things like logging in particular need more: you need to be able to scrape endpoints and things like that. So it absolutely makes total sense, the way the team is laying out the different permissions based on the capability required for the solution, following least privilege: least privilege whenever possible, and only adding privilege where needed.
J
L
I think a couple of things. First and foremost, if you look at what Jose was talking about as part of SysFlow, we've introduced a lot of compression with the mechanism, to not just flood the gates but to take the relevant data, so that we can do the appropriate analytics.
L
Second is a lot of analytics on the edge itself: trying to ensure that if you can do the compute on the edge, you do the compute there. A good example is the things that we use for behavioral biometrics, such as how you hold your device, or how you move your mouse and use the keyboard.
L
We don't necessarily send everything to the mothership to calculate and come back; you compute as much of that as possible with local resources. And the other part, fundamentally, as an approach: we are not necessarily saying that all the data should come to one place.
L
What we're trying to say is that in many cases, in most of these hybrid deployments, data can be left where it belongs. We bet on a couple of other industry open standards as well as open source projects, and I'd love to give an update on this community, called the Open Cybersecurity Alliance. The whole idea of that is this.
L
We have leveraged some open standards like STIX and others to keep the data where it belongs, but do the normalization up front, so that as you're doing things like a search across a number of different products, we're not necessarily bringing the data in but running the analytics and the search in a federated approach. So it helps in terms of reducing data movement, and it helps in terms of distributing the compute load and, to Kirsten's point, a need-to-know basis.
L
Why carry all the data into one place and get into data residency issues when you don't have to? So those are some examples, Karina, that come to my mind in terms of not exploding the proverbial data lake.
J
Thank you. Now, if we can do a quick shift back to you, Kirsten: we have a question around OpenShift and security. You mentioned that RHEL CoreOS is treated as part of the OpenShift platform. What's the security value that offers?
F
Sure. One of the challenges, again, if you're looking through the full stack, and think about the graphic that Sridhar shared earlier: we've got RHEL CoreOS, you've got the cloud platforms and you've got to manage the security for those, you've got the hosts, the VMs that may be deployed on those, and you've got the host operating system.
F
There's our integrated RBAC, our logging stack, monitoring, metering: all of those things we've tested together in our pipelines, but the host OS was delivered and managed separately. Now, with OpenShift 4, we're using RHEL CoreOS, which is a container-optimized operating system. It's got a reduced attack surface, and we are managing it as part of the OpenShift cluster.
F
You might think of it like an appliance, and we're actually able to do that through the use of Kubernetes operators, so that we are using the Kubernetes declarative paradigm to manage the OpenShift Kubernetes cluster itself and the underlying host OS. We use the machine config operator to ensure that every node has the same configuration for the host OS. It's opinionated: it just has what's needed to support OpenShift.
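To make that concrete, host configuration is expressed as MachineConfig resources that the machine config operator rolls out per node role; a minimal sketch (the file path and contents are illustrative, and the Ignition version depends on the cluster release):

```yaml
# Sketch: declare host-level config for all worker nodes.
# The machine config operator rolls this out and keeps nodes in sync.
# File path/contents are illustrative; Ignition version varies by release.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-custom
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
        - path: /etc/example-banner
          mode: 0644
          contents:
            source: data:,managed%20by%20the%20MCO
```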
F
If there's an unsupported config change made, even at the host OS level, OpenShift will mark that node as degraded. And again, Sridhar, you were talking about how sometimes the security team wants to take a pod out of service; well, sometimes you might need to take a node out of service. So OpenShift makes it much easier to track everything as a holistic set: your whole platform, all the way through.
J
Thank you, that was great, and we'd love to have all of you back again for some more deep dives. Are there any closing thoughts, with our last few minutes?
L
I'd love to have the opportunity to discuss the Open Cybersecurity Alliance. Security is a very, very fragmented space: a number of different capabilities need to come together to be able to address and solve certain outcomes.
L
Outcomes like being able to detect insider risk, or being able to detect a fraudulent user. All of these capabilities have to come together, and sometimes the onus falls on the clients to pull all these integrations together. So the approach that we've been taking is to ask: how do we
L
come together as an industry and do a better job of sharing information? Our adversaries, the threat actors, are doing a great job. We, as the defenders, should do a better job, or if not, at least as good a job, of sharing information and capabilities, so that we can keep up with the growing threat landscape. And thank you for the opportunity here, Karina.
J
And next we're going to be talking about integration at Red Hat.