From YouTube: Operators for Distributed System State Management in Enterprise Apps at Black Duck by Synopsys
This is based on personal experience that I have in actually creating one; we'll see what some of that code looks like, and we'll actually use it in the real world. So if I go back to the basics of containerized environments, container design is really about how we would put together a complex system. Automotive is a pretty good example of that: we have the engineers who are designing how this whole thing is going to play out.
We have the production assembly line that's going to take all of the pieces from various binary repositories, maybe a system that is upstream, such as a base image. We're going to run it through a variety of tests; we're going to make certain that, for example, there are no bad coding patterns, no API issues of one form or another that are going to be put in place. And then this application, this vehicle, is going to be delivered into some form of repository.
What we're talking about is a system, but when we actually decompose all of this into the container realm and bring it into a container-native, cloud-native paradigm, we lose a little bit of that systems-level viewpoint. So if I take a look at, for practical purposes, any enterprise application, I'm effectively looking at something known as a monolith. I have this great system that may have been decomposed into a few things, but now there's an initiative in the organisation to take that and bring it into a cloud-native form.
So we're going to bring in, say, pods, which are the unit of schedulability within an OpenShift environment. I'm going to have many pods; I'm going to scale these things horizontally as replicas, and effectively that is a method of gaining scale, a method of building out my system. Now internally, we have applications within Black Duck that are built out as containers, but those are multiple containers that are part of a set. So if I want to scale something horizontally, I can apply something known as horizontal pod autoscaling, and it's a reactionary model.
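As a rough sketch, a horizontal pod autoscaler is declared something like the following. The resource names here are illustrative, not taken from the actual Black Duck deployment:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: jobrunner-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: jobrunner            # the deployment to scale (hypothetical)
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # react when average CPU crosses 80%
```

The autoscaler only reacts to the observed metric; it has no notion of why load is arriving, which is exactly the limitation described next.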
I can look at, say, CPU or memory or some other metric and say: I don't have enough of these, I want to have more. Now, the problem with that model is that it makes an assumption, the assumption being that if I want more, I'm really measuring the same quantum of thing: a web request, a logon request, a query result, a new entry into a database. All those things are roughly equivalent in size.
What we really need to do is look at a microservices model in terms of service-level agreements: what are we wanting to guarantee, and under what conditions? That is effectively what the world is going to look like. So that's where an operator really comes into play. I want to enforce a set of relationships between the pods. So I've got pod A and all of its replicas; it implements a container image, maybe even multiple container images.
I also want to ensure that out-of-band modification doesn't occur: that some human who has access to the UI can't go and say "I only want to have three of something that should have six," or "I want to have a hundred of something that should also only have six." That type of enforcement of what the operational paradigm should be is part of the operator itself, and we want to make certain that it's a very prescriptive scenario.
So it's not just reacting to the environment; I can go and say: I need to have this capacity for reason X. Now I'm going to use the Black Duck for OpenShift platform as an example of how this is actually going to work out. Last year at Red Hat Summit, we introduced an application integrated directly within the OpenShift environment to scan all the container images, independent of the registry, wherever they came from. That gives us the ability to see where events are coming from, whether they be a pod event.
A new scan goes into a container that we call a hub scan, and its job is to persist some information to a database and build out a queue, and through that queue kick off a series of jobs which are going to process those scan data elements. The hub scan, well, it's a scalable entity. A job runner is effectively the thing that actually runs the jobs.
That's a scaled entity: the bigger the cluster, the more jobs that can come in, because the more pods, the more images it's going to be able to process. I might have a centralized analytics engine for dev, pre-prod, and production. So that's now potentially three separate clusters, and they all need to play into each other very well. And of course, on the back end, we have our knowledge base, where all the metadata around open source risks is contained.
This analytics engine in the real world is actually 13 separate container images, represented as 11 pods, or 11 deployments. So I now have some entities that are going to have a scale of 1; in other words, I only ever want to have one instance. Some entities are going to have a variable scale dependent upon the actual number of scans that are occurring, the hub scan, and some pods are going to be based on the quantity of jobs that need to be performed.
This becomes pretty straightforward. I'm going to start out with defining a custom resource: my hub capacity, my analytics engine capacity. So it's going to have a name, and it's going to have a spec. Now, in this case, I've actually hard-coded something here: I say that I want to have six job runners and two scan clients. At the bottom, that's the status; that's the representation of the real world as it stands right now. So those job runners are actually running, and that's the amount of RAM that's associated with the environment.
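A minimal sketch of what such a custom resource might look like. The API group, kind, and field names here are hypothetical, since the actual Black Duck schema isn't shown in the talk:

```yaml
apiVersion: blackduck.example.com/v1alpha1   # hypothetical API group
kind: HubCapacity
metadata:
  name: hub-capacity
spec:                      # desired state: the SLA to enforce
  jobRunners: 6
  scanClients: 2
status:                    # observed state, written back by the operator
  jobRunners:
    count: 6
    memoryMiB: 4096        # illustrative RAM figure
  scanClients:
    count: 2
    memoryMiB: 2048
```

Scaling up is then just a matter of editing the spec and reapplying the YAML with oc apply.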
Similarly, I have two scanners, and that's the amount of RAM that's associated with them. If I wanted to scale these up, I would simply go and reapply this YAML, so an oc apply of that YAML file. That gives me a way to define an SLA. That's still a human scenario, but what the Operator SDK allows us to do is make this more programmatic. So this is all of the code that is necessary to actually implement a new operator.
So it starts out with saying that I'm going to have a hub capacity, and I'm going to have what's known in this code as a reconciliation loop. The goal of the reconciliation loop is very straightforward: if the system doesn't agree with what this says, fix it. Somebody puts the wrong version, somebody puts the wrong amount of RAM, somebody goes and deletes some pods, somebody goes and adds too many pods, some system goes away: the reconciliation loop is what handles all of that.
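The core of that loop can be sketched as a pure function that compares the desired state from the custom resource spec against the observed status and produces the corrections to apply. This is a simplified illustration, not the actual Black Duck operator code: the type and field names are invented, and a real operator built with the Operator SDK would run this logic inside its Reconcile method and issue Kubernetes API calls instead of returning strings.

```go
package main

import "fmt"

// Spec is the desired state declared in the custom resource
// (field names are illustrative, not Black Duck's actual schema).
type Spec struct {
	JobRunners  int
	ScanClients int
}

// Status is the state actually observed in the cluster.
type Status struct {
	JobRunners  int
	ScanClients int
}

// reconcile compares desired versus observed state and returns the
// scaling actions needed to bring the system back in line. It is
// deliberately idempotent: if the cluster already matches the spec,
// it returns no actions.
func reconcile(spec Spec, status Status) (actions []string) {
	if status.JobRunners != spec.JobRunners {
		actions = append(actions,
			fmt.Sprintf("scale job runners %d -> %d", status.JobRunners, spec.JobRunners))
	}
	if status.ScanClients != spec.ScanClients {
		actions = append(actions,
			fmt.Sprintf("scale scan clients %d -> %d", status.ScanClients, spec.ScanClients))
	}
	return actions
}

func main() {
	spec := Spec{JobRunners: 6, ScanClients: 2}
	// Suppose somebody deleted two job-runner pods out of band.
	status := Status{JobRunners: 4, ScanClients: 2}
	for _, a := range reconcile(spec, status) {
		fmt.Println(a)
	}
}
```

Because the loop only looks at the difference between spec and status, every kind of drift, whether a deletion, an over-scale, or a lost system, reduces to the same correction step.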
And so, if the demo gods are willing, once this install completes, what's going to end up happening is that you're going to see the scan and job runner counts increase from four to nine, and so effectively what's happening is that I'm going to generate additional capacity for this system in real time.
A
All
with
the
centralized
analytics
engine
all
configured
at
install
time,
all
done
with
an
operator
if
I
want
to
get
really
fancy,
I
could
go
in
here
and
actually
try
and
decrement
the
count,
and
it
will
automatically
reconcile
itself
forward.
That's
the
power
of
an
operator!
That's
why
the
operator
SDK
is
really
cool.
That's
why
you
should
be
looking
into
them
as
well.
Maintaining
system
state
across
container
images
across
pods
and
a
distributed
system
is
a
little
bit
of
a
complex
task
with
an
operator
SDK.