From YouTube: Building a Namespace as a Service on OpenShift at ING - Red Hat OpenShift Commons 2022 Detroit
Description
Building a Namespace as a Service on OpenShift at ING
Red Hat OpenShift Commons 2022 @ Kubecon/NA
Detroit, Michigan
October 25, 2022
Speakers:
Jan Willem Bijma (ING)
Arno Vonk (ING)
https://commons.openshift.org/gatherings/kubecon-22-oct-25/
So what is it? It sits on top of the ING Container Hosting Platform, and it is completely separated: we, as platform owners, develop the platform, maintain the platform, and take care of all the compliance work, because we are a bank, and we also make sure that the internal audit department is satisfied. The namespace is completely the responsibility of the owner of the namespace. Normally the DevOps teams are our consumers, and they are responsible for what is running in the namespace.
So they are responsible for the runtime and the images that are running there. Can our consumers install or create a namespace themselves with oc apply? No, they can't do that. In ING we have our ING cloud self-service portal, and there they can request their namespace. What they get is resources, with millicores and memory, but they also get a secure connection to the One Pipeline, where they can deploy their stuff.
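As a rough sketch of what such a request boils down to on the cluster side, the snippet below creates a namespace with a CPU and memory quota using the Kubernetes Python client; the names and quota values are illustrative, not ING's actual defaults.

```python
from kubernetes import client, config

# Load credentials (the platform itself would use in-cluster config).
config.load_kube_config()
core = client.CoreV1Api()

# Illustrative namespace name -- not an ING naming convention.
ns_name = "team-demo"

core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name=ns_name))
)

# A ResourceQuota caps the millicores and memory the team can claim.
core.create_namespaced_resource_quota(
    namespace=ns_name,
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={
                "requests.cpu": "500m",
                "requests.memory": "1Gi",
                "limits.cpu": "1",
                "limits.memory": "2Gi",
            }
        ),
    ),
)
```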
We have the image reporter, so we can see with an API what is running in the cluster and which images are running, and we also have a quota autoscaler, which automatically resizes the resources in the namespace so they are used efficiently.
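A minimal sketch of what an image reporter like that could do, assuming the Kubernetes Python client; this illustrates the idea, not ING's implementation.

```python
from collections import Counter
from kubernetes import client, config

# Collect which container images are running across the whole cluster.
config.load_kube_config()
core = client.CoreV1Api()

images = Counter()
for pod in core.list_pod_for_all_namespaces().items:
    for status in pod.status.container_statuses or []:
        images[status.image] += 1

# Report the images, most widely used first.
for image, count in images.most_common():
    print(f"{count:4d}  {image}")
```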
So there we go, let's make a picture. What do we do? We always have two data centers, and then we order a lot, a lot of bare-metal servers from our internal provider.
So we have a lot of dependencies that we need to install on a cluster. In the past we used Ansible to install everything on all of the clusters. Sadly, we saw that there were problems with configuration drift. We came across some issues on one cluster that didn't show up on the other, and when we moved from OpenShift 3.11 to OpenShift 4, we really wanted to have a solid installation where everything is equal and where we use correct versioning.
So we came across OpenShift GitOps and we started using it. Sadly, we had to implement it a little bit differently than how you are supposed to implement it. What we do is push a copy of the git repository into the cluster, and then we use Argo CD to pull in the configuration from the mirror inside the cluster. This solves some compliance issues: with the repository we have, we can't connect directly to it from our cluster.
How does it look? We have a CI/CD pipeline that creates an output repo image. That output repo image is deployed inside the cluster, and an Argo CD application is created inside the cluster. That ensures that all of the packages that we need are pulled in, configured, and installed, and in the end we get into a state where all the features we need are there.
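As an illustration of the pattern, an Argo CD Application pointing at an in-cluster git mirror could look like the sketch below; the repository URL, names, and paths are hypothetical, not ING's.

```python
# Sketch of an Argo CD Application manifest, expressed as a Python dict,
# whose source is a git mirror running inside the cluster (fed by the
# output repo image) instead of an external repository the cluster
# is not allowed to reach. All names and URLs are invented.
application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "cluster-config", "namespace": "openshift-gitops"},
    "spec": {
        "project": "default",
        "source": {
            # In-cluster mirror of the configuration repository.
            "repoURL": "http://repo-mirror.gitops-mirror.svc:8080/config.git",
            "targetRevision": "HEAD",
            "path": "clusters/production",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "openshift-gitops",
        },
        # Automated sync keeps the cluster converged on the mirrored state.
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}
```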
We created a demo, so let's see if the demo works. First we have the first package, which we call azure-deploy because we deployed with Azure at some point: it installs all the output repo images, and slowly all packages are being picked up, configured, installed, and synced.
I think it always looks pretty cool. The total time span is about 22 minutes, due to some timeouts that we came across, and watching this recording we saw some improvements that we can make in the future. But for now it works pretty well, and for us this is quite fast, so we're happy with the result. The last bit always takes a bit.
That takes about one minute in the demo; the beginning is always really fast, and now it's completely green. So one of the features that we install is the project controller. This is a controller that was made to ensure that all the namespaces that we deploy are deployed in the same way. It uses a CRD which defines the resources the namespace needs.
It hardens the namespace, to ensure that only the people that are part of the development team that requested it can access it, and it also ensures that there are some default network policies and all those kinds of things.
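For illustration, a default deny-all NetworkPolicy like the one below is the kind of hardening such a controller could stamp into every namespace it creates; it's a generic example, not ING's actual policy.

```python
# Generic default-deny NetworkPolicy, expressed as a Python dict, that a
# project controller could apply to every new namespace; illustrative only.
default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-all", "namespace": "team-demo"},
    "spec": {
        # An empty pod selector matches every pod in the namespace.
        "podSelector": {},
        # Listing both types with no rules blocks all ingress and egress,
        # so teams must explicitly allow the traffic they need.
        "policyTypes": ["Ingress", "Egress"],
    },
}
```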
It's based on a framework that we created ourselves, which we call the Scarface framework. It's a framework for operators, you can write it in Python, and we built automatic rollback into it as well.
So what does the Scarface framework look like? You create an app and you create a Kubernetes object, a CRD. It reads the custom resource and ensures that whenever something changes or is created, the operations are carried out. Later, in the second demo, we will show you one of the CRs for a project, and then it will probably be more clear.
What we define in the CR is what we call the purpose code, which defines the development group, the resources like CPU and memory, all the limitations that need to be set, and whether it's stateless or a data service. The Scarface framework automatically creates an event listener which has three possible methods: create, update, and delete.
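A minimal sketch of what such a CR spec and the three event-listener methods could look like in Python; the field names, function signatures, and handler bodies are invented for illustration and are not the Scarface framework's real API.

```python
# Hypothetical shape of a project CR spec as a Python operator would see it;
# the field names are invented for illustration.
project_spec = {
    "purposeCode": "demo1",            # ties the namespace to its owner
    "developmentGroup": "team-demo",   # the team allowed to access it
    "resources": {"cpu": "2", "memory": "4Gi"},
    "profile": "stateless",            # or "dataservice"
}

# Sketch of the three event-listener methods such a framework generates.
def on_create(spec):
    """React to a new CR: create namespace, quota, RBAC, network policies."""
    print(f"creating namespace for {spec['developmentGroup']}")

def on_update(old_spec, new_spec):
    """Reconcile the namespace whenever the CR changes."""
    print("reconciling namespace with updated spec")

def on_delete(spec):
    """Clean up everything the create handler made."""
    print("tearing down namespace resources")

on_create(project_spec)
```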
But we saw one problem with the project controller, because we used an outside orchestrator to configure the CRs in the cluster, which would then be picked up by the project controller. To ensure that we have the correct state across two clusters, because we always deploy in pairs, the ICHP API was created. That allows us to just call the ICHP API, tell the ICHP API we need a namespace, and that ensures that the CR is the same on both clusters.
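Purely as an illustration of the idea, a call to such a namespace-provisioning API might look like the sketch below; the URL, payload, and field names are hypothetical, not the real ICHP API.

```python
import requests

# Illustrative request to a namespace-provisioning API in the spirit of
# the ICHP API; everything here is invented for the sketch.
response = requests.post(
    "https://ichp-api.example.internal/v1/namespaces",
    json={
        "purposeCode": "demo1",
        "developmentGroup": "team-demo",
        "resources": {"cpu": "2", "memory": "4Gi"},
    },
    timeout=30,
)
response.raise_for_status()
# The API then applies the same CR to both clusters of the pair.
print(response.json())
```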
The ICHP API uses three stages, because we have some dependencies inside the organization. For example, we first need to have a network ID before we can create a CMDB entry, and we need the CMDB entry to connect it to the namespace, because otherwise we don't know who owns the namespace, and it would be nice to know who to contact if they have an issue.
All these stages are sequential, but inside each stage all the steps are concurrent, which is really nice for us because it saves some time with the creation. Some calls can take a bit longer, but the faster ones in a stage are already processed by the time the slowest one is done.
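The "sequential stages, concurrent steps" pattern can be sketched with asyncio as below; which calls belong to which stage is a guess based on the talk, and the function bodies are placeholders.

```python
import asyncio

# Placeholder steps; real implementations would call internal APIs.
async def reserve_network_id(): ...
async def create_cmdb_entry(): ...
async def create_namespace_cr(): ...
async def configure_pipeline_link(): ...

# Stages run one after another; steps inside a stage run concurrently.
STAGES = [
    [reserve_network_id],                             # stage 1
    [create_cmdb_entry],                              # stage 2: needs network ID
    [create_namespace_cr, configure_pipeline_link],   # stage 3: concurrent
]

async def provision():
    for stage in STAGES:
        # gather() waits for the slowest step before the next stage starts.
        await asyncio.gather(*(step() for step in stage))

asyncio.run(provision())
```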
We also ensure that there is a high probability that all steps that come after stage one will succeed. So we check if the namespace already exists, we check if the CMDB entry already exists, and we try to see whether something might be wrong, for example an API that is not available. But still, we cannot be 100% sure that everything will succeed. So in case the checks were successful but in the end the namespace could not be created during stage three,
we roll back everything everywhere, so that we don't leave any orphaned or stale entries anywhere. So either it was successful and everything is created, or it failed and we don't have any leftovers somewhere.
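The create-with-rollback pattern described here can be sketched as an undo stack: every completed step registers its own rollback action, and any failure unwinds them in reverse. A minimal sketch, with the step list left abstract:

```python
# Each step is a (do, undo) pair of callables. On any failure, every step
# that already succeeded is undone in reverse order, so nothing is orphaned.
def provision_namespace(steps):
    undo_stack = []
    try:
        for do, undo in steps:
            do()
            undo_stack.append(undo)
    except Exception:
        # Roll back everything that succeeded, then surface the error.
        for undo in reversed(undo_stack):
            undo()
        raise
```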
Another tool that one of my colleagues, Robin Sickman, created was the quota autoscaler. What we saw in the past was that our consumers request more than they need, so we had a lot of hardware that was unused, and still we were out of hardware and couldn't create new namespaces. So we created a tool to ensure that, over a certain time frame, an automation checks what the consumer actually needs. It will scale the quota down if they don't need it, and increase it if there's a burst of resources required, for example during an update.
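As a sketch of the idea, a quota autoscaler could derive a new quota from observed peak usage plus some headroom; the thresholds and the 20% margin below are invented values, not ING's.

```python
# Derive a new CPU quota (in millicores) from observed peak usage over
# some window. Headroom, floor, and burst threshold are illustrative.
def resize_quota(peak_usage_millicores, current_quota_millicores,
                 headroom=1.2, min_quota=250):
    target = max(int(peak_usage_millicores * headroom), min_quota)
    if target < current_quota_millicores:
        return target  # scale down: the consumer reserved too much
    if peak_usage_millicores > current_quota_millicores * 0.9:
        return target  # scale up: burst detected, e.g. during an update
    return current_quota_millicores

# Example: a namespace that peaked at 300m against a 2000m quota
# gets scaled down to 360m (peak plus 20% headroom).
print(resize_quota(300, 2000))  # -> 360
```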
That's how we prevent consumers from reserving resources that they don't need 99% of the time. This lowers the cost for them, and it also lowers the cost for us, because we don't need as much hardware as we used to have. So that's the second demo that I want to show you. I talked about the ICHP API and about the project controller, so this is the CR that we send to the ICHP API.
So when we look it up, we see all the stages and what the result is. On the first few lines we see the running stage "check", which is the check to see if there's a high probability that it will succeed. They all succeeded, and in the end stage three was also successful. This is the ICHP project object that the project controller uses, and there we see the actual configuration for a specific cluster; it has the same information as what we sent to the ICHP API.
We do that because we think, just like some other presenters, that it's important to also give back to a community that has given us a lot. Now we're at the state where we're quite close to what's used upstream, and we're not stuck behind on OpenShift 3.11; we're now also using OpenShift 4, so that's a good thing, I guess. I want to thank you all for being here and listening, and have a great day.