Description
The OpenShift Commons Gathering was held on Jan 29th, 2020 in London, UK, and featured guest speakers from local customers and users. The gathering brought together 300+ experts from all over the world to discuss container technologies, best practices for cloud-native application developers, and the open source software projects that underpin the OpenShift/Kubernetes ecosystem.
https://commons.openshift.org/gatherings/London_2020.html
A: Hello everyone. I hope everyone feels a bit more awake after a coffee in the break. I want to talk to you about OpenShift 4. Maybe not everyone of you will know, but we use Hive at Worldpay. First, a little bit about ourselves: my name is Burns, and I'm a principal container platform engineer at Worldpay. I moved from Berlin to London in 2013, started working with OpenShift and Kubernetes around four years ago, and joined Worldpay at the beginning of last year.
A: So let me talk about our journey to OpenShift 4, because I think that might be very interesting for everyone. As a small team it was very important for us to work very efficiently and keep administrative overhead low. We had that experience with OpenShift 3, and we didn't want to manage our infrastructure and our cluster provisioning separately anymore.
A: At first it was very painful: it was very time-consuming, with a lot of manual steps. OpenShift 4 was great for us and addressed some of those issues, but we felt that the installer didn't scale as far as we wanted. We knew we wanted to have multiple clusters and, like I said, we had a small team.
A: So for us that didn't scale; there was something missing. We were actually looking for something more API-driven, with cluster management built in, and we wanted to run cattle clusters, not the pets we had with OpenShift version 3. This all started basically a year ago, when we became involved with the OpenShift 4 beta program.
A: We increased our collaboration with Red Hat, and alongside that beta engagement is how we actually found Hive. Hearing about Hive was very surprising for us; we had never heard of it before. We asked people from Red Hat in London if they knew Hive, and actually not many of them knew it at that time.
B: So this is our Hive cluster, and we've got a manifest here which is a ClusterDeployment; this is the alpha version of the API. Within that we've got some labels, one of them being one of our own, which defines the environment, and we've called it an engineering environment. We've also got some references to secrets, like our AWS account credentials, and we're going to be deploying to AWS in a US region. It's also got those other secrets of ours, like the Red Hat pull secret. It will look very familiar to anybody who has installed an OpenShift 4 cluster, as it's very similar to the same install-config, and then we're just describing what the compute looks like: the worker sizes and what AZs they're going to be in.
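For reference, here is a minimal sketch of what such a manifest might look like. It uses the current hive.openshift.io/v1 schema rather than the alpha version shown in the talk, and every name, domain and secret reference is illustrative rather than taken from the demo.

```yaml
# Sketch of a Hive ClusterDeployment (hive.openshift.io/v1).
# All names, domains and secret references are illustrative.
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: commons-demo
  namespace: commons-demo
  labels:
    environment: engineering          # our own label, used later to target config
spec:
  clusterName: commons-demo
  baseDomain: clusters.example.com
  platform:
    aws:
      region: us-east-1               # a US region, as in the demo
      credentialsSecretRef:
        name: aws-creds               # AWS account credentials held on the hive cluster
  pullSecretRef:
    name: redhat-pull-secret          # the familiar Red Hat pull secret
  provisioning:
    imageSetRef:
      name: openshift-v4-imageset     # ClusterImageSet pointing at a release image
    installConfigSecretRef:
      name: commons-demo-install-config   # same install-config, incl. worker sizes/AZs
```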
B: So when I apply this manifest, the Hive operator will be watching for this custom resource, and it will set about a series of actions in order to install the cluster. It downloads the installer binary and takes our secrets, which are held in this management cluster.
B: It will inject those into pods which will actually do the installation of this cluster. So we can see here, when I do a get on cluster deployments, that it's not yet installed; it was only created 11 seconds ago. If I do a get on pods, we can see that there is an imageset pod and a provision pod. The imageset one is going to pull down the release resources which I need from Quay, and within the installer pod you'll see various log entries familiar from the Terraform bootstrapping process which the installer runs.
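A hedged reconstruction of the commands being run at this point in the demo; the pod name is a placeholder:

```sh
# On the hive (management) cluster: the new ClusterDeployment is not
# installed yet, and Hive has spawned its helper pods.
oc get clusterdeployments          # INSTALLED shows false at first
oc get pods                        # an imageset pod and a provision pod

# Follow the installer output (the familiar Terraform bootstrap logs).
oc logs -f <provision-pod-name> -c hive
```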
B
We've
got
new
control
plane,
which
is
taken
over
the
management
of
our
cluster,
and
we've
got
some
work
and
it's
coming
up
at
this
point,
the
installation
of
the
cluster
is
done
and
we
finished
our
table
and
activities
I've
just
creating
the
thing
you
might
be
using
this
as
a
tester
to
test
a
new
configuration
or
doing
some
development.
It
could
be
for
any
purpose
it
for
our
team
that,
since
it's
all
on
the
same
management
cluster,
we
all
have
accesses
management
cluster.
A: The next question is how we do that with hundreds of configuration manifests and with multiple clusters, and how we do promotions. The biggest problem we actually faced coming from OpenShift version 3 was: how do we avoid configuration drift? Hive helped us here with SyncSets. A SyncSet basically regularly reconciles the configuration of all our clusters which are subscribed to it and keeps them synchronized. Mats will now show you how we use SyncSets.
B: Another custom resource which is available to us on our Hive cluster is called a SyncSet. Within this we've basically got one big manifest which contains all of our smaller manifests. So within that we might have some configuration items, like a DaemonSet, or a MachineConfig which will change our NTP configuration to sync with our on-prem servers, or something like that. What we want to do with it is, in a similar way to how we would subscribe a single machine to, say, a stable, unstable or testing yum repository, subscribe whole clusters to a set of configuration.
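As a sketch of that subscription model, assuming the hive.openshift.io/v1 SelectorSyncSet variant, which targets clusters by label; the NTP MachineConfig and all names are illustrative:

```yaml
# Sketch of a SelectorSyncSet: every ClusterDeployment whose labels match is
# continuously reconciled against the resources below. Names are illustrative.
apiVersion: hive.openshift.io/v1
kind: SelectorSyncSet
metadata:
  name: engineering-baseline
spec:
  clusterDeploymentSelector:
    matchLabels:
      environment: engineering        # the label set on the ClusterDeployment
  resourceApplyMode: Sync             # also delete resources removed from the SyncSet
  resources:
  - apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-worker-chrony
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 2.2.0
        storage:
          files:
          - path: /etc/chrony.conf
            mode: 420
            filesystem: root
            contents:
              # "server ntp.example.internal iburst" as a data URL
              source: data:,server%20ntp.example.internal%20iburst%0A
```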
B
Making.
The
actual
sync
sets
it's
quite
difficult,
though,
because
it's
just
a
massive
manifest.
So
what
we
went
about
doing
was
create
a
generator
should
be
able
to
look
at
particular
directory
walk
through
it.
Take
all
the
smaller
manifest
which
were
in
there
and
generates
our
one
big
manifest,
which
we
can
then
apply
to
our
hive
cluster,
if
you're
interested
in
that
we
can
share
that
later.
So
here's
a
demo
of
it's
actually
running
so
within
this
particular
directory
and
this
emulates,
what
are
I
a
creeper
would
look
like.
B: We've got two folders, patches and resources. Patches are for any existing resources; for instance, we don't want any of our customers to be self-provisioning into our environment, so we do a patch there. And we've got some resources where we're going to, say, set up AD authentication into our OpenShift 4 environments, set up some secrets, those kinds of things. We've got a little binary here which will run in this directory, and that will create our one big manifest with all the different resources and patches which are available to us there.
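To illustrate the patches side: a common way to block self-provisioning is to patch the self-provisioners ClusterRoleBinding, and a SyncSet patch entry for that might look like the following (the exact patch used in the demo wasn't shown):

```yaml
# Sketch of a SyncSet patch entry: remove all subjects from the
# self-provisioners ClusterRoleBinding so tenants cannot create projects.
spec:
  patches:
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    name: self-provisioners
    patchType: merge
    patch: |-
      {"subjects": null}
```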
B: One argument to it is the actual match labels. We showed a label earlier where we defined what type of environment the cluster was going to be, and we want to target all clusters which are in that sort of environment. That could be a pre-prod environment or a prod environment; we're saying any cluster which has those labels. So we've applied it now to our Hive cluster, the Hive operator has recognized it, and we can see what the status of this particular SyncSet is against our Commons demo cluster.
A: Thank you, Mats. So that basically led us to using a kind of GitOps model to manage our clusters: the cluster deployment and the cluster configuration are actually stored in Git, and our engineers apply changes to the platform only via a single repository. There are no manual changes to the platform anymore, which helps us avoid the drift I mentioned before. The big advantage actually is that we bundle our changes into releases, as you have seen before, so we can pin environments to particular releases.
A: Basically, our CI tool generates the manifests and applies them automatically to our management cluster, and then everything runs on that as Mats explained: the rest is just Hive pushing the SyncSets with the configuration down to the clusters which are subscribed to them.
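The CI tooling isn't named in the talk; purely as a sketch of the shape of such a pipeline, in GitLab CI syntax, with a hypothetical syncset-generator binary standing in for the generator shown earlier:

```yaml
# Hypothetical pipeline: regenerate the big SyncSet manifest from the cluster
# repository and apply it to the hive management cluster only. Hive then
# reconciles the subscribed clusters; nobody touches them directly.
stages: [generate, apply]

generate:
  stage: generate
  script:
    - ./syncset-generator --dir ./config --match-labels environment=engineering > syncsets.yaml
  artifacts:
    paths: [syncsets.yaml]

apply:
  stage: apply
  script:
    - oc apply -f syncsets.yaml
  only:
    - master          # changes land via PR; merge triggers the rollout
```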
A: I think the only thing additionally to mention is that we also manage our promotions to the environments through our CI tooling. As for the benefits for Worldpay: like I mentioned, the Hive operator helped us to adopt the GitOps delivery model. This had a big impact on the team itself. Everyone started to become more like a software engineer, and we shifted left and actually started focusing on other things, like writing tests.
A: Our internal developers benefited from the improved self-service capability of OpenShift 4, which was great, but also from Hive, because now everything is in code: adjusting a quota on a namespace is now just a PR into your cluster repository (a sketch follows below). And as I mentioned before, we have release channels, and we're able to do controlled promotions of all our changes.
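To make the quota example concrete: in this model a quota change is just an edit to a manifest like this in the cluster repository, where the namespace and values are illustrative:

```yaml
# Illustrative resource kept in the cluster repo (shipped via a SyncSet):
# bumping requests.cpu and merging the PR is the whole change process.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-payments        # hypothetical tenant namespace
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 64Gi
    limits.cpu: "40"
    limits.memory: 128Gi
```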
A: I think that's everything from my side. Do you have anything to add?

B: No, actually that's it. Thank you guys, I hope this was interesting.