From YouTube: GitLab and RedHat OpenShift-GMT20211215
A: Okay, welcome everyone, and thanks for joining; we've got quite a big crowd this morning. It sounds like Red Hat OpenShift is a hot topic, so this morning we do have a handful of experts from Red Hat, including our own Director of Alliances, Vik, who will give us a quick introduction, and from then on we'll turn it over to Phil. Phil will get into OpenShift and tell us how OpenShift works with GitLab. If you have any questions, I put a document in the chat; you can put your questions in that document, or you can put them in the chat and I'll move them later on into the document that you have in the meeting invite. So without much further ado, I'll turn it over to Vik.
B: Oh, thanks, and thanks again for this opportunity. I want this session to focus on the technical product integration that we have done with our operators. If that works; I don't want to take any thunder away from our product team. So, Dylan, I'm going to pass it on to you.
C: Sure, sounds good. Thanks, Vik. So yeah, I'll do a pretty brief presentation, just highlighting a basic overview of our operator, and then Mitch will jump into the demo. I'll go ahead and share my screen; I've also shared this presentation in the agenda. This is just a very basic overview of how we have connected the dots and allowed our instance of GitLab to actually run on an OpenShift container.

C: That was really the true purpose of our operator, at least initially: we wanted to enable folks to run an instance of GitLab in a Red Hat OpenShift container. I'll save this piece for later, after we do the demo, for Phil and the other Red Hat folks to talk about the benefits of Red Hat OpenShift, but I did want to get into what kind of operators we currently offer that our self-managed applications can utilize.

C: The runners are going to be executing pipeline jobs and sending those results back to the instance of GitLab, whereas the actual server operator is intended to manage the lifecycle of that instance of GitLab running in OpenShift. And again, like I just said, both of them are operators, so they are intended to manage the lifecycle of both of those two use cases.

C: This is just a very basic overview of operator characteristics and how they relate to our GitLab server operator. Basically, you can install and configure GitLab instances in a more repeatable fashion for our cloud-native installation, and enjoy seamless upgrades from version to version.

C: Keeping up to date on the most current version of GitLab can be a challenge for businesses, and the operator allows you to do that in a much smarter fashion. As we move forward, the last three bullets are forward-thinking; these are our intended direction: backup and restore characteristics, monitoring with better metrics and visualizations using Prometheus and Grafana, and then moving into advanced auto-scaling as we upgrade the maturity of the operator.

C: This is a great graphic from the Operator Framework website, which I have right here, showing where we intend to go and a basic overview of where we're at. I would say right now we're just dipping into maybe level three as far as the maturity goes, and more truly we're in level one, and then we intend to take this as far as we can go as it relates to GitLab. If we can get to that autopilot phase of operator maturity, we will go there.

C: Before I jump into some of these links above, I just want to note this helpful-links slide is also included here. We've got a couple of different docs, both for the runner and the server operator, and I've got a couple of contacts there, a demo (which obviously we're going to go through today, but there's also a video that Cesar recorded), and then we've got our generic operator epic. Going into some of the things we're working on right now: this is the official Red Hat operator certification that we're currently going through, which will allow us to list our operator in the marketplace.

C: So that's just another path for folks who want to download the operators, the GitLab server operator, and utilize it for their distributions on Red Hat OpenShift. And then this is an overview of our next release. We have done our GA of the operator, and this is where we intend to go for our next update of the operator.

C: Some of the things that we're working on include, like I mentioned, the operator certification, but also enabling near-zero-downtime upgrades, which I know a lot of customers are very interested in, and then even more parity with our Helm chart. Currently we offer parity with all the production characteristics of our Helm chart, but we want to take that even further and have full parity with our Helm chart, which I think Mitch can talk a little bit about. And then, as I mentioned before:

C: This is just the graphic I showed before, but they also go into more in-depth descriptions of what each of these levels means, and, as I mentioned before, I would say we're really in level two right now, which is basically upgrading and keeping the operator up to date, as well as our GitLab instances.

C: So these are the docs, which I think Mitch is going to touch on and go back and forth with, and then this is our video that Cesar recorded. So now I'll toss it over to Mitch and he'll jump into the demo. I'm also happy to answer any questions after Mitch is done.
D: The operator is effectively a way of deploying a Kubernetes-native application, so I'll compare it to the Helm chart to give some context. With the Helm chart, what we do is define, basically in Helm's templating language, what objects we want deployed to the Kubernetes cluster. Those objects are things like Deployments for GitLab Webservice, GitLab Shell, Sidekiq, and Task Runner (now called Toolbox), as well as StatefulSets for things like MinIO, Postgres, etc.
D: This gives us a lot of exciting opportunities, because with the Helm chart we're effectively limited to whatever we can provide to the cluster at install time or upgrade time, whereas with the operator having a live connection to the cluster, we can do things like actually ensuring that the secret you specified exists in the cluster, or checking storage based on the GitLab API, and just start doing a lot more advanced things like that.
D: So here's our code base; it's under the cloud-native group at the moment, and most of the code base is in Go, as most operators are. What's great about this is that the concept of operators uses a controller under the hood, and controllers are one of the core concepts of Kubernetes. So if you're using Kubernetes, you're actually already using controllers. For example, there's a built-in Deployment controller: you effectively apply a Deployment spec via a manifest, and the Kubernetes Deployment controller will ensure that those Deployments and Pods are running as expected. That's why this is so worthwhile from our perspective in terms of development: we're using basically first-party implementations of this concept, and this is effectively how upstream Kubernetes writes the logic for its various controllers. And, like Dylan mentioned, we have our documentation up here on our docs site.
D
Overall,
what
we're
looking
here
in
terms
of
prerequisites
are
an
existing
kubernetes
or
openshift
cluster
today
I'll
be
using
kubernetes
cluster,
but
we
deploy
the
same
way
to
open
shift.
We've
got
three
different
versions
of
openshift
in
our
ci
that
we
test
against
and
we
deploy
it
effectively
the
same
way.
You
also
need
an
ingress
controller.
D
We
have
one
bundled
but
of
course,
you're
free
to
use
an
external
ingress
controller
as
well,
and
then
you'll
need
a
certificate
manager
in
most
cases
and
for
today,
we'll
use
cert
manager,
but
you
can
also
bring
your
own
certs
and
a
metric
server
as
well
as
dns.
So
a
lot
of
these
requirements
are
effectively
the
same
as
the
helm
chart.
The
reason
for
this
is
that,
under
the
hood,
we're
actually
rendering
the
helm
chart
based
on
the
configuration
that
you
pass
and
the
main
reason
for
that
was
just
development
speed.
D
All
of
the
values
that
you
passed
your
home
chart
today
can
effectively
be
copy
and
pasted
into
this
custom
resource
that
we've
built
basically
teaching
kubernetes
how
to
work
with
a
gitlab
object,
basically
telling
it
what
a
gitlab
is
and
ensuring
it's
running
as
it
has
intended,
and
so
very
quick
to
you,
know
kind
of
take
your
existing
home
chart
installation
and
mirror
it
into
an
operator
installation
because
of
that
commonality
under
the
hood.
D
So
moving
on
to
installing
the
gitlab
operator,
what
I'm
just
going
to
do
is
I'll
grab.
The
latest
released
version
which
we
have
right
now
is
o2o
and
effectively.
What
we're
doing
here
is
just
applying
a
manifest
this.
I
think
I
actually
have
a
copy
over
here,
so
it's
effectively
a
manifest
defining
the
custom
resource
definition.
D
So
we'll
follow
the
instructions
here
effectively
we're
just
making
it
easy
to
grab
the
correct
release.
Artifact
of
that
manifest.
If
you're
on
open
shift
you
effectively
just
set
platform
to
open
shift,
the
difference
in
the
manifest
is
there's
some
additional
rbac
permissions
for
the
nginx
ingress
controller
to
run
an
open
shift
due
to
security
context
constraints.
D: I have a Kubernetes cluster that I use for both operator and chart testing; I try to keep my clusters to a minimum given they're so expensive. The only difference in the installation for OpenShift is that I would set the platform to openshift and it would grab the release artifact for OpenShift, which is the same as the one for Kubernetes; it just includes additional RBAC permissions.
D: And so now I've got this pod running here, gitlab-controller-manager, and you can see in the logs that it started up and is listening for me to apply a GitLab instance. So I have one here, and what I've specified, other than the values that you would typically specify in your Helm chart (which are here under spec.chart.values), is the chart version; that's the only other one you need to supply. There's effectively a mapping of chart versions that are supported for each version of the operator. We support three chart versions, the three latest minor revisions, so that you have an upgrade path. So, for example, I could install 5.4.2 now, and as soon as it's installed I would effectively just edit this file to 5.5.0 and apply it to the cluster, and it would install that version. We've got those versions tracked here as well.
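The custom resource Mitch describes can be sketched roughly as follows. The field layout (spec.chart.version, spec.chart.values) is taken from the discussion above; the hostname, email, and namespace are placeholder values, not anything shown in the demo:

```yaml
# Minimal GitLab custom resource sketch for the GitLab Operator.
# Everything under spec.chart.values mirrors the Helm chart's values;
# the domain, email, and namespace below are illustrative placeholders.
apiVersion: apps.gitlab.com/v1beta1
kind: GitLab
metadata:
  name: gitlab
  namespace: gitlab-system
spec:
  chart:
    version: "5.4.2"   # must be one of the chart versions the operator release supports
    values:
      global:
        hosts:
          domain: example.com          # placeholder domain
      certmanager-issuer:
        email: admin@example.com       # placeholder email for Let's Encrypt
```

To upgrade along the path described above, an administrator would edit spec.chart.version to the next supported version (for example 5.5.0) and re-apply the manifest; the operator then drives the migration sequence itself.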
D: And you'll see that it's reconciling GitLab (you can make this full screen), and it says the current version is nothing, because it's not installed, and the desired version is 5.4.2; it's rendering a new template, looking for the designated chart in the specified directory. So effectively, now it's taking all those values that we passed to the GitLab custom resource and rendering a Helm chart template using those values, and then effectively we have a bank of Kubernetes objects that we can apply. But the nice thing now is that, instead of just applying them all at the same time like the Helm chart does, we can actually do it in a smart way. This is why the operator already has more functionality than the Helm chart in terms of more advanced processes.
D: Because we have this upgrade logic, we can actually run through the upgrade cycle for GitLab. Effectively, that means we reconcile the new Deployments at the new version but keep them paused, then we run pre-migrations, and once those are done, we do a rolling update of those Deployments. Previously an administrator would have to jump into the cluster, scale down the Deployments, run migrations by hand, etc.; now all they have to do is change that file to the desired version and the operator will handle that logic. That has been really fun, because now it's very easy to test upgrades, and when anything fails it's very verbose about what went wrong. And it's all Go, so it's much easier to write and test in most cases than the Helm templating language, which has been a joy to develop in, for sure.
E: Hey Mitch, this is Andy from Red Hat. I wanted to jump in real quick. I noticed cert-manager is one of the components that gets deployed as part of the Helm chart, but I was curious whether you have any plans to leverage something like the OpenShift ingress router, to use something like edge termination for TLS connections, instead of requiring cert-manager as a hard requirement.
D: I believe we've got documentation here, TLS options. You can effectively use cert-manager and Let's Encrypt, or you can use an external cert-manager, which is what I have here. We don't bundle cert-manager with the operator; all I did was basically apply the latest release of cert-manager using the kubectl apply command, no special configuration there. Or you can use your own certificates; you can actually bring your own. Effectively, however you generate your certificate, you just upload it to the cluster as a secret with a certain name, and then in the Helm values you effectively just give it that secret name to use, at global.ingress.tls.secretName. So, however you can get secrets into the cluster that hold those certificates, they can be used by either the operator or the Helm chart.
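As a rough sketch of the bring-your-own-certificate path Mitch describes: the global.ingress.tls.secretName key is the one he names above, the configureCertmanager flag is the chart's switch for disabling the bundled cert-manager integration, and the secret name itself is an arbitrary example:

```yaml
# Values fragment (goes under spec.chart.values in the GitLab custom
# resource, or in a Helm values file) pointing the chart at a
# pre-existing TLS secret instead of cert-manager.
global:
  ingress:
    configureCertmanager: false       # skip the bundled cert-manager/Let's Encrypt flow
    tls:
      secretName: gitlab-wildcard-tls # example name; create it beforehand, e.g.
                                      # kubectl create secret tls gitlab-wildcard-tls \
                                      #   --cert=tls.crt --key=tls.key -n gitlab-system
```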
E: Okay, cool, yeah. That's really good, because some of the experimentation we've done internally with the GitLab server operator has been kind of contingent on deploying cert-manager. We typically do a separate installation of cert-manager, which in and of itself has some upgrade-related inconsistencies. We find that sometimes there's a problem with the bundle when we try to install it as an operator, so I think finding more configurable ways to have lightweight installations of GitLab server is desirable. So thanks for pointing this out.
D: Yeah, absolutely. I would say the closest you could get to calling it a hard requirement is that there's a webhook on the operator that is required: when you actually apply that custom resource, there's a webhook that receives that event, and the way that the Operator SDK and Kubebuilder configure that assumes cert-manager is there, because it effectively will generate a certificate for you in front of that webhook.
E: That's what I thought it was. Because, and correct me if I'm wrong on this, the process for doing the installation in OpenShift still requires you to manually apply the custom resource definitions and the RBAC stuff into the cluster, as opposed to installing from OperatorHub. And I was looking at the CRDs and stuff, and it looks like cert-manager is pretty pervasive there; there's an expectation that it's going to be there. So I guess that is the thing; yeah, maybe it is the webhook that seemed sort of unavoidable.
D: Yeah, great point. We actually have this tracked in an issue: we want to document (a) how we implemented cert-manager and when it's actually required, and (b) if you want to replace it, how to do that. So we've got an issue for that; it's marked for scheduling, so hopefully we'll get to it as soon as possible. We've had a pretty good experience with cert-manager so far, but obviously the point is to be as flexible as the Helm chart, so you can bring whatever certificate management tool you like.
D: Also, if you ever get the time and you want to open an issue for those problems you've been having with the cert-manager installation, it'd be helpful to open an issue here so we can keep track of it. Even if it's not necessarily the GitLab operator's problem, it's just nice to know what folks are running into when they're trying to install it.
D: Cool, thank you. Alrighty, let's check in here. All of the components are running; I see Webservice is up. Now, for example, I'll show you how this is kind of a built-in concept. Right now I'm listing pods; let's go list Deployments, for example. Here's a list of all the Deployments that are in the cluster, and you can do the same with StatefulSets.
D: Right now I can actually list out GitLabs. Because I applied that custom resource definition telling Kubernetes what a GitLab is, it's now like a first-party object in the cluster. So when I did the kubectl apply with my GitLab manifest, this is what was created, and now, if I describe it (let's go to the YAML version of it), I can see the chart values that I specified before, plus some status conditions. This is what we can push from the controller, the operator itself, basically giving status events.
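Status conditions like the ones Mitch scrolls past follow the standard Kubernetes condition convention (type, status, reason, lastTransitionTime). The sketch below uses those standard fields, but the specific condition types and values are hypothetical, not the operator's actual schema:

```yaml
# Hypothetical status block as it might appear in
# `kubectl get gitlab <name> -o yaml` on a healthy instance.
status:
  conditions:
    - type: Available                      # illustrative condition type
      status: "True"
      reason: ComponentsReady
      lastTransitionTime: "2021-12-15T10:00:00Z"
    - type: Upgrading
      status: "False"
      reason: NoUpgradeInProgress
      lastTransitionTime: "2021-12-15T10:00:00Z"
```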
D: So if the upgrade failed, for example, we could push an update there explaining that it failed and why. This is exactly how you might debug a Deployment: you go into the Deployment and describe it, and there are events on it that you can check. In this case there are none, because nothing went wrong. Over time we want to add more events that are descriptive of the installation process.
D: So here, if it is an upgrade, then effectively what we do is: if both Webservice and Sidekiq are enabled, we reconcile Webservice and Sidekiq but make sure that they're paused (we pass true there); then we can run the pre-migrations, unpause Webservice and Sidekiq, ensure that they're running, and then run all migrations, which means don't skip post-migrations, and then do a rolling update of Webservice and Sidekiq to those new versions.
D: What's great is that many, many projects in the Kubernetes world use Go already, so it's a great, lower barrier to entry for contributors who want to help us develop this, which is in the spirit of open source. And, like I said, it's much easier to write, much easier to test, and has more power now that it's a long-running process in the cluster, so we're really excited about what we can do with it going forward.
D: So at the moment, operators in the marketplace can either watch only their own namespace, therefore ignoring anything applied to another namespace. If I had applied that GitLab definition manifest to the default namespace, this operator wouldn't have done anything, because it's told by default to only look in its own namespace. But what we can do, and what a lot of operators do (for example, the cert-manager operator and the NGINX operator), is watch all namespaces. But with great power comes great responsibility.
D: If you have cluster-level scope, that comes with elevated permissions, and so what we're working on here is effectively finding the best way to be as secure by default as possible: default to looking only in its own namespace, but if you want to be able to reconcile GitLab objects in a different namespace, effectively all you would need to do is apply a separate manifest with the correct RBAC permissions.
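A rough sketch of what such a per-namespace grant might look like. The Role name, rule list, and service account name here are all illustrative assumptions; the operator's real manifest defines its own exact resources and verbs:

```yaml
# Hypothetical Role/RoleBinding letting the operator's service account
# manage GitLab resources in an additional namespace ("team-a").
# The real operator needs more rules than shown; consult its manifests.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-operator-manager
  namespace: team-a
rules:
  - apiGroups: ["apps.gitlab.com"]
    resources: ["gitlabs", "gitlabs/status"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-operator-manager
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitlab-operator-manager
subjects:
  - kind: ServiceAccount
    name: gitlab-manager        # assumed name of the operator's service account
    namespace: gitlab-system    # assumed namespace where the operator runs
```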
D: We definitely welcome end-user input on that. I think the end goal is just to make sure that it's clear what permissions it does have and what is required to reconcile in different namespaces. Effectively, what I did in the research here is I deployed it as I just did today, then tried to install a GitLab in a different namespace and just checked what failed, and it was effectively all RBAC: there are some RoleBindings, Roles, and ServiceAccounts that need to exist.
C: Cool, yeah, I think we're good on our side, Vik, if the folks at Red Hat want to talk about OpenShift.
A: Absolutely. Thanks, Mitch; thanks, Dylan.
C: Okay, yeah, no, that sounds good. What I was going to say is, like I mentioned at the beginning of the presentation, the main purpose of this is that there's a large appetite from folks who were utilizing OpenShift already, or who wanted to make sure that they were running GitLab in the same place as the rest of their workloads. So the main purpose of this, initially, was to unlock OpenShift: actually running GitLab on an OpenShift cluster. That was kind of the most exciting initial piece of this.
G: Yeah, and from the Red Hat side, it's great seeing this many people on the call. That's really exciting to see, and I appreciate the interest. Just by way of introduction, my name is Mark Vincent, and I guess you could say I'm Vik's counterpart over at Red Hat; I manage the partnership with GitLab here at Red Hat.
G: So if there are questions that come up after the call, you can certainly reach out to me, and we can get the other folks that I work with on the partnership to try to get answers for you. Or, if there are some live questions, I think there are a couple of people on the technical side on the call; you're welcome to put those in the chat or chime in.
I: Well, may I ask a question that was skipped? It's not particular to the server operator; it pertains more to the runner operator. I've been working with it for a few months now, and I want to say here: Dylan, Mitch, thank you so much for all the work on the GitLab side. I've been following this closely since beta, due to a need of a customer of mine, and my question pertains to one of the pending things, which is official support for Podman in the GitLab Runner operator. The reasoning behind that, for folks on the call, is that Red Hat no longer supports Docker on Red Hat Enterprise Linux 8 and beyond; that's a decision. You cannot even install it, as far as I know, and Podman is the official solution for building containers, and right now customers have to jump through a bunch of hoops to try to get that to function, and it's not as stable as you would expect it to be. So my question is: do you have any updates, or anything we should be looking forward to in that development? I put the issue in our Google Doc as well, for reference.
C: Yeah, Hugo, thanks for sharing. I'm only on the server side; Darren is the PM for the runner. I'll definitely pass this along; I'll send him a Slack right now, but continue monitoring that with Darren, and I'll raise this again on the product side. Thank you for sharing that insight from the market.
A: Sure, cool. And then, Conley Rogers, do you want to vocalize your question? Yeah.
J: Great, yeah, thanks for this presentation; it is very timely. I have some customers who are able to use OpenShift, and they are using OpenShift for other applications, so they see these announcements coming out. They're wanting to increase their scalability, because they're anticipating more users coming to the GitLab instance, and for availability they also want to minimize downtime. So I'm just wondering if you could articulate some of the key reasons why you would choose the GitLab operator over a typical cloud architecture, some of our reference architectures that we have out there, where it's more piecemeal: running all of the servers, your Postgres database, and trying to maintain all of that. That way I can communicate this and help educate them, not necessarily persuade them one way or the other, but tell them what they need to keep in mind when making that decision.
D: Yeah, I'll take a first approach to that, and then Dylan or Vik, if you want to chime in as well, go for it. What I would say, which I was starting to type (and I'll finish that in a moment), is effectively the way we've been pitching it: if you've got workloads in Kubernetes and you're very comfortable with it, it's typically convenient to deploy in familiar environments. So if you're already in Kubernetes, the perk of installing the operator is basically all the perks that you get from using Kubernetes.
D: And then, of course, you can use all the frameworks on top of Kubernetes that are a big help, like service meshes, etc. Now, if you're not in Kubernetes, then it might not be a great pick: there's a whole level of troubleshooting and extra architecture that goes into running things on Kubernetes, and that requires quite a bit of contextual knowledge. But if it's there, there are huge benefits in terms of flexibility, especially for the stateless runtimes. I'm going to share my screen real quick, too. Part of what we've been working on with other teams is documentation around reference architectures, and part of that includes the process of determining whether you want to pick an Omnibus installation on traditional server architecture, or a hybrid installation, where you run most of the stateless services in Kubernetes and the stateful services on traditional server architecture.
D: So we actually lay out what cluster topology we recommend, as well as what services will be running, and then we give an architecture diagram of what that would look like. We've got that for most of the reference architectures, to my knowledge, so effectively it's just another choice.
D: The upgrade process: it looks like my cluster is getting low on resources, but you can see it went through this upgrade process here, and if I list my GitLabs it's Preparing, because I don't have room for that last pod (I'm only on a two-node cluster). But you can see the version here is 5.5.0, so it actually went through, ran all the migrations, and ensured that everything was upgraded properly to version 5.5.0. There you go, I've got room, so now it's Running.
C: As far as troubleshooting and costs, the operator should be taking some of the load off an admin. Some of those things that an admin might have to do once a month, once a quarter, or even every week, some of those small processes, could be offloaded onto the operator, giving the admin a little more freedom to focus on other things. And ideally, troubleshooting should be very straightforward as well, in parity with our Helm chart.
C: Mitch just showed how easy it is to install and use the operator to install an instance of GitLab, so ideally moving to the operator is actually easier if you have that innate knowledge of Kubernetes, like Mitch said. Yeah.
C: Awesome. I did want to get to JC's question real quick: what is the best way to work this into a training course? I guess my question back to you is: what sort of training course are you imagining, internal or for a customer?
F: My thought with this particular question was: this is maybe an administrator-focused, administrator-scoped responsibility, to set up and implement operators, but is that the only type of situation where you could use operators? I'm looking at what was shared on the screen, and it looks like it was an installation of reference architectures of GitLab, but are there other ways to employ operators, maybe with just IaC or things like that? Thank you for taking the question.
C: Yeah, so I would say this might be a better question for somebody from Red Hat, with a little more knowledge of operators, but there are lots of operators out there, which I guess is maybe what you're getting at. Our operator is just intended for the purpose of keeping your instance of GitLab happy, running, and upgradable, that sort of thing, but there are many operators out there, and this is why we want to be in the operator marketplace: once we're in that ecosystem, with both OperatorHub and the official Red Hat Marketplace, people can consume and find our operator and other operators to really make their business better.
H: Yeah, the GitLab Runner operator is already there, but, like you were saying earlier, you're going through a certification for GitLab server. And like you were saying, there's a whole bunch of operators; in fact, OpenShift at this point is basically itself all running based on operators.
H: You have all kinds of databases, you have security tools, many of our own tools (Advanced Cluster Security, Advanced Cluster Management); they're all operators out there on OperatorHub, and they're made for people to be able to easily consume them and package them up. You even have the operator marketplace where, much like AWS's marketplace or any of those, you can actually buy them directly through OpenShift.
B: I just realized I didn't do a good job earlier, so thanks, Mark and Phil, for jumping in from the Red Hat side. Let me backtrack a little bit and just paint the bigger business picture, and the demand that we are seeing for OpenShift. Mark Vincent leads the alliances with us from the Red Hat perspective.
B: They have helped us along this journey for the last six to nine months, and helped Dylan and Mitch's team develop the OpenShift operator collaboratively. We meet on a bi-weekly basis to build this operator, and I think what we are really excited about is getting this operator certified for OpenShift.
B
You
heard
very
good
things
about
why
we
are
adopting
the
operator
framework
and
you
know,
there's
a
tremendous
interest.
I
you
know
from
public
sector
side
and
chris
roman
is
there
to
help
on
the
public
sector
side
kurt
dusek
on
the
alliance's
essay
side.
Please
reach
out
to
him.
If
you
have
additional
questions,
we
are
happy
to
connect
you
with
your
counterparts
on
the
red
hat
side.
B
So,
let's
sort
of
you
know
thanks
again
mitch
and
dylan
for
giving
us
this
amazing
demo
phil.
I
know
you
didn't
get
a
chance
to
present
today
on
openshift.
A: Yeah, I mean, since Phil is on: Phil, do you want to take five minutes or so and just highlight your thoughts on OpenShift?
H: Yeah, so OpenShift, obviously, is our opinionated version of Kubernetes. Let's see, I can jump to that part.
H: So, if I share this: Kubernetes is this middle piece here that provides the networking, orchestration, container runtime, and packaging. OpenShift contains that, plus all the pieces that we think most customers need for an enterprise-level application.
H: So it does your self-service management and your service catalog; we talked a little bit about OperatorHub. It has build automation, deployment automation, some built-in CI/CD itself, and all these pieces together make up Kubernetes plus OpenShift.
H: We do have managed cloud offerings as well as self-managed. We're directly on the marketplaces with Amazon and Azure, as well as IBM, where customers can subscribe to a managed offering of OpenShift: a combination of our engineering teams and the cloud provider's engineering teams will provide the OpenShift platform, infrastructure, and management support for them, and then they just have to worry about running the applications they want.
H
They can pull down that operator, throw it up in the cluster, and start deploying their applications without having to deal with all the background management and support of OpenShift itself. Of course, we also provide the ability to install OpenShift on their own hardware, and if they want to manage it themselves in AWS or the other clouds, they can do that as well.
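That "pull down the operator" step is typically an Operator Lifecycle Manager Subscription. As a minimal sketch (the channel, namespace, and catalog source names here are illustrative assumptions, not taken from the session):

```yaml
# Hypothetical OLM Subscription for the GitLab operator.
# Channel, namespace, and source names are illustrative; check
# OperatorHub / the cluster's catalog sources for the real values.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: gitlab-operator
  namespace: gitlab-system
spec:
  channel: stable
  name: gitlab-operator-kubernetes
  source: certified-operators
  sourceNamespace: openshift-marketplace
```

Applying a manifest like this asks OLM to install and keep the operator updated, after which application deployment is driven by the operator's custom resources.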
H
They can do it on vSphere, Nutanix, a variety of different offerings; basically, wherever they want, OpenShift will run the same. That can help, especially when you have outages like AWS East earlier this week, and currently, I don't know if you saw, but AWS West is now having issues.
H
So that's probably going to spur some people to look at multi-cloud offerings, I'm guessing in the next couple of weeks once they get back from Christmas. But with things like OpenShift, and of course with your GitLab operator, wherever they run it, it's going to run exactly the same, so they can support multiple clouds and hopefully mitigate these types of outages.
A
Well, basically, Phil, Edmond was asking in the slides: CI/CD is listed as a component of OpenShift, yes; how should we describe how this compares with GitLab CI/CD?
H
So, kind of like what Mitch was talking about earlier, where with your Helm chart and so on you support a variety of different tools, we do the same thing with OpenShift. We do have a built-in CI/CD pipeline they can use, but of course, if they're familiar with GitLab, for example, and they want to keep using GitLab, by all means they can keep using the GitLab runners.
H
They can use GitLab for their CI/CD pipeline, or if they just want to use it as a source code repository, they can do that.
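As a minimal sketch of that combination (the runner image, variables, and manifest path below are illustrative assumptions, not from the session), a GitLab pipeline deploying into OpenShift could look like:

```yaml
# Hypothetical .gitlab-ci.yml; image, variables, and paths are illustrative.
stages:
  - deploy

deploy-to-openshift:
  stage: deploy
  # An image that ships the `oc` client.
  image: quay.io/openshift/origin-cli:latest
  script:
    # Credentials would come from masked CI/CD variables.
    - oc login "$OPENSHIFT_API_URL" --token="$OPENSHIFT_TOKEN"
    - oc project my-app
    - oc apply -f k8s/deployment.yaml
```

Here GitLab provides the pipeline and runners, while OpenShift is purely the deployment target; the built-in OpenShift pipelines simply go unused.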
A
All right, I don't see any other questions in the document. I think Hugo's was just a comment on previous chat comments. Any other questions? Yeah.
I
Yeah, my comment: I think there's a lot of conflation of concepts, right? This is hard, especially for people who are not used to the Kubernetes environment. What Mitch said cannot be overstated.
I
We run into these customers all the time; they come to us saying, "I just want to install Kubernetes." My first question is: why the heck are you doing that yourself? It's self-inflicted pain; stay away from it, right? So there's a lot of knowledge that needs to be gathered and solidified, because, you know, even as I was typing this: just the fact that we started out talking about the GitLab Kubernetes operator, and now we cannot call it the GitLab Kubernetes operator anymore.
I
It's simply the GitLab operator, right, because of a naming issue. And then we start throwing Kubernetes here and there, and operator, and GitLab, all together. I see a lot of people conflating concepts, so I think it's important to get the basics right, even for us technical people at GitLab: get the basics down, understand where things are coming from, understand the "why" behind each concept, at least at a high level, and then call for help. Call for help, because we all need those big brains.
I
So thank you again to the distribution team.
G
And to that point of "call for help," I think it's a good opportunity to engage with your Red Hat counterpart if you have customers, or whoever, coming to you, as you say, all the time, asking questions about Kubernetes and OpenShift and GitLab and how they all work together.
A
All right, yep, cool, thanks all. We're right at time, two minutes after, but that's cool. Vic, any last comments?
B
Oh, thanks. Thanks for the demo, and thanks everyone for giving us the time. Let's continue this collaboration and learning together; there's a lot of help available inside the distribution team and from the Red Hat experts. So please reach out and let us know how we can help you.
A
Yeah, and thanks once again to our Red Hat team for taking the time to participate in this skills exchange call. I definitely appreciate all the help and the answers that you provided. And Mark, I will most certainly take you up on that and shout the questions in your direction.
A
All right, thank you all, and have a great Wednesday. I hope to see you next year, actually, because we're not going to have another session this year. But if I don't get to see you, have a happy holiday: Merry Christmas, Happy Hanukkah, whatever you celebrate during this December month, and a very prosperous new year, especially for GitLab and Red Hat. So, all right, thanks all. Thanks, cheers, cheers, bye.