From YouTube: GitLab KaS deployment with Charts 2020-11-03
A
Hello, everyone, and welcome to our Red Hat sync on the OpenShift operator. I've got a few items on the agenda that we can discuss and, of course, anything else on folks' minds. The very first one was the decision about using the Helm SDK; we reached, I believe, a decision on it to move forward. Jason, could you give us a quick synopsis of that issue and the next steps?
B
We did do the experiments with this early on, and Edmond did some checking on how it would work, and we did notice that we have some refinements to do. But after digging through what we've got in place, how the framework works, how the Helm code base itself works, their SDK, as well as looking at some of the concerns that were raised during the initial investigation of using that work...
B
We still believe that it's viable to continue down the path of attempting to use this, as opposed to straight generators. Within the Go code base itself, there will be an amount of logic required to understand how the application works, and we're going to face that no matter what. The first thing that we need to address is not having the template called every single time.
B
The second thing: one of the things that we noticed is that the methodology for actually having the antibodies now is effectively repeatedly calling the same function, which results in the same output being generated.
B
The last concern, and the largest concern that we've noticed, is understanding who generated a change and how often those change notifications would come around, and making sure that when we are doing a reconcile loop, another reconcile loop doesn't interrupt us and result in us not knowing that, oh, that's something we're doing in this other loop, going through everything.
B
We don't believe that should be the case, because it looks like it's a FIFO stack in terms of how this is handled. So, barring us having to wait for something to get fully populated and the cycle expanding, this shouldn't be something we run into. But there are ways to mitigate this, whether it's with annotations or many other...
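The FIFO handling described above, where a key that is already queued is not queued a second time, so overlapping reconciles collapse into one, can be sketched roughly like this. The class and method names are illustrative, not the controller-runtime workqueue API:

```python
from collections import deque

class ReconcileQueue:
    """FIFO queue that collapses duplicate keys: if an object is
    re-queued while already waiting, it is not enqueued twice, so a
    later reconcile sees the newest state exactly once."""
    def __init__(self):
        self._queue = deque()
        self._pending = set()

    def enqueue(self, key):
        # Only add the key if it is not already waiting in the queue.
        if key not in self._pending:
            self._pending.add(key)
            self._queue.append(key)

    def dequeue(self):
        key = self._queue.popleft()
        self._pending.discard(key)
        return key

    def __len__(self):
        return len(self._queue)
```

Because duplicates collapse, a burst of change notifications for the same object produces one reconcile over the latest state rather than several interleaved ones.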
A
Methods. Jason, I'm trying to capture some of what you called out as kind of the next steps, but certainly that could use some polish. I wanted to just say it straight up: it feels like we've reached a decision, and now we're sort of planning what the next steps are.
B
We are, at this point, beyond the experimentation. What we want to do, for the GitLab portion of the GitLab operator (just that one; we're not talking about GitLab Runner), is to implement the Helm SDK templating that we've done as an experiment until this point. We want to refine that to the position where we can fully replace the existing generators.
B
When it comes to the application work, there are a number of components that you're working on now that are state specific, specifically OpenShift and the operator, and how we actually consume other providers, that are still viable things to be working on. The only real difference is going to be things like where we use gitlab utils to generate service accounts.
B
Instead of calling the function, we would call into the object to just give me the service account for this name, so most of it would be drop and replace. But I wouldn't go too deep into trying to replicate additional features that have been coming along, because we're losing parity as it is. So we'll need to try and just take a gap, get this fully implemented, and then regain all that we have, because then we don't have to have parity between two projects.
C
Okay, I appreciate the clarity. I think we'll need to be more specific at some point. We may not be able to do that, because it definitely wouldn't benefit us for me to work on something that I think, or presume, would be useful and then we don't end up using it. But for the most part, I think what I'm getting is: from this point on, I'll focus.
A
Yup, I think that sounds about right. We know that for the runner there's demand for it, so that's certainly not going to be wasted. And, as you mentioned, it's not directly tied to the Helm work, so I don't think any of it would be affected by the decision to use the Helm SDK in the runner...
D
Itself. What else is there, like, macro level, that is there to do on the GitLab side? So, there's largely consuming some CRD, typing that into the Helm chart; that's largely work that I imagine can be done. There's using the Helm chart to template out and then write, which I think is something we're changing.
B
There is, with regards to how we're doing the constraining within the namespace through the service accounts, that particular piece of work, which, with understanding how things are operating and doing the ensuring through... and then I still don't fully understand how that behaves, and we have that as another item. Some of the securing work can still be worked on, but the step between what are the properties from the CRD and what is the object I'm trying to enforce: that's where the Helm SDK comes into play.
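That step, going from the properties on the CRD to the object being enforced, is essentially a translation from the custom resource's spec into the values the chart templating consumes. A minimal sketch, with entirely made-up field names:

```python
# Illustrative mapping from custom-resource spec fields (names are
# assumptions, not the real CRD schema) to nested chart value paths.
SPEC_TO_VALUES = {
    "replicas": ("runner", "replicas"),
    "gitlabUrl": ("runner", "gitlabUrl"),
}

def spec_to_values(spec):
    """Build the nested values dict the templating step would consume
    from a flat CR spec, skipping fields the user did not set."""
    values = {}
    for field, path in SPEC_TO_VALUES.items():
        if field not in spec:
            continue
        node = values
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = spec[field]
    return values
```

The real mapping would be generated or maintained alongside the chart, but the shape of the problem, spec in, values out, manifests rendered from the values, is the same.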
D
Yeah, that's the other: the deployment and the ongoing validation of, I imagine, them as discrete features. Cool. On the runner side, are we... like, Darren and Evan, do you know where we're at there? Like, are we... I think we're gonna have a beta by 13.7? How do we...
E
Oh yeah, hopefully, yeah. Josh, how are you? Well, I hope we have the beta by 13.7. We actually assigned, on the engineering team side... we actually decided to pick one of our engineers to start working on it right now. We think we should have most of the work done by 13.7. The one thing we haven't done: we haven't thought about Edmund's team in a while, so that's one thing we probably do need to sync up on as we get going. But yeah, hopefully we get everything knocked out by 13.7.
C
Yeah, so from the runner perspective: the operator version, as it exists today, only supports deploying and managing the GitLab Runner.
D
Go off the conversation here, or make sure that, (a), we're leveraging you to the degree that we can. We are trying to move as fast as we possibly can on OpenShift, and so I want to make sure that we're obviously making the most of your offer to help here; we appreciate it. And so I want to make sure that we, you know, sort of...
D
And as far as, like, talent's work: like you mentioned the demand, so what we see the demand for is, number one, by far, the runner operator, from what we hear from our customers and our field teams, so that is by far the most important to get out there. Number two, we're getting a fair amount of questions on the security features and other code quality features that we run as part of Auto DevOps; it's a bit of a separate thing that relates to the operator. And then behind...
D
That is the GitLab instance. So that can hopefully give some feedback on what we're hearing as far as the prioritization and demand from our side.
C
Cool. And there has been some confusion as far as the operator is concerned, which, I think, with the discussion that we're having now, is a great opportunity to bring up. So: the work to deploy the GitLab application was added into the runner operator that was deployed initially, so it was going to manage both.
B
We are not necessarily against splitting it; we're not exactly against keeping them together either, so we can go either route. But we should be keenly aware that there are two completely separate products actually being deployed that make up the entirety of our ecosystem for GitLab. The sheer fact is that Runner is technically a separate component that is released separately and managed by another team.
F
Hey, Joshua, team: this is Matt Mariani from the Red Hat BD side. In regards to demand: you know, I admittedly have been following through folks on the team when I hear about opportunities on the Red Hat side of the field, and I am hearing requests for server certification coming up. So I would definitely get the sense that, you know, the initial demand is with Runner...
F
If that needs to be refreshed, I would also throw out the work with IBM on, like, the marketplace side. That's been an area, I think, of another route to market that they've been kind of discussing with us, and it has been in the mix with Vic; and to call out that the server cert may be a dependency in that train as well.
F
So, like, not looking to dispute the statement, but I definitely want to just test and see if we feel that the, you know, the input on the demand for the server certs is, you know, kind of current with respect to some of those things.
D
Yeah, I can provide a little more color there; perhaps that might help, and then maybe we can have a call, if necessary, between us two, so we can better discuss some of these items. But we have a fair amount of opportunities, and increasing customers that we are tracking, that want OpenShift.
D
It's a sizeable chunk of revenue between those two pieces, between both existing and new customers; opportunities, rather. And it's hard to know for sure, but in talking with them, of that batch of folks, like 50/50, maybe a little bit more than that, is looking for Runner first. Existing customers already are managing GitLab in some way, shape or form.
D
If they have to run a VM or two outside of OpenShift, they can; it's not as desirable as if they wanted to run the whole thing on OpenShift, but they're, you know, again... The most pressing item is having the runner there, in their cluster, so they can deploy locally from there without having to type it externally.
D
So that's the general pattern we're seeing here: folks do want the server, and we are working hard to deliver on it, but from a demand point of view, it's been relatively clear from the field teams that the runner is number one; that's what we're hearing from that demand. Now, IBM and some other opportunities may be a little bit different, but that's largely what we're hearing from our field.
B
Matt, my apologies; I haven't heard this term before, the server cert?
F
Oh, I'm sorry: the certification, the operator certification we're talking about here, for the server instance as opposed to the runner. I apologize if I'm misusing terms there; the full GitLab offering versus the runner.
D
Apologies; so... we have separate teams, you know, aside from... I've been working these things, and so we're full steam ahead on both paths, but...
D
I think... okay, my main question was just making sure that we're making the best use that we can of Edmund's time in helping us to accelerate either one of these projects to delivery as much as we can. So, all right: it sounds like the runner is a good place to work and draw that to completion, at least for the first iteration, and then perhaps we can circle back and address the other pieces. And for the server side of things, that seems reasonable to me; but, Evan?
A
Cool, I think that takes us to item three. Gerard, give us a quick synopsis of the GitLab versions and the operator version situation.
I
All right, so basically: I started looking at this a little bit, trying to figure out how we should handle it. I think the way (and again, this is gonna change, or may change a bit, if we decide to split the runner and application into two separate operators, of course), but initially what I was looking at is: I think it would be highly desirable to get the operator version number the same as the GitLab release.
I
But until that point, you know, I don't see why we don't, especially as we're mostly developing the runner, keep just a simple semantic version of 1.0 and move forward until the operator comes out with the parity of being able to manage the runner; and then we just bump that version number up to the release of GitLab at that point.
I
Another piece of that is that, as I discussed in the comment, basically, we should have the operator handle just the number of supported releases, so basically three releases back from the current, in the operator. That would help simplify the operator code; it also helps remove cruft that accumulates over time and keeps the operator a little more healthy. So those are my thoughts.
I
I haven't seen any significant discussion.
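The "three releases back from current" support window suggested above reduces to a small version check. A sketch under that assumption; the version numbers are examples, not a statement of what the operator actually ships:

```python
def minor_version(version):
    """Parse 'MAJOR.MINOR[.PATCH]' into an (int, int) pair."""
    major, minor = version.split(".")[:2]
    return int(major), int(minor)

def is_supported(requested, current, window=3):
    """A requested release is supported if it is the same major version
    and no more than `window` minor releases behind the current one."""
    cur_major, cur_minor = minor_version(current)
    req_major, req_minor = minor_version(requested)
    if req_major != cur_major:
        return False
    return 0 <= cur_minor - req_minor <= window
```

Keeping the window small is exactly what lets the cruft for older releases be deleted from the operator code instead of accumulating.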
B
We will hopefully not run into the number of problems that we did, when it comes to the runner itself, it being a less complex total piece, which is good. Should we continue to keep the operator together (or operators together, I should say), we're definitely going to run into a little bit of a hiccup while we deal with the fact that we don't necessarily have the same contents, in feature parity, between the Omnibus and the operator itself.
B
That's one of the distinct reasons why we don't have a version lock between the charts and the rest of the CNG, and actually how the Omnibus versions directly with the GitLab code base: because we don't have feature parity. So until we're 100% sure that we're up to feature parity that we can maintain, we may actually have pushback from the release cycle, until we can actually say that, yeah, this is going to be in by then and we're not going to lag behind.
D
By the release cycle and parity, you mean feature parity, like Pages and things like that? Or... yeah, okay. Personally, we can keep this going, maybe in the issue, but personally I'm less concerned about that. I think what...
B
There are times when we have to make large shifts in the application. For example, we've got Gitaly Cluster upcoming, and that's a big change in the way the chart actually will operate if that mode is flagged. We may not be able to necessarily wait for a major revision change; like, 14.0 is six months from now, and we may not have the ability to wait six months to implement.
B
But I think, for now, saying that, at least in the beginning, we specifically maintain the version of operability based on the release that we were at at the time, that pre-1.0, makes sense. Let's not fight to have three, four, six versions that are capable of being in play and then upgraded through, until such time as we're actually prepared to do that.
I
I view it... and initially, I mean, we're really hitting the runner with the operator today, and I think initially my view of it is that we just allow the operator to keep just the current release of the runner. In other words, don't try to do six releases of the runner in the operator today; just keep that one version, and keep the current version that's been released. So that was my thought.
I
Versioning... well, at that point, would it not be the same as just upgrading from the old version to the current version? So you upgrade the operator, and the operator would then go and upgrade the runner.
B
Yep. The nature of the runner being a less complex application (it doesn't have 19 components, it's got one) means versioning it is much simpler, where you can actually say: I want 13.5, and I want to be able to deploy one that's 13.4. As long as we have the containers in the registries, then you'd be able to do that, to a degree. Now, that being said, I defer to the operator framework experts we have on the call.
C
Yeah, that becomes an interesting question. I'll give two options here: one being one operator managing the two resources, and the other being us having separate operators.
C
I would def... or, I think the approach that Gerard brought up makes sense, in that we have the operator version matching the resources, or the version of resources, that it's deploying, so that when somebody sees "I have operator version 13.6", they know this will deploy, let's say, Runner or GitLab application server version 13.6, and maybe two versions behind.
C
I think emulating a similar approach would make sense. But, thinking of it, what comes to mind is: the version of the GitLab application and the version of GitLab Runner do not hold as much; is that correct?
C
Yeah. So, with that being the case, it makes for an interesting case, especially if they're going to be combined. But if we decide that we'll have separate operators for the two, then, for the runner side, we can always assume... or, we can always choose that the operator version will also deploy...
C
Operator version X will deploy runner version X, so there would be that parity. So I think that makes sense; for the current operator that's in place, of course, we don't have that. And the other thing that was just mentioned, for the runner side: I don't believe it's necessary to be able to support multiple versions. Maybe if somebody has more information you can let me know, but I do see... yeah.
C
I think I'm getting a nod to mean that one version makes sense, but I was going to say: I do see edge cases whereby, if there's a major architectural change and it disrupts the behavior of somebody running the operator, they may want to roll back to a previous version.
C
So, in that scenario, I think just being able to support two would make sense, so that people can roll back in that weird situation, or that one-off situation; but beyond that, I think it just makes sense to support one version.
D
Do you typically see, like, a version tag in CRs for the operator? Or is it just dependent on the operator version that's deployed, as far as the image, and then it just deploys whatever version it wants of the...
C
So when you say CR tag, are we talking of, for instance, in this case: if we have a GitLab Runner operator, our custom resource would be the runner, right? So, we do see cases where people want to be able to override the version.
C
In the case of the runner, I did not give people the option to override the tag of the image, or to give an alternate image, and that's something that people would do sometimes. The reason for that is that, in the case of OpenShift, and the runner specifically, I knew that the image we were using on OpenShift was modified specifically to run on OpenShift, so we did not need anyuid.
C
So if somebody was to replace that image with something else, then the runner would break, or the runner would not deploy successfully. For that reason, I did not give users the ability to do that. But I've seen situations where people have given the user the ability to do that, and in some cases, depending on the scenario, it may mean that you have to account for maybe changing behavior; but in most cases it's straightforward. It's just a simple, different image, and the behavior is going to be the same.
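The choice being described, pin a known-good image versus let the CR override it, comes down to a small defaulting rule in the operator. A sketch; the image names are placeholders, not the real registry paths:

```python
# Placeholder default; the real operator would pin the OpenShift-ready
# runner image here.
DEFAULT_IMAGE = "registry.example.com/runner:13.6"

def resolve_image(cr_spec, allow_override=False):
    """Return the image the operator will deploy. When overrides are
    disallowed (the runner operator's current choice), any user-supplied
    image in the CR spec is ignored and the known-good default is used."""
    override = cr_spec.get("image")
    if allow_override and override:
        return override
    return DEFAULT_IMAGE
```

Keeping `allow_override` off is the conservative stance described above: a user swapping in an image that lacks the OpenShift modifications would otherwise get a runner that fails to deploy.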
C
I would say that depends on the application. Of course, you want to give the user more ability to override, or more flexibility, and different communities or different projects choose different approaches.
C
So, in that case, you want to restrict the options, or what variables the user can set, and you also do that with stability and best practices in mind. So you come from the perspective of a bias, knowing that this is the best way to deploy it, and this is what I'll support, and I might give you a few other options.
D
Running out of time here, so... appreciate the feedback, though; that's great. That...
B
Rings a bell, Josh, because we've had customers deploy the Kubernetes chart and then specify the GitLab version, and even though we documented left and right "don't use this unless you're a developer", they set it to, like, three revisions back, and then the chart doesn't know what to do. So I completely see where Edmund is coming from, especially from the support side, where our answer has been: yeah, take that value out and just upgrade; you'll be fine.
B
But if you do that in OpenShift, and it's your operator that's doing that, reading the logs and going "oh well"... that's not exactly a fun support call.
D
Cool. I have to hop real fast, but thanks, everyone. Anything out... and I opened up the issue for the singular versus split operators, so I'll paste it into the docs here in a second.
H
Sounds good. I think that takes us to the next item.
B
Yeah, I've not had any time to address this, due to competing priorities. It's entirely possible Edmund has had the same problem.
A
Sounds good. That takes us back to Gerard on the registry image submission.
I
All right. So, last Friday there was a call between us and Red Hat, working through the image submission and certification processes; a lot of good information. I do have the call recorded in the notes referenced here, so if you're really interested... I was hoping that the call would serve as a training for anybody else that needed to do some of this, and it's probably not quite to that level, but it does have some good information.
I
During that call, we identified that it looks like there are at least two GitLab company IDs, or whatever, and it looks like I'm the only one that's not in the company that shows the products and the images and so forth. So I think that's still trying to get resolved; I checked a little bit ago, and it's still the case, nothing's moved on it for me. Another thing that came up, because we were asking about it, is the API that's available.
I
I've looked at the API; it does look like it'll be sufficient enough to allow us to integrate the image submission and certification processes into our CI. We will have to poll: it doesn't look like there's a webhook to call back and let us know the results are ready, so we'll just have to poll occasionally to look for results and then take action on that.
I
I think that's about it for that.
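Since there is no webhook, the CI integration would have to poll for results. A sketch of that loop; `check` is a stub standing in for whatever request the certification API actually requires (the real endpoint and payload are not specified here):

```python
import time

def poll_until_done(check, interval=0.01, max_attempts=10):
    """Call `check()` until it returns a non-None result or the attempt
    budget runs out. `check` stands in for a request to the
    certification API asking whether the scan results are ready yet."""
    for _ in range(max_attempts):
        result = check()
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError("no certification result after polling")
```

In the CI pipeline, the interval would be minutes rather than milliseconds, and the returned result would drive the next pipeline step.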
J
And, Gerard, just to close the loop (and I was talking with Pete about this as well): I am tracking the situation with your accounts, and I had sent a ping out to Charles on our end, who's taking point on that. He hasn't gotten back to me just yet, but once he does, I'll be sure... I'll just hit you up on Slack. That works. That...
K
So, the metadata: we did the rescan, and that didn't seem to change anything. What are the next steps on that? So, Gerard, do you want to take that, or do you want me to?
J
Sure, let me just pull up my notes real quick on that. So it's my understanding that the issue here is... let me just... no, that's it. Okay, cool, sorry, just pulling this up.
J
So I'm not sure it's that. What Charles said was: he discovered that the product which contains the runner and runner operator projects is not published, and so that's why it's not showing up. Apparently there's some missing information that needs to be added in order to publish; he says he pasted a screenshot below showing the missing information there. I'm not... it's a little hard to see from the screenshot.
C
Yeah, yeah; just going back to that comment. If I recall that conversation properly, I think that was the initial perception, but once we got into the project we found that it had been published, or both the metadata and the images had been published, but there was a release... So we currently have two versions of the operator, right? And that's the operator that's deploying the runner at this point. So we have 0.1.11 and 0.1.12.
C
There had been an attempt to publish 0.1.13 that failed, and it was just left in failed status. So, during the call, there was an attempt made to try and republish that, or to run a scan on that, to see if that fixes the issue. But if I just take us back: you know, early on in the call, we noticed, when Charles was walking us through the issue...
C
So, when submitting the images, we have a different process to submit the operator and to submit the other application containers. So I think, from that perspective, they should be different; but certainly there is stuff for us to go back and figure out. It just seems to be a red herring, and... yeah, yeah.
A
Cool. Anything else on the registry image submission? And we're right up almost at time. Edmund, you had the next two items here.
C
Okay, so for the next item: I just wanted to follow up. During, I think it was our last meeting, there was a discussion of there potentially being an architectural change, which would mean either we wouldn't need the GitLab secured apps, or... So, we were having a discussion between the GitLab secured apps and GitLab managed app namespaces, trying to figure out whether we would need them. So I just wanted to have a follow-up, to see whether there was any information on that that was gathered.
C
Okay, all right. I rebooted my system right before the meeting, and I've noticed my mouse is dragging, so I just wasn't sure whether it was my system. But okay, so let's go into the next thing; if something else comes up, we can always revisit. The next thing is: I thought it might be interesting, or a good opportunity, for us to review the scope of the operator as it stands and, you know, have a little bit of a discussion of how we'll handle that.
C
So, for the version of the operator that's currently deployed (and this is specific to Runner), we have it segmented, or we have it namespace scoped, meaning the operator would only watch requests for runner instances within a single namespace. And that means, if we have, let's say, 10 different departments or different organizations sharing the same cluster, each of them could have their own operator running within their own namespace, and they wouldn't affect each other.
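Namespace scoping, as described, amounts to filtering which events an operator instance acts on. A simplified model (illustrative only, not the actual Kubernetes watch API):

```python
class ScopedOperator:
    """Simplified model of a namespace-scoped operator: it ignores any
    runner event outside the namespace it was deployed into. Passing
    namespace=None models a cluster-scoped operator instead, which
    handles events from every namespace."""
    def __init__(self, namespace=None):
        self.namespace = namespace
        self.reconciled = []

    def handle(self, event):
        # `event` is a dict like {"namespace": ..., "name": ...}.
        if self.namespace is not None and event["namespace"] != self.namespace:
            return False  # out of scope; another team's operator owns it
        self.reconciled.append((event["namespace"], event["name"]))
        return True
```

With one `ScopedOperator` per department namespace, ten teams on one cluster never touch each other's runners, which is exactly the isolation property being described.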
C
So, with that being the case, I wanted to float the idea of: what if we have a cluster-scoped operator, for instance, in the case of GitLab? If we decide to have a cluster scope, the ideal situation would be that you can deploy a GitLab instance in any namespace, and the operator would handle that; and, in an ideal situation, you should only have one instance of the operator in the cluster, because it will be responding to requests throughout the cluster.
C
So I think it's an ideal situation for us to evaluate what users may need: whether it's likely that they would need to be able to deploy multiple GitLab instances within the cluster.
C
Then we think about that now. And, number two... so, number two: that goal also goes to security. If we are working with, or if we expect, people who are security conscious to use the operator, being namespaced would be more comforting than having an operator, or an application, that has access to the entire cluster. Because, for the operator to be able to deploy to any namespace in the cluster, it would need to have cluster roles, versus roles, which are restricted to a single namespace. So that was... yeah.
B
I think it was a Kubebuilder-based operator, and looking into the question of whether we would do one instance of that operator, or we would do one per namespace, and actually scoping farther through API labels, we effectively decided that, if you want to be able to run side by side... If you have an operator that is version locked to the application, it's extremely hard to deploy multiple versions of the application, for example a pre-staging and production.
C
You cannot deploy multiple GitLab instances, multiple versions.
I
Well, I'll tell you, coming from working on Kubernetes outside of development, and operating it at other companies: there would typically be multiple clusters. Almost everybody tends to have a stage or test cluster for doing quick testing and so forth.
A
Yeah, that would be my expectation, because that allows you to actually... I could upgrade Kubernetes versions, right, independently of the application itself, and have a little more, you know, stable progress, instead of, like, just, you know, changing Kubernetes in my production environment and impacting all these...
A
Yeah, because all of the instances, or environments, would be impacted.
I
Evan, quick question, on this bit about operator scope: do you have that in an issue today? Because I've also got a thought or two running around my head that I wouldn't mind putting down on the issue. No?
C
I don't have anything; I just thought about it today, when I was looking at what we need to go through, and that came up. But yeah, we can have an issue opened.
I
Yeah, if you can get an issue, I'll put some comments in there too, because I see value in both ways, and I'm not sure they're mutually exclusive either. But let me put that in documentation, so other people can either shoot me down and say "no, you're being stupid", or "hey, that's an okay idea". I don't know which it will be.
I
And I may... I may try to, so, if I do get a chance to get it open, I'll tag you in the issue, so you know it's been there, so you don't duplicate.
B
Okay, sounds good. All right, Phil, it looks like you have the last item.
J
Yep, so I'll go quick and be respectful of Phil's time here. So: the NFR licenses, so you can set up your own cluster environments. I was just wondering: was anyone able to access the page for requesting those? That was an action item that I had followed up with Vic on; I'm not sure where things are with that.
I
Okay. I know I did that months ago, when I first started looking a little bit at OpenShift, but I don't remember the exact process. I remember submitting it, and then, like, a couple hours, or maybe even minutes, later, I got an email back saying they're available, or something; other than that, that's all I've ever seen.
A
Okay, gotcha. I believe Dustin (he's not here this week), but I believe he got all the way through the process, and got to the point where he was actually able to stand up a working cluster. Okay, cool. And so what we were hoping to do is get him all the way through the process, so that we can get it, you know, defined in steps, and then have other folks on the team, you know, essentially go through the same process. So, Gerard, maybe between now and the next time we meet...
A
Thanks for following up, though; that's definitely one where we wanted to make sure we had everything we needed. Absolutely. Awesome; I think that's a wrap for today, everyone. Thanks for everybody's time; there are some good notes here. I'll record this and add a link to the recording in the document, for folks that want to go back later; and then there are a few folks on my team that are in alternate time zones that'll watch it as well.