From YouTube: 2022-07-21 - Delivery:Orchestration Q3 OKR discussion
Description
Team discussion about https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/2477
A
Okay, welcome everyone, thanks for joining. Just to kick things off with a little bit of context: this is a casual sync on the OKR discussion we've been having on the issue. I will put this recording on the issue and also summarize the discussion, so the conversation can continue.
A
One thing I've been working on alongside what we do on the OKR, and maybe how we can do some self-serve: I briefly chatted with some people, not everyone, but some people from Configure and from Package this week, and both of them are tentatively saying Q4 is better for a kind of collaborative effort for moving things. Now, Configure are much more likely to be flexible; they're saying Q3, maybe Q4.
A
So what I have suggested to Configure is that we will continue on the assumption that we will ideally use the GitLab agent service, but before we say for sure, we'll figure out what we would want to do there and what we would expect from them. Then I can go back to them and give them a slightly better idea of how much time, and roughly where in the quarter, we might need their input.
A
Awesome. So maybe to kick things off: what I've suggested, though I'm completely happy for this to just be an open-ended conversation, was that a good use of this time would be to try to get on the same page around the comment I left, which was in response to your comment, Alessio, which is roughly: here are the steps we think most,

A
maybe all, but certainly most components would go through in order to reach a self-serve deployment. Then what I tried to do is pull out some of the details, so that when we say, okay, if we wanted to do, for example, automated releases, here is basically what we would want to have for that piece. So we have the broad pieces, but I'm completely happy for this to just be an open-ended conversation. Maybe... who wants to kick it off? I don't know.
B
I'll just jump in. This may be a silly question, but to me it's actually really important, because I think over the course of this whole conversation I realized that maybe I didn't really understand, or clarify to myself and maybe to others within the team, what we mean when we say self-serve deployments.

B
What I mean by that is: take what a developer thinks of when they're developing the GitLab Rails component at the moment. Do they consider that self-service? Do they consider that it's definitely hands-off, that they just have no involvement? So what do we mean? Which is really two things:
B
one, what do we want to achieve by self-service deployments, or how do we define that in the context of what we currently do now? And two, and I know, Alessio, you've already touched on this, what is self-service deployments? Does that mean it's still self-service into this four-times-a-day cycle, or is it a more rapid cycle? Is it choose-your-own cycle? What do we mean by that?
A
So I can give a little bit of an idea of what I would quite like to see. Now, I'm not going to put any timelines around this; it's probably longish term. The main thing that I see people struggling with is the coordination with us, and the timelines that we fully control. So to answer your question: I think for now, no, someone developing in the Rails monolith wouldn't consider that self-serve; that would be auto-deployed now. One of the big reasons is they can't really control anything.
A
We choose when things get deployed, and we choose when things get rolled back, and they don't really have any visibility of that. So I'm thinking of self-serve as: a stage group has the freedom to control their rollouts, and ideally they would also be responsible for the rollback. Hopefully that's automated, but I'm thinking that they would have... I mean.
B
Right, like, I forget what the times are, but: at this time we're going to, quote unquote, deploy; at this time we're going to deploy. Well, actually we're going to release and deploy, right, because we kind of consider that a release: we're going to cut this release and try to deploy it. So there's that part which, I get now that you've talked about it, they don't have any control over.
B
And then there's the part where I'm going to run some tests, I'm going to bake it, and that's kind of the second part, right: it's independent, because we can change how often we release and how often we tag, but the deployer part of auto deploy, that rolling out safely, is a fixed thing. We're defining the points, the times we release, and then that process itself is actually doing the rollout. So really...
B
Okay, so that makes sense. So, once again, I'm just thinking internally: if we split those two things off, we have what auto deploy does now, which is that safe rollout through environments, which I think is reusable, and I think is pretty well established, pretty well known about. We've got the whole post-deploy migrations part there as well, and things like that. So the next question is, and this is, Alessio, I guess you've already touched on this, and I'm interested in your thoughts:
B
do we still have this one auto deploy pipeline, or does the auto deploy pipeline become something that every component uses? So there are multiple repos running an auto deploy pipeline, each component running their own; obviously we'd need to lock and make sure they don't go over the top of each other, and things like that. Or do we keep this model where there is only the one auto deploy pipeline, running in this one repo, and we feed component changes into it?
C
This is a great question, and I think this is far, far away in the future, so let me start with something which I think is important to clearly understand, and has to be clear for everyone in the team: we tend to consider auto deploy, and the monthly and patch releases, as two separate processes, as two completely different things. Even in the release-tools code base they are coded separately, even though they are exactly the same thing.
C
The problem is that auto deploy runs daily; I mean, more times a day, but basically the basic unit of time is the day: how many times I deploy a day, how many packages I build a day. It's something that begins at the beginning of your day, ends at the end of your day, and tomorrow is another day and can be a fresh start.
C
So that's the thing. But as soon as we enter the release preparation week, those two processes collide, because the last known deployment is the entry point to the stable release. This is something quite unique to our company, because we do both SaaS and on-premise installations, so we get extra challenges here. The point is that we release what we tested.
C
There are reasons for this. For instance, not every component, but many, have integration tests within the monorepo, so that when we run a CI build for Rails, we are compiling the version of Gitaly that is declared, we are compiling Workhorse, and we are testing that Workhorse works with that, and that Gitaly is actually behaving correctly. That's the reason why things are referenced by version in Rails.
C
When we tag, we collect those versions and propagate them to the packagers, CNG and Omnibus, so that from that point on, when those packages are tagged, they contain the right versions, the expected versions. Okay, so that's the thing. Now, all this complexity starts from development and goes through the packages, then delivery.
C
It affects everything in the engineering function of this company. So it is safe to say that if things are aligned with this model, then we can safely consider this unit as the release, the monthly release, and we know what's inside the release, we know what we tested. We are not even there yet, because there are components that are getting outside of this.
C
No, no, no. Okay, let me... look at, say, the GITALY_SERVER_VERSION file. There is a SHA inside; there's no version. There is a SHA of the Gitaly repo, and basically we build Gitaly from that SHA, so the Omnibus package, the CNG packages, whatever they have, they are built from that SHA. When we start the release process, release-tools creates stable branches from that SHA and creates the release tag, which is the same tag as the product version.
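The flow just described, component SHAs pinned in version files in the monorepo, collected at tag time, and propagated to the packagers, can be sketched roughly as follows. This is a minimal illustration, not the real release-tools API; the `propagate` helper and the fake file contents are assumptions, though file names like `GITALY_SERVER_VERSION` are the real convention.

```python
# Sketch: component versions are pinned as version files in the Rails
# monorepo, collected at tag time, and propagated to the packagers
# (CNG, Omnibus). Helper names are illustrative only.

MONOREPO_VERSION_FILES = {
    "gitaly": "GITALY_SERVER_VERSION",       # contains a SHA, not a semver
    "gitlab-shell": "GITLAB_SHELL_VERSION",  # contains a tagged version
}

def collect_versions(read_file):
    """Collect pinned component versions from the monorepo at tag time."""
    return {component: read_file(path).strip()
            for component, path in MONOREPO_VERSION_FILES.items()}

def propagate(versions):
    """Return the version pins each packager should be tagged with."""
    return {"omnibus": dict(versions), "cng": dict(versions)}

# Example: a fake monorepo checkout.
files = {
    "GITALY_SERVER_VERSION": "8f1c2d3e4b5a\n",
    "GITLAB_SHELL_VERSION": "14.9.0\n",
}
pins = collect_versions(files.__getitem__)
packages = propagate(pins)
```

The key property is the one C describes: every packager ends up carrying the same pins the monorepo declared, so "what we tested" and "what we package" stay linked.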
C
Now, when we hit this point, we have options, like the automated update Gitaly has: Gitaly just scans for the latest master commit and updates the version file autonomously. Or you can say developers change the SHA themselves when they want to deploy something specific. They don't want to release, that's the point; they want to deploy something: "I want to deploy this commit." Write me the commit, and then by the 22nd I'll make sure this will be part of the release.
C
If we are at that point, then we can start thinking about independent deployment, which is the thing you touched on, which is: do we really want to deploy every single component together, as a monolith? No. I mean, does the Gitaly team want that? They don't. But at the same time they have assumptions, like "this is going to be deployed before that", so it depends whether they are willing to trade that stability around the order of deployment for more freedom.
C
The status quo right now is that Gitaly goes first. But let's say registry is independent; KAS could be independent. So I do understand the need for independent deployment, but this poses new questions, like: how do we know what to release on the 22nd, if we are no longer referring to those version files?
C
So what I'm thinking right now is that, speaking with developers, at least the KAS developers, because registry has a different view on it: the KAS developers are struggling more with the release process than with the deploy process. They would be fine with six deployments a day if they didn't have to touch the release; but they're doing just one release a month, because they have to do the release process manually and it's painful. And this opens up a new set of questions around the link between deployments and releases, which is: we do patch releases.
C
We do security releases. I've been involved in doing a security release for Pages; it was 8 to 16 merge requests, because it's completely manual. You have to do the fix on Pages, do the backports four times, do a changelog bump four times, do a tagging four times, all of this manual, and then the version file has to be updated on three stable branches plus master, so another four merge requests. The amount of people involved in doing this was plainly wrong.
C
So that's one good question that we have to answer, which is: in theory, we should not force anyone; we should just say, if you want to adopt this model, we can handle everything for you, but if you want to tag your own release, that's fine, as long as I have a way to figure out what's inside every product version that I release. Because right now... yeah, go ahead.
B
I do pose the question whether we should push that into its own repo or something, and maybe even the GitLab project feeds into that, and we define a proper OpenAPI model for defining the components, and maybe even solidify that. Even for components we don't ship, we could force them to go in there, because it becomes an inventory of all the software we are able to deploy.
B
So that could be one way to look at it. But I agree: I think we need to push hard on people who aren't in this inventory to be in this inventory. No matter what, we should be pushing people to be in that inventory, because you're right: then we can auto deploy, or deploy to Kubernetes, or whatever; we have all the processes from there that can do it, and we need to be able to cut that patch release, monthly release, whatever, from there.
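The inventory idea B is proposing could be sketched as something like the following. The schema, required fields, and helper names are all hypothetical; the point is only that one validated, machine-readable record per component lets both deployment and release cutting read from the same source of truth.

```python
# Hypothetical sketch of the component inventory discussed above: one
# machine-readable record of every deployable component and its version,
# which both auto deploy and release cutting could read. The schema and
# validation rules are illustrative, not an existing GitLab format.

REQUIRED_FIELDS = {"name", "version", "source_repo"}

def validate(entry):
    """Reject inventory entries missing the fields a release cut needs."""
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        raise ValueError(f"{entry.get('name', '?')}: missing {sorted(missing)}")
    return entry

def cut_release(inventory):
    """A release is just a snapshot of the validated inventory."""
    return {e["name"]: e["version"] for e in map(validate, inventory)}

inventory = [
    {"name": "gitaly", "version": "8f1c2d3",
     "source_repo": "gitlab-org/gitaly"},
    {"name": "registry", "version": "v3.54.0",
     "source_repo": "gitlab-org/container-registry"},
]
snapshot = cut_release(inventory)
```

Forcing every component into such an inventory is what makes "cut the patch release from there" possible: the release becomes a query over the inventory rather than a manual collection exercise.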
C
Yeah, that's in my table; that's the automated update section. Which means: as long as the interface point between delivery, let's say delivery and the development teams, is the version file,
C
if you want to run your own release process, then you bump your version file with a tagged release and we will roll it out. If you say, no, I want all the automation in the world that you can provide, then there are two options: either we implement an automated update like Gitaly's, which says, every hour, check the status of master and bump; or developers choose themselves: "I want to do a release."
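The two update options C names, an hourly follow-master job versus a developer-chosen pin, can be sketched side by side. Function names and the MR-opening step are illustrative assumptions, not real release-tools behavior:

```python
# Sketch of the two options mentioned: an automated periodic bump that
# follows master (as Gitaly does), or a developer-driven bump of a
# chosen commit. All names here are illustrative.

def automated_bump(current_pin, latest_master_sha):
    """Hourly job: follow master; bump only when it has moved."""
    if latest_master_sha != current_pin:
        return latest_master_sha  # e.g. open an MR updating the version file
    return None                   # nothing to do

def developer_bump(requested_sha, commits_on_master):
    """Developer chooses a specific commit to deploy."""
    if requested_sha not in commits_on_master:
        raise ValueError("can only pin commits that exist on master")
    return requested_sha

master = ["aaa111", "bbb222", "ccc333"]
new_pin = automated_bump("bbb222", master[-1])   # master moved, so bump
chosen = developer_bump("bbb222", master)        # explicit developer pin
```

Either way, the version file stays the single interface point between delivery and the development team, which is the property C insists on above.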
C
Now, if we want to do something different, like independently deployed components: an independently deployed component means that, in theory, it could be deployed before the version is bumped, because we no longer have the link of "registry is running version one, and there is an auto deploy package that has version one inside of it as tracking information."
C
We can't guarantee that, because as soon as we split the two elements, they get independently deployed, so the tracking will no longer work. So if we go down this route, then we have to think clearly about how we want to reconcile things, because then someone could say: yeah, but we have integration tests in the Rails repo, and they were not run.
A
In order to properly enable self-serve deployments, where a stage group has their own pipeline and pushes to production, we need to flip the order of the versioning, I believe. Is that correct? So we will certainly need a more robust, independent way of keeping track of which component versions we have. Yeah, sure, yeah.
A
Graham mentioned that, yeah; I've got this written down somewhere else. And then the other piece we have is: as part of those independent pipelines, people won't necessarily get as much free testing as they maybe do right now. Is that right? Right now... yeah, it's more about the integration tests.
B
I'm kind of thinking that even if people want to do their own thing, I say people can't avoid the inventory; I say they have to use it. What I'm trying to think is: even for self-serve deployments, I think the model should be...
A
Write your own, yeah, that's exactly what I think we'll do. So think about self-serve in terms of: I think we get to a point where we offer people choices. We'll say, you can just do your own thing and we won't do any pieces for you; you can go off and do that if you want. Like whatever the one was that you were looking at the other month, Graham, the Applied Machine Learning group, anyway, they were like, we're going to be fine.
B
Pending, yeah; it was under review.
A
I think we should say we have a minimum set of requirements that you have to meet. So, if you're part of auto deploy, you do this, this and this; if you're going to be whatever the next stage is, whatever we call the Gitaly model,
A
you have to do this, this and this; and if you're going to be doing self-serve, you have to do this and this. Then we can say: you must be putting version files here; you must have tests, or a way to feature flag, or whatever we actually need to put in place. We need to put some coordination around this.
C
I would try to relax the requirement on testing, because these are the things that are slowing us down and making the switch to independent deployment harder; then we have the problem I was mentioning, which is, how do we test things together? And move more toward having stable, sensible metrics, so that we can check at the time of deployment.
C
And this will help us test independently, because then you can deploy independently and we can still validate on metrics, and you can still have your integration tests as part of your own component's CI, but that's another topic. Because if we want to go to a self-serve, independent deployment model, we can't enforce tests to run on the Rails side of it. If they are there, better. So, say Gitaly introduced an RPC; there's a gRPC endpoint.
C
It's good to compile it, it's good to verify the endpoint and things like that, but it should still work with a slightly different version of it. And there is something we can consider as an intermediate point, which is: let's say you want to have deployment as part of your own pipeline, so independent deployment. I'm still assuming you have already implemented everything else, so you have automated releases, automated tagging, automated everything. What we can think about, because both KAS and registry first deploy to pre with their own process,
C
is that we may even consider rolling out to the canary stages after that, but then going to production is always in sync with the auto deploy promotion. So something like: we have an accumulation point before the main stages, and when we promote, we collect the versions of everything. So we know the packages included, but we also know that the independent components were at version this, this and that; we bundle things together and promote. It's still a monolithic approach, so it has downsides, but it puts a point in time at which we were propagating changes.
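The "accumulation point" idea above can be sketched as a single snapshot taken at promotion time. The function and the example version strings are illustrative assumptions; the real promotion lives in release-tools and the deployer.

```python
# Sketch of the accumulation-point idea: components deploy independently
# up to pre/canary, but promotion to the main stage collects every
# component's current version into one bundle, so we still know exactly
# what went out together. Illustrative only.

def promote(package_versions, independent_components):
    """Snapshot everything at promotion time into one release bundle."""
    bundle = dict(package_versions)        # from the auto-deploy package
    bundle.update(independent_components)  # independently deployed parts
    return bundle

auto_deploy = {"rails": "15.2.202207211020", "gitaly": "8f1c2d3"}
independent = {"registry": "v3.54.0", "kas": "v15.2.0"}
bundle = promote(auto_deploy, independent)
```

This keeps the property C wants: even though registry and KAS moved on their own schedule, the promoted bundle is a complete, recorded set of versions, so the later release cut still knows what was running together.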
B
What I want is: I'm changing the inventory file for staging to say we are going to... or maybe, actually, that's not right. I want to deploy this version, either by an automated tool or manually or whatever, into our inventory, like a GitLab-wide inventory or whatever it is, and then that triggers a downstream pipeline that we still own; but they can do it, they can watch, they can monitor. They can do it, but it's not in their repo.
C
Yeah, that makes sense; it's a good question. I'm kind of scared by the complexity of allowing every single component to have access to every cluster, and having that set up correctly.
B
Well, that's why I'm saying we should basically say: no, you can't have your own pipelines just doing deploys, but we'll give you an interface. You bump this version here, and that will trigger a pipeline in this project that we have for you, where you do have, let's say, developer or maintainer access, or at least some level of permission; you can watch the pipeline. It's still a separate repo from that ownership.
C
I think that maybe the registry team has thought about it, but I don't think any other team we've ever told about something like this has; they are so far away from having something like this. Many may not even... I mean, the Elasticsearch indexer is not really changing that often; GitLab Shell had a burst of development when we moved from the Ruby version to the sshd Go version, but they
A
tend to be quite stable. We'll work this stuff out. I feel like that one would be a good one for us to do an evaluation on, to figure out what it would look like in one place versus the other, and this is really why I'm keen that we have a team
A
we can collaborate with. So, to answer your questions: you're sort of asking what they want. Honestly, we don't really know; we're going to need to go work with the stage group to actually figure this out with them. The first one, I think, will be very much tailored specifically to that component, but I'm hoping from there we'll learn enough to put a generic model in place that we can just roll out to other components.
A
So for this one, it does sound like it will be a case of us figuring out: what options do we have? What are the pros and cons? Where's the complexity, where are the compliance risks, where are the permission problems? And then from there, actually trying to work out whether we have options here. I hope we do, but honestly I don't know. I think that's something we'll just have to do as part of the work of the actual implementation.
C
We were also thinking, I don't remember if you briefly touched on this at the beginning, Amy, or not, whether there was a chance to dogfood KAS to deploy KAS. So we were thinking about that, so that we learn how the tool works and how to operate it, and see... because that may be the central inventory we were discussing before, because it can work in GitOps mode. So we could have the central inventory, which is KAS-enabled and does the deployment, and so we were thinking
C
something like: we can install KAS on ops and put the agent on pre, so that we start doing the deployments on pre using KAS from ops. That's a nightmare in terms of environments, but yeah. So that before we commit to it... because, as I say, independent deployment in the timeline that we describe kind of comes after other things. So in the meantime, while we get closer to giving people independent deployment, we also figure out if the tool is okay.
B
I think the thing with KAS is: I like the tool, I think it's got a lot of room to grow, but for our problem here, it's at a very low level. I still think we should do this; I still think we should use it and we should dogfood it, definitely. But it really comes back to the whole gitlab-com repo problem, and the problem of: once we have the component versions,
B
they will feed into something, I don't know what: Jsonnet, kpt, Kustomize, something, and that will output the manifests, and then KAS will deploy them onto the cluster for us. So it will be a step in the CI job that replaces Helm. It really is just a way to do two things, once we get these inputs in at some point.
B
So really, if we were to look at that part, that comes back to probably that epic I'm always talking about: how do we change gitlab-com, or build a better model for taking these inputs and outputting something that's consumable in a pure GitOps way? But still, that's fine, even with what I'm thinking about now.
B
One of the inputs into the gitlab-com repo at the moment, one of the problems we have, is that the input for the version numbers is a pipeline environment variable, and that's obviously tricky for many different reasons: for auditability, for people running things locally, and so on.
B
If we are able to lean more on people getting things into the GitLab inventory as the definitive way to deploy new versions of components, then that can become an input into whatever tooling we choose: KAS combined with some tooling to render the manifests out, with KAS applying them. And we can build a very strong pipeline there: inputs come in, manifests are rendered, we apply them safely. We can maybe monitor metrics or something as well, just to make sure things are going well.
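The pipeline B describes, inventory in, manifests rendered, manifests applied, can be sketched end to end. Everything here is an assumption for illustration: the dict-shaped manifests stand in for what Jsonnet/kpt/Kustomize would emit, and the fake `apply` stands in for the agent reconciling the cluster.

```python
# Sketch of the described GitOps pipeline: the inventory is the single
# input, a render step turns it into Kubernetes-style manifests (the job
# a tool like Jsonnet, kpt, or Kustomize would do), and an apply step
# hands them to the agent. No real KAS or renderer API is used here.

def render(inventory):
    """Turn inventory entries into deployment manifests (as dicts)."""
    return [
        {"kind": "Deployment",
         "metadata": {"name": name, "namespace": name},
         "image": f"registry.example.com/{name}:{version}"}
        for name, version in sorted(inventory.items())
    ]

def apply(manifests, cluster_state):
    """Reconcile desired manifests into a (fake) cluster state."""
    for m in manifests:
        cluster_state[m["metadata"]["name"]] = m["image"]
    return cluster_state

inventory = {"registry": "v3.54.0", "pages": "1.61.0"}
state = apply(render(inventory), {})
```

The design point is the one made in the conversation: because the only input is the inventory, every version change is an auditable commit rather than a pipeline environment variable.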
B
I guess that's kind of it; the question is the input. So KAS and the gitlab-com repo just become a pure repo that is just a low-level definition of the state for Kubernetes: this is what the state of things is in Kubernetes; there's a service, there's a pod. It's still auto deploy, or something with the logic of what we need to change, that feeds into that,
B
to say: we need to change the state of staging canary; now we need to run QA; okay, now we need to change the state. So, obviously, I've got many ideas and want to change that low layer of just: how do we feed in the inputs?
B
How do we generate outputs from that, cleanly and easily for everyone to see, and how do we apply that to the cluster, using KAS or something, safely? I mean, KAS has things like the CI tunnel built in, which means you don't have to give runners access to the Kubernetes clusters. The other thing, actually, talking about self-managed deploys as well, now that I think about it: this is the one thing I remember from when I was working with the team on designing it.
B
So, actually, thinking about this more: if each component, and this is a whole other technical discussion I don't want to bog us down too much into, but if each component, at a Kubernetes level, gets their own namespace... Right now we deploy everything into the one namespace; we just jam everything together. But we could do better.
B
We could isolate them out and say: this is your namespace; you each get an agent, and all the agent can do is talk to that namespace. Because when you set up the agent, it's a pod, and you give it permissions only in that namespace, so you enforce it. Everyone gets an agent only in their namespace, and then you can actually connect it to their own deployment projects in GitLab. They can give it permissions to whatever group or project, and they can't...
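The per-component isolation B sketches, one namespace per component, with the agent's credentials scoped to only that namespace, corresponds to namespace-scoped Kubernetes RBAC. The dicts below stand in for the Role and RoleBinding objects you would actually create; the naming scheme is an assumption for illustration.

```python
# Sketch of per-component isolation: each component gets its own
# namespace, and its agent's service account is bound to a Role that
# only exists inside that namespace (nothing cluster-wide). The dicts
# mirror the shape of Kubernetes RBAC objects, illustratively.

def agent_rbac(component):
    """Namespace-scoped Role + RoleBinding for one component's agent."""
    ns = component
    role = {
        "kind": "Role",
        "metadata": {"namespace": ns, "name": f"{component}-agent"},
        "rules": [{"apiGroups": ["apps"],
                   "resources": ["deployments"],
                   "verbs": ["get", "list", "update", "patch"]}],
    }
    binding = {
        "kind": "RoleBinding",
        "metadata": {"namespace": ns, "name": f"{component}-agent"},
        "subjects": [{"kind": "ServiceAccount",
                      "name": f"{component}-agent", "namespace": ns}],
        "roleRef": {"kind": "Role", "name": f"{component}-agent"},
    }
    return [role, binding]

objects = agent_rbac("registry")
```

Because a Role (unlike a ClusterRole) is itself namespaced, the agent physically cannot act outside its component's namespace, which is the enforcement property being described.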
C
That makes sense. There is something worth mentioning, though, which we didn't touch on in all this conversation, which is: by design, how things work comes from information flowing from development to Package to us. In order to have an image for a component on CNG,
C
we must tag, and also do a release. This is important to keep in mind, because right now, thinking about KAS: let's say KAS wants to deploy from a SHA. In order for CNG to have that image built, the SHA should be in the version file on GitLab, and we should have tagged an auto deploy, so that we propagate the versions to CNG. Because they were doing their own packaging and they were forced to move into CNG, CNG is acting as a gatekeeper. So when we talk about independent deploys, I still strongly believe we should talk about SHA-based deployments, because releasing is another problem, and it's our problem. But then we need a way for the development team to have an image built, which they don't have right now.
A
Is that part of the automated rollout step, or is that something that only becomes a problem once we get to independent deployments?
A
I just want to make sure we use the last 15 minutes to see if we can get on the same page on a few things. Do we roughly all agree that having everything go into auto deploy, as it exists right now, is the first step?
C
Yeah, yeah, I do agree. The only components that we know are outside of it are KAS and registry, and the GitLab metrics exporter, but that's still in development, so it will be part of it. So yeah, I think it's a good idea. And for registry there is an extra step, which is that it's not tracked in the main GitLab repo, for historical reasons, because it was a third-party component, and when we forked it, it never got integrated. So this means that we can't, even though...
A
No, no, no, okay, no: let's do that in Q4 when we work with registry, just because they've got the added complexity of their migration, so I don't want to change things on registry unless we really, really need to.
A
Now, given that: right now we have pretty much all of our release tooling set up to be around the monthly package; pretty much everything is designed so that we feed into that process.
A
Is that still the right thing? If, as a kind of product direction, we're now saying SaaS first, and if we start thinking around self-serve deployments, that's very much SaaS first, and we're seeing more and more components that are not really going to ship to self-managed, or certainly not yet: do we actually want to change anything around how we handle it?
A
I guess what I'm really thinking about is how we keep track of what's in a package. Is now a good time to actually separate that out, so that what we actually keep track of is what's on production, and from there we say the packages are whatever is on production, versus...
C
That's already the case. Well, it's a bit tricky, but it should already be the case; we are doing this. The problem is KAS and registry, because they live outside of this: depending on timing, we could end up releasing the wrong thing.
C
I mean, yes, it is, but I was explaining... you were asking about where the connection is between what we deploy and what we release. What I'm saying is that, regardless of how we name things, setting aside the names of the components and the versions they have: we release what we deployed on production. Except for registry, for another reason as well: because it's not tracked on the main repo, we just take whatever version is in Omnibus and in CNG, because it's independently upgraded.
C
We release what we have deployed in production, and for Gitaly specifically, we even disregard the content of the version file and enforce getting the SHA of what is running in production, because this is how the release is automated there. So we say: okay, now it's release time; I know, because I tracked it, that this is what I deployed; I'm going to release from there, and this will be in the release.
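The Gitaly-style flow just described, release what is actually running in production, not what the version file says, can be sketched as follows. The helpers and the stable-branch naming are illustrative assumptions, not the real release-tools behavior.

```python
# Sketch of "release what we deployed": at release time, the SHA running
# in production wins over the version file, and the stable branch/tag is
# cut from that SHA. Names are illustrative only.

def sha_for_release(version_file_sha, production_sha):
    """Production is the source of truth for the release cut."""
    # The two normally agree because the processes run in lockstep,
    # but after a rollback they can differ, so production wins.
    return production_sha

def cut_release(production_sha, tag):
    """Create the stable branch and tag from the production SHA."""
    return {"stable_branch": f"{tag}-stable", "tag": tag,
            "from": production_sha}

# Example: version file says abc123, but production was rolled back
# to def456; the release is cut from def456.
release = cut_release(sha_for_release("abc123", "def456"), "15.2.0")
```

This is exactly the double-check C mentions next: when the processes run in lockstep the two SHAs match anyway, but reading production covers the rollback case.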
C
It always aligns, because the processes are made in lockstep, so the SHA is the same; but to double check, in case we rolled it back or did something, we enforce getting the version from production.
C
I think this is what Graham was talking about: the fact that the metadata files are not the same for auto deploy and for the public release, and that's why I said we are doing the same thing, but with two different classes that handle the process in a different way. If I remember correctly, when we tag an auto deploy, we dump the versions of everything inside of it; when we do a monthly release or a patch release, so a customer release, an on-premise release,
C
I'm not sure if we want to do it this way, or if we want to merge the two processes together, whichever is most valuable at this point in time, because otherwise they will keep diverging.
C
No, I mean, in terms of release tools right now: whether we say we are tagging an auto deploy or we are tagging, the name we use is, a public release, we go down two different code paths. They are just the same thing re-implemented, taking into account the variation between the two processes: one has a changelog, the other one doesn't; one involves tagging every single element inside of it, the other one only tags the packages and propagates version files.
C
So there are some differences, but in the end they are doing the same thing: they just take SHAs from components, propagate versions to the packages, and tag the packages.
C
So we need to take a look at the code and figure out if it makes more sense to just align the metadata generation, or to put the two processes inside the same code path. Because... okay, someone has a horn.
C
I don't know if you can hear it, someone is playing with a horn. Okay, so, I'm just giving an example: the metrics exporter is being integrated in release tools, and one of the things we mentioned is that they did a partial implementation, because they didn't figure out that there are two processes, one for auto deploy and one for public releases, and they basically had to implement version bumping, version propagation, sorry, in two different places.
A
We need to take a look at this, right. So it doesn't sound like there's necessarily great value in doing this, but there is also some overhead to not doing it. Yeah, let's move on, Leonard, last few minutes.
A
I don't have all the details; I'm hoping you'll have time this week to maybe point us to some of those details. But just assume that we can comprehend getting Pages to have automated releases, and we know that would help us on the security side. I'm fairly confident we could get the GitLab agent service into auto-deploy; I think Configure will have the capacity and willingness to do that, and perhaps we also take them to automated releases as well, and that gets rid of some of that manual overhead, right.
A
So then we have three really concrete steps, and two components are then some steps closer to being possibly available for self-serve. We also have a goal then to really figure out what self-serve might look like: what do people really want, and to answer some of the questions you've had, Graham, around whether people want things to be in this repo or that repo.
A
We could actually get that stuff figured out this quarter, but it doesn't sound like a whole heap of foundational work, and I know we do have loads and loads of foundational work.
A
Those are really nice, visible things that I think fit really well with an OKR, but what might be really useful for us to do is to use this conversation, and all the conversations and thinking we've had around moving towards self-serve, and assume there is going to be at least one, and probably most, components that will benefit from having independent deployment, even if right now we're saying it's not immediately there.
A
In that event, what do we want to change? Let's see if we can get some of that work in. I know we've got the health checks you mentioned; we've got the work you've been doing, Graham, on environments and how we manage those better. I think there is definitely some interesting stuff we could do around versions. It sounds like there's lots of different pieces.
A
We are not the only people tracking these versions, right? Literally everybody deploying code must be tracking versions in some way. So I think there are some big pieces like that, that could be the center point of work for the quarter, and then the actual OKR itself could just be.
B
Yeah, I think it makes sense. I won't speak for Matt, but definitely for myself: things like getting a component like KAS into auto-deploy would be a good learning experience for what that process is, and for understanding auto-deploy more as well.
C
Okay, I also want to say that KAS is really well designed for this move, because there are no problems with version names and things like that, and they are willing to have it, so it's a perfect fit. Pages, on the other hand, have the fresh wound of the security release.
C
It is a nightmare because it's not yet completed, but there is a big, important gap, which is that Pages is currently running its own independent release with independent version numbering. So the biggest thing to sell them on is that now you're going to be released with the same versions as GitLab. It was okay with the previous management, when Pages was on...
C
Yeah, in any case, it's the only way to automate security releases for them. That's the selling point to me, right: when someone has experienced their first security release, and all the nightmare they have to go through as a component, you say: that's automated for you, just merge the thing and we're going to release.
A
Yeah, I mean, these are our opportunities, right. It's always a shame when teams have a terrible time, but these are also the times where I think we can step in and actually offer people a better approach, if they can meet their sort of release requirements. Cool, okay, great, we have one minute left. Is there anything else anyone wants to cover quickly on this call?
A
No? Okay, awesome. Thank you so much, guys, it's been great. I will put a summary together; I'll fill out the agenda notes a little bit and put a summary on the issue. One thing, Alessio, before you head off this week, and everyone else, but particularly you, Alessio: would you mind filling in on my comment, or adding a comment, on the bits where I've left the extra gaps? Because I think you've mentioned some of these things to me already.
A
Just so we get a bit of a picture. I think we're close to having a very tangible OKR which helps other teams, is visible, and also gives us a chance to push on with some of the foundational stuff we've been doing already.