From YouTube: 2021-12-08 AMA about GitLab releases
Description
Delivery team's monthly AMA
A: Hey everyone, just give it a few more seconds and then we'll kick things off.
A: Okay, so welcome, everyone. This is the December Delivery team's AMA — we're responsible for deployments and releases to self-managed users. So we will kick off. We've got a few of the team with us today, so I'm looking forward to having some interesting conversations. Victor, we may summarize your points in the interest of time and take this async, but would you like to please verbalize your incredible question?
B: Yeah, I'm more than happy to have feedback on these questions async as well, but I would love to hear your thoughts about this. So the question is: if you had rainbow-colored ponies and anything you could wish for, how would you design GitLab.com deployment pipelines in that amazing situation? That's the question, because I know what we do today — I asked that two months ago. So the last question is: what if you could build from scratch? You don't have to use anything, you don't have to dogfood. No restrictions apply.
B: What tools would you use, what process would you use, what metrics would you track, etc. — all those things. And before the call I just read Henry's last comment in 1.f, and I'd highlight questioning even that: why do you want a long pipeline in the first place?
C: Yeah, maybe the first answer, to point (a), on what I would like for driving principles: the one thing that came to mind for me is if we could split up our GitLab deployments into very small pieces — small services that we deploy independently.
C: That would of course be a big change in how we deploy GitLab.com, but of course there are reasons we are not doing this, and they are valid: we want to have this monolithic application, and we also want to test the packages that we give to our customers, and so we want to deploy all the things together in lockstep.
C: But if that could be changed, it would be a big change for the speed of getting something to production, for instance, and for mean time to production, and things like that. So that would be one dream, if I could ask a rainbow-colored unicorn. And then to your question: yeah, of course, having a long pipeline maybe wouldn't be that great, but on the other hand, having a lot of stages and a lot of insight into what exactly is going on is helpful — so maybe it's about how we display it, right?
C: At least we have these cases now, and sometimes it's not really nice to scroll through a lot of jobs and a lot of pipelines, especially when you want to publish and create new packages, for instance, and things like that when we do a new release.
A: Yeah, I don't know — I think very similarly to what Henry's already said: for me, fast, safe, automated are the things to keep working towards. The faster we can get changes out there, the better for everyone, but particularly the better it is for us to be able to recover from incidents. At the moment, that's our sort of super big pain point, I think.
A: You know, we can build processes around expecting a code change to take a number of hours to hit production, but it's less good when it's an incident that we're trying to recover from. So the faster that goes, the easier things get, and the more we can run in parallel, the faster those things go. So can we run tests in parallel? Can we run deployment stages in parallel? All the things to try to avoid having to wait on things as we go through the process.
B: Thanks — I'm still digesting your answer, but I already have a question for Henry.
A: Go ahead with this one, but for the future ones — can we put those down under their numbers?
A: Awesome, great. Were there any other bits that any other Delivery people wanted to call out as their rainbow-colored dreams?
A: No? Okay, let's go. Thanks, Christian and Viktor. Dan, over to you.
E: Sure — it's hard to follow rainbow-colored dreams, but I'll try! Congratulations on the GitLab Pages migration; I heard that went well. Were there any surprises that came out of it?
F: Excuse me — there was a mild surprise that I ran into, and Jason Plum helped me with it. This was something that occurred prior to us migrating; it was during testing.
F: I personally did not know that Pages served artifacts, and the route through which Pages serves an artifact was a little interesting to learn: we reach out to Rails, which redirects to Pages, which redirects itself back to Rails to perform the authentication and retrieve the actual object. Just learning about that was a little goofy — I know there are reasons behind it, but it was kind of surprising to me. Outside of that, Pages was a very quick and easy service to migrate.
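(As a rough illustration of the redirect chain F describes — the hop names below are purely illustrative, not GitLab's actual routes:)

```python
# Illustrative sketch of the artifact-serving hops described above:
# the request bounces between Rails and Pages before the object is served.
# Hop names are hypothetical labels, not real endpoints.

def artifact_request_hops():
    """Return the chain of services a Pages artifact request passes through."""
    hops = []
    hops.append("rails")         # initial request hits the Rails app
    hops.append("pages")         # Rails redirects to the Pages daemon
    hops.append("rails (auth)")  # Pages calls back into Rails to authenticate
    hops.append("object store")  # authenticated request retrieves the object
    return hops

print(artifact_request_hops())
```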
E: Cool, and along the same lines — I know you're undergoing the Redis migration. How's that going? I heard there was some password thing that came out of it, but in general, what do you think of it right now?
F: I wish Sean or Igor were on this call, because they've been doing a little bit more work on this than I have so far. But yeah, we're trying to figure out a way to utilize the same Redis Helm chart that we use within our existing Helm chart, and currently there are a few little goofy items related to how that particular Helm chart works.
F: That's causing us a little bit of grief. One of them is regarding authentication: in that particular Helm chart, when you set a password, you set it for both Sentinel and Redis. Currently — if I remember correctly — our application doesn't support using a password on Sentinel but does support using a password for Redis, or I might have that backwards; I can't recall. So that's one blocker, and then the other one is related to how to access the applications.
F: If we run Redis inside of Kubernetes, how do we access Redis from applications that are not running inside of Kubernetes? And for GitLab.com in particular, we're going to have applications that run in other clusters that will need to talk to a Redis cluster that may exist inside of a different Kubernetes cluster.
F: So there are a few challenges there, and there are a few open issues inside the Redis Helm chart itself about how to address this, and we're still trying to work out our best way to figure that out. So it's still a work in progress at the moment.
A: And some of this is very experimental, because we want to test out how multiple Redis instances work, as well as actually just figuring things out for rate limiting, which is the specific one we're working on right now.
A: Awesome, thanks for asking. Greg, over to you.
H: Hi everyone. In Support and in the community forum, I've seen an uptick in customers and community users requesting that we backport some of the more severe security vulnerability patches to the 13.12.x series.
H: Currently, our security patch release policy covers only the current release and the two previous minor releases. So I'm just curious: what amount of effort and work would it take, and what challenges might there be, if we were to move to backporting major security patches to the previous major release instead of just the two previous minor releases?
A: Yeah, absolutely — great question. The further back we go, the more work it is to apply these backports, because we're taking a code change and having to carry it back a long way, and lots of things have changed in the meantime. So it's an increasing amount of work, and particularly as we go back beyond breaking changes, the work becomes more complicated.
A: The big thing we've seen on recent backports is keeping stable branches around: we have a lot of test changes, and that makes it quite hard to get branches passing. We also don't maintain environments for past releases, so actually having somewhere we can put a package and test it means quite a lot of work. There was quite a lot of manual work to get things installed somewhere, involving Quality, to actually be able to kick off tests.
A: So it is a lot of work, and Marin has already been writing down some of this stuff.
I: Well, I was just sharing the link to the product issue where we are discussing some of these things from a non-technical side: what kinds of things we need to do in order to actually support this effort, and whether we want to do it. This is where we discuss with Product, Sales, and other parts of the org whether we absolutely need it. The original discussion, led a couple of months ago, concluded that we don't.

I: We want to only support the current maintenance policy. But in the past couple of months things have changed again, and there are some additional discussions going on right now about whether we need to adapt the policy again — or not "again," but for the first time — or whether we need to stick to our guns and say the policy is as is, and we're not going to make changes. Each of those decisions is valid; I think it's just about where we as a company want to go to support our customers.
A: Thank you. Okay, thanks for asking. Are there any other questions people want to go through before we return to Victor's rainbow dreams?
A: No? Okay, super. Victor, where do you want to dive in?
B: Sure. So, first question, to Henry: why would smaller parts and components get us closer to CD?
C: Yeah. Currently, how we deploy is that we tag all components at the same point in time, so that they are fixed at their versions, and we deploy this fixed-version package to our environments. But we can't do this very often, because it takes a long time to build the packages and then to deploy them through all stages for all of those components.
C: If we had a way to — let's say there's a new Gitaly version, so we just tag Gitaly, and then a Gitaly deployment starts and rolls out independently — that could be done much more often, right? And for smaller components it could be faster. Things that take longer to deploy would of course still take some time, but then not every little piece would wait for everything else to be finished, so that would speed up mean time to production for certain teams, for sure.

B: I see. Cool.

A: I mean, I'm sure we can — it's not something we've worked on; we'd have to figure it out. It's pretty complicated because of the packaging: we have a package, and then we pass it through each stage out to production.
A: So it's certainly something we plan to figure out so that we can go faster, but we haven't yet spent the time to actually work out: how do we separate that out? How do we run things in parallel and bring them back together in a way where we know we can safely deploy that out to production?
I: I do have an addition to that, Amy. That is one part; the other part also depends on our application — in which order can you actually execute things, and what things can you safely run in parallel? If you're talking about the environment level: theoretically, we could run multiple staging deployments and get different types of results, but with the information you gather in those stages, to actually go to production you still have an accumulation point in production, right?
I: So that's on the environment level. On the actual level of a GitLab installation, certain things can't go out of order. You can't deploy Gitaly after you've deployed Rails — actually you can, but you cause an outage. And in certain cases you can't even deploy... what was it again? I forgot now.
I: Maybe someone from the team can correct me, but we are kind of tying the deployment of Praefect to Gitaly and to Rails, so it needs to be executed in the right order — which means you can't actually parallelize that, right? It takes 20 minutes to do this and 10 minutes to do that, and you're already at half an hour. So if you look at those two levels: within one environment, you can only follow a certain order, because our application is very much monolithic.
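(The sequencing constraint described here can be sketched roughly as follows — the component names and durations are illustrative, not GitLab's actual numbers:)

```python
# Illustrative sketch: when components must deploy in a fixed order,
# wall-clock time is the sum of their durations; only unconstrained
# steps could overlap. All durations below are made up.

deploy_minutes = {"praefect": 20, "gitaly": 10, "rails": 15}
order = ["praefect", "gitaly", "rails"]  # must run strictly in sequence

sequential_total = sum(deploy_minutes[c] for c in order)
# If everything could safely run in parallel, wall-clock time would
# instead be bounded by the slowest single component:
parallel_lower_bound = max(deploy_minutes.values())

print(sequential_total, parallel_lower_bound)  # 45 20
```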
I: There is no loose coupling between the components. We don't version different components — or rather, in some cases we do version them, but at the same time we don't test compatibility between those different versions. So that's one level where we have a challenge. Another level is the environments level: how things get to production. You naturally need one point where things accumulate before they go to production, because if they only accumulate in production, you might get surprised in production instead of in a lower environment. So it's a multi-tier challenge.
I: Right now they accumulate in every environment, and this is the challenge we have: we take the output of one environment — staging, or canary in this case — to actually make a decision about whether we go forward with the next environment. That naturally creates the order, and it makes it harder to parallelize a lot of these things. So the lower you are in the environment chain — if you have multiple pre-production environments, for example — you can be loose about what decisions you make there. But when you go closer to production environments, you need to move one environment at a time.
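(The environment gating just described could be sketched like this — the environment names and pass/fail structure are an illustrative simplification:)

```python
# Illustrative sketch of gated promotion: each environment's outcome
# gates the next, which forces sequential progression near production.

ENVIRONMENTS = ["staging", "canary", "production"]

def promote(results):
    """Walk the environments in order, stopping at the first failed gate.

    `results` maps environment name -> whether its checks passed.
    Returns the list of environments actually deployed to.
    """
    deployed = []
    for env in ENVIRONMENTS:
        deployed.append(env)
        if not results.get(env, False):
            break  # a failed gate stops promotion to later environments
    return deployed

print(promote({"staging": True, "canary": False}))  # ['staging', 'canary']
```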
B: If anybody has questions, I would say please add them somewhere in the doc and we can skip ahead; but until then I will move on with some more questions. What tools would you use for the current process? I mean, we can't solve the monolith approach, but if you could pick any tool — you don't have to use auto-deploy, you don't have to use Helm, nothing — what would be your truly preferred approach? You can use Argo CD if you want.
A: I have no opinion on this — this is a terrible question for me. I don't think tools are necessarily the limiter that we face right now; I think it's more the complexity of bringing things together and getting them to production. But I don't know if anyone on the team actually has opinions on tools they'd like to be using.
F: I do not. However, the release-tools project that we've built has been a fantastic tool to utilize — if only it were, you know, part of the GitLab application. Perhaps that could be kind of interesting.
F: I think the nature of the fact that we have to talk to so many repositories, and the need to coordinate between how we deploy to various environments and speak to differing instances, all makes this very complicated. If there were a way we could wire all that together, that would make a lot of things a little simpler.
B: Just for me to understand: the release tools that you've built already — that's what you mean, that they would be built more into GitLab, precisely?
F: To hear it from you — release-tools is this amazing Ruby project that Robert, who is here on this call, has been developing a lot on, along with the rest of Delivery, to run the process of a release.
F: From start to finish: we send a ChatOps command, which calls a rake task, which reaches out to the five-plus projects to start the tagging. We have a coordinated pipeline that kicks off to manage that tag, all the way from building that tag into the Omnibus and CNG components that we utilize, and we watch that coordinated pipeline go into staging.
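(A rough, hypothetical sketch of the fan-out F describes — the project list and the function are illustrative stand-ins, not the real release-tools code:)

```python
# Hypothetical sketch of the release fan-out described above: one ChatOps
# command triggers a rake-task-like entry point that tags several projects
# at the same version. Project names are illustrative.

PROJECTS = ["gitlab", "gitaly", "gitlab-pages", "gitlab-shell", "omnibus-gitlab"]

def start_release(version):
    """Simulate tagging every project at the same version."""
    pipeline = []
    for project in PROJECTS:
        # In reality each tag kicks off that project's own build jobs in a
        # coordinated pipeline; here we just record the tag to be created.
        pipeline.append(f"{project}@v{version}")
    return pipeline

print(start_release("14.6.0"))
```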
This
project
is,
you
know
it's
it's
going
to
be
an
ongoing
project
for
quite
a
while.
There's,
probably
more
that
robert
could
speak
to
in
terms
of
you
know
how
it
works,
and
you
know
what
it
does
and
such,
but.
J: Yeah, it's definitely become kind of a catch-all: "we need this piece of functionality — let's add it to release-tools." So it does a lot of things. One thing I would really like to see, at least in the product: not just release-tools, but we as a team rely on a lot of chat functionality, and we're using the actual GitLab ChatOps product. Unfortunately, that limits what we can do in Slack — we can't do any kind of Slack interactivity. So we can't, say, have a message from the deployer saying "hey, this deploy finished to canary and it's baked — click this button to move on to production."
B: Yeah, thanks to either of you. Okay, let's assume the Configure team picks up your Ruby code and puts it into GitLab — it's another component that you have to deploy together and take care of whenever you do your job. Is it usable by any other user? Do you think it can be generalized to other users, for their own projects?
F: I think there is nothing specific to release-tools that couldn't be parsed out into a more generic form. We are not the only company making a massive monolithic application, and we're not the only company making an application that also needs to be packaged in various ways.
F: It would take some work, obviously, because right now there's a lot of hard-coding to go talk to GitLab for this specific thing and to go talk to Omnibus for that specific thing, but I do think there's room where we could make it more generic for other end users.
A: Thank you — and to take advantage of some of the features, right? This year we've adjusted to take advantage of some of the new features coming out in GitLab. That's the goal all the time, but particularly going forward: to reduce the complexity on our side as we figure out how to solve the problems of deployments, take advantage of new features coming in, contribute new features, and pull these together — so that everything we're using to do our deployments and releases is in the product, and other people can use it too.
B: Thanks. I got the message, but I was too slow to type it down.
K: Yeah, sorry about that, I was typing. I had a question specifically regarding — give me one second — services that are not managed by Omnibus. I'm not sure if this is the right area to ask, but they were looking for a defined process for informing self-managed customers about updates to standalone services that are not managed by Omnibus, specifically HAProxy and PgBouncer. They were looking for guidance from us, as we suggested they use their own versions of those services, and so what they were asking was: what versions does GitLab use?
K: They would like to know if and when they should upgrade those services — do we have a specific version they should use for PgBouncer? What they're looking for is, you know, vulnerabilities, exploits, things like that that they should be aware of, and how they can go ahead and update, because they currently do quarterly releases, and they would like to know when they should do that and how.
K: I think specifically they mentioned NFS: we sent out some communications on NFS version three and version four, and it broke their Gitaly cluster. So they were looking to mitigate that and have a defined process. They don't currently have one, so they would like to build one out, and they were wondering if we have a way of doing that, and if we could add that to the documentation — or maybe to the patch update notes that we do; I'm not sure how we would do that. But that's what they're looking for.
K: Yeah, so specifically what happened was there was a Patroni vulnerability exploit — I think back in April or May — that was posted on Patroni's website and then on GitHub, and that's how they found it. Then they applied an update for that, but that exploit basically allowed someone to gain admin control of the cluster, and so that cluster started filling up and it started failing over. Luckily they had four of them — or five.
K: They went ahead and shut it down, but it took them a while — they had to create an emergency ticket. What they were saying was: how do we know when there are exploits or vulnerabilities in those services that are not managed by Omnibus but are included in a Gitaly cluster, as suggested, and how do they mitigate that risk? Currently they have one person that goes out and looks for those updates, vulnerabilities, things like that, but they just don't have a way of getting that communication from us. We update our stuff — and we know about those vulnerabilities — but I don't know if we share that with anybody, and they're just looking for guidance on that. Because they found this out while doing some testing before they upgraded; what they were saying is that if they hadn't done that, they would have experienced it when they went to upgrade, and it would have caused them a lot more downtime than the normal three to four hours they allotted.
I: Okay, I can also add a bit more color from the infra side, just so we know where the gap is coming from. Our HAProxy fleet is managed by the reliability teams, and we upgrade it when necessary — we don't necessarily have a clear process around it; it's not on a regular cadence and so on. It's more when an external event comes, like a vulnerability or a necessity for a new feature.
I: And for our customers, we don't include load balancers inside of our application, for a couple of reasons — one of them being that it's hard to package one solution, given that customers prefer to use their own load balancers for various other reasons. When it comes to the database itself, GitLab.com had to diverge at some point a couple of years ago from what we package within Omnibus — what we ship in general to our customers — in order to stay ahead of the scale.
I: So this is where the gap is coming from, and this is why what we have on .com specifically is not necessarily tied into the processes that Jason is mentioning with the AppSec and Distribution teams.
K: Okay, thank you for that. Their question was basically: can you just share what you do on .com with us? And I don't think that's something we can do 100%, right. I think with the reference architectures we've given them as much as we can, and we've been transparent with them, but they were wanting full transparency for everything — like, "hey, when did you find this update?" Okay, I can open an issue and discuss that with the team. I appreciate the answers, thank you very much.
A: Thanks for asking. Right, we're at time, so thanks so much, everyone — thanks for the questions and all the discussion items today. I hope you have a good rest of your day.