From YouTube: 2022-08-10 GitLab.com k8s migration (EMEA/AMER)
B: Yeah, you're saying, as I said, like you came into the waiting room. Does this meeting have a waiting room? Everyone else seems to just enjoy it. It's like a special entrance for you.
C: So I think what we should do today is concentrate our conversation on what goals we need to achieve that are specific to self-service deployments, in coordination with what we need to accomplish with team Orchestration. Linked in the discussion there's one issue and one epic, and I want to concentrate on the epic, because I feel like this is the one where I've put a lot of effort into trying to define what our goals ought to be, trying to define the problem statement appropriately, and what I think we need to concentrate on working on.
C: A little bit of a preface before we start discussing things, and Amy, correct me where I may be incorrect or where we need some clarification. From my perspective it sounds like team Orchestration is going to be the responsible party for determining what a pipeline is ultimately going to look like, and what kind of mechanism or system we'll be using for the actual deployment.
C: Graham has a lot of those ideas in his head, and I feel like a lot of the work he has set up for making some of those decisions is the direction we may want to lean towards. I don't want to take away from that, and I feel like there are a lot of exploratory issues, or at least items, that still need to be discussed and fleshed out as implementation details before we're able to really understand what changes we need to bring to our clusters.
B: One thing it might be worth just mentioning, because I don't know if this changes it or not, but just so you have all the context in case it does: what Orchestration is going to try and do is take things currently deployed via auto-deploy and offer, as an option, a way for teams to not necessarily have to be part of auto-deploy. If they want, they can separate away. That's actually our main focus.
B: So in terms of how that reaches production, I think Systems can certainly fully own that if you want to, right? It's not to say we'll just sit here and wait for you to tell us. But it might be worth keeping in mind that a component will come out of either auto-deploy or another pipeline, and actually maybe it doesn't matter so much for you.
C: What I'm trying to gauge is what we need to work on at this moment, or at least what we should be working on now, that leads to being successful with that pipeline. There are a few items on this epic, epic 779, that I feel like we can concentrate our efforts on while that occurs. And since Graham has all the ideas for that, I feel like he should be leading that direction, and we are participants or contributors to the direction that he's setting forward.
B: I wouldn't assume that too strongly. I would say Graham has got a lot of ideas and has put a lot of suggestions out, but don't be limited by that, right? There's nothing that has yet come out of Orchestration, particularly around the Systems thing, where we've said "that's absolutely what we want to do." So I know there are lots of ideas, but please don't feel like you have to just wait and be told by Graham.
C: So, as outlined, the formatting looks a little goofy here; I guess I'll share my screen so that everyone's staring at the same thing. I've poorly outlined a few goals. One of the items that I feel like we need in order for development teams to be successful is a mechanism for development teams to see their metrics.
C: Currently we have a deployment health metric, which I think we need to own. This is something that's just kind of gathered together, and I don't actually know how deployment health is currently built; this would require a little bit of investigation into the runbooks. But I think exposing that directly to each service team would be quite important. So I think we need to have some ability to query Prometheus in some way, shape, or form as part of a deployment pipeline, and do the same with error reporting.
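A deployment-health gate of the kind described here could be sketched as a small script against the Prometheus HTTP API. This is a minimal, hypothetical illustration: the Apdex query name, the endpoint base URL, and the threshold are assumptions, not the actual runbooks configuration.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical threshold; the real value would live in the runbooks config.
APDEX_THRESHOLD = 0.99

def first_value(payload: dict) -> float:
    """Pull the first scalar out of a Prometheus instant-query response."""
    # Responses look like: {"status": "success",
    #   "data": {"result": [{"value": [<timestamp>, "0.997"]}]}}
    return float(payload["data"]["result"][0]["value"][1])

def instant_query(base_url: str, promql: str) -> float:
    """Run a PromQL instant query via the Prometheus HTTP API."""
    url = base_url + "/api/v1/query?" + urllib.parse.urlencode({"query": promql})
    with urllib.request.urlopen(url) as resp:
        return first_value(json.load(resp))

def healthy(apdex: float, threshold: float = APDEX_THRESHOLD) -> bool:
    """Gate a deployment on the observed Apdex value."""
    return apdex >= threshold
```

A pipeline job would then call `instant_query` against the internal Prometheus and fail the stage when `healthy` returns `False`.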
C: Anyone have any thoughts, comments, or questions about those two line items?
C: Oh perfect, okay, that's good! So maybe that's something we could piggyback off of, because I did not realize that, or maybe I did and just didn't put those two dots together. So that's interesting. Do you by chance know if we're scraping Sentry directly, or do we have some sort of exporter inside of Sentry that's feeding those metrics into Prometheus?
C: So I guess, if we want to enable a pipeline in some way, shape, or form to leverage metrics or error reporting: is this something that other people think is the right thing to be working on or contributing towards?
B: I think it's certainly going to be super valuable, right? I'll drop the link; it ties into a project Quality is currently working on about improving staging. The first stage was generating traffic on staging, and it ties into improving the sort of pre-deployment health checks.
D: Are we talking about pipeline metrics, or sort of just general deployment health metrics?
C: Our production checks utilize this in some fashion, but that metric is also predetermined. At the moment I think it's using the standard Apdex and error ratios to determine deployment health, and how that gets put together is a black box to me, at least. I'm sure Andrew could sit here and point you directly to where it comes from; I certainly cannot. But I would imagine service teams are probably going to want to ask questions, or maybe add to the list of metrics they may want to provide for themselves.
C: So I think we could probably start with determining how deployment health currently works as-is, determining what kind of configurability it has, and then maybe also looking into expanding that scope. This may be something deep into the future, but maybe expanding that scope such that if a deployment team has a specific error or a specific metric that they care about, they could add it to a list of deployment service health metrics that may determine what happens with the deployment for that component, for example the container registry.
C: You know, we may care only about errors and latency, but they have a huge dependency on GCS, for example. This is a really off-the-cuff example: if GCS is failing, maybe don't deploy, because that's only going to make other things look bad. If GCS is perfectly healthy and everything else is healthy, proceed with the deploy; but if GCS is having a bad day, we probably don't want to kick off a deploy.
D: Yeah, makes sense. I was thinking, eventually, I suppose the teams will be able to make changes to our repositories that hold the rules for Prometheus and Grafana. So, yeah, I suppose we could make tooling that eases that, makes it easier to add stuff to deployment health checks, precisely.
C: Right now I think it's pretty hard-coded to Apdex and error reporting. I think if we had a feature inside of our runbooks repository that says, "hey, these are the metrics that we care about for deployment health," and it's just an array of items that the development teams may care about, something like that would make it super easy for them to contribute towards and easy to roll back.
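The "array of metrics" idea could look something like the following sketch. This is a hypothetical shape, not the actual runbooks format (which is jsonnet); the metric names, queries, and thresholds are made up for illustration.

```python
# Hypothetical, code-reviewed list that a service team could append to.
# Each entry names a PromQL expression and the bound its value must satisfy.
DEPLOYMENT_HEALTH_METRICS = [
    {"name": "apdex",       "query": "gitlab:apdex:ratio",       "min": 0.99},
    {"name": "error_ratio", "query": "gitlab:error:ratio",       "max": 0.005},
    # A team-specific addition, e.g. the container registry's GCS dependency:
    {"name": "gcs_errors",  "query": "registry:gcs_error:ratio", "max": 0.01},
]

def evaluate(metric: dict, value: float) -> bool:
    """Check one observed value against its configured bounds."""
    if "min" in metric and value < metric["min"]:
        return False
    if "max" in metric and value > metric["max"]:
        return False
    return True

def deployment_healthy(observed: dict) -> bool:
    """Every configured metric must pass for the deploy to proceed."""
    return all(evaluate(m, observed[m["name"]])
               for m in DEPLOYMENT_HEALTH_METRICS)
```

Adding a metric is then a one-line merge request against the list, which is also trivially easy to revert.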
C: I feel like there's a lot of flexibility, because it's all jsonnet. So if you know jsonnet, you know exactly what to do. I don't know jsonnet. All right, so I'll get some issues spun up for both of those. Because I'm not really sure what to do with Sentry, I think I'll piggyback off of what you mentioned earlier, Ruben, and see if that's something we could leverage, or if there are some future improvements we need to bring into that.
C: Interesting. Okay, if you have any links, go ahead and drop them into the agenda, and I'll take a gander later as I start thinking about creating issues.
C: Amy, you've got some metrics. What's epic 771?
B: So that was the quality project. Nelia has been working, with Andrew, on improving staging monitoring, and it kind of tangentially ties into this. So we now have more traffic on staging, and the next step that she's starting to think about, and would love to work with the delivery group on (it doesn't have to be Systems, but the delivery group), is related to this: how would we bring in health checks around staging deployments?
B: For example, one of the huge questions that we've talked about a lot but never reached an answer on is: at what point is it safe to halt a deployment? We just have no answer for that. So I just put it in here for now, for context, not saying you have to pick it up, but I expect in the next few months we'll probably figure out where this fits into our roadmap.
B: Yeah, we've certainly had some incidents that have had exactly that, and actually, when we've gone back and looked at staging, it turns out we can see the latency increase, or there is some visible thing, but only if you know what to go and look at, based on the way that staging alerts and checks right now.
C: Yeah. All right, so the next item I wanted to chat a little bit about is how we build images. Currently things get built inside of CNG. I think the only one that doesn't get... well, actually, I think everything gets built inside of CNG, right?
C: Pretty sure it does, probably.
C: The problem, though, is that we don't really need to build everything each time a component wants to deploy. There's no reason to build GitLab Rails and Workhorse and Gitaly and Shell if the container registry is the component that wants to perform a deploy. So I think we need to have a think about what we need to do with CNG, whether that's implementing changes to CNG such that it could build just a component and nothing else.
C: Maybe we pull the work out of CNG, or maybe we duplicate it, so that it's not just in CNG but maybe closer to the repository where the developers work, something to that effect. I'm not sure how to approach this yet; I think I just need an issue to spin up a conversation, because I think we'll definitely want Distribution involved in some of this decision making, probably, because there are a lot of shared components. For example, GitLab Shell does have a dependency on something upstream in CNG's pipeline.
B: I was just going to say, this one almost feels like we should certainly put together a proposal, but it could be one that's well suited for a PoC, like a few days or something. Let's not get too caught up if we can't think it through end to end. It might be a good one to just get hands-on and see what happens, that type of issue.
C: Okay, so I could certainly get an issue started; that way we could track having that conversation and putting together a PoC.
C: So these are the few things that I've thought about so far that I think are the most important for us to start working on. I know it's very loosely defined at the moment; I'm kind of curious what other items other people have that I do not have noted here.
B: Looking at it from a slightly different angle: if a stage group were to remain inside auto-deploy, either from their choice or our choice based on how things play out, are there any things we could do around this sort of epic that would actually help either us as release managers or also, ideally, the stage group?
C: Thinking about this on the surface: right now our production checks are very minimal. We just do a very generic "is production healthy" check via the deployment health metric, right?
C: If we make improvements to a specific component, perhaps we could modify auto-deploy to make better decisions in certain regards, especially when it comes to rolling back. If things go unhealthy while Gitaly is deploying, we might ask Gitaly, "hey, stop deploying," so that not all the servers get an upgrade, for example. The same could be said for Praefect, as well as Kubernetes too, and error reporting for that matter; I think both error reporting and metrics have the same goal.
B: Something I don't know an answer to, but it might be worth just thinking about, or somebody maybe going and doing a little bit of digging on, is whether there's more we could do even today to link our deployments to error budgets.
C: I was effectively thinking about the same question you had. I don't think error budgets are very mature at the moment; I think that's a work in progress, from what I understand, with the Scalability team and what they're working on.
B: Yeah, one thing we'll probably have to decide on in the future is how we handle attributing rollbacks.
B: We don't really want to penalize people for a rollback which doesn't impact users, right? Because, great, the system worked, users weren't impacted, and we went along. But at the same time I wonder: if we had (hopefully we'll never have this case) a team which continually deployed and rolled back, like five or six times a day, it would impact quite a lot of stuff. So we could figure that out a bit later, but I think it's an interesting one to think about.
C: ...change much. I think that is probably a better way to put it, because we've got auto-deploy today and it works. At some point we want to provide...
C: I don't want to use the term "framework," because we're just building a pipeline and doing a bunch of other stuff, but I'll keep using the word framework. So if we provide a framework that works, we could hand that framework over to other people. That way, instead of us spending X amount of time getting people into it, it's more like: here's what we have built, start using it, and tell us where the problems are, where we are insufficient, so that we can make improvements as necessary.
D: Sorry, can you repeat the last part?
C: My thought is that we provide the mechanism for them to deploy with, and we just provide them with how to start using it. So say we've got KAS running inside of whatever framework we've provided; at some point, if it's working properly, we could just tell the container registry team, for example, "here's the documentation for how to use this thing," and then they could just create the necessary merge requests. We don't have to be the shepherds at that point; we've just provided a system and they start consuming it as they see fit.
D: I think I was one step before that. So, okay, before trying to build a framework that allows, say, the container registry to do a deployment without involving us: suppose, as a release manager, I want to deploy just Gitaly.
C: I think Gitaly is a special use case, because it's one thing that's not in Kubernetes, for one, and it's got its own specialty scenario in terms of both deployments and rollbacks, for that matter.
C: My hope is that what we do inside of this epic could still contribute towards enabling that, but deep, deep into the future. Gitaly is always going to be kind of special, because it's required to be deployed in a certain way, prior to the way that we handle all the other deploys today. Gitaly is a special use case.
B: Would that give you a way of actually testing out where this lands?
B: Yeah, so you're definitely not the first person; I'll give you the same answer, which is that one of the things we lose by doing this this quarter is the collaboration. Unfortunately Configure are not ready; ideally I would pair up with Configure and just try it out. So we certainly could do it, and I don't want us to be blocked on that.
B: I guess I wonder if we get to a point, do you think, where both teams... actually, that's the thing we need to see happen: you can figure out namespacing and permissions and health checks and things, and Orchestration can work out all of the complexities on our side around what this might look like. Does it feel like, actually, until we get hands-on we can't really answer these things?
C: I'm not really sure how to answer some questions at the moment. We can sit here and do all kinds of work and say, "oh, we're going to query Prometheus using this simple bash script," but that may not be the right choice or the right move, depending on what kind of pipeline we're building. Same for error reporting.
B: A lot of it is up to us, right? I know a lot of it will be what technically works, but we also have a lot of control over what we want it to look like. I wonder if, for now, we think about this more around auto-deploy, which I think is where Orchestration will also be focusing. So: make some improvements for the GitLab agent server and Pages, which is kind of what we're focusing on, I think, from what we've been talking through.
B: There are some really big opportunities around health checks and metrics; we will need to know more about those for any deployment, regardless of whether it's auto-deploy or self-serve. So I wonder if the framing for your OKR is a little bit more around preparing, where the learning comes from making it better on auto-deploy, and then we pull out a slightly more generic piece from there.
C: Okay, so what I'll do is, when Michaela comes back, obviously I'll work with him to reframe the OKR, because I think Ruben brings up a good point that you just clarified: this is really geared towards preparing us. We're not creating the system just yet; these are items that contribute to that system, right?
C: I'll also start getting issues on the board, at least either starting the discussion or, I think for a few of these, like error reporting and metrics, I feel like we've got enough information where we could start an investigation.
C: You know: how to query, and what to query, or what improvements we need to make to runbooks to enable that. Error reporting will be a little bit more of a black box, because I feel like we need to learn how to query Sentry, probably, or, like Ruben said, there's probably some metric that we could already leverage. So I'll get some issues ready on the board that we could start pulling, and then maybe next week we could either roll through those issues or maybe start demoing stuff.
A: So, just to understand, thank you for putting all of this together, but this is going to be adapted to be part of our OKRs, correct? Or is this outside of the OKRs?
C: This is directly contributing towards one of the OKRs stated around cluster organization. I think that OKR needs to be reframed as creating a mechanism, or preparing ourselves for creating a mechanism, that enables self-serve.
A: I think we have the namespace separation and the cluster.
A: I don't know how it's related directly to the OKRs.
C: Yeah, I think when we started with the language of the OKRs, I don't think we really fully understood the problem that we're trying to solve just yet.
C: I think we had ideas as to what we needed to accomplish, but since we don't yet know precisely how we're going to build out a pipeline, and what technology or whatever we'll use to actually perform the deployment, that mechanism needs to be solved first, in my opinion, before we try to figure out what we need to do for how services run inside of Kubernetes, which I think is a shared goal between us and the Systems and Orchestration teams.
D: Sorry, I was just going to say that I was thinking exactly this: say, trying to pull out a component and make an auto-deploy pipeline that only deploys that component. Doing that would allow us to figure out what needs to be done.
B: So we certainly can. Let me review that with Michaela, and whether we want to pair up, and actually have some people pair up on that, because this is certainly a thing that's come up. I guess I don't want us to...
B: What I kind of imagined was we would pair up with a developer from the stage group who owns the component, and then work through with them how their process works, where they test things, and what sort of metrics they're relying on. I mean, we could certainly do it as a very, very basic test with a component, not worry about the stage group things, and have it as a pure throwaway. We could certainly do that, but let's just take care.
B: What I want to avoid, basically, is going into Q4 and just handing off, like, "hey, whilst you were all busy on this other thing, we have created your deployment pipeline, here you go," right? I really want the stage groups to be involved with us, actually setting their stuff up, so that they feel they have the ownership of it.
D: ...hand over like that, because right now the only people who have visibility into pipelines is us; that's just one of the problems. So in order to hand over, you'll need to find a way to give the stage group visibility while still maintaining our compatibility with... I forgot the names of the compliance standards. So I don't think we'll be able to just hand over the pipeline if we do this; I think there will be a stage of collaboration with the stage group.
B: Okay, yeah, that's a good point. That's a very good point. So does it feel like it would be useful for us to try and figure out a cross-team (Orchestration plus Systems) PoC of trying to take a component, doesn't matter which one, a simple component, and get it deployed through to production?
B: Yes, so let's get an issue stood up and work out what we want to do, because it might be that actually having a backend engineer and...
C: All right, so we are at time, or, you know, well past time, but is there anything else that we want to chat quickly about before we end the call?
B: Could I just ask for a quick bit of admin? The discussion issue, I mean, I know we didn't use it today, so maybe have a think about whether it's still useful. It's tricky to find because of its labels and being unassigned, so if we could signpost it a little bit more easily, so other people can find it if we still want to use it, that'd be good.
B: Awesome, thanks. And then the other one is: I have to do an OKR update today. Would you mind updating your epic statuses? I'm not sure who... the one we're re-scoping is fine, because I can put an update on that one, but the other one that you have for the production...