From YouTube: 2023-01-11 - Delivery:Orchestration Demo
A
So welcome, everyone. This is January the 11th, 2023, and this is our EMEA/Americas-timed orchestration demo. Let's see, so we have demo items today. Steve, you have the first discussion.
B
Thanks
so
I
worked
through
some
issues
about
stable
Branch
configuration,
and
this
one
specifically
is
regarding
managed
components
and
what
I
found
is.
There
is
a
bit
of
inconsistency
in
the
different
settings
that
these
projects
have
for
merge,
request,
settings
and
just
a
repository
settings
who
can
merge
and
and
who
can
push,
and
while
this
doesn't
have
a
huge
effect
on
our
ability
to
extend
the
maintenance
policy,
it
does
present
sort
of
the
opportunity
to
have
differing
Behavior.
B
So when we're trying to debug something, we may have to re-look up some of these configurations or understand why certain things are happening, which causes extra time spent. So I'm kind of wondering if this is something that we should look at standardizing more, or if there are certain types of things that we might consider.
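As an aside on the inconsistency described above, here is a minimal sketch of how such an audit could look, assuming python-gitlab; the project list below is a hypothetical placeholder rather than the actual set of managed components.

```python
# Minimal sketch: list who can push/merge to protected branches and the
# project-level MR approval settings across a set of projects.
import os
import gitlab

PROJECTS = [
    "example-group/component-a",   # hypothetical managed components
    "example-group/component-b",
]

gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])

for path in PROJECTS:
    project = gl.projects.get(path)

    # Protected branch rules: who can push and who can merge.
    for pb in project.protectedbranches.list(all=True):
        pushers = [lvl["access_level_description"] for lvl in pb.push_access_levels]
        mergers = [lvl["access_level_description"] for lvl in pb.merge_access_levels]
        print(f"{path} [{pb.name}] push={pushers} merge={mergers}")

    # Project-level merge request approval settings.
    approvals = project.approvals.get()
    print(
        f"{path} author_approval={approvals.merge_requests_author_approval} "
        f"reset_on_push={approvals.reset_approvals_on_push}"
    )
```

Run across the managed components, something like this would surface the differing who-can-push, who-can-merge and approval settings in one place.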
C
What I see here, to me, looks more like a lack of a feature in the product. So there's an opportunity here, a huge opportunity, to have some sort of enterprise policy enforcement type of feature, where we would say those projects belong to a certain enterprise policy and even maintainers are not allowed to change those settings. That being said, this is kind of a strong, long-term desire.
C
That being said, what we have here is that if you are a maintainer on one of those projects, you are allowed to change those things. I'm not even sure how many of those changes are logged through the audit logs, and so whether it's possible at all to figure out that something changed and who changed it. I think compliance is running some sort of automation, which I don't know if it was some sort of one-off, to generate a report out of this and generate those issues where we are all involved.
C
We were toying with the idea of generating mirrors using Terraform, and so we have a Terraform project on ops that is taking care of mirroring. The idea there is that you are not supposed to change the project configuration, because every time it runs, the thing will just reset everything according to the state it is supposed to have. But yeah, if we don't run the script, the thing will simply diverge, right?
C
So it's not designed to keep a system consistent. That one is designed more as: I have a project and I want to have mirrors, all those mirrors must have a given path, a given name, a given schema, and we want to propagate configuration from the canonical project to the other ones. So we kind of just generate those and create tokens and mirroring, but it's more about applying a one-off configuration than making sure things stay aligned.
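Outside of Terraform, the "propagate configuration from the canonical project to the mirrors" idea could look roughly like the following python-gitlab sketch; the project paths and the choice of settings to copy are assumptions for illustration only.

```python
# Hypothetical sketch: re-apply selected settings from a canonical project to
# its mirrors. Anything changed by hand on a mirror is overwritten the next
# time this runs, mirroring the behavior of the Terraform project above.
import os
import gitlab

CANONICAL = "example-group/gitlab"            # placeholder canonical project
MIRRORS = ["example-group/gitlab-mirror"]     # placeholder mirror projects

gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
source = gl.projects.get(CANONICAL)

for path in MIRRORS:
    mirror = gl.projects.get(path)
    # Settings chosen here purely as examples of "configuration to keep in sync".
    mirror.merge_method = source.merge_method
    mirror.only_allow_merge_if_pipeline_succeeds = source.only_allow_merge_if_pipeline_succeeds
    mirror.save()
```

As noted in the discussion, this only converges when it actually runs; between runs the mirrors can still diverge.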
B
Okay, yeah. I think that long-term solution, that feature, makes sense and would definitely be really nice. I'll search around and see if there's anything proposing something like that already. Yeah, I think the one area that I found to be the most, sort of, messy was that the merge request settings can have settings regarding approvals, but then the repository also can.
B
The repository settings can also have settings regarding approvals, and this shouldn't...
C
One is that we should remove the ability to edit merge request approvals at the merge request level. So, something like that.
B
Gotcha, okay. I'll take a closer look and see if there were any of those cases. Yeah, that's some good context, thank you.
C
More of an intermediate solution could be exposing those. Let's say, I'm not familiar with the issue, okay, but let's say we're checking three or four things, like: does this have this type of setting, yes or no. We could even consider doing some scraping of this, like some sort of a matrix, since in delivery-metrics we are already scraping something.
C
So we may consider scraping the settings of the repo, under, say, I don't know, whether it manages the versioning or whatever the thing is that we want to check, and surfacing that as a zero-or-one metric that says this repo does not comply with the rules that we need. From that we can create alerts, because it goes into Prometheus, and then, yeah, that's the idea, right? So if something diverges from the expected configuration, then we can notify using the Prometheus Alertmanager.
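A minimal sketch of that zero-or-one compliance metric, assuming python-gitlab and prometheus_client; the project list and the specific setting being checked are placeholders, not the real policy.

```python
# Sketch: scrape a project setting and expose it as a 0/1 Prometheus gauge
# that alerting rules can fire on when a repo diverges from the expected
# configuration.
import os
import time
import gitlab
from prometheus_client import Gauge, start_http_server

PROJECTS = ["example-group/component-a"]  # hypothetical managed components

compliance = Gauge(
    "stable_branch_settings_compliant",
    "1 if the project's settings match the expected policy, else 0",
    ["project"],
)

gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])

def check(project_path: str) -> bool:
    project = gl.projects.get(project_path)
    # Example policy check (assumption): per-MR overriding of approval rules
    # must be disabled.
    approvals = project.approvals.get()
    return bool(approvals.disable_overriding_approvers_per_merge_request)

def main():
    start_http_server(9100)   # expose /metrics for Prometheus to scrape
    while True:
        for path in PROJECTS:
            compliance.labels(project=path).set(1 if check(path) else 0)
        time.sleep(300)        # re-check every five minutes

if __name__ == "__main__":
    main()
```

An alerting rule on stable_branch_settings_compliant == 0 would then notify through Alertmanager when a repo diverges.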
B
Thank you. You're welcome. I don't think I have anything else on that topic, so I'll send it over to Myra.
D
Yep, thank you, Steve. I just wanted to take advantage of this call to give you a nice status of the maintenance policy work. This work is basically split into, I'm going to call it, five phases. The first one is about building, or having, enough information for the patch release pressure. This one was completed, and we can see the end result on the release manager dashboard, which reflects an overview of what the patch release pressure is, and then the patch release pressure per version and per severity as well.
D
Since
we
just
perform
a
patch
release
and
we
are
doing
some
pattern
releases
right
now-
we
don't
really
have
any
pressure.
So
that's
good
to
know
the
next
one
is
about
our
tooling
adjustments.
It
is
basically
making
for
these
tools
to
be
compatible
with
extending
and
making
patch
releases
to
account
for
three
versions,
and
that
one
is
almost
there.
We
just
need
to
make
some
templates
that
just
means
that
depends
on
some
links,
but
it
is
basically
minor
work.
D
The
other
one
is
the
gitlab
configurations,
which
is
what
Steve
is
currently
working
on.
It
is
making
or
more
than
making
it
is
preparing
the
git
Love
Project
settings
for
maintainers
and
developers
to
merge
changes
into
stable
branches.
We
have
introduced
some
guidance
and
we
are
enabling
settings
to
make
them
safe
and
the
other
one
is
the
quality
adjustments
which
is
currently
in
progress
for
the
quality
strategy.
We
are
waiting
on
quality
to
get
back
to
us
to
discuss
what
will
be
the
best
quality
strategy
to
use
to
validate
and
release
packages.
D
Our
proposal
is
to
use
the
packaging
QA
pipeline
or
the
patraction
keyword
test
pipeline.
I,
don't
recall
the
name,
but
we
are
waiting
on
the
response.
We
are
also
collaborating
with
engineering
productivity
to
set
the
project
to
set
the
process
for
broken
stable
failures,
and
to
summarize,
we
are
going
to
rely
in
the
current
process
that
we
have
so
far,
which
basically
is
the
mitigation
dri
or
the
three
hdri
to
cherry
pick
failures
into
stable
branches.
C
Something came to my mind because we were talking about the patch pressure: we may consider, if we don't already have it, having a label that will not count those cherry-picks towards the patch pressure. I'm thinking about ones we know are only RSpec fixes, things like known flaky tests, and things like that.
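A hypothetical sketch of how such a label could be excluded when computing the patch pressure; the label name, project and target branch here are made up for illustration.

```python
# Count "patch pressure" as merged MRs targeting a stable branch, skipping
# anything carrying an exclusion label (label name is a placeholder).
import os
import gitlab

EXCLUDE_LABEL = "patch-pressure::exclude"   # hypothetical label

gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
project = gl.projects.get("gitlab-org/gitlab")

mrs = project.mergerequests.list(
    state="merged", target_branch="15-7-stable-ee", all=True
)
pressure = sum(1 for mr in mrs if EXCLUDE_LABEL not in mr.labels)
print(f"patch pressure for 15-7-stable-ee: {pressure}")
```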
C
So
if
we
are
asking
the
quality
to
proactively
back
back
Port
those
those
tests,
then
we
we
may
end
up
seeing
a
higher
pressure
than
the
real
one,
and
maybe
you
decide
to
make
a
release
that
only
contains
a
spec.
C
I just see it as an extra step, I mean. What I'm afraid of is something like this. Let's say we have a broken stable branch, a broken master. Okay, we have a broken master; broken master is a priority one task, severity one, priority one, everyone all hands on deck, so that's the way the master MR gets fixed. And then someone says, okay, we have to do the same thing for the stable one, the stable branch, because we know it was affected, and we kind of do.
C
I mean, it's a nice improvement; we can live without it. I don't think we will ever reach the point where we are going to make a release that only consists of RSpec failures and broken-master backports, things like this, I don't believe so, but yeah.
D
I think this could be based on observation. If we notice this is happening, we will need to do something about it. Also, I think patch releases are going to be automated, but still, I'm sure that as release managers we are going to review what has been patched at some point, and we are going to notice, like, hey, there is just a bunch of RSpec things to be patched, there is no real need, and then readjust the labels.
B
Sure. So, with regards to the broken stable branch process, one thought I had, with the DRI being the merge request author and then followed up by possibly the release managers: if we have the case where a stable branch is broken, let's say they've merged all of their backports, and then either that triggers them making a change to what they did, or someone reverting it, what is the expectation for the other matching backports, or how do we handle that?
D
Yeah
I
think
that
is
a
good
question
and
it
might
depends
I,
don't
think
this
is
a
black
and
white
response
or
process.
It
might
depend
on
several
variables,
for
example,
if
this
is
a
S3
or
S4
and
suddenly
fails
in
just
one
stable,
Branch
I
think
it
can
be
reverted
and
one
enjoys
that
branch
and
then
we
can
continue
without
it
on
the
other
branches.
D
I don't think it is a big deal to just revert that and continue with the process with just the other two branches. But if it is an S1 and we just revert it while it is causing a problem, well, we cannot continue without it and with the broken stable branch. So in that case I think we will need a fix and we will need to wait. But yeah, to answer your question, I think it might depend on some other variables for the patch release.
C
Just to elaborate a bit more on top of that: we may also try to aim for a situation where making that patch release is a trivial operation, and if something doesn't work on a given stable branch, it means that specific fix is not backportable and needs more work. Which means we can just release what we have and the next one will include it, even to the point that we can simply run a patch release with only that single fix, if it was important enough.
C
So
right
now
we
are
coming
from
a
situation
where
there's
a
still
quite
some
manual
work,
so
we
may
feel
not
comfortable
in
doing
something
like
yeah.
Let's
decide
to
do
one
now
and
one
tomorrow,
but
if
we
reach
the
point
where
it's
just
a
matter
of
running
chat,
ups
and
waiting
for
things
to
happen,
we
can
even
say
yeah.
This
didn't
make
today,
but
there
is
great
benefit
to
releasing
all
those
fixes
to
the
customers
that
are
running
on,
because
what
I'm
thinking
is
that
is,
it's
likely
that
those
will
fail.
C
on the older versions, I mean. You develop on master, so likely the closest stable branches will be the same; they look like master, so they should behave more or less like master. But on something further behind that, maybe there's some missing feature or something in between that affects the result of the merge request.
D
Okay. And then once we have completed all four of these phases, well, the patch release pressure, the tooling adjustments, the GitLab configurations and the quality adjustments, we are going to be ready to start testing. I am thinking of at least, well, I'm going to put it like, three phases for testing. The first one will be some sort of dry run to test our tooling.
C
I have a question. Are you thinking of involving some of those engineers working on those fixes and asking them if they're willing to participate, and so creating, say, a smaller patch release with, say, some bug fix across all three versions? Or, say, taking a look at whatever as of today is labeled with "pick into" the last three versions, making a selection and just authoring the things by ourselves, so kind of removing the label on their merge requests and creating the backports, so we keep it entirely on our side.
D
I think, ideally, I would like to ask engineers outside our team to take a look at this process, to get a fresh perspective of how they feel. I think all of us are very familiar with what to expect and what to do, but I would like to have someone that is not familiar with this process get a sense of how they feel and what the process is from their perspective.
C
What, well, "concern" is a big word here, but what came to my mind about this is that when we're talking about a dry run to validate our tooling, in the new world that's more about validating that things on the stable branches will end up in the release, so it kind of comes after the new workflow for developers.
C
So what I was trying to say, my idea, was more about this: before putting the new process in front of developers, who may have their own questions about why we're doing something in a certain way, and before surfacing eventual problems with the tools themselves, let's test the final part of it, which is, if something is on the stable branches,
C
will we be able to create three patch releases at once? This is all about orchestration; there's nothing for development here, right? And then, if that piece of work is okay, we can say: okay, now, development, we can do this, and in order to do this we need you to open the backport merge requests and work with this new workflow, would you mind participating in this? In doing so we are not mixing the two, because one test is about whether the tooling is working, and the other one is:
C
is the process reasonable for backend engineers? Is there something that doesn't work as expected? Does it create more complexity? That's why I was thinking we may want to split the two tests.
C
Yeah, that was not so much about the dry run, but more about us cleverly selecting, ourselves, something that makes sense, that is still a feature, a bug fix or something, because we will end up releasing three things, three versions of GitLab, so people will go through the hassle of installing it and things like that. So something meaningful, but something that we can control.
C
We can check ahead of time if the backport applies and all this type of thing, and not involve the developers at all at that stage, because, maybe, I don't know, I'm thinking there will be some of those triage bots that will interact and complain about stuff, right? So, yes.
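Checking ahead of time whether a backport applies could be as simple as a trial cherry-pick onto each stable branch; a rough sketch, with placeholder branch names and commit SHA:

```python
# Hypothetical sketch: check whether a fix cherry-picks cleanly onto each
# stable branch, without involving the MR author. Run inside a local clone.
import subprocess

STABLE_BRANCHES = ["15-7-stable-ee", "15-6-stable-ee", "15-5-stable-ee"]
FIX_SHA = "abc123"  # commit of the fix on master (placeholder)

def applies_cleanly(branch: str, sha: str) -> bool:
    subprocess.run(["git", "checkout", "--detach", f"origin/{branch}"], check=True)
    # Try the cherry-pick without committing, then roll back either way.
    result = subprocess.run(["git", "cherry-pick", "--no-commit", sha])
    clean = result.returncode == 0
    subprocess.run(["git", "cherry-pick", "--abort"], capture_output=True)
    subprocess.run(["git", "reset", "--hard"], check=True)
    return clean

for branch in STABLE_BRANCHES:
    status = "applies cleanly" if applies_cleanly(branch, FIX_SHA) else "needs manual work"
    print(f"{branch}: {status}")
```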
A
No? Okay, I'll jump to the final bit then: our collaboration check, as our action from the retro. So, quick show of thumbs: are you personally getting enough collaboration to complete your current tasks?
D
Yeah, for sure. I'm just kind of stuck on the quality side: we have been waiting for the response to have a quality strategy for this backport work, for, yeah, for the patch releases, and I know that you talked to Wincy, and I know that she's very busy and her team is very busy, but she...
A
And actually that's something else to think about for the testing, because I think Vince is keen that a SET will take part in this testing with us, to be able to confirm that, yeah, the tests that they want to see running are running. So I'm hoping we'll get a fairly quick response, which is sort of a yes, we can go this way, and then it will be more of a question of how we plan a test when everybody's available.
A
Oh, and what else can we do to improve on collaboration?
A
Nothing, it's all good, I would say, probably, on this one. Actually, this one was a little unfortunate on the quality side because of the holidays, but I think for your original issue, Myra, Vinci said that, you know, she had seen your ping and it was just one of the many, many things; it's very hard to get time to go and read an issue. So I think what we probably should have done, and if it weren't for the holidays we definitely would have done, is spot
A
That
was
going
to
be
an
issue
for
us
sooner
and
get
the
sync
course
set
up,
because
actually,
once
we
talked
through
what
we
were
asking
for
and
what
this
looked
like,
then
it
was
all
quite
straightforward,
so
I
say
that's
probably
just
the
the
takeaway
from
this
is
that
we
should
probably
recognize
these
things
like,
potentially
are
going
to
become
blockers
sooner.
If
we
can.
A
Nope;
okay,
in
which
case,
thank
you
so
much
everyone
great
to
see
you
and
thanks
for
the
discussion
items
and
have
a
great
rest
of
your
day,
take
care
bye,.