From YouTube: CI/CD Group Conversation (Public Livestream)
Description: No description was provided for this meeting.
A
So, slide 7 is the SMAU slide. This is one where we've been doing a lot of work around getting better data and better events in there, in order to really understand who's using the product. That was part of a nice effort that the Growth team was leading. Unfortunately, the Package data is still not fully integrated, which is why you're seeing Packages as just a tiny little sliver there, which isn't accurate. And then the other thing to mention is that the October data is partial.
A
Wasn't there a big drop-off for all of the stages in October? It's just that data is still being loaded; otherwise, the metrics are looking good. We're still seeing growth in the categories on an ongoing basis. It'll be nice to get the Package data in there. Verify is getting a lot of adoption, and Release a bit slower, but we've got the Releases feature that we're finishing up soon; that's going to be in a more viable state and nice to use, which I think is going to make a big difference there.
A
One of the things that we did recently, or possibly it's coming out in this release (I apologize, I don't have that off the top of my head), is linking releases to milestones. That makes it so that you can find a release from an issue that's associated with a milestone, and you can look at the burndown chart for a release; everything is associated, and it just really ties the data together in an interesting way. Features like automated release notes will come out of that, and it's really, really good stuff, Ori.
C
Yeah, so we actually have two features that we're working on right now to help adoption. Releases is one of them, and Jason mentioned the association of milestones to a release, but we also have other issues in that area, like actually creating a release from the .gitlab-ci.yml file, which is one of the issues with the most votes and is really needed, and also the beginning of evidence collection as part of a release.
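The .gitlab-ci.yml-driven release creation mentioned here was still an open issue at the time of this conversation; purely as an illustrative sketch of the general shape (the `release` keyword and its fields below are assumptions, not the syntax as shipped), such a job could look like:

```yaml
# Hypothetical sketch: creating a release directly from .gitlab-ci.yml.
# The job runs only on tag pushes and declares the release metadata inline.
release_job:
  stage: release
  rules:
    - if: $CI_COMMIT_TAG        # only create a release when a tag is pushed
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    tag_name: $CI_COMMIT_TAG
    description: 'Release notes for $CI_COMMIT_TAG'
```

The appeal of this shape is that release creation lives in the same pipeline definition as the build and test jobs, so a tag push can produce the release automatically.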
C
So we're going to have a section on the release page that actually has evidence collection, and it's going to start collecting metadata of the release there. When it matures, I guess it'll also have the actual binary information and the SHA; it's the beginning, so we're really excited to see where it's going. Another category that we're working hard on now is Feature Flags. We delivered something viable, but now we're working really hard on dogfooding it; we have a lot of feedback, and we're evaluating the UX and UI.
A
Did that answer your question on the slide? Does anybody else have any follow-ups before I move on to slide 8? Great. Yeah, slide 8 is a bit of a tough one, but what we're seeing here, just to explain, is metrics about what's being delivered over the 1200 releases across CI/CD. Two of the graphs are broken down: the two on the right are broken down per stage, and the one on the left is combined, because there's no way to disaggregate the data.
A
For that one, at least, there wasn't an easy way to do it quickly before this. The deliverable items missed per milestone is how many items are receiving the deliverable label but then ultimately not being delivered, and that's on an upward trend. The direction of items delivered per milestone is actually...
A
Unfortunately, the working theory is that this is related to onboarding in engineering, and that there should be a very sharp upward graph just to the right of this that we haven't quite achieved yet. But I am working with Darby and all of the engineering managers, and everybody's proactively engaged, to try and figure out what exactly is going on here at a deeper level, and what we can do to address this as quickly as possible: get everybody up to speed, get everybody contributing and delivering great things. Darby?
D
No, I think that pretty much covers it. Like Jason was saying, we've had a lot of new folks join, so there's kind of an onboarding component to it. And then, you know, I think we've had a number of folks who have kind of been in this area for a while, and they're kind of the experts.
D
But you know, we're kind of getting through this, and I think we're moving to a good place. And part of another thing that was mentioned in here is the experimenting with combo, and I think changing the way that we sort of plan and iterate on things will help.
B
And Jason, there's a request in the chat for you to share your screen so everyone can follow. I did it for slide seven as a hint. I think we're going to make sure with these videos that if people watch the recording on YouTube, they might not have access to the Google Doc, and it's very inconvenient. If you talk about something, just show it; don't just talk about it.
C
Yes, so the saga with merge trains so far is that we released merge trains, saw that it's a great feature, and were happy to start dogfooding it. It took us, I guess, a little bit over a month to enable it on GitLab.com, and that happened in early September. Then, shortly after we first enabled it, we needed to disable it, because there were some issues that we found that were really stopping people from using it.
C
So we decided to disable it, and we worked really hard on fixing those problems. Then we were happy to enable it again at the end of October, and while everything kind of seemed to be working as we wanted, we started getting complaints from people about performance, about their merge requests taking a really, really long time to actually get merged on GitLab.com. It was just around the time we really needed to issue an urgent blog post, and so we went ahead and disabled it again.
C
We did do a retrospective about this issue, and I linked it in the slide, so anyone who wants to visit it and comment is more than welcome to. But I think the most important thing that we learned here is that dogfooding is crucial, because we got so much valuable feedback that we wouldn't have gotten if we hadn't done that. And so, while I think it's unfortunate that we had to disable the feature, in the long run I think it was the right thing to do: enable and then disable.
C
What we did figure out was that the communication plan around when and where we're going to enable the feature again was a little bit lacking. So next time we're going to be sure to communicate it a little bit better, but also we're going to enable it for a fixed period of time, and not forever.
C
So people will know that it's going to be enabled for two or three days and then we'll stop it, just to see that everything is in order, and people will know to be patient in those two days and also to give us more feedback. Additionally, we're planning on collecting performance metrics on GitLab.com after enabling and disabling, to learn more about it.
A
Thanks for that, Ori. One thing that I would add here (maybe it is included and it's just not listed here) is that one of the patterns we saw was lots of people skipping the train in order to just get their change in, which makes sense if you don't really understand how the merge train is working. But every time someone skips the train, it forces a recalculation of all the pipelines that were following it.
A
So I guess one of two options: we need to make sure that that's really clearly communicated when we turn the feature back on, so that people aren't skipping the queue without realizing the impact it's having; or we should make sure that a feature is included so there's a little warning that says what the impact of skipping the queue is. Ideally the second one, of course.
C
So that's a really good question. We do think that GitLab.com is the right place to do it; we also revisited whether we could check other projects, but we think that one is the right one. And I guess we could also turn it on for a certain number of hours a day. So we can definitely discuss that offline. Yes.
B
The Civic website, yeah, that's referred to; let's refer to it in that way. But yeah, let's think of the smallest thing that could possibly work, which is probably turning it on for half an hour, and then go from there. If that doesn't cause an upset, let's go and go and go, and then we'll also get good at turning it off and on. What happens to a merge train when you turn the merge train feature off? A feature that I would love to see in GitLab...
B
Is
that
if
I've
told
something
to
merge,
it
merges
at
some
point
and
it
will
just
rerun
after
24
hours.
It
would
just
try
reburning
the
tests
again
and
things
like
that,
so
that
I
don't
have
to
check
back
and
see
whether
it
merged
I'm
telling
you
to
make
that
feature,
but
that's
considered
first
getting
good
at
turning
it
off
and
on
because
sir,
if
we're
having
these
troubles,
customers
will
have
it
with
too
little
capacity
and
all
those
things
so
making
it
easy
to
get
out
makes
it
easier
to
get
in
yeah.
E
So, on slide 15, I know we talked about a lot of it. I see the growth, and the ask for runners continues to grow. I'm wondering, from this standpoint, since they're tightly coupled a bit today in terms of how we ship, as best I understand: is there any thinking on making this easier for the runner, or for people to adopt, or for us to move faster? I know we end up with a lot of dependencies, so I'm wondering about that as one point of view; and then, maybe not bundling, but at least helping, say, Arm...
A
For sure. That actually is the goal: to make the runner more modular, and able to be contributed to by our partners, by us, by consultants, or really just any contributor who wants to get involved and deliver something. So we recently delivered the custom executor, which sort of enables this, actually, and lets you do all kinds of different execution methods that can then themselves enable new kinds of integration models. A good example of this is all of these different platform supports: those can all be implemented in the custom executor.
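As a rough sketch of how the custom executor plugs in: the runner's config.toml points each lifecycle step at an external program of your choosing. The name, URL, token, and script paths below are hypothetical placeholders, not anything from this conversation:

```toml
# Sketch of a GitLab Runner registration using the custom executor.
# The four *_exec hooks are arbitrary programs you supply; all values
# here are illustrative placeholders.
[[runners]]
  name = "modular-runner"             # illustrative name
  url = "https://gitlab.example.com/" # placeholder instance URL
  token = "REGISTRATION_TOKEN"        # placeholder token
  executor = "custom"
  [runners.custom]
    config_exec  = "/opt/executor/config.sh"   # report driver capabilities
    prepare_exec = "/opt/executor/prepare.sh"  # provision the environment
    run_exec     = "/opt/executor/run.sh"      # run each job sub-stage
    cleanup_exec = "/opt/executor/cleanup.sh"  # tear everything down
```

Because the hooks are plain executables, the same mechanism can drive anything from a cloud platform to the Commodore 64 example mentioned below.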
A
Silly things like, as a side project, I have a Commodore 64 executor running, and it's just a silly thing, but it shows the power of it: GitLab CI YAML is Commodore BASIC now, which, you know, if that was a real thing, would be a pretty powerful thing. So if you replace Commodore 64 with something like Arm or other platforms, then it becomes really powerful.
A
So
we
are
planning
on
growing
the
runner
team.
It's
a
priority.
There
you're
also
involved
in
helping
to
finalize
some
work
in
getting
some
additional
help
on
the
runners
side
to
deliver
some
of
these
things
that
are
more
short-term
priorities
but
beyond
getting
through
this
kind
of
glut
of
things
that
we're
trying
to
deliver
now
around
the
windows
shared
runner
fleet
and
the
vault
integration
and
other
of
these
special
projects.
A
Once we get past that, we are going to have used this infrastructure that we built to make things more modular, so that we don't have the same problem where we're trying to push all this stuff through simultaneously. And in terms of things like orbs and, you know, Actions or anything else like that, other kinds of container-based execution models: all of that is implementable using this custom executor as well. In fact, I found a command-line tool that can run a GitHub Action just from the command line.
A
So I was also going to try and write a custom executor to see if we could just natively run GitHub Actions and see what that looks like. I may not get too far with that; we'll see how it works. But all of this kind of stuff is possible with the infrastructure that the runner team has built, which is great. We just have a short-term number-of-people problem for contributing.
E
Would love to. When you guys are ready to blog or talk about it, let's plan ahead of time. I'd love to get some people from a bunch of different groups to talk about the custom executor and why it was useful. So maybe we can get, like, Arm and Fargate and a couple different ones in there, whenever y'all are ready, maybe just to really have a reference point. 'Cause I think it's awesome; I don't think it's as well understood. Yeah.
A
Of course. The way to think about it is that with the custom executor, you override the internal steps that the runner does, like the provisioning, the execution, the cleanup, all of that stuff. So you can make it really do anything, and then get the results back in the expected format. And we actually do have a really great example of somebody who's built something already with the custom executor; a customer built it, so I'll share it with you privately afterwards, but there is a really cool win so far.
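To make "override the internal steps" concrete, here is a minimal sketch of what an overridden run step could look like. The function name and behavior are assumptions for illustration only; the general contract is that the runner hands the run hook a generated script path and a sub-stage name:

```shell
#!/usr/bin/env bash
# Minimal sketch of a custom-executor run hook (illustrative only).
# The runner invokes the run hook with:
#   $1 - path to a script it generated for the current sub-stage
#   $2 - the sub-stage name (e.g. build_script)

run_stage() {
  local script_path="$1"
  local stage="$2"
  echo "custom-executor: running sub-stage '${stage}'"
  # A real executor might run this inside a provisioned VM or device;
  # this sketch simply executes the generated script locally.
  bash "$script_path"
}

# Only run when invoked with arguments (keeps the sketch sourceable).
if [ "$#" -ge 2 ]; then
  run_stage "$1" "$2"
fi
```

The same pattern extends to the provisioning and cleanup hooks: each is just a program that does its step however it likes and reports success or failure through its exit code.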
F
A
...be doable through here, right. The provisioning thing is what you would override: you would provision the right testing device from your pool using the runner, and then do the tests that are needed against it. If any custom code is required there, that's fine, and then the results can come back. So yeah, that's maybe another really interesting use case to explore. Right, thanks.