From YouTube: CI/CD Group Conversation (Public Livestream)
A
It's always nice when you have a customer involved in it, because you have the nice story of somebody really getting some nice benefit out of some things. So I think that's something that we can definitely prioritize doing in the near future, and actually I do have a couple of ideas with some of the new features that we have coming out.
A
What
would
be
really
really
great
to
do?
Is
now
to
reach
out
to
one
of
those
folks
who
are
involved
and
write
a
blog
post
together
after
it
ships
in
order
to
talk
about
how
how
that's
changed
their
workflow
and
how
that
feature
has
improved
their
day
to
day
with
the
pipeline.
So
yeah
great
idea,
thanks
for
raising
that,
we'll
definitely
had
some
to
the
agenda.
B
Yeah, thanks Jason. This question comes from the fact that I've been doing the CEO shadow this week and had the opportunity to listen in on investor pitches that Sid's done. He talks a lot about, you know, our breadth-over-depth story: as we're expanding breadth, we also have this amazing ability to add depth, and the great example is CI, where we're best-in-class. I'm just wondering if you could give your thoughts on how we do the same in Release, which is probably the next area.
A
I guess you could say in the Release area, as well as Package, and what we're doing there is building out a really solid vision, which is linked to from the deck, where we're really going after some customer needs that we found through customer conversations. So for Release in particular, there's a couple of obvious areas. Feature flags is one of them, where there are mature products on the market, but we're really the only solution out there that's going to provide an integrated solution. Then there's release orchestration, where we're leveraging Jupyter-based runbooks to build that out.
A
We have the releases feature, which lets you kind of publish releases that in the future will have different metadata associated with them, and then the runbooks will be tied into that as well. All of the parts of the vision that we built the foundation for over the last few months are coming together, and so I would look for us to, you know, be much more relevant in the conversation going forward, even pretty much immediately. I think feature…
A
Where everybody gets in the room and they put the plan up on the board, usually in Excel — tying that together with the rest of the GitLab platform and the single application is going to be incredibly powerful. One of the simplest things that we did recently was with the releases feature that I talked about: we made it so that you could tie a release to a milestone or a set of milestones, and just by doing that…
A
You can generate changelogs automatically based on the issue content. And then one of the other really exciting things that we're doing for Release is evidence collection. So, also leveraging the GitLab single application, we're going to be able to pull in the changes to the issues that were a part of it.
A
The results of the security scans and other testing that ran as part of the release — and just make that a nice dashboard, because we're able to traverse the data from the release to the milestone to the change request, to the pipelines that ran for those merge requests, and just tie it all together. It's gonna be really, really cool. So all this is coming really soon. I think that's how we're gonna achieve that for Release, for your example.
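The release → milestone → issue traversal described above maps onto GitLab's REST API (releases via `GET /projects/:id/releases`, each milestone's issues via `GET /projects/:id/milestones/:milestone_id/issues`). A minimal sketch of the changelog-formatting step, with the API responses replaced by inlined sample data so the logic stands alone — the field names mirror the API, but treat the exact response shape as an assumption:

```python
# Sketch: build a markdown changelog from the issues attached to a release's
# milestones. In a real pipeline the `release` dict below would come from the
# GitLab REST API; here it is inlined sample data.

def format_changelog(release_tag, milestones):
    """Render one section per milestone, one bullet per issue."""
    lines = [f"## Release {release_tag}", ""]
    for milestone in milestones:
        lines.append(f"### Milestone: {milestone['title']}")
        for issue in milestone["issues"]:
            lines.append(f"- #{issue['iid']} {issue['title']} ({issue['state']})")
        lines.append("")
    return "\n".join(lines)

release = {
    "tag_name": "v12.2",
    "milestones": [
        {"title": "12.2", "issues": [
            {"iid": 101, "title": "Tie releases to milestones", "state": "closed"},
            {"iid": 102, "title": "Evidence collection MVC", "state": "opened"},
        ]},
    ],
}

changelog = format_changelog(release["tag_name"], release["milestones"])
print(changelog)
```

The same traversal — release to milestone to issues to pipelines — is what would feed the evidence dashboard described above.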
A
It's exciting, and it comes together from these little iterations. One of the coolest things that I've learned since I joined GitLab is the power of these little iterations that can, you know, till the soil ahead of you, and all of these great features come out of it once you start getting customers engaged with them. It's awesome. Did I answer your question sufficiently, Kenny?

B
Completely.
D
Great presentation — I love the content and how you kind of walked us through that. Regarding consumption pricing for macOS and Windows: you can already use macOS and Windows with GitLab, but we don't offer it on a consumption-pricing basis yet. What's the timeline for that? How are we iterating — like, are we starting with the minimal thing, and what steps do you see? And are we making something that's also usable for self-hosted instances?
E
Sure, yeah. So I think we're livestreaming out on YouTube, so I'll talk with that in mind, but there are three facets here. One is Windows shared runners. On just offering them: right now we're looking to utilize the custom executor that the team shipped earlier. This provides a pretty easy and quick way to do some scripting within the GitLab Runner to accomplish some goals, and so what we'll look to do is essentially start off with a VM-based target to run your job on.
E
So we would call a cloud service provider API, provision a machine, connect to it, run the job on it, and then tear it down. It's similar to what we do today with Docker Machine, but we'd be using the cloud provider's API directly. The reason for this, primarily, is that (a) Docker Machine doesn't work for Windows runners — it doesn't run Windows — and (b) there is no good replacement for it.
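The provision → connect → run → tear down loop just described can be sketched as a tiny driver. `CloudClient` is a made-up stand-in for a real provider SDK (no actual GitLab Runner or cloud API is used here); the `try/finally` is what guarantees the teardown step regardless of how the job ends:

```python
# Sketch of the VM-per-job lifecycle: provision a machine through a cloud
# provider's API, run the CI job on it, and always tear the machine down.

class CloudClient:
    """Fake provider SDK used only to illustrate the call sequence."""
    def __init__(self):
        self.log = []

    def provision(self, image):
        self.log.append(f"provision:{image}")
        return "vm-1"  # a real SDK would return instance details

    def run(self, vm_id, script):
        self.log.append(f"run:{vm_id}:{script}")
        return 0  # job exit code

    def teardown(self, vm_id):
        self.log.append(f"teardown:{vm_id}")

def run_job(cloud, image, script):
    """Provision, run, tear down; teardown happens even if the job raises."""
    vm_id = cloud.provision(image)
    try:
        return cloud.run(vm_id, script)
    finally:
        cloud.teardown(vm_id)

cloud = CloudClient()
exit_code = run_job(cloud, "windows-2019", "build.ps1")
print(cloud.log)  # provision, then run, then teardown — in that order
```

This is the same shape Docker Machine gave Linux autoscaling, with the cloud SDK standing in where Docker Machine can't run Windows.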
E
The other aspect of this is that docker-in-docker doesn't work on Windows, and so if you want to use containers — which we could potentially offer — you'd probably also need to build them, but you can't build them in a container, so you need to have a VM in the first place anyway. So we're going to start on VMs for that reason, and then we can extend the VM support to also include a docker runtime and a docker container in the future. So that's the current plan.
E
We're a few days in here and we're making some progress. We've essentially crossed out some other potential paths forward and are aligned on this one, so I think we're in good shape on the path forward — but I'll pause there on the direction for Windows. One other caveat is that we're also doing some validation work here with our customers who are interested in Windows, making sure that the VM model matches up with what they'd like, so that's happening in parallel.
E
The macOS portion: due to the way Apple licenses hardware, it's not as elastic as other OSes are — Linux and Windows, I mean — so you kind of have to have essentially dedicated or reserved-instance pricing. That's just, again, the licensing model that Apple provides. We're looking to go with a vendor who provides this as a service, essentially, but they still have to abide by the licensing rules, and so it won't be as elastic as you'd expect compared to, again, other operating systems.
E
In
the
past,
you
got
an
account
going,
but
the
only
way
to
really
spin
up
images
up
and
down
at
that
time
was
through
VMware
ESX.
We
don't
have
dedicated
integration
right
now
with
VMware
ESX.
Luckily,
this
provider
has
implanted
a
new
solution
and
that's
launching
this
week
and
they
built
some
integration
to
go
along
with
that
with
gitlab,
and
so
we
should
be
getting
turned
up
on
that
next
week,
hopefully
on
Tuesday,
and
then
we
can
kick
the
tires
and
make
sure
it
works.
E
E
That's
so
much
really
good
and
then
the
pricing
side,
I'm
still
working
on
I,
think
one
thing
which
I
would
like
to
do
is
I
would
like
to
not
get
blocked
on
having
custom
or
special
pricing
for
for
mac
and
Windows,
because
they
are
more
expensive
than
Linux.
So
you
know
I.
Ideally,
if
we
can
get
these
things
done
quickly
here
we
could
launch
them
with
that
necessarily
having
the
sort
of
additional
minute
cost
or
different
billing
model
to
just
support
them.
E
So,
potentially
we
could
offer
like
a
promotional
pricing
period,
the
introduction
of
Windows
Linux
runners
and
offer
it
for
three
months.
While
we
align
on
what
the
right
billing
model
is
and
then
have
the
engineering
support
engineering
capability
to
support
that
model.
So
that's
one
thing
we're
discussing
now:
we
could
call
it.
You
know
other
promotional
pricing
period
or
user
experience
is
a
little
bumpy
like
on
the
window
side,
since
we're
we're
quite
not
going
to
have
support
for
our
warm
pool
of
VMs,
initially
at
least
in
the
MVC
I
might
take
to
it.
E
You
know
three
minutes
for
a
machine
to
spin
up
and
then
run
your
job.
We
could
know
if
it's
a
little
bump
that
we
can
consider
calling
it
preview
or
beta
and
maybe
making
it
free,
but
that's
my
that's
what
I'm
hoping
to
do,
which
is
in
order
to
launch
these
as
quickly
as
possible
into
the
marketplace
and
not
get
worried
too
much
about
a
pricing
model
to
support
them
and
rushing
through
that
decision.
Yeah.
D
I think, like, if the billing is a problem, I can see that; if it's just about picking prices, I'd be glad to help — I don't think it's very hard. As Mark noted in the comments, GitHub has copied our Linux pricing, so we can probably copy their other pricing too. And I do think, look, they are very different prices — I think a Mac minute is about ten times more expensive. So if picking the pricing is a problem, I'll gladly help; if the billing is a problem, I cannot help with that. Okay.
E
So, for example, if we're ready to launch, say, the Windows and macOS shared runners, but the engineering work to be able to charge, say, 10x for a Mac minute versus a Linux minute would take us longer — would you be okay with sort of launching with a promotional period or something like that? Yeah.
D
And
even
call
it
call
it
an
alpha
or
something
like
that,
where
look
you're,
not
you're,
not
paying
any
more
than
Linux
minutes,
but
probably
we're
gonna
be
under
provisioned,
because
we
cannot
pay
for
that.
Many
Mac
runners,
so
a
promotion
kind
of
assumes
that
it's
out
of
beta
so
I
would
just
call
it
a
test
test
period
or
something
like
that
to
further
serve
expectations.
Okay,.
D
One question is: are we making sure that we make something that's also usable for self-hosted instances? Because, like, our default kind of customer rollout is: first they start using GitLab, then the next step is shared runners, and the next step is complaining about speccing out a cluster. It would be great if shared runners were the alternative to that — just say: put in a credit card here, and now your whole company has runners. Yeah.
E
So, since it's cloud-specific right now, what we will likely do is — we would, of course, have all of this open source, and so the executor that powers this would be available as well, and we would include the executor and the documentation required to set it up. It's just that it might only work on, say, cloud provider A initially, and then we'd need to go build, like, a GCP one and then an Azure one. But, you know, basically, Docker Machine provided a clean abstraction layer.
E
Where
did
the
CM
AP?
Is
you
could
call
it
a
doctor
machine
would
then
work
on
any
crop
provider
that
doesn't
really
exist
for,
like
VMs,
like
terraform
still
tends
to
have
custom
like
provide
specific
content
in
there,
so
you
included
terraform.
It
would
still
be
something
unique
blender,
but
but
we
would
be
to
offer
the
build
images
as
well.
As
you
know,
the
VM
images,
as
well
as
the
the
custom,
secular,
open
source
and
available
documentation,
yeah.
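The custom executor mentioned throughout is wired up in GitLab Runner's `config.toml` through hook scripts. A hypothetical fragment along those lines — the paths, runner name, and token are placeholders, and the exact keys should be checked against the GitLab Runner custom executor docs for your version:

```toml
# Hypothetical config.toml fragment: a runner whose custom-executor hook
# scripts provision a cloud VM, run the job on it, and tear it down.
[[runners]]
  name = "windows-vm-runner"            # illustrative name
  url = "https://gitlab.example.com"
  token = "REGISTRATION_TOKEN"          # placeholder
  executor = "custom"
  [runners.custom]
    prepare_exec = "/opt/executor/prepare.sh"   # provision + connect to the VM
    run_exec     = "/opt/executor/run.sh"       # run the job script on the VM
    cleanup_exec = "/opt/executor/cleanup.sh"   # tear the VM down
```

Because the hooks are plain executables, this same open-source executor could be pointed at a different cloud provider by swapping the scripts, which is the portability trade-off discussed above.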
D
I'm
sorry
I'm
hugging
hogging
to
collab.
It
is
to
set
your
perspective,
that's
great
that
we're
going
to
release
how
we're
doing
it.
We
want
to
get
to
a
state
where
I
have
a
good
lab
running
self-hosted.
If
I
open
up
some
ports
in
the
firewall
and
put
in
my
credit
card,
then
I
just
get
all
the
runners
I
need
so
we'd
be
sending
we'd,
be
like
aggregating
runners
and
sending
them
over
to
self
ocean
instances.
I
know
that's
a
bit
out,
but
I
think
we
have
to
keep
that
in
mind
us.
D
That's going to be interesting. Probably we'd have to help them poke a hole in that firewall — like, have the instance reach out to a server that we control and have the runners kind of communicate back to that, or something like that. But we'll see. Thanks, Mark. The next quick question is…
C
Not
actually
a
question,
it's
actually
just
a
thank
you.
An
earlier
version
of
this
presentation
had
I
think
slide.
4
had
the
maturity
graphs
only
going
out
to
the
next
two
quarters
and
I
was
like.
Why
don't
we
just
make
this
a
rolling
four
quarter
thing
and
Jason
made
an
mr
to
do
that
and
then
josh
merged
it
that
night
and
now
our
maturity
page
goes
out
four
quarters
from
today.
So
we
have
a
rolling
four
quarter
plan
thanks.
D
Thanks, everyone — the maturity page has been amazing. As we talk with customers and investors, they always ask, like, okay, is it any good? And we're like: okay, well, this is the current state. And the next question is: okay, what are you gonna do about it? Well, this is what we're gonna do about it — and then they're blown away.
D
So it creates a lot of trust that we'll be able to deliver, because the only way to deliver is to measure something, and we have better measurement of where our product is than anything they've seen before. So that creates a lot of trust that we will be able to deliver on improving it. So thanks for all the work, especially Josh — this is a game changer for us as a company.
C
Actually, this is one of the topics we're gonna have when we do the CD strategy review, but when I look at — and I'll name it — Spinnaker, it's one of the areas that kind of scares me, because they're making a lot of progress, coming from 0 to 1 really quickly — well, not exactly one, but whatever it is — anyway, they're making a lot of progress. And one of the questions is, (a): what do they have that we don't have — why are they making progress? And then (b): what are we gonna do to solve the sort of more boring parts of it? Just like: how do we make deploys to Kubernetes easier, and deploys to non-Kubernetes — to VMs — easier? How do we make all that stuff better and easier, and how are we then gonna use that? I'm assuming that's an important part of making Release actually, you know, a complete and lovable pillar.
A
Yeah
great
question:
thank
you.
Mark
the
release
category
or
socks
on
a
stage
is
a
really
really
interesting
one
because
it
serves
two
very
unique
customers.
One
is
the
developers
who
are
writing
a
lot
of
the
automation
and
they're
loving
tools
like
spinnaker
and
there's
other
things
on
the
market
as
well,
that
are
just
really
oriented
around
factoring
traffic
delivering.
A
So a lot of the features that I was talking about are the ones that serve them. But you're totally right: there's a lot of research that we're doing now — now that some of that other stuff has gotten a little bit of traction — around how to bring more deployment models, like A/B and canary, into the product, and make our deploy boards and things like that work better for developers. We coasted a little bit recently, just on the fact that we had these features.
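Deployment models like canary ultimately come down to deterministic traffic splitting. A minimal, illustrative sketch of the underlying idea (not GitLab code): hash a stable user id into 100 buckets, so each user consistently lands on the same side and raising the percentage only ever moves users from stable to canary:

```python
import zlib

def route(user_id, canary_percent):
    """Deterministically send a fixed slice of users to the canary release.

    The crc32 of the user id picks one of 100 buckets; buckets below the
    threshold go to the canary deployment, the rest stay on stable.
    """
    bucket = zlib.crc32(user_id.encode()) % 100
    return "canary" if bucket < canary_percent else "stable"

# Each user always gets the same answer for a given percentage.
assignments = {u: route(u, 10) for u in ("alice", "bob", "carol")}
print(assignments)
```

An A/B model is the same mechanism with the two sides treated as experiment arms rather than old/new versions.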
A
They were working, and some of these other areas needed attention in order to move the needle with how we're engaging with the market more broadly. But now that that's got some traction, we're turning our attention towards these types of solutions. So we're definitely on the same page: solutions like Spinnaker — developer-owned and developer-led tooling — are where we want to be as GitLab, for sure, and I hope that we'll be talking soon, maybe in the next one of these, about what some of those solutions are going to be, right?
A
Now we're sort of in a research phase for all of this, and I'd invite anybody who's watching or interested to join those issues. Most of the work that you're talking about is happening in the Continuous Delivery category, so you can look at our vision there and see what we're thinking about some of these things. But again, that's evolving really rapidly, and so we want people with real use cases and experience with the features that are going to be important to you — if you've used Spinnaker before and you've compared our product to it…