From YouTube: KubeVirt Community Meeting 2020-05-27
C: I would recommend enabling the status subresource for the CR, for our KubeVirt custom resource definition, because then we get the generation field bumped by Kubernetes. Whenever the spec changes you get a new generation, and you can add the generation to the objects, and then you know whether they still match.
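For context, a minimal sketch (illustrative names, not KubeVirt's actual manifest) of what enabling the status subresource on a CRD looks like; with it enabled, the API server bumps metadata.generation only when the spec changes, which is the behavior being described:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.mygroup.example.com   # illustrative name
spec:
  group: mygroup.example.com
  names:
    kind: Example
    plural: examples
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      subresources:
        # spec and status are updated through separate endpoints, and
        # metadata.generation increments only on spec changes
        status: {}
```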
D: Maybe we could consider something like this; I don't know a better way of doing it, because we want it centralized on that KubeVirt CR. I think we'd have difficulty if we decoupled it; I guess that's what I'm trying to get at. But I'm still concerned about the usability of this. I think it gives us the flexibility to add different ways of manipulating things.
C: What I like about this is that it's also, in general, an escape hatch for people to just do the things which Kubernetes is ready to do right now for the cluster, and if some patterns come up, we can still investigate adding specific APIs for various use cases or something, yeah, but nothing exclusive.
H: I just have a quick question about this process: is it possible, maybe, instead of, you know, applying our object and then patching it on the Kubernetes API server, maybe we could apply the patch ourselves and just have it modified in the install strategy? Then we don't need to modify any process, and we don't need to worry about any edge cases, because after we apply the object it is just like a new object.
H: No, so: you install KubeVirt, and you want people to be able to modify those patches on the CR when KubeVirt is already installed, right? So, for example, what if your operator would read the configuration on the CR all the time and modify the install strategy accordingly, that is, apply the patches to the objects in the install strategy instead of applying them on the API server, and then the regular process kicks in and applies it on the server? Yeah.
A: We have a higher-level controller that manages the KubeVirt custom resource, the contents of it. Is it still possible to take advantage of this framework if that controller is managing it? Like, you'd have to work with that controller, right? You can't just arbitrarily make changes to the custom resource.
H: I think that, well, yes, but this is a very specific use case, because usually when you create the custom resource, that is what tells the controller to kick in and start working, right? So usually a custom resource is either created by a user, or by a controller that created it on behalf of the user. But then, usually, the user provides exactly how they want the higher-level controller to create things, right?
D: Yes, all of this is optional, just to clarify, so you would only need to understand these patch fields if you actually needed to modify components outside of the default way that they're configured. I'm hoping this is a fairly rare edge case. It's similar to how kustomize works, where there's a base set of configs and people can layer their patches on top for their specific environments. We'd essentially be allowing a similar workflow with the KubeVirt operator, where they can layer their own patches onto the ones generated by our operator.
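As a hedged sketch of the workflow being discussed (the field names below mirror the customizeComponents-style proposal under review and are illustrative, not a confirmed API), a user-supplied patch layered onto an operator-generated component might look like:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  # illustrative proposal-style fields; the exact shape was still in discussion
  customizeComponents:
    patches:
      - resourceType: Deployment
        resourceName: virt-controller
        type: strategicMerge
        patch: '{"spec":{"template":{"metadata":{"annotations":{"example.com/team":"a"}}}}}'
```

The operator would apply its generated virt-controller Deployment first and then layer this patch on top, analogous to a kustomize overlay.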
D: I want to make sure that we can move forward with some sort of solution here, because multiple parts of the community are all converging on the same need: the ability to modify the components that the operator deploys, sometimes just really slightly. So if we can get people involved in this discussion and come to some sort of direction that we're all comfortable with, it would be good to do that sooner rather than later.
F: I think there's general agreement, right, that the patch approach we have today is something we can do, which seems to be pretty straightforward, and which would allow us to address the issues we have with the operator so far. That would give us enough flexibility to address the use cases that have been pointed out to be difficult, like custom roles and label selectors and the like, right? Yeah.
D: So maybe just getting some more people to weigh in on that issue and furthering that discussion, to where it feels a little bit like there's consensus involved; then I feel like we can execute on it. Right now it's just that not enough people have weighed in; we'd just like to see a little bit more buy-in. Okay.
F: Yes, issue 336, but maybe it makes sense to actually do a call-out to the community. I mean, there was that very old email thread, I forget what the title was, but about, you know, why people don't adopt the operator. Maybe it's a good opportunity to revive that thread and ask people to weigh in on that proposal, just to increase the visibility.
F: Quickly, for myself: I just wanted to say that, yeah, we were looking at reducing custom SELinux policies; that might be interesting to anybody who has had to implement them, on CentOS or whatever it's called, I'm not sure what the name is. But yeah, that might be relevant for anybody who's using SELinux: there's the custom policy coming up. We try to make that compatible with the host operating systems, but feedback is appreciated.
D: Sure, so I'm trying to think of how to most accurately answer this. The patch is a proof of concept of supporting an edge case. So this isn't the main flow of how KubeVirt is used; it's the edge case where we've seen that people would like to utilize the pod controllers, so things like Deployments, StatefulSets and ReplicaSets, or Jobs or whatever, and use those to control virtual machines.
D: So this is kind of an escape hatch for being able to use those controllers for virtual machines. You can test the PR, and it's very limited functionality: all I did was essentially prove the concept would work for Deployments, and in doing that, we should be able to extend it to other workload types as well.
D: You can test it just by checking out my branch and building it; that would be the easiest way if you just want to see it work and understand the details. And you can see the changes that were made just by looking at the patches. Does that help answer your question? I tried to give a little bit more context there as well, yeah.
E: Actually, I checked out master and I just patched your changes on top of it, did a build, and I was able to create the VMs using YAML files. But I'm just trying to see, or understand: can you create the VMs, or orchestrate VMs, using this patch? Can we do that?
D: Well, then, let me give you some more details on how the patch works. You create a pod definition, and I've added the ability to embed a virtual machine spec within that pod. What's happening is, when you post that pod to the Kubernetes API server, there's a mutating webhook that takes that pod spec and mutates it to look like the virtual machine's.
D: The virtual machine's pod spec, that is. We construct a kind of well-defined pod spec that launches with the exact image we need, that has QEMU and everything in it, plus our components for managing the virtual machine launch flow and all that. We inject that into the pod, along with any other arguments we need, and anything that you put in the pod originally is just going to be quickly stripped out and replaced.
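A purely hypothetical sketch of that flow (the annotation and placeholder container are invented here for illustration; the actual proof-of-concept PR defines its own format): the user posts a minimal pod, and the mutating webhook swaps its content for the well-defined launcher spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-vm
  annotations:
    # hypothetical marker the mutating webhook could match on,
    # carrying or referencing the embedded virtual machine spec
    vm.kubevirt.io/embedded-spec: "my-vm-spec"
spec:
  containers:
    - name: placeholder        # stripped out by the webhook and replaced
      image: example/ignored   # with the launcher image, QEMU, and the
                               # VM launch components described above
```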
D: You can use kubectl to manage the pods and Deployments, and they will be running virtual machines, so essentially you're managing the virtual machines through the pod API; that's what this PR allows you to do. And you can also use kubectl to manage virtual machine objects, like the custom objects that we have, directly as well. So it's not that kubectl has suddenly been unlocked for virtual machine management; it's just that you can manage them as pods.
A: This is a recurring theme that has come up more than once; I mean, we've tried various different approaches to reversing the roles and putting the pod in charge, you know, a couple of times now. I'm just curious if you could share a little bit of insight into your use case, if that's something that's public, just so that we can gain an understanding of why this is important to you, why this is useful.
I: Sure. Right now, it's just a Docker container that we hope to make be like a virt-launcher pod, right? But right now we're just using Docker. We have a pair of Docker containers, and they work, they live as a pair; they have their own HA capability, so they are aware of each other, and one of them is always active and the other one is standby. And the active one is a VM, so we're using QEMU, and we did this independently of KubeVirt.
I: So it's at the application level, instead of within Kubernetes, that we are doing the HA. But we want Kubernetes to manage these things as a pair, like a StatefulSet would, and have status in the Kubernetes API, and use all the tools that come with KubeVirt for images and any other management of VMs. So that's kind of our use case, like David was mentioning.
C: From what you're telling us here, I think, if I remember right, that one behavior might be different from what you may expect, and that is when you create a StatefulSet and scale it up. In that case, if you have a failure creating the first one, it will never proceed to the second one, because Kubernetes expects that you resolve the issue before it goes on. That may just be surprising. Yeah, yeah.
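That ordered startup is the StatefulSet default (OrderedReady). A minimal sketch of relaxing it with the standard podManagementPolicy field, so the replicas start without waiting on each other (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vm-pair              # illustrative name
spec:
  serviceName: vm-pair
  replicas: 2
  # default is OrderedReady, which blocks pod-1 until pod-0 is ready;
  # Parallel lets both members of the pair come up independently
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app: vm-pair
  template:
    metadata:
      labels:
        app: vm-pair
    spec:
      containers:
        - name: vm
          image: example/vm-image   # placeholder
```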
I: I know, that's the variation on StatefulSets that I'm trying to work through. But I think it would work in our case, because effectively, when one fails, we go to the other one in the pair: the secondary becomes the primary. And if we can give feedback to Kubernetes about the status, the state, it may all work out. But I definitely have to work a little more on the StatefulSet front, to confirm that it works in the context of StatefulSets.
I: Sorry, go ahead. Oh okay, I was just going to say: the question is, you know, whether we just try to work this PR for our use case. What I'm wondering a little bit about is how we move forward from here, because I have to tell the management of my company that we're either working with the KubeVirt community to try to enhance KubeVirt so that it works for our use case, or we try to move forward and work on this PR.
F: First, I think, three remarks. One is: yeah, you need to be careful about disks, right, with all those workload controllers and pods. It sounds like there's an ongoing thread with David anyway, but, you know, attaching disks in the case of workload controllers, that is a specific topic that still has to be worked out, you know, how to use that.
F: The other thing I actually wonder about is this failover: if a StatefulSet wouldn't work for you, what about just regularly bringing up two VMs, right, two KubeVirt VMs, and you're failing over with a Service in front of them, right? I mean, Services take readiness into account, and they can also do failover, from a ready to a non-ready VM. So I wonder if that would also meet your use case, and I think it would at least be cleaner from the higher-level architecture.
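A minimal sketch of that pattern (labels, names, and ports are illustrative): a Service selects both VM pods, and because Service endpoints only include pods that pass their readiness checks, traffic shifts to the standby when the active one goes not-ready:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ha-vm                # illustrative name
spec:
  selector:
    app: ha-vm               # both VM pods carry this label
  ports:
    - port: 80
      targetPort: 8080
```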
A: This is something we've tried to do before, of course, but it keeps getting put on the back burner; I guess we never had a defined, tangible use case that made sense to build around. So I think it's in everybody's best interest if it becomes part of the main KubeVirt codebase; just from a maintenance-burden perspective, it's going to be easier on everybody if it's not something standalone or on the side. And yes, it's something we would definitely want to be involved in. Okay.
D: So there is an email thread that was started around this on our kubevirt-dev mailing list; you could attach to that if you're interested. As far as moving forward for you all, the advice I would give is to research this proof of concept, see if it's something that can be shaped roughly to meet your needs, and then tell us, and then we could work together on actually getting it supported.
D: I would encourage you all to abandon it pretty quickly if it looks unworkable, and to approach it using your own custom resource layered on top of our API if you see that the StatefulSet kind of meets your needs, but not quite, because that's going to be difficult for you all. If it's not a perfect fit, abandon it; that's essentially what I'm trying to get at. Okay.
F: I think it's different. So with CRDs you could effectively write your own controller or operator which is doing that steering. But, and I'm not sure if you followed exactly, in Kubernetes there is a Service, right, and you effectively tie it to pods using one mechanism, label selectors, and the Service directs traffic to pods whenever they are ready, right? And if this Service happens to be pointing to two pods which happen to run KubeVirt VMs, and one of the pods is not ready any more, then the traffic will naturally just go to the other VM. So maybe that also addresses your use case, right? At least you could play with it, because KubeVirt also has the ability to have liveness probes, and, come on, David, or whoever, please correct me if I'm wrong, right, but we have liveness and readiness probes which can be used to check the health of the application within the VM, and they are directly used for the ready state. Oh yeah.
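For reference, KubeVirt does expose such probes on the VMI spec, much like pods; a hedged minimal snippet (the guest endpoint and timings are illustrative):

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: ha-vm-a              # illustrative name
spec:
  readinessProbe:
    httpGet:
      path: /healthz         # illustrative endpoint served inside the guest
      port: 8080
    initialDelaySeconds: 120
    periodSeconds: 10
  domain:
    devices: {}
    resources:
      requests:
        memory: 512Mi
```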
I: Okay, yes, so in a minute I'll have Wendy comment on her work. So we have this HA system that we've containerized into a Docker container, and we're now trying to make it a virt-launcher. So we have built KubeVirt, and we're trying to go towards it being a virt-launcher. The problem that we face is that we have to use an old version of CentOS rather than Fedora, and an older version of QEMU, because our code base is centered around these older versions. I'll let Wendy describe the cases where she had a failure and then got it to work. Basically, using, you know, a version of virt-launcher with Fedora, we can get it to work; we can build our own version of the launcher and get it to work. But because we have this requirement on an older version, we wanted to understand what the dependency might be on Fedora.
J: So, you know, as Daniel said, we have this pair of VMs that are paired together, and from the outside they look like a single VM, and we accomplish that through a modified QEMU. Our last rebase of this was QEMU 2.6.2, and it's always been CentOS based, so our requirement for this virt-launcher container would be QEMU 2.6.2. So I've been playing around with the process; I've been able to build my own launcher container and put it in KubeVirt, or build it with KubeVirt.
J: You know, I updated my deployment so that virt-controller knows where to get this, and, you know, it works for Fedora. But clearly it's not going to work with an older CentOS, an older QEMU. I wonder what your recommendations are for a minimal QEMU: are there some assumptions within KubeVirt as far as capabilities, like, you know, the q35 machine type, or UEFI, or that sort of thing? Can you give me some guidelines?
C: One example, yeah, one example would be, for instance: for a period, some QEMU versions had a bug where, once you opened a serial console connection and closed it again, it just deleted the serial console socket by accident, so you couldn't connect over serial anymore. So we had a workaround in place which made the file read-only, so that QEMU couldn't delete it anymore, right? If we remove that while you're still on that version, but the rest of KubeVirt is not, then suddenly you can't connect to anything anymore. I think this is really tough to get right.
F: But then you have the issue that it's not tested at all anymore, right? Effectively, I mean, what Roman alluded to is that you lose all the benefits of the testing we're doing upstream. The test suite is still provided, but you then need to sort through it, you know, and work out which test cases still apply, and run them. So it's a lot of work.
F: However, it's doable, and then, you know, we'd gain the benefit that we could test the runtime upstream; you can select which tests apply to this specific new runtime. But this would require, first, that you can run on Fedora, and second, that you can run your heavily modified QEMU upstream, and I'm not sure either of these two requirements can be met on your side. I mean, that's up to you to work out.
A: Overall, you know, on the defense-in-depth things we're looking at: for instance, the SYS_ADMIN capability is currently needed by some networking components, such as SR-IOV, and we're looking at moving that; SYS_NICE too; and it was DHCP that needed NET_ADMIN and NET_RAW. So there are just a few capabilities where we're moving functionality around, and in theory that would be part of the virt-handler and virt-launcher codebase, but some of it could play into the OS, for instance if we're doing anything with iptables or what have you.
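For illustration of the mechanism being discussed, a container requests Linux capabilities through its securityContext; a hedged sketch (the capability set is illustrative, not KubeVirt's actual launcher manifest, and the point of the work described is to shrink such lists):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: launcher-example        # illustrative name
spec:
  containers:
    - name: compute
      image: example/launcher   # placeholder
      securityContext:
        capabilities:
          drop: ["ALL"]
          # illustrative set of the capabilities mentioned above
          add: ["NET_ADMIN", "NET_RAW", "SYS_NICE"]
```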
I: Have you ever considered having virt-launcher expose a more generic API that you control from virt-handler? What we really, ideally, need to do is to be able to create our own pod that has nothing to do with virt-launcher; it's just our pod, you know, and it just has an API that matches what you expect to talk to, like a REST API, and have virt-handler talk to that API. Has that ever been a consideration here, I mean?
D: Just to be clear, it's definitely KubeVirt-specific. If you all are doing that much customization work and everything, I mean, at some point I'd question whether KubeVirt makes sense, or whether you just need to start running your machine in a pod yourselves, kind of using the techniques that we've proven work for KubeVirt. That's another option. Yeah.
I: Well, it does get sensitive; it's best to just say that, by modifying QEMU, we were able to do a very high level of HA. Okay, so we basically can lose no data: if there's a catastrophic failure in the VM, in the node with the primary VM, the secondary one can pick up without any loss of data at all, via a very, very tight byte-for-byte sync.
F: Yeah, yeah. By the way, the topic actually came up in different forms here as well; I mean, I remember there was actually a thread a very long time ago, in a different forum maybe, about having this continuous live migration, right, because of the hot-standby discussion. I mean, that was also a topic that was discussed here and there for KubeVirt in general, but it's nothing that we've pursued yet.
H: Okay, so I'm just suggesting here: yes, but maybe you could think of, instead of modifying virt-launcher, maybe you could somehow modify the migration process that we already have. It would maybe be a different approach to the same problem, and maybe it will be easier for you; I don't know.
A: That sort of requirement ties back into Fabian's suggestion earlier that Services could be the answer for the network traffic, and so, if you've got a way to do live migration and networking, now you've completely embraced the native Kubernetes constructs, and you're not fighting any part of the infrastructure. Yeah.
D: And the second thing is what they're doing: they're continually replicating the state in two places at one time, and that's how they're doing their failover. So, in the service case (and I'm speaking for these folks), it's not that they're necessarily trying to replicate client connections; it's actually the process state that's being replicated across the two pods.
H: So, I don't know if you are aware, but our project, KubeVirt, is participating in the CNCF summer internship program, and we have two interns who will join us for the internship period, and hopefully they will continue to contribute afterwards. I don't know if they're both here; I see that Artur is here, I don't know if Dean is here, but maybe you want to introduce yourselves to the community.
D: Sure, I can talk for a second. Alright, so I've been thinking about the release process for a bit, and my goal here, you know, to tell you what I'm going to tell y'all, is that I would like to automate everything and have it be a process that is owned by the team collectively, where all the bits that are time-based just kind of happen on their own, automatically.
D: So what we have for our releases is time-based releases: they occur monthly, ideally at the beginning of the month, and it's well understood when they're going to occur. They're not content based, meaning that we don't have a specific set of content that needs to make it in before we're going to cut a release; we can cut whatever release point we want directly out of master. So, in the future...
D: In order to automate all of this: we occasionally have times where we need to perhaps hold up a release, because we've detected some instability or something like that. So we need an automated way of signaling that the release needs something to be resolved, or at least looked at, before it's going to be automatically cut. I'm working on that part as kind of a follow-up, and the eventual end result of all this is that I would like the actual release branch to be cut automatically by a cron job, and the release to be triggered automatically.
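As a hedged sketch of that kind of automation (a hypothetical GitHub Actions workflow; KubeVirt's real CI runs on its own infrastructure, so every name and step here is illustrative only):

```yaml
# hypothetical .github/workflows/cut-release-branch.yaml
name: cut-release-branch
on:
  schedule:
    - cron: "0 0 1 * *"          # first day of each month
jobs:
  cut-branch:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0         # full history so branching works
      - name: Create release branch from master
        run: |
          BRANCH="release-$(date +%Y.%m)"   # illustrative naming scheme
          git checkout -b "$BRANCH" origin/master
          git push origin "$BRANCH"
```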
D: So there needs to be a way of flagging what's potentially a stability problem, or something like that, that can impact this automated process of pushing out releases. I kind of stumbled through all of that; it's something I'm investigating right now. I don't have a lot to show anyone, and it's something that you'll probably hear more about, and that I'll be asking for a lot more feedback on in the future. And of course, I'm interested in anyone's immediate thoughts on that as well, if there are any. I'm done with my rambling.
H: A question: so, at least as I see it, usually releases are triggered by tags, right? And you said that you want to trigger the release with a cron job. So how do you see it: the cron job runs the tests and, if it decides it's time to release, it pushes the tag and does a release, or the other way around?
D: The branch would be cut at the first of the month. As for whether the actual stable release is tagged and pushed out to GitHub, that's dependent on CI passing and on ensuring that any potential blocker issues we have are resolved and potentially backported to the release branch. And there's a whole discussion about what constitutes a blocker; we'd have very, very strict criteria around the ability to block a release.
D: It would have to be something that maybe just approvers would have access to, or things like that. But the cron would kick off the creation of the branch, and ideally kick off the tag-and-release process as well; that process would be gated by mechanisms that basically have distributed ownership across the whole team. So there's the ability to block that tag from occurring until we feel confident that everything's stable, which should be most of the time, but we have to have an escape hatch there somehow. I'm not sure yet, yeah.
D: I think that makes total sense. We could have the policy of automating the release branch cut on the first of the month, creating a release candidate immediately, and then saying: if nobody raises any concerns, this is automatically going to be promoted as the official stable release, you know, seven days later or something like that. These are all things that we kind of need to figure out.
D: We need to sort through that kind of automation. I would like to just have the process working for us, and it makes it predictable for the community as well: when releases will happen, and they'd have visibility into exactly why one didn't occur. So if a release didn't get cut, there's going to be an easy way of sorting out why, because you'd be able to see: well, these labels are associated with these issues, and those have to be closed before the automated process is going to promote a release candidate.
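A hedged sketch of that label gate, in the same illustrative workflow style as above (the release-blocker label and the step itself are hypothetical):

```yaml
      # hypothetical gating step: refuse to promote the release candidate
      # while any issue carrying the illustrative release-blocker label is open
      - name: Check for open release blockers
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          OPEN=$(gh issue list --label release-blocker --state open | wc -l)
          if [ "$OPEN" -gt 0 ]; then
            echo "Found open release-blocker issues; not promoting the RC."
            exit 1
          fi
```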