Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
Platform SIG - David Byron, Salesforce & Cameron Motevasselani, Armory
The Platform SIG session at Spinnaker Summit will cover updates, the current state of the release process, the ongoing work in that area, and any other topics attendees want to discuss.
A
Welcome, everyone, to today's impromptu-ish Platform SIG. I don't think we really have much of an agenda; it's meant to be more of a discussion. So if you all have topics you'd like to bring, please do. I know there are some topics of discussion that we would like to tackle. I'll be taking notes on the Platform SIG agenda doc, which I can pin to the Platform SIG Slack channel.
C
Just so everyone knows: these meetings are regularly scheduled every other Thursday, at a time that depends on your time zone, and the Spinnaker governance repo has a list of all the SIG meetings. There's also a whole Spinnaker calendar with everything on it. So don't just join us today; come and join us every time.
C
If there are any remote questions, let's start with them. Oh, there's one here, great: what's up with the default install? My skin always sort of crawls at the word "default", but yeah, I think you're talking about the agreed-upon community install mechanism.
C
Sure, yeah. I guess I'll try to give a little of the background: how did we get here, then see if we all agree on where "here" is, and then try to figure out what to do next.
C
So, at the beginning of my time with Spinnaker, there was a thing called Halyard, and that was the one true way. I'm going to go out on a limb and say nobody likes Halyard. It's sort of hard and painful to maintain; it's an extra layer of configuration that's hard to keep up to date.
C
If you add a config flag in, say, a Java class in Clouddriver, it then needs to somehow make its way into Halyard, and that'll basically never happen; it's too annoying. It's also hard to do infrastructure as code, because there's the CLI. Anyway, it's painful. I don't know if Halyard is officially deprecated or just spiritually deprecated, but there's a general notion of trying to find something different.
C
And the Kleat RFC is merged, right? It's probably been accepted or something. So there's another repo called Kleat, with a K of course, that some really smart Google folks were working on. It was supposed to be a replacement for Halyard and seemed kind of great. It turns out that what the Halyard CLI does is make a Halyard config file, and then, when you say "hal deploy apply", it takes your hal config file and does the thing.
C
But
if
you
never
used
halyard
in
the
first
place,
then
cleat
doesn't
really
help
you
so,
and
the
people
who
made
cleat
are
no
longer
with
us,
and
so
Clete
is
sort
of
a
little
DOA
and
for
people
new
to
the
community.
It
isn't
really
helpful
and
then
somewhere
along
the
way
or
maybe
I
have
the
sequence
wrong.
C
Maybe
this
came
before
cleet
Folks
at
Armory
wrote
this
operator
which
still
uses
halyard
under
the
covers,
which
is
sort
of
unfortunate,
and
it
doesn't
have
to
use
Hollywood
under
the
covers,
but
it
does
and
that's
supposed
to
make
life
easier
for
people
who
run
Spinnaker
on
kubernetes
which,
even
though
we
think
it's
everybody
is
not
everybody,
which
is
a
good
time
to
mention
that
Hellyer.
Does
this
other
thing
which
is
deal
with
people
who
run
with
Debian
packages
on
VMS
and
PS?
C
When we publish things, we don't just publish Docker images; we publish Debian packages too. So I think there's a... maybe Fernando, you started the "what are we going to do" issue, or RFC, or something? I think the point of it is to standardize on the Operator, and that thing's been sitting for a while generating some lively comments. Maybe we could get some more lively comments here. Before I rant, maybe you want to...
A
To give... yeah, the Operator. So, I just wanted to mention the RFC process. This is a common process within open source projects: if you have any changes you want to make to the project, and they're widespread changes, like changing the install method, the idea is to propose your changes and get comments from the community. We are a community, and we do want to support different types of use cases, and the Operator definitely does not deploy to non-Kubernetes targets. So it's one of those things where we do want to do install consolidation, and I don't know if that means choosing one or choosing a couple so that our use cases are all covered, but we definitely want to get rid of Halyard. I just wanted to mention the RFC process and, I guess, not get too far off track.
D
So today Kleat is not being maintained, right? So the only option we really have is Halyard, and most of the people currently using Spinnaker are using Halyard. And with some of these new changes coming up in Clouddriver, all the options are going into Halyard. So this Operator, it's, yeah...
E
Yeah, so right now the Operator is based on Halyard. That was a decision made out of convenience at the time, but it was never the end state. So one of the things that we do want to do before it gets donated to the open source project is to actually remove Halyard, and not just because, you know, we want to use Kleat or whatever; the goal is to make it easier for people to use right now.
E
If configuration options are split, like if you misconfigure something, or you add a configuration value that the Operator doesn't validate for and it gets sent to Halyard, and Halyard returns a 500, right now the experience is not super great: you're stuck looking in different places for logs.
E
So
we
want
to
get
rid
of
halyard
to
make
sure
that
all
of
that
is
coming
from
a
single
place
and
then
make
sure
that
I
don't
know
that
that
gets
donated,
but
I
mean
ultimately,
the
goal
with
operators
is
to
make
sure
that
we're
encapsulating
all
the
best
practices
that
we
have
in
the
community
for
operating
Spinnaker
in
the
project
itself,
and
that's
one
of
the
things
that
cleat
won't
do
right,
like
cleat,
is
just
really
managing
configuration
files
and
then
splitting
it
across
a
number
of
different
projects
and
the
the
overriding
I
guess
goal
with
that
RFC
in
particular,
is
to
make
it
easier
for
people
to
get
started
with
their
first
maker
cluster.
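For context, the Operator pattern discussed here centers on a single Kubernetes custom resource that holds all Spinnaker configuration in one place. A rough sketch of its shape follows; the field names approximate the Armory operator's SpinnakerService resource and may not match the actual CRD exactly, and the bucket name is purely illustrative:

```yaml
# Hypothetical sketch of a SpinnakerService custom resource.
apiVersion: spinnaker.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
spec:
  spinnakerConfig:
    # Mirrors what "hal config" would have produced, but lives in one
    # declarative object that the operator can validate and apply.
    config:
      version: 1.28.0
      persistentStorage:
        persistentStoreType: s3
        s3:
          bucket: my-spinnaker-bucket   # illustrative name
```

The "single place" point above is that a misconfiguration here would surface from one controller, rather than being split between an operator layer and a Halyard daemon with separate logs.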
E
Okay, part of the RFC... the reason it's not merged yet is because we want to make sure that we agree on the timeline. There was a timeline listed; originally we had hopes to have it merged before Spinnaker Summit. That didn't happen, and that's totally okay. So we would want to update that timeline, but we do want to agree with the community: hey, this is the timeline that would work.
E
Should
we
want
to
move
forward
with
this,
where
we
would
remove
cleats,
make
sure
that
it's
donated
and
make
sure
that
it's
well
documented
and
that
everything
there's
like
a
whole
bunch
of
dependent
stuff
that
needs
to
happen
right.
The
whole
website
right
now
is
still
halyard
oriented.
So
that's
like
a
whole
effort
on
its
own
to
make
sure
that
we
update
any
install
instructions
but
yeah
that
that
all
goes
into
that
that
process.
So
we're
not
necessarily
like
saying
hey:
let's
do
this.
It's
like
hey.
C
I'm going to try to summarize some comments that Carl made on the PR. Carl and I work together at Salesforce, so we talk about this stuff a lot. Maybe he's even on the call, but it's probably really early in the morning in New Zealand, where he is.
C
He explained this probably better than I'm going to, but the general idea is that although we want to make it easy for people to start, we also want to make it easy to live in what we call the day-two experience. And what we think we've learned is that nobody ever runs one instance of Spinnaker: either you're testing a new thing, or you have prod and pre-prod, or you have disaster recovery, or whatever it is. You almost always have two.
C
At least, you have more than one, and some of us have way more than two. But the point there is about deploying the Operator, or maybe not deploying the Operator.
C
Kubernetes already has controllers for managing Deployments and Services and Ingresses and ConfigMaps, so what's the Operator doing for you? I understand this notion of best practices, but I think that sort of falls apart, because then you end up trying to implement Spinnaker and all of its practices, like blue/greens and canaries and whatever, inside the Spinnaker Operator. I just don't get it. It seems complicated for no reason, or at least for not enough benefit to justify the complexity.
F
On dropping the Operator and the day-two point: what the Operator does is lower the barrier for first-time Spinnaker users. It's easy to see the Kustomize patches and visualize the changes. I'm okay doing proper Kubernetes manifests; I can do it, but I have three years of production experience with Spinnaker. A newcomer won't be able to do it. So yeah, we can do the best practices using native Kubernetes manifests, but is it easy for newcomers to the community to take advantage of the power of Spinnaker and productionize?
G
Cool. So yeah, I think a lot of what you said, David, makes a lot of sense. I personally don't like Halyard. I remember when I first joined the Spinnaker team on the Apple side, I was really confused by the tool. To be honest, I was like: what does this thing do? It generates configs and it deploys Spinnaker? Isn't there already tooling for that? I don't get it. So I completely agree.
G
I think we should go with more of an industry standard. I do understand that it's probably going to be a little bit tougher, but a new person coming in is probably going to be more familiar with something like Kustomize or Helm. Right now they have to go learn a whole new tool like Halyard, and that doesn't seem like a good idea. So I'm on board with deprecating it, but I think we just need to do it right: create a timeline, create a roadmap for how we want to go about it.
G
You know, implementing something with Kustomize or whatever tool we choose to use. But I think it is important to get rid of it because, like I said, when I was coming in it confused me more than it helped me. So that's kind of my two cents there.
C
Yeah, I think there's broad agreement to get rid of Halyard. The question is: what do we replace it with? And there's this notion of what's easier for the most beginning people.
C
The things that people have to fill in are still the same things. I haven't used the Operator, so I don't know exactly, but you still have to give it something: the location of your Redis instance, the location of a database, or something. And I suspect these are the same things you'd have to give to Kustomize to do it the plain Kubernetes way.
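A plain-Kubernetes Kustomize layout for that might look something like the following sketch. The file names and keys here are hypothetical, not an official Spinnaker distribution, and the point stands either way: the Redis and database endpoints still have to be supplied by the user somewhere:

```yaml
# kustomization.yaml -- hypothetical layout for illustration only.
resources:
  - base/clouddriver.yaml   # Deployment/Service per microservice
  - base/orca.yaml
  - base/gate.yaml

# The service endpoints still come from the user, e.g. via a generated
# ConfigMap holding a Spring profile file the services mount.
configMapGenerator:
  - name: spinnaker-profiles
    files:
      - profiles/spinnaker-local.yml   # redis/sql endpoints, etc.
```

Whether this is filled in through an operator's custom resource or a Kustomize overlay, the required inputs are the same; the difference is only where the validation and templating live.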
F
The power users, the heavy production Spinnaker users, are okay doing a ConfigMap with the local Clouddriver configuration, the Echo configuration, whatever, changing that, and doing a little blue/green to deploy Spinnaker, right. But those who are not very familiar with operating Spinnaker...
E
Is it worth considering taking inspiration from some other projects? For example, when you're getting started with a new React project, you have create-react-app, and then there's an eject command you can run if you want to go down your own custom route. Is that something we'd want to consider here: an operator that does optimize for that first-user experience, and then there is a way to eject?
D
See, the issue we're talking about here is more of a Halyard one, right? So now there are two options: we can have a Kustomize setup that deploys directly to Kubernetes and gives you the base installation; it doesn't have to be an operator. But there are also the day-two operations. Right now, what we're finding with Halyard is that for any changes we want to make...
D
...you have to change code, which doesn't work very well for us to maintain. And if you want to have additional options, there are some of these Clouddriver files that are generated from the repository, that are built in, so then you have to build all these extensions and customizations to make changes to that. So we agree that Halyard needs to go away, but if the Operator is again using Halyard, we still run into the same kinds of issues.
C
First, so that we can start the work... yeah, my take is that instead of investing the time to rip Halyard out of the Operator, invest the time to rip out the Operator. It's a net simpler experience, and we can make it better for everybody, day one and day two, for the beginning people and the advanced people. The base thing that we provide is still going to be the same, and the advanced people can do whatever they want; we don't really need to worry about them.
H
I mean, just look at any other CNCF project, like Argo CD: the first installation thing they tell you to do is install kubectl and then apply a manifest with kubectl, and that gets you started. Now, we all know that's far from what you need for a fully operational, enterprise-ready Argo CD environment, but we can learn a lot from just keeping something simple, because then we at least have a baseline from which to answer questions. Right now, people are just opening up messages...
H
On
slack
saying,
I
tried
to
install
getting
this
weird
version.
Error
I'm
running
into
this
operator
doesn't
work
right,
or
at
least
we
can
then
like
educate
from
a
base
of.
Oh,
you
just
did
the
basic
install
right.
Let's,
let's
go
on
to
that
next
step
like
oh,
how
do
I
add
a
provider
or
how
do
I
add
a
another
storage,
persistent
store,
right
and
I
think
we
could
all
benefit
from
that
because,
like
right
now,
it's
like
I
think
the
question
was
raises
of
how
are
people
installing
it?
H
Some
people
are
watching
old,
YouTube
videos
from
like
2017
2018
to
install
it
right.
I
installed
the
Debian
one
in
her
last
job,
because
that
was
the
only
way
to
get
started
initially
at
the
start,
but
I
think
we
as
a
community
just
need
to
kind
of
figure
that
path
out
so.
C
Nice. I guess I just have one more thing to say about that. I have a suspicion that every company that runs Spinnaker sort of re-solves this problem themselves. We tried to publish a community Helm chart, and it turns out that Helm is sort of a drag for this, because you have to decide up front everything that's going to be customizable or parameterizable and make it a value.
C
And
then
somebody
can
set
the
value
and
then
somebody
adds
a
new
thing
or
somebody
forgets
and
so
like
it's
it's
hard
to
make
this
sort
of
Base
from
which
people
can
build
on,
so
that
there's
like
again
a
way
that
people
install
Spinnaker
and
then
like
companies
that
are
doing
fancy.
Things
can
do
fancy
things
but,
like
you
know
having
so
basically,
this
is.
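The Helm pain point described here is that every tunable has to be anticipated in values.yaml before any chart user can override it. A tiny, purely illustrative sketch; these keys are hypothetical and not from any published Spinnaker chart:

```yaml
# values.yaml -- hypothetical keys for illustration only.
clouddriver:
  replicas: 1
  redis:
    host: redis.spinnaker.svc   # overridable only because it was surfaced here
```

Any setting the chart author didn't surface as a value, say a newly added Clouddriver flag, can't be overridden without forking the templates, which is exactly the "somebody adds a new thing or somebody forgets" failure mode described above.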
G
So yeah, I think, for me... I noticed that you were talking more from the first-user experience, so I definitely think we're trying to solve two problems here. One is just being able to deploy it using a tool that people coming into the Kubernetes space, or whatever deployment space, should already be more familiar with.
G
And then we have the issue of knowing what all the configuration does, which I think is your point about the ConfigMaps and operators, and I think that is solvable.
C
Yeah, that topic of what all the possible config knobs are is sort of a different topic.
C
It adds a lot of motivation to the thing we've already agreed on, which is that Halyard needs to die. Once Halyard is gone, there will still be config in, I'm going to say, two places instead of one, or three places instead of two, and there are some unfortunate details of the way some config properties work in Spinnaker. Spring Boot has a configprops endpoint; I don't know if people know this, but you can say "curl clouddriver/configprops" and it'll...
C
...show those properties at that endpoint, and that might help with the kind of thing you're talking about. Trying to maintain a file that has all the values, if it's not maintained in some automated way, will of course get stale. I'm biased, because I'm comfortable reading the code, but at some point, if you want to find out what a flag is, that's sort of what you have to do. There's probably a better way, but it does add some friction. Joe, I think you had your hand up.
I
Yeah, so I think one of the things that's most difficult for me to understand about Halyard is that it tries to do everything, and tools like the Operator, I think, do a lot of the same thing: it attempts both to fully manage the configuration of the application and also the infrastructure, and those things are combined a little bit. You define where bits of infrastructure are, and the names of things, and URLs and credentials, and so on.
I
But
beyond
that
we
don't
use
halyard,
for
we
use
the
smallest
part
of
halyard
possible,
which
is
basically
it
will.
It
will
generate
the
basic
manifest
for
all
the
service
and
apply
them.
We
do.
I
We
use
spring
profiles
that
are
generated
in
other
ways
for
everything
else,
including
persistent
storage
and
URLs,
and
where
things
are,
but
for
us
you
know,
we
use
the
very
smallest
part
of
halyard
possible
and
it
might
make
sense
to
like
try
to
separate
that
configuration
of
the
application
from
all
of
this
stuff,
because
you
know
it's
very
possible
to
have
a
sane
default
configuration
that
works
out
of
the
box.
You
know
that
functions
in
a
normal
way.
It's
like
all
right,
great
we've
got
this.
I
You
know
Helm
chart
or
whatever
it
is
customized
set
of
whatever's.
You
know
some
terraform
script
that
fires
up
stuff
on
AWS.
It's
all!
It's
all
really.
You
know
a
horse
of
Peace,
but
you
know
that
can
have
a
sane
configuration
out
of
the
box.
It
starts
up
a
database.
It
starts
up
a
redis.
It
just
gives
you
all
the
things
in
a
in
a
somewhat
same
configuration
and
then
from
there
we
have
a
you
know
a
way
to
maintain
the
application
configuration
in
a
much
more
sane
way
so
either
through
programmatic
means.
I
You
know
getting
information
about
spring
boot.
There
have
been
previous
efforts
to
use
spring
Cloud
config,
and
you
know
at
the
end
of
the
day,
there's
a
million
ways
where
you
can
reasonably
deliver
spring
configuration
to
a
kubernetes
cluster
or
something
else,
but
I
don't
know.
It
feels
to
me,
like
it'd,
be
worth
having
a
tool
that
creates
infrastructure
being
distinct
from
a
tool
that
manages
the
configuration
of
the
application
because
conflating
those
two
things
is
always
it's
impossible.
Like
you
know,
we
were
saying
before
halyard:
doesn't
it
knows
about
a
lot
of
config?
I
We
need
a
way
to
manage
the
application
configuration
a
way
to
you
know,
figure
out,
what's
available,
distinct
from
how
we
operate
the
infrastructure.
So
you
know
that's
a
that's
how
we've
been
doing
it
internally
for
the
most
part,
and
it
may
be
useful
to
just
like
bring
those
ideas
further
apart.
E
How do we make it obvious to users when they've misconfigured something? I think we've all experienced this: we've configured something in Spinnaker, and we don't find out until a pipeline doesn't execute the way we expect, and then it's kind of a hunt to figure out which service and which log I need to look in. That can be a topic on its own.
E
But
do
we
do
we
value
as
a
community
the
ability
to
validate
those
configurations
before
they
get
applied
and
if
so,
how
do
we
want
to
do
that
with
customize?
Because
that's
something
that
we
do
provide
today,
both
through
halyard
and
the
operator
experience?
Is
this
notion
of
validation,
of
configuration
values
so
that
those
newer
users
understand
hey
I
configured
this
S3
bucket,
but
I
can't
actually
talk
to
it
or
hey.
E
You
know,
I
haven't
configured
a
default
account
for
my
kubernetes
provider,
like
those
are
the
kinds
of
things
that
will
trip
up
new
users
to
the
community
and
I
want
to
make
sure
at
least
we're
we're
consciously
making
that
decision
to
not
worry
about
it
with
customize,
and
we
find
some
other
way
to
do
it
or
or
we
find
a
way
to
to
bring
it
in
to
the
world,
and-
and
there
are
a
few
things
that
we
can
do
here
right
like
with
customize,
we
can
use
conf
test,
which
is
this
tool
to
like
make
assertions
around
yaml
and,
and
we
could
go
that
route
if
we
wanted
to
so
there's
options
here,
I
just
want
to
get
everybody's
take
like
what
do
we?
E
What
do
we
value
in
that
experience
as
the
folks
in
the
room
here.
D
Particularly for the people who are starting off right now: they don't want to figure out which service logs to look at, so yeah, we need to provide some validation. But on separating the infrastructure from the application: what we have mostly seen is that the way people use it, all the Clouddriver accounts, the cloud accounts or Artifactory accounts, are set dynamically. Most of the configuration today comes through dynamic means; whether they're creating pipelines or creating applications, it's all dynamic.
D
So
it's
infrastructure
in
terms
of
how
the
Spinnaker
comes
up,
which
persistent
stores
that
it
connects
to
those
are
the
ones
that
initially
is
set
up
and
for
the
people
who
are
starting
up
for
them.
If
we
can
give
that
configuration
and
have
some
way
to
provide
the
error
reporting
properly.
D
...the rest of the application configuration can be dynamic. Today, for generating this dynamic configuration, everyone has built their own automation to create these accounts and create these applications. That's a second topic; I think we should address it as well, but today I think we're focusing on initial configuration. And the initial configuration is only going to be a small set of things that we want to bring up; Argo CD, for example, comes up with just a few services.
C
I
think,
for
me,
the
priority
for
validation
is
to
is
to
make
sure
it's
in
one
place
and
I
think
whatever
validation
the
operator
or
is
doing
today,
it's
sort
of
getting
lucky.
Yet
it's
it's
getting
lucky
in
the
sense
that
it's
like
matches
with
the
code,
because
somebody
did
a
bunch
of
work
to
get
it
to
match
and
tomorrow
it
will
stop
matching.
C
Ideally,
we
want
something
to
not
start
and
if
something
you
know,
there's
of
course
lots
of
nuance
to
every
particular
parameter
and
every
particular
kind
of
Brokenness,
some
things
you
know
it's
better
to
start
and
and
warn
or
start
and
not,
warn
or
like
if
there
are
no
accounts,
make
it
okay
for
there
to
be
no
accounts
or
something
but
yeah
that
and
for
sure,
like,
let's
validate
to
the
level
of
schemas
and
and
what
we
can,
that
somebody
doesn't
have
a
syntax
error
in
their
yaml
or
they're
missing
a
required
field
in
in
a
kubernetes
object.
C
But
if
something's
missing
at
the
Spinnaker
level,
yeah
I
mean
what
assumptions
do
we
have
about
people
who
are
installing
Spinnaker
for
the
first
time?
Do
we
think
that
they
understand
the
notion
of
like
shipping
their
logs
out
of
the
cluster
into
a
log
aggregator
which
you
sort
of
have
to
have,
but
I
realize
that
some
people
are
learning
Spinnaker
at
the
same
time
that
they're
learning
kubernetes
and
sometimes
it's
hard
to
help
those
people
and
we're
happy
that
you
will
take
their
money
and
charge
them
by
the
hour
and
hold
their
hand.
G
So
I'm
I
kind
of
hold
a
probably
a,
not
popular
opinion
on
validation,
I,
don't
like
client-side
validation
at
all
I.
Think
it's
horrible
I
think
it's
a
good
way
for
this.
The
front
end
and
the
back
end
to
easily
get
mismatched.
Whenever
you
make
an
update
to
the
back
end,
you
now
have
to
update
the
front
end
and
there
can
be
a
little
bit
of
friction
there.
G
If
you
have
a
small
team,
that's
easier
but
like
if
you're
working
in
a
large
org,
where
you
know
you
might
have
like
some
front-end
engineers
and
you
might
have
to
backend
engineers-
and
you
would
have
to
be
constantly
communicating
with
them
just
to
make
sure
that
you
know
things
are
right.
So
I'd
rather
just
have
the
server
return.
Meaningful
errors.
I
know
that's
a
lot
to
ask.
G
But,
like
you
know
just
saying,
oh,
you
know,
this
validation
is
incorrect
or
you
know
on
from
the
server-side
perspective,
but
I
do
agree
just
validating,
like
the
simple
things
I'll
be
fine
with
that
I
think
trying
to
validate
like
I,
don't
even
know
how
you
do
that,
like
every
possible
use
case
for
for
a
customer
just
seems
crazy
to
me.
G
So
I
would
rather
just
like
be
like
okay,
the
yaml's
good,
you
know,
and
then,
if,
if
it's
a
Spinnaker
issue,
you
know
just
have
them
run
it
and
then
once
they
run
into
that
issue,
I
mean
that's
what
you
do
with
software
today,
you
when
you
run
into
an
issue
with
especially
like
configuration
as
you,
you
go
back
and
you
fix
it.
You
know
I
mean
if
you
think
about
it
right
just
last
week
what
Microsoft
like
leaked,
65
000
accounts
due
to
a
configuration
issue.
You
know
so
it
happens.
G
You
know
it
does,
but
that's
that's
typically
how
people
solve
it?
They
just
they
release
the
configuration
they
just
have
more
eyes
on
it.
Just
make
sure
the
configuration
is
good
and
then
they
deploy.
You
know
like
you're,
not
going
to
be
able
to
save
everyone
with
validation.
You
know
and
I
think
trying
to
go
down.
That
road
is
is
very
difficult
and
I
think
nearly
impossible.
To
be
honest,.
E
Yeah, I definitely don't think we're going to validate for every customer use case. Where I'm coming from, and maybe it helps to explain this a little bit more, is trying to get more folks into the community who are engaged in the project, especially if they're going to spend the rest of the week at KubeCon and they're coming with more of a Kubernetes background.
E
My
assertion
is
that
they're
going
to
consider
Spinnaker
as
more
of
an
appliance
they're
not
going
to
consider
it
a
platform
so
when
they
go
to
an
Argo,
for
example,
and
they
do
see
those
manifests
and
they
do
apply
them
very
directly.
It's
very
straightforward
to
go
and
look
at
the
logs
and
say:
hey!
Look!
E
...here's what's going on with my Argo instance. But we all know that's not the case with Spinnaker: there are nine to ten microservices that they're somehow going to have to know how to get into, and so that current gate at the client side is, I guess, a band-aid over the larger question of how we organize the project, and I don't necessarily want to solve that today.
E
I think this is a shorter-term solution while we figure out what we want to do at the larger project level, and how we want to be more approachable to those people who are evaluating us in the context of these newer tools. And sure, Argo is Kubernetes-only and it's only going to solve a very narrow deployment pattern, but the more folks we can get into the community, the more we can get, I guess, engaged in working on more ambitious projects than consolidating install methods.
A
Thanks very much, everyone, for the conversations. I'm going to ask a question, or a comment-question, from Ashley on the SIG Platform channel. He's asking: hey, not everyone has Kubernetes clusters, which is where Halyard comes in handy. I get the pain point caused by Halyard, but perhaps we can do something similar to the Operator by using Ansible playbooks.
C
This seems uncontroversial, like: yes. I feel like the conversation in this room has been really exclusively about what to do for Kubernetes, and maybe, I know it's not fair to say this out loud, but quote-unquote nobody cares about the Debian package in this room at this moment, and so we're not talking about it. But for sure...
C
...each of these pieces, and you can argue about the borders of the pieces, but they really do have pretty different scale demands, so having five replicas of this one and three of that one and two of that one really does make sense. And yes, when you're running one of everything, you could just have one of each, but again, the day-two experience is not that.
A
Right on, sweet. Oh, we've got an online question as well. Oh, I'm going to get the mic over here.
A
A very good question. So that's a little bit, let's see, not quite in line with our current conversation about the Operator, but maybe it does relate. For everyone who doesn't know, Home Depot produced a Go Clouddriver: a Clouddriver that handles Kubernetes accounts in particular.
A
It
is
written
in
go
so
there
are
ways
of
managing
and
charting
your
accounts
so
that
you
can
use
both
go
Cloud
driver
and
Cloud
driver
separately
and
get
get
the
benefit
from
that
I
think
I
haven't
talked
to
Home
Depot
at
all
about
this
right,
but
we
could
have
a
conversation.
They
are
on
the
TOC,
so
we
could
have
a
conversation
about
adopting
it
into
the
community
so
on
and
so
forth.
A
That's
now
a
another
service
we
have
to
manage,
and
it's
written
in
a
language
that
we
don't
necessarily
have
expertise
in
so
I
think
there's
a
lot
of
discussion
that
we
should
have.
If
we
want
to
go
down
that
route,
we
came
together
as
a
Toc,
not
100,
all
of
us,
but
we
did
talk
about
wanting
to
improve
the
experience
for
new
developers,
so
we
want
to
reduce
Tech
debt.
A
This
is
around
language
consolidation,
so
we
have
angular
reacts
two
different
Frameworks
in
deck
we've
got
kotlin,
we've
got
groovy,
we've
got
Java.
Do
we
want
to
add,
go
as
well
and
who's
going
to
maintain
it?
So
those
are
some
some
questions.
I
have
in
regards
to
that
I
think
it's
still
very
early
on
I
haven't
seen
it
run
myself
yet,
but
I
think
I
would
love
to
hear
more
and
I
would
also
like
to
know
if
Home
Depot
is
interested
in
contributing
it
in
the
first
place.
A
But
that's
another
question
for
another
day:.
D
Yeah, this topic probably isn't for the Platform SIG, but just to add to the comments: Go Clouddriver was originally done by Home Depot, but we at OpsMx did a lot of improvements on it, and of course it's open source. One of the issues is that it only does Kubernetes; it doesn't support AWS, GCP, or Azure. One of the reasons it was done is that it uses a lot fewer resources; basically, it doesn't do caching the same way that the existing Clouddriver does.
D
But
there
are
some
limitations.
We
definitely
need
to
talk
about
that.
It
is
open
source,
we'll
generate
an
RFC
and
see
if
it
makes
sense.
The
storm
driver
is
the
one
that
was
created
to
support
these
kind
of
multiple
instances,
so
you
can
have
one
instance
running
just
AWS
and
one
instance
running
just
kubernetes.
It's
like
a
proxy
in
front.
That
makes
a
lot
of
sense,
particularly
if
we
want
to
do
sharding
reduce
the
resource.
Consumption
for
individual
Cloud
driver
accounts,
and
we
will.
E
So, maybe so that we can focus on that conversation more, we can close the topic on install. There's been a lot of great feedback today; I appreciate everyone that's been sharing. Maybe, as a next step, instead of the RFC as it is today, we'll create a new RFC that captures the things we care about as a community in terms of that install experience, so that we agree on those first, and then the solution that follows matches those requirements, so that we're not saying, well...
E
this is the best approach, like automatically choosing one over the other; we're choosing the solution that matches those requirements first. I'm happy to help drive that, and I can post it in the SIG platform channel when that happens. Does that sound good? Okay, cool. So we can close that.
D
We won't get anything in the view, right? We can't see it, you know. But with the Go Clouddriver, you do have the ability to see what's currently running in the target, so that feature you would have with the reduced resources. But there are definitely limitations; we can't replace it today, there's a long way to go.
D
Yes. We did see that for about, say, 500 Clouddriver accounts, the existing Clouddriver takes something like 12 GB of RAM; the Go Clouddriver runs under three GB of RAM.
C
I guess I'll say, in the absence of all of this exciting discussion, there's some other, much more boring stuff that's proceeding relatively behind the scenes, because it doesn't seem like that many people care or want to help, which is okay. But there are Spring Boot upgrades going on, and those Spring Boot upgrades facilitate Gradle upgrades, which facilitate Java upgrades, which might end up helping the apples-to-apples comparison.
C
I don't want to wait for that; we should still talk about this Go Clouddriver thing. And I don't know enough about Stormdriver. I know I saw some PRs go into Orca to make it so you can send Kubernetes requests to a different Clouddriver URL than you send AWS requests, and so on. I'm sure Stormdriver is sexier and fancier in ways that I don't understand. But it seems like enough people think the general notion of that mechanism is a good idea that it's happening one way or the other, and I guess we'll see where all this goes. I don't know exactly what the next step is for the Go Clouddriver, or who's going to drive it forward and keep the conversations going, but I don't think I'm going to have to worry about that; I think those conversations are going to happen.
C
Are there any other questions? I don't know how we're doing on time, but are there any other questions here? I feel like this is not a super busy day, so if we need to run over, it's fine with me. Oh, wonderful.
H
It's a question of me kind of trying to get into the world of the SIGs: what does the landscape look like now? Three years is a good long time. So where does the SIG start? I see the release part very clearly. Where does it end in terms of responsibilities?
A
Totally, that's a really good question. So, wow, platform.
A
I'm almost speechless. I will say there are not too many SIGs that are very active right now, so we do want to consolidate the SIGs, and part of that might mean somewhat broader responsibilities or areas of coverage. The platform SIG has traditionally been almost a catch-all. It was like, oh, build and release is really important, that needs to happen and be managed; and we built this plugin framework, and we're all kind of interested in those things.
A
We touch on a lot of different things, and it would be nice to have a bit more focus and then be able to split out SIGs: a SIG to handle build, release, and test; a SIG to handle the plugin framework improvements (yeah, I see a thumbs up over there); a SIG to handle these other things. But again, we need people to staff these and participate.
A
It does take work, but that's what we're here to help manage, and to bring people in to help with.
A
So the SIGs that I at least intend to help promote and kind of revive, the ones I've identified as high priority, and we have some agreement on the TOC as well, are the platform SIG and the security SIG. The security SIG is a needed SIG; we need people on it. Jason has been doing a great job of running it by himself, but he's one guy.
A
That would be great; we need more leads on that. For cloud, we have an idea of having a cloud SIG. I do see that as splitting into a Kubernetes SIG and then another SIG for the other cloud providers.
A
We used to have a policy, and it still is the official policy, that if you want a cloud provider, you need to have a SIG associated with it; you need to have it staffed. And that's just not the case right now.
A
Uh-huh, so the governance, yeah, the governance repo, which we do need to update. And we have two other things that we want to focus on too: contributor experience, and then the docs SIG, and we actually have our docs SIG lead, Tiffany, right here in the corner. So these are some things that we're going to want to update, and we do have a lot to update, especially in that governance repo.
A
We heard y'all yesterday, and we want to make a blog post. We want to share some insight into what we've talked about as a TOC, and do that in a transparent way. So: updating the governance repo, and making it more clear which SIGs we do want to focus on. I think that's a topic of discussion today in the docs SIG.
C
I don't know if this has been mentioned or gotten onto the list of greatest hits of possible SIGs, but one of the gaps that I see is in the UI, mostly because I'm not a JavaScript person, and I'm not a React person, and I'm not an Angular person.
C
But people are trying to do work and get help, and there are very limited resources, at least that I know of, to make that happen. Even if there wasn't an actual UI SIG that met every couple of weeks, if there was a UI channel on Slack where people who know about these things could hang out, and people could ask questions, that would be very helpful.
C
So if you are a JavaScript person, or you have JavaScript friends, bring them on; we need them. And I guess I do feel like I want to mention one other thing that we started to talk about last time.
C
It ties into the notion of a separate build-and-release SIG, which is maybe also partly this conceptual difference between a long-running SIG versus a project-based SIG that gets spun up to accomplish a task and then goes away again, which is probably healthy. Like we all know, if you don't exercise things, they don't end up working, but it would help us exercise the creation of SIGs and the destruction of SIGs, and make sure the mechanics of all that work, in the general theme of improving quality and reducing barriers to entry and reducing friction in all of our lives. There was an RFC about this, I don't know, some time ago, and it faded.
C
So then you fix it, instead of not finding out when you break something. I think Joe has even signed up to help make that happen in the upstream universe, and it's sort of not fair to talk about it without him; Joe should be the one talking about it. But we have a bunch of people here, so I wanted to get the idea out in the world so that it's not as big of a shock and surprise.
I
So the approach we ended up taking was combining all of them into a single repo made of git subtrees, and we just merge with subtree strategies; it's very sane. We build everything with a composite, similar to what I was talking about in the demo yesterday with plugins, same idea: a Gradle composite just builds everything at the same time. We have a Gradle build cache, which speeds up repeated builds dramatically.
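A minimal sketch of the subtree-merge import being described, using only git's core subtree merge strategy. The repo names, URLs, and prefix layout here are illustrative, not the speaker's actual setup:

```shell
# Import an existing service repo into a monorepo as a subtree, using
# git's core "ours" merge + read-tree recipe (the subtree merge strategy).
import_subtree() {
  name="$1"           # directory prefix inside the monorepo, e.g. "clouddriver"
  url="$2"            # upstream repo URL (illustrative)
  branch="${3:-master}"

  git remote add "$name" "$url"
  git fetch -q "$name"
  # Record a merge commit without changing our tree yet...
  git merge -s ours --no-commit --allow-unrelated-histories "$name/$branch"
  # ...then graft the upstream tree under its own prefix directory.
  git read-tree --prefix="$name/" -u "$name/$branch"
  git commit -qm "Import $name as subtree"
}
```

After the initial import, pulling upstream changes is a `git pull -s subtree <name> <branch>`, which is what makes the "keep continually integrating into the new thing" transition workable.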
I
I would say a Gradle build cache is almost a requirement to operate in this way, because, for example, if there's a kork change, you would necessarily have to run tests for every other service that depends on kork. There's a little bit of dependency graphing there that's implemented in the Gradle tooling. It's not automatic; it's by hand. It's like, hey, if we see a change in this folder, run tests for X, Y, Z, and Clouddriver.
I
You run everything that depends on kork. Same for Fiat: if there's a change in Fiat, we run builds for everything that depends on Fiat, and the whole graph is implemented in that way. It's not a very complicated procedure, just a little bit of hand tuning that's part of the process. But in the end, our release process is zero work.
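The hand-tuned graph they describe can be as simple as a path-to-modules mapping driven from the CI diff. The module lists below are illustrative, not Spinnaker's actual dependency graph:

```shell
# Given a changed file path, print the modules whose tests must run.
# The mappings are illustrative; a real setup mirrors the actual Gradle
# dependency graph by hand, as described above.
affected_modules() {
  case "$1" in
    kork/*)  echo "clouddriver orca gate front50 fiat" ;;  # everything depends on kork
    fiat/*)  echo "fiat gate orca" ;;                      # hypothetical fiat consumers
    *)       echo "${1%%/*}" ;;                            # otherwise just the owning module
  esac
}

# In CI you would feed this from the diff, e.g.:
#   git diff --name-only origin/master... \
#     | while read -r f; do affected_modules "$f"; done | sort -u
```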
I
There's just a button that builds everything and ships containers and ships jars and rewrites them according to what we need to do, and it's been an incredibly smooth experience. The transition is also pretty straightforward: how we did the transition was we just kept continually integrating into the new thing, had that start producing artifacts, and then eventually stopped using the old things. So instead of integrating from what we were working on internally to new things working internally, it just goes straight from the outside world into the internal repository.
I
So it's an approach I'd recommend. I think we could definitely revive the RFC, add some clarity and context, and we can add some of our experience as well to illuminate all of that, and we'll see where it goes. I see there's some excitement about it over there, so that's good. I'll be honest: I read the original proposal and I was like, oh yeah, very nice, but then there were objections and it didn't get implemented. We ended up doing it ourselves, and it's been a huge improvement. And like I said, Gradle build caching is very important.
G
Just to expand on the cache a little bit, because I don't know if everyone's familiar with the Gradle build cache: you can actually have a remote cache. Originally, when we did the monorepo, it made our builds slow; especially if you made a kork change, it would be over an hour just to do all the CI/CD jobs. But once we added the remote cache, it took 15 to 20 minutes.
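For reference, wiring up a shared remote Gradle build cache is a few lines in `settings.gradle`; the cache URL here is a placeholder:

```groovy
// settings.gradle -- remote build cache sketch; the URL is a placeholder.
buildCache {
    local {
        enabled = true
    }
    remote(HttpBuildCache) {
        url = 'https://gradle-cache.example.com/cache/'
        // Only CI populates the cache; developer machines read from it.
        push = System.getenv('CI') != null
    }
}
```

The big win described above comes from CI populating the remote cache, so a kork change only rebuilds tasks whose inputs actually changed instead of every downstream service from scratch.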
C
I'm super excited for this. It will also dramatically simplify the release process: it will now be two commands, git tag and git push, which is fantastic. I think there were some concerns the first time about how big a machine you would need to be able to work on Spinnaker, and that's part of it, at least.
C
Maybe this is a sort of tangential goal, but I think one of the consequences of this is that it would lower the barrier to entry for people, even if it then sort of simultaneously raises it, because we'd all have to have super dope computers to be able to build it.
C
That would be a bit of a drag, although I mostly think it would be worth it, so I think we'll have to see. The minimum system requirements to be able to develop on Spinnaker are already pretty brutal, because Clouddriver is massive. And from what I heard, there's maybe a little bit of IntelliJ magic to make it so you don't need to have, like, an 800-gigabyte machine on your desk.
A
Yeah, one thing that we've done internally at Armory, and I could see these things kind of going together a bit, is that we tag all of our images with the same version, so there's no need for a BOM. You can just download all the images with that same tag, so it does simplify that process a little bit as well. Unfortunately, with the Spinnaker project, the service versions are all over the place.
A
I wouldn't personally be opposed to changing the version scheme, but that's a conversation for another time. How are we doing on time, by the way?
A
All right, well, thank you everyone for joining today. We will hopefully see y'all next Thursday at the platform SIG. In the governance repo there's a SIG index page, and you can check out the meeting calendar, adjusted for your time zone.