From YouTube: SIG Apps Biweekly meeting 20200713
A: Okay, we have one announcement from the community: there's the Meet Our Contributors meeting. That's a monthly one-hour opportunity where you can go ask questions, and you can also participate in peer code review, so we can pair you up if you're interested. We don't have any demos today, and we have several discussions.
B: So, can everybody hear me? Yeah? Great. So we hit this in general, which was: there's a number of system components Kube runs. We had approached this from a distribution mindset: you have a Kube cluster and you want to run infrastructure components that other things depend on — webhooks, Istio, CSI drivers, host-level services that everybody needs. One of the things that came out of that was that it's actually extremely difficult with a daemon set today to update it and keep availability, on that particular node, of whatever service it provides.
B: You can definitely minimize that disruption with the daemon set, but the daemon set strategy today is delete and recreate. If you're pulling a new image on that node, that image pull could usually take a couple of seconds; sometimes it could take 20-30 seconds, but in some environments, in the worst case, it might take minutes. There's also, you know, startup code you need to run. Does the kubelet choke during the upgrade? Is there any kind of disruption to the kubelet or to the load on the API server?
B: That would prevent the next stage of the update taking place. So there are a couple of approaches — I have those down in alternatives — but one of them was something we had always kind of talked about for daemon sets, which was the idea that we did for deployments from the very beginning: with a replica set, you want to go from one to two and then back down to one.
B: You don't want to go from one to zero to one in many cases — by default, with a lot of stateless workloads — because the whole point of a stateless workload is that you can run multiple copies of it. When we had talked about it for daemon sets, the idea would be: what if you could run two daemon set pods on the same node?
B: This kind of approach tries to find a happy middle ground, which is not going to be magic for everyone. But it would effectively be: instead of "delete the pod and then start the new one," it would be "start the new one and then, when that's up and ready, signal the other one to be deleted." Things can still go wrong — you know, if your first pod crashes — and we're thinking through the implications of that.
B: Usually the answer is: if your goal is to have high availability and you can't maintain that in your app, that's your problem. You've got some implication in your code and you didn't do it right? Well, you weren't going to be able to do it right anyway. There's no magic in Kube that can handle handoffs between two things. Even with replica sets, if you don't do shutdown correctly, with gracefulness and all that, you're going to lose connections. There's just no magic for that. So it's kind of a compromise solution.
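A minimal sketch of what the surge-style daemon set update being described could look like as a manifest. The maxSurge field under a DaemonSet's rollingUpdate is the proposal under discussion here, not a field that existed at the time of this meeting; the name and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent            # hypothetical
spec:
  selector:
    matchLabels:
      app: node-agent
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1             # proposed: start the replacement pod on the node first
      maxUnavailable: 0       # old pod is only deleted once the new one is ready
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: registry.example/agent:v2   # hypothetical
        readinessProbe:                    # "ready" is what signals the handoff
          httpGet:
            path: /healthz
            port: 8080
```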
B: I think it's kind of an advanced case, but it's probably pretty useful for people doing infrastructure — CSI drivers, networking plugins, host-level services that need to maintain uptime — like anybody doing a service with a local endpoint that another service is expected to call over a UNIX domain socket. If you have a reason that that process needs to stay running — there needs to be at least one instance of your process running at all times or you're going to drop stuff or fail — then something like this would help you through it.
B: And I had a quick, basic prototype of this. Actually, the daemon set code and the controller were reasonably well set up to support this. What isn't simple is making sure that we have all the exhaustive testing. So I did some of the basic testing and worked through what problems I would hit if I actually tried to use this, but I do not have the extensive set — you know, the hundreds of unit tests — we need to make sure the controller works in all cases.
C: So to me this makes one of the most major things kind of surveying what daemon sets are out there that would benefit from this. A scarce-resource conflict that would make it fail in an expected but consistent way is host port conflicts. But if we go forward with this, then the advice that we give is to move away from host port where possible, to node-local service topology, in order to say: okay, this is how you communicate locally with processes on the same node.
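For reference, the node-local routing suggested here as the hostPort alternative eventually took shape as the Service internalTrafficPolicy field; at the time of this meeting it was the alpha ServiceTopology topologyKeys API. A sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-local-agent      # hypothetical
spec:
  selector:
    app: node-agent
  ports:
  - port: 9000
    targetPort: 9000
  # Route traffic only to endpoints on the caller's own node; this field is
  # the successor to the alpha topologyKeys ["kubernetes.io/hostname"] API
  # that existed when this meeting took place.
  internalTrafficPolicy: Local
```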
B: Pods are not required to declare their host ports, usually. When you declare your host port, it's because you want your magic host port, versus the more normal "I'm listening on the host interfaces." Like, if you have a high-performance load balancer, like an ingress controller, you could be on the pod network — that'll work. Sometimes you're not actually registering the host port anyway: you could run an ingress controller in a daemon set and still get pod networking, and still not have that scheduling problem, but have the other problem instead.
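The distinction being drawn, sketched as two pod specs (names and image hypothetical): hostPort reserves a scheduler-visible per-node port, while hostNetwork listens on the host interfaces directly, so there is no scheduling conflict to detect, only a potential runtime one:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-host-port        # hypothetical
spec:
  containers:
  - name: agent
    image: registry.example/agent:v1
    ports:
    - containerPort: 8080
      hostPort: 8080          # scheduler treats the node port as a scarce resource
---
apiVersion: v1
kind: Pod
metadata:
  name: with-host-network     # hypothetical
spec:
  hostNetwork: true           # shares the node's network namespace; no hostPort is
  containers:                 # declared, so two surging pods don't conflict at
  - name: agent               # scheduling time (they can still collide at runtime
    image: registry.example/agent:v1   # if both bind the same port)
```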
C: We're incurring increased risk, and I want to confine that risk to a particular domain during rollout. The only thing I would want to make sure is that when we do this, we think about the other case, to make sure that we're doing it in a way that we don't block going forward with topology-aware updates. Yeah.
B: A nuance is: if your previous pod crashes while you're trying to do a handoff, there is very little to do that's not just "go ahead and clean up the old one." Because the assumption would be: if you've got two instances, and the old one's ready and the new one isn't, and the old one dies, the thing to really do is to move forward and then to pause at the higher level — because that old thing is either broken or stale.
B: You know, if it starts crash-looping, or if it goes unready — the moment it goes unready, it's really being removed from consideration. The higher-level thing is: like in a deployment, you've got a gate on rolling out anyway, and that gating has to basically work based on the ready status of the nodes anyway. If you have two and neither is ready, it's basically the same thing as one that isn't ready. So I added some notes to this about how topology awareness could play into it. I think structurally, this —
B: It's interesting, and I called that out here — Jordan had brought that up when he did a first initial pass of it. I'm not really sure about that, and I'm somewhat biased, but I've never actually used host ports in Kube, ever. I've only ever used host networking or node ports. So I would say we should do — I can go do a review of the kind of common daemon sets deployed. I would probably say, unless everyone is using host ports and I'm totally wrong, I think the argument would be: host ports are definitely a portion — like 10, 15, 20, maybe 30 percent — of a certain class of problem. In general it's no worse than, it's no different than, any of the other optional capabilities, which is: not every daemon set, or not every deployment for instance, is going to be able to have more than one replica. It doesn't prevent deployment from being useful for one replica; it just means that the tunables are there for it.
B: I think some of those arguments probably apply, which is: if you set up an impossible thing in a deployment, it's still impossible, right? If you reference a persistent volume that can only be attached to one node and you set two replicas, that is impossible today. It is not a valid use case; it is just something that's possible to express. Our answer is to detect, warn, and then give you the option to go change it.
B: Now, there is something going into storage that actually changes that: storage is getting an alpha feature in 1.19 that allows PVCs to be instantiated on a per-pod basis and share the pod's lifecycle. The implication of that: if we had taken a hard rule and said, well, we'll do a bunch of validation on deployments around having a replica size greater than one with a PVC reference, that would actually be incorrect now, because of the way that that is being handled. So I felt like host port is situational.
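The 1.19 alpha being referenced is generic ephemeral volumes, where a PVC is stamped out per pod from an inline template and deleted with the pod; a minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ephemeral-pvc   # hypothetical
spec:
  containers:
  - name: app
    image: registry.example/app:v1
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    ephemeral:                    # alpha in 1.19: the PVC is created per pod
      volumeClaimTemplate:        # and shares the pod's lifecycle
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
```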
B: It doesn't remove the need, in general, for this pattern, and we could probably say that solving the host port scheduling problem — with a recommendation to warn or fail gracefully — can be done as a follow-up. If we've misinterpreted how many host ports are out there, in terms of a total percentage of daemon sets, then yeah, I would be open to spending more time on that in the first pass.
B: I will — I'll go do some research on the number of people using host ports and the scenarios they're doing it in. My gut is that host network has mostly subsumed that for things like networking plugins, and I do not know what people are doing with CSI drivers, but I haven't seen a ton of host port usage in some casual browsing I did.
C: Also, if I remember correctly, the original proposal that was kind of similar to this was a create-before-delete strategy, not adding maxSurge, and we didn't decide to never do it. We decided that we didn't want to do it then, and we didn't want to block going to v1 on it.
C: There are some subtle differences between what was proposed there and what's proposed here, especially from how I would wrap my head around it from an implementation standpoint. Create-before-delete was actually kind of monitoring the status of pods on a node and trying to explicitly create a new one prior to deleting the old one, where maxSurge seems a little bit simpler in terms of: I'm just going to surge up — and it's a controllable parameter for my surge — and then surge back down. Well, yeah, a lot of that, yeah.
B: It's the slightly higher meta. I like it because the API is consistent with deployments — but it is a subtly different implementation, and it could be that create-before-delete would work. I was, like, just overall not inventing new concepts when it's close enough, and I feel like the surge percentage would tie more closely with topology, versus create-before-delete, which then might still require a percentage for how you behave. Okay, yeah.
B: Documentation is the key here. I certainly don't know that everybody could get deployment maxSurge correct without actually reading the description, and I know the amount of time Tomas and others spent in the early days of review — we got it wrong, subtly, many times. So there's an element of: no matter what, there's going to be some complexity; getting a clear, human-understandable description of it is probably the key. As for the similarities between them —
B: They're already somewhat nuanced processes, like the difference between one and zero percentages, rounding down and rounding up. And as I was copying the deployment doc — the deployment Go doc — to think about what surge would be, those edge cases in the way it was described actually helped, because it was easy to take the sentence for a deployment and change it to talk about how it applied specifically for a daemon set, but keep all of the same semantics.
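The rounding edge cases being copied over work like this in a deployment today (ten replicas as an assumed example): maxSurge rounds up and maxUnavailable rounds down, which is also why the two can't both resolve to zero:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example               # hypothetical
spec:
  replicas: 10
  selector:
    matchLabels:
      app: example
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%           # rounds up:   ceil(10 * 0.25) = up to 3 extra pods
      maxUnavailable: 25%     # rounds down: floor(10 * 0.25) = at most 2 pods down
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: registry.example/app:v1
```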
B: You know, like: oh, I think about them the same; it's just that I'm choosing which machine to do the pod on, versus choosing which replica set to scale down. So there were some elements of it that had some similarities that I liked. It's going to be complicated either way, I think, and having fewer API concepts with better docs is probably where I would lean from a reviewing perspective.
B: So if there's any other feedback, just let me know and I'll take it in the doc. I got some comments from Tomas and Jordan, and Jordan was kind of asking some of these same questions based on his memory. If there's anything anybody missed, I'll add it and try to get this into a reviewable state soon for a subsequent discussion.
B: We could — in this case it's certainly always easier to relax that validation later if we decide it's not too much of an issue. Today, you will not get scheduled if you have a host port conflict, unless you've heavily customized. We do not block on replica size greater than 1 for PVCs in replica sets, and I think that's probably OK — I'm not proposing to change that here.
B: The surge field — because it is a new field, it will default to not specified. It has to, because we're adding something novel, and we already have a default on the update strategy, which is the percentage of the daemon set to roll. So the default, as it's called out here, is zero: you have to explicitly opt into it.
B: Because you have to explicitly opt into it, the act of turning that on — the validation only applies when you have that set, the flag is on, and you are changing the values. And this is the list Jordan has drilled into me, like the reviewer checklist for when you add a feature gate with validation and add fields, because I forget it every time. But if that value was already set, and you somehow got into that state, we would not validate on it.
B: Generally, the rule we have settled on is: if you have invalid data in storage, and you did not change it in your update, we do not validate it. That is kind of the default premise, because, unfortunately, for whatever reason you got into that state, we have to honor it — especially going through the alpha-to-beta transition. But you are still broken, and we will tell you you're broken when you change it.
B: You have to delete and recreate that object, or clear the field — and you should be able to clear the field once the feature gate is on anyway. So it's a solvable problem, I think. For this, say we changed the validation: you got into this state, you tried to update anything that changed it, we would gate your update again, right?
C: I think, with that, the thing is moving forward with the implementation and getting it to an implementable state. So unless anybody is explicitly saying we should not do this — and I don't see any feedback on the KEP itself, or anyone in the community saying we should not do this — I guess we can move forward. Or does anyone have objections?
E: I mean, I think I like it, and I think we have been waiting for similar functionality for daemon sets for sure. I'll take a look at it and review it once more — I haven't contributed at all, so I will review it, and I will be looking forward to the validations with respect to host port and things like that.
G: While the new replica set scales up, we would just make a new replica set at the maximum, or the current scale, of the old replica set. And then the new replica set would be labeled differently than the old replica set; they would have sort of a managed label. I didn't necessarily specify what that would be, but one option would be to just have a color: the old one would be blue and the new one would be green. That way, you could specify, say, a service that only goes to blues and another service that only goes to greens, and that way the user can sort of decide when the client rolls over to the new replica set. And so what this would mean — and it's sort of different than what we have now — is that, instead of sort of automatically deleting the old replica set when the new replica set is scaled up, we would have to have some sort of policy on how to decide when to retire the old replica set, or, say, delete the new one and roll back to the old one.
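A sketch of the two-service arrangement being described, with the color as the managed label (all names hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-blue            # clients pointed here keep hitting the old revision
spec:
  selector:
    app: myapp
    color: blue               # matches only the old replica set's pods
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-green           # clients roll over to the new revision by
spec:                         # switching to this service (or its DNS name)
  selector:
    app: myapp
    color: green              # matches only the new replica set's pods
  ports:
  - port: 80
    targetPort: 8080
```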
G: So the current mechanism for rollback is using rollout undo, or sort of rolling back to a specific revision of the deployment, I believe, which is a whole different resource management — there's sort of a generic resource for managing the undos, but it only currently implements it for the deployment, I believe — which means there's some flexibility in changing that. What I didn't do is necessarily specify how that should change. I think that is sort of up for discussion, because I don't know the ins and outs of the rollout functionality.
G: I'd like to have a sort of backwards-compatible way to add this, which I think is pretty standard in the Kubernetes API and pretty much required in order to make any sort of change to it. And I'd like to have sort of a manual continuation policy option, where we say the old replica set is retired after someone makes a change saying the new replica set should be kept — or, vice versa, some manual change is made and then the old one is kept and the new one is deleted.
G: That one is useful if there's backwards compatibility, but it doesn't provide any sort of extra validation in terms of having a way to tell that the thing is healthy, other than the pods individually being healthy. So — I'd say I already talked about rolling forward and rolling back, and —
G: What I've seen when using blue-green rollouts myself is that oftentimes you're not doing a blue-green rollout all the time. You might do a blue-green rollout for sort of a database schema migration, for example. So you might want to switch back and forth between rolling and blue-green rollouts. And then, sort of non-goals: I'm not really planning on automating the continuation policy decision any more advanced than saying all the pods are ready, because we already have the ability to do that.
G: What should happen if a change is made to the deployment while two replica set revisions are active? Can we use sort of the pause functionality to make that happen — then no changes are allowed, or rather, if changes are made, the controller won't do anything until it's unpaused? That's one option for rolling out. And then —
G: Are changes needed for kubectl rollout undo to delete the new replica set without replacing the old replica set, or will that work out of the box? That's sort of an implementation detail that I'm not really sure how it works yet. These questions were right above the user story, but basically you guys can all page through that doc yourselves if you want. I don't necessarily expect you guys to approve and agree right now, but I wanted to have some sort of synchronous discussion: what do you guys think about it?
C: My main question — well, I have two questions, but my main one is: did you reach out to, I mean, two of the main kinds of controllers that orchestrate this kind of workflow — specifically three, actually: Spinnaker, you have Argo Rollouts, and then you have Weave's Flagger. Argo and Flux have merged, with the sponsorship of AWS, to become Argo Flux. Are any of these communities asking for this in core? Because if they're all coming together and saying this is something that would, like, streamline the implementation that all of us will consume —
C: So if they have, like, requirements that land in core, then we can come up with something where it's like: okay, now we know we've met a minimum set of requirements, and we have some level of commitment from our partners in the open-source community to actually consume it if we build it. That would be a stronger motivator than even trying to build it first.
G: I have not reached out yet to them, outside of, like, one guy who responded to my Twitter queries. I'm not entirely sure how to do that, because I don't know what their change processes are. I have used sort of Knative, just for experimentation, and I've used Spinnaker in production, but I haven't used Argo yet, so I don't have sort of first-hand knowledge about how those would map, in order to write that myself.
C: It's just kind of — unless you have very, very large deployments, it's kind of very difficult to control the fraction of traffic that you're actually sending to the canary, in order to do any type of even manual canary analysis. So, instead of using the primitives we already have — why does this make it easier? That was the thing that was unclear to me, as somebody who kind of uses both of those strategies with existing infra.
G: So, sort of the first question is: why is this valuable on the face of it, instead of using two deployments? And sort of the problem with having two deployments is that there's no sort of controller or management scheme that owns both of those deployments. You end up having to write an entirely different system for managing those, or doing them manually, and so this provides a way to do it in a more automated fashion.
G: Where there's, you know, an extra deployment and only sort of one flag that you would have to flip in order to say "yes, I want to keep the new ones," which would make it relatively trivial to sort of have a sort of mini controller that says: okay, all these new ones are healthy, and then I flip to the new ones — manually or automatically, either one.
G: If you wanted to write your own controller to sort of manage deployments, you could also do that, but as far as I know, nobody has done that in a simple fashion. Spinnaker doesn't even do that: Spinnaker manages replica sets by itself instead of using the same deployment. So this strategy here is actually doing something very similar to what Spinnaker does, except without needing Spinnaker to do it. So, basically, it would —
G: My understanding is that they're doing sort of percentage rollouts, and they're managing them via load balancer, and so this solution doesn't necessarily explicitly support percentages, but you could wrap percentages around it. The blue-green implementation wouldn't support granular percentages; it would just be a hundred percent or zero percent. If you wanted it to support something like what Knative was doing, you'd have to add another deployment strategy — sort of a canary strategy — that allowed specifying either percentages or counts.
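Purely as a hypothetical sketch of the shape being discussed — neither a BlueGreen nor a Canary strategy type exists in the Deployment API, and every field below is invented for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                 # hypothetical
spec:
  replicas: 10
  strategy:
    type: Canary              # hypothetical strategy type
    canary:                   # hypothetical stanza
      newRevision: 20%        # hypothetical field: the share of replicas (a
                              # percentage or an absolute count) run from the
                              # new replica set while both revisions are live
```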
G: So you could easily sort of integrate part of what Knative is doing with a separate rollout strategy — have a canary rollout strategy with percentages in it, and then roll those percentages manually, or have some controller that rolls them. And then you wouldn't necessarily even need the complexity of having the load balancer doing the percentage for you, because you'd have the backend percentages being different.
G: It might be that you can't match that percentage exactly, because you don't have that particular number of instance replicas, but you could make a sort of best-fit attempt. Using that for a sort of canary strategy is a sort of different approach — it's replica set based instead of load balancer based — and so I'm not sure that Knative would actually go for that, but they might; I would have to talk to them.
G: I think it would be different enough that there might be sort of two ways to do it, but the nice thing about this strategy is that it's sort of simpler than using a load balancer and doesn't require sort of changes to the load balancing setup. You could easily have two ingresses that point to two services and just switch between them by, like, DNS on the front end, or have one load balancer that is Kubernetes-aware that can decide which service to route its backends to.
G: I'm not saying it's not used — you just said most people use it. I think a lot of people use, like, an nginx or ingress style directly to the pods, based on the service resolution, which doesn't necessarily use iptables, or doesn't specifically depend on that functionality. But with this particular design, both replica sets are going to be matched by the deployment's name in sort of a kube-DNS way.
C: I don't — I wouldn't understand how that would work in either iptables proxy mode or in IPVS mode for kube-proxy, right? Like, I can change IPVS to actually do those types of more customizable load balancing of the connections going out, but with the iptables kube-proxy, if I have a single service that's fronting, each random connection assignment is all I get, for the most part. Yeah.
C: That works well — so in the event that I have very large deployments, that actually works well and is consistent with what we suggest for two-deployment canaries, which is what we have on the website. But one of the reasons that Argo Rollouts and Flux — or sorry, Flagger — became popular is that it doesn't particularly work well for smaller deployments, because if the number of connections that are being requested is much greater than the number of pods, you still don't necessarily get a consistent amount of traffic across them.
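To make the arithmetic behind that concrete (illustrative numbers, not from the meeting): with kube-proxy's random per-connection assignment, one canary pod out of N replicas receives roughly 1/N of the traffic, so a 5-replica deployment can only step the canary's share in increments of about 20 percent, and a 1 percent canary needs on the order of 100 replicas.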
G: That's a reasonable argument. One way you can get around that is by still using the sort of blue-green color labels that you would get, but using that in a canary scenario, where you have sort of a smaller percentage rolled out for the one side, and then you use your ingress strategy to decide which service you route traffic to — whether it's, you know, the canary or the rollback. There is sort of a downside with that strategy, which is that the auto-scaling might not kick in the same way if you're halfway through a canary rollout.
C: I mean, I think the way I look at it is: for GitOps-based workflows and for custom orchestration of deployments, there are a lot of solutions out there in open source already, and people seem to be using them; they seem to be happy with them. From the architecture standpoint, the position has kind of been taken that we don't want — there's no custom resource, there's just a resource, and you can bring your own. There's a set of default ones that most clusters will have; there's a set of resources that you have to have, in general, to be compliant with the Kubernetes API specification. But we don't want to treat them as special. So if we already have a set that are out there for our end users, and our end users aren't really pushing on us saying "the technologies that are available in open source are insufficient to meet my use case —
C: — we really need you guys to do something separate," then the only value that we can bring as a SIG is to kind of converge — provide utility to the people who are providing those open-source resources, and try to get continuity, to get kind of a convergence of the implementation, so you have one standard implementation that just works. That seemed to be one of the goals of your proposal, and I think that's what kind of seemed attractive to me: if we can get the open-source community to converge on a standard implementation that's robust, that's probably better for the end user than having very many implementations that are just slightly disparate.
G: So part of the impetus is to sort of reduce the complexity and the disparate functionality between those implementations. But part of it is also that, you know, there's an API with a rollout strategy here, and it doesn't have the rollout strategies that people want to use. It only has rolling, and you would think it should also have canary and blue-green, because that's how people think about these things — like, parallel to each other — but those functionalities aren't here in deployment.
C: Not — not you in isolation, right? But think of it like this: deployment itself comes from DeploymentConfig, which is implemented inside of OpenShift, and it became so obviously valuable that it was adopted into core, because, like, everyone wanted it, right? So supposing we could get, let's say, some fraction of the community that's interested in doing orchestration on top of Kube — that's their value-add — whether it's Argo Rollouts or the Argo Flux project or Spinnaker or whoever's interested in it.
C: One thing we could do is, like, fork deployment as an open-source, SIG-sponsored project — like, inside of SIG Apps — and provide a CRD there that would be commonly used across the community, and if we like it, then promote it as a built-in type if we feel that's necessary. Or, if we don't feel it's necessary, there are types that are maintained out of tree by a SIG, right? So it doesn't necessarily have to be a built-in type in order to be supported or maintained by the Kubernetes community.
C: There are a lot of different paths there that achieve the same result of converging the community around an individual resource that's robust and well implemented, and meets the needs of people who want to implement blue-green and canary on top of Kubernetes. But, you know, we're not saying you should go do it in isolation, unless that's something that you think you want to do.
E: It essentially creates two replica sets and manages most of what the deployment controller already does, but I am still, like, not confident whether I should move all my deployments to Argo Rollouts, because I know I need maxSurge and maxUnavailable and all that, and I don't know if Argo Rollouts, in addition to implementing blue-green, implements those correctly and figures out all the corner cases.
G: Knative is in a similar space, where they have reduced the functionality to a core, and they've also added a custom resource on top, and that custom resource doesn't do everything that a deployment does — it sort of does a subset of it. So if you want some of the features that deployment has that Knative doesn't have, you can't get Knative's, you know, blue-green or canary —
G: — without, you know, sacrificing the extra features the deployment has. And Spinnaker doesn't quite have that, but sort of does, because it does some manual management of the replica sets while the deployment still exists. I haven't looked at the code to see how that works, but anyway, I suspect that most of these guys would be happy to go along with it.
C: I don't know if you need that, but I think I would start with going out and reaching out to them and seeing what their requirements are, and if they would use it, and then have a discussion. Maybe we could start a working group within the SIG to have a discussion about what we can do here, in order to try to converge solutions in the community and provide something that's robust and featureful. But Janet, you've said some stuff — you've been a little bit quiet, and you're kind of one of the, you know, deployment experts.
A: So I'll need to go take a look at the details of this proposal, and to see what's different in this proposal.
C: But I mean, again, there's a space there. There's "do it as a built-in type," right; there's "everyone writes their own custom resource that's completely divergent"; and then there's "we come together as a SIG, with an open-source community including multiple projects, and build one custom resource that everybody consumes," right? So I mean, there are multiple solutions in that space. Right now we're at the one where everybody is just kind of doing their own thing, and maybe there are some problems there. We don't necessarily have to go all the way to three —
C: — where we modify the existing deployment. We may find that two has the most value, or maybe it is three. I think having discovery, and having conversations with the other open-source projects to try to figure out what their needs are and what the best solution to meet their needs is, would be my first step. That way we have some evidence to actually support a direction.