Description
Update NGINX Ingress Parameters annotations to Helm
A
Okay, so this sync meeting was requested to bring some clarity to a specific issue. I'll share my screen and we'll discuss it.
A
Okay, so this is the issue that we started talking about: implement scale up / scale down switch from the deploy boards. This issue was formerly called "implement blue-green switch on deploy boards", but we changed it. So what I'd like to do — unless, Shenya, since you requested the meeting, you have an objection, let me know — what I'd like to do is just start from the beginning, from where we're actually trying to go, and then drill down to the specific issue.
B
Okay, can I — so just one thing: there are a bunch of issues on this story, right? Like, this is not the only issue that we have; there are the previous issues, and then maybe this is some sort of first issue we're going to iterate on. Does it make sense to just quickly go through, like, a rough overview of how we're going to iterate on the thing?
A
Yes, yes, exactly, that's exactly what I wanted! That's exactly what I wanted to do. So there are two epics that hold this issue. The first one is advanced deployments. Today in GitLab we support canary and incremental rollout and different permutations of that, like timed rollout and things like that, and we kind of support blue-green, but in a very complicated and hacky way, and what I wanted to do was just make it really easy to do blue-green deployments with GitLab. So.
B
One
question:
you
said:
we
kind
of
support
blue
gleam
deployment.
What
which
feature
are
you
talking
about.
A
So Amy, like two or three milestones ago, did it.
B
A
Let's not jump to the template or the UI or so. When we discussed this, we wanted to see how we can support even more advanced deployments.
A
We
discussed
a
load
balancer
implementing
a
load
balancer,
and
you
did
some
research
and
we
came
to
the
conclusion
that
the
best
way
to
implement
load
balancer
would
be
to
use
nginx
ingress
as
as
our
method
right,
yeah,
that's
the
beginning
where
we
started,
and
then
we
said:
okay,
so
we're
gonna
use,
ingress,
we're
gonna,
start
kubernetes,
first
with
with
nginx,
and
what's
the
first
thing
that
we're
gonna
do
so.
The
first
thing
that
we
decided
to
do
was
to
talk
about
different
annotations
right.
C
A
This issue, right. And after doing some additional research, we talked about starting with the weight annotation — correct?
C
Can I ask a question before we start this story? There are two epics; I've seen only one. What is the other one? Let's have the overview first.
A
That we have: update NGINX Ingress parameters annotations.
A
Sorry that I didn't verbalize my clicks.
A
C
A
Good. So what we said was: we're going to allow the users to define the weights, and the first thing that we wanted to do was non-configurable weights — so go from 0 to 100 and go from 100 to 0 — which in fact will do a blue-green deployment.
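With the NGINX Ingress controller, this non-configurable 0/100 switch maps onto the controller's canary annotations. A minimal sketch follows — the host and Service names are made up for illustration, but the annotation keys are the controller's real ones:

```yaml
# Canary Ingress pointing at the "green" Service; a second, stable
# Ingress for the same host keeps serving the "blue" Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-green                  # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    # 0 = all traffic stays on blue; 100 = full switch to green.
    nginx.ingress.kubernetes.io/canary-weight: "100"
spec:
  rules:
    - host: my-app.example.com        # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-green
                port:
                  number: 80
```

Flipping `canary-weight` between `"0"` and `"100"` is exactly the hard-coded blue-green switch described above.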
A
Okay. So now I'm going back to this issue, which probably should be linked under the other epic — I'll take an action item on that, because it's the same detail. So we had a lot of conversations around this area, and a few weeks ago we talked about this. So we discussed canary weights of 0 and 100, and then I wrote, thinking beyond the specific issue: maybe, instead of blue-green, we introduce buttons called scale up / scale down, which will give us maximum flexibility in the future.
A
B
So I still feel like we kind of jumped into that issue — no problem — so we have three or four issues before it.
B
Yeah, that makes sense. And then, like, the 2185 to-do — I'm currently working on this, and then, yeah, this is in progress — and then at the next... where do we go?
A
So, whatever is easier. I don't really have a preference, but usually API is faster, so it doesn't matter to me. We need to support both API and UI in the end, so whichever one is easier to go with, that's fine — don't care. So the next one I have is API, which is for 13.4, and this one we actually talked about asynchronously as well, where I said I think API needs to come first; maybe we should push this to the next milestone.
A
This
was
this
week.
We
talked
about
it
right.
A
B
At first we support the API and we allow users to control the weight of the Ingress via the API — public API — so we don't need a front-end effort on that issue. And at the next step we actually extend the UI to let them choose — set the weight in the UI.
A
Exactly, yes. Now there's a little bit of repetition between this issue and the issue about the scale up and scale down, because the scale up / scale down issue is like a private case of the traffic UI. Because, again, we started from implementing switch-to-blue / switch-to-green, which is basically 0 to 100, right?
A
So
the
idea
of
bluering
is
that
you
do
your
development
on
some
branch
and
when
you're
ready
to
deploy
you
do
like
in
one
shot,
you
switch
the
entire
production
to
the
new
deployment,
but
the
added
value
of
this
is
that
you,
you
can
easily
roll
back.
So
you
monitor
your
deployment,
you
check.
If
you
have
performance
problems
or
errors
or
anything
you
and
and
based
on
the
feedback.
You
decide
if
to
leave
that
as
production
or
to
do
a
rollback.
So
it's
the
first
step
to
canary
it's
even
before
canary.
C
B
So yeah, that's an interesting question. So basically canary allows more flexibility than blue-green deployment — like, currently it actually allows granular percentage rollout or header specification; there are much more advanced features — and then, yeah, the first thing I felt on that new issue is: why do we need blue-green on top of canary?
A
So
yes,
in
an
iteration
standpoint,
again,
blue
green
is
a
specific
use
case
of
canary
once
we
support
canary.
We
support
them
all.
So
if
we
can
do
everything
in
one
milestone,
yay
we
can
skip
this
issue.
It's
an
idea
to
split
it
up
because
scale
up
scale
down
in
terms
of
blue
green.
You
can't
set
the
weight.
It's
hard
quoted
right,
0
to
100,
that's
it
and
the
user
can't
set
any
other
value.
A
B
So sorry, one more question. In this issue — 218139, "Set deployment traffic weight via UI" — do we allow users to set any number, like one percent, even one percent, fifty percent, ninety-nine percent?
A
B
So, let's say we can set zero percent and one hundred percent in this issue; in the next issue we achieve the same thing — yes — just with a different button.
C
B
Actually, there's one more thing I want to discuss. So your idea is basically to set zero percent and one hundred percent on the canary deployment so that it acts like a blue-green deployment, right?
B
I
think
it's
slightly
different
from
what
user
expects
when
they're
hard
blue
link
deployment.
It's
a
like
to
me
personally
when
I
heard
this
storm.
I
feel
like
a
much
more
very
fundamental
thing.
Straightforward,
like
there
are
always
two
production
is
running
and
the
user
can
switch
traffic
in
our
current
canary
deployments
in
audio
debugs.
B
It behaves slightly, slightly differently. Like, at first the users deploy the canary instance, right, and then switch traffic to the canary to test out the new feature — the new instance — and when they promote the deployment to production, the canary instance will be destroyed, because, like, there's no point in running two instances of exactly the same thing. So the current Auto Deploy works in this way.
B
Like, there are some edge cases where it doesn't work exactly as a blue-green deployment does. So my question is: should we create a blue-green deployment from scratch — like, we define exactly how it should behave, which we don't have, we don't have such a feature today — or do we tweak the canary deployment so that it acts like blue-green and then call it that, as it is? So, like, I wanna — I wanna know this nuance. Okay.
A
So they really have, like, target groups that they switch. There are auto scaling groups, you have listener rules — which are basically, you know, networking rules, right: forward to blue for everyone, or forward to green — and this is also using Ingress, right? So they actually have the two instances and they just change the traffic.
A
B
C
A quick question on how it would look on a Kubernetes deployment, right? Like, for a Kubernetes environment, say that you do a blue-green deployment and you have eight pods active for your production environment. Then you have your blue environment set up additionally. Am I correct in assuming that it will create eight additional pods that are canary-enabled?
C
A
B
And please scroll down a bit — scroll down — and then please wait; there, there, it should render. Okay, okay, scroll up a bit, yeah.
B
So
this
is
their
new
architecture
in
our
kubernetes
od,
deploy
things
and
then
so
at
first.
The
first
level
is
nginx
english.
This
is
our
english
for
all
all
environments,
including
production
staging
and
the
review
apps,
and
then
it
drills
down
the
production
namespace
and
the
production
nemesis
also
has
ingress
yes
and
then
it
it
it
passed
through
the
traffic
to
canary
ingles,
which
does
a
bunch
of
advanced
things
and
then
controlling
a
traffic
fish
on
the
header
or
like
even
weight.
B
For
example,
fifty
point
fifty
percent
going
to
stable
track
at
the
left
or
fifty
percent
going
to
canary
track
at
the
center,
and
then
so,
basically,
in
that
in
in
the
issues
that,
like
we
let
api
to
set
the
country
head
hundred
weight,
it
basically
led
the
users
to
control
the
this
kind
of
reading
glass
in
this
diagram.
B
And
then
after
traffic
is
rooted,
their
individual
deployments
stable
means
a
production
and
it
can
remain
the
canary,
and
these
are
isolated.
Each
had
each
runs
different
application
in
insecurity,
stable
runs,
older
code
and
then
kind
of
reruns
your
code.
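The per-request routing decision described here is what the canary Ingress annotations express. As a sketch, the relevant NGINX Ingress controller annotations look like this — the header name is an illustrative choice, and when both are set, `canary-by-header` takes precedence over `canary-weight`:

```yaml
# Annotations on the canary-track Ingress object.
nginx.ingress.kubernetes.io/canary: "true"
# Requests sending "X-Canary: always" always hit the canary track,
# "X-Canary: never" never do; other requests fall through to the weight.
nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
# Of the remaining traffic, 50% goes to canary and 50% to stable.
nginx.ingress.kubernetes.io/canary-weight: "50"
```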
B
A
C
Yeah, gotcha. It does seem to me, though, that you would think a canary deployment would be a more controlled and efficient way of doing your deployment switching. Like, you wouldn't have 16 pods; you would just, like — you have eight plus one canary, all right, it's going good, all right, switch three extra, turn down a few original ones, and then slowly convert to eight canary and then make them all production.
A
I can tell you that there are other competitors that have, like, blue-green deployments, and they allow you to set a duration for how long you want the old production to live, in case of a rollback. So it's just like a safety guard for your new production, because when you think about it, most new deployments have gone through various testing phases.
A
C
B
So, one clarification on this: in this specific Auto Deploy on Kubernetes — our feature — the stable track is created first and then the canary track. Sorry — the canary track will be created first and the stable track next. So this diagram is sort of in the wrong order.
B
So — the stable track — let's say there's a stable track, and then the user deploys new code and creates a canary track, right, as in this diagram, and they switch traffic to the canary track and then test things out, and if everything is okay they switch the traffic to — sorry, promote the deployment to — the stable track, so it overwrites the stable, and then the canary will be deleted. Not, as I previously said, that stable will be destroyed — it's not.
B
And then one more thing I wanted to point out: when the stable and the canary track — when both tracks exist — it acts like a blue-green deployment, right? Because stable is running the old code, canary is running the new code, and then these two instances are running separately and we can control the traffic.
B
So this is exactly the same as blue-green, what you were describing. At this specific point, if users switch the traffic from zero to one hundred percent, then it's completely a blue-green deployment; but once the user promotes the canary to production, then the canary is going to be destroyed, so it's no longer the case.
C
A
C
B
Once we create the feature to flexibly set the annotation for weights, it's really easy to, like, set any weights on the current Ingress. So I think that — I think we.
B
A
Would we want to leave this on the deploy board? Because the other issue is not really defined yet; it just says, like, allow customization of the weights — add this to the UI to configure the weight through a text field or dropdown. Text field means that we allow any value, right? Yeah.
C
A
The biggest question is: where do we place it? So do we place it on the deploy board? I think yes; the question is the real estate here and how it looks.
C
So I was — I was thinking of actually a similar direction. It just wasn't clear to me exactly before how we were going to do, like, the deployments. The thing where I was confused initially was: are we going to switch between two different environments and, you know, re-route traffic between two separate environments? But no, we're going to use canary deployments, which is all bundled up within a single environment. So that was where I was unclear.
C
I would say we can just close this issue and use the other issue, and I'll kind of, like, come up with a solution that is probably going to use a similar UI piece as that, but it might not be the only place where we want to allow that configuration to happen. So I don't want to — I don't want to say, all right, let's only do that. Let's see what is right, right.
A
C
A
Okay, there's another, closed issue that I'd like to talk about. It's not directly related to this, but it's still under the advanced deployments epic.
A
So we talked a little bit about canary and we talked about blue-green. There's this really cool new method that I learned about that I think is really, really awesome, and I would like to also add support for it. It's called traffic shadowing, and basically the idea is that you don't deploy to another environment at all.
A
But what you do here — really without any risk whatsoever — is you check production traffic on your development. So you have your regular production and all the traffic is going there, but you have a proxy in between, in the traffic, and you mirror the traffic into where your development currently is — so staging.
A
Maybe. And then you get real, production-like traffic on a non-production environment without any risk: users don't get to see the new development — it's not exposed to anyone — but you get to see the new behavior even before it's in.
A
C
A
It may not answer all the use cases, but it definitely answers a bit of them. So the idea is that — the problem with testing in pre-production environments is that they're never like the production data, and here you get to mirror the production data on your testing environments, which is really, really nice.
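Traffic shadowing as described here can be sketched with plain NGINX's mirror module — a hedged example, with the upstream addresses invented for illustration:

```nginx
# Every request is served by production as usual; a copy is also
# fired at staging, and the mirror's response is discarded.
upstream production { server prod.internal:8080; }     # hypothetical
upstream staging    { server staging.internal:8080; }  # hypothetical

server {
    listen 80;

    location / {
        mirror /_shadow;              # duplicate each request
        proxy_pass http://production;
    }

    location = /_shadow {
        internal;                     # not reachable from outside
        proxy_pass http://staging$request_uri;
    }
}
```

Because only the production response is returned to the client, staging sees real traffic without users ever being exposed to it.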
C
A
B
A
I think it's more traffic-based, so I think it's more along the lines of advanced deployments, because in my mind feature flags are more: you meet a logical criterion, like an if-or-else. And here you're just mirroring and duplicating a portion of your production traffic into another environment.
A
C
A
So I'll repeat — you asked me whether I think this belongs to the feature flags domain.
A
And
I
don't
think
so
because
in
future
flags
you're
allowing
users
to
see
new
functionality
based
on
a
specific
criteria
that
they
meet,
and
this
is
traffic
based.
So
everything
traffic
based
is
more.
In
my
mind,
advanced
deployments
and
and
future
flags
is
more
how
you
give
a
subset
of
your
users,
the
new
functionality,
and
here
the
users,
don't
actually
get
the
new.
B
A
Yeah, but this is a totally different topic and we can talk about it asynchronously. I was just super excited about this and I really want to get this into our product. But going back to our original discussion of the Helm annotations: do we have a clear goal?