From YouTube: Feature Flags Office Hours 27 Aug 2020
Description
Discussion about dogfooding and Unleash vs. Flipper
A
So it seems like there are two different paths right now, Unleash and Flipper, and we're kind of going down parallel paths here. On the one hand, most of our developers are using Flipper via the ChatOps project, and on the other hand our team is developing the Unleash solution, which we're offering to customers, and I feel like we should be trying to get more teams on board for dogfooding the Unleash solution.
A
Since that's what we're offering. And I know that Shinya had some performance concerns about Unleash; we talked about some of the pros and cons of both approaches. So, on the one hand, Unleash has an advantage because it supports a lot more languages than Flipper.
C
Can you open the issue now? Can you share your screen? I just summarized in the issue why it's difficult to switch over.
C
Yeah, so this is a brief architecture summary that I tried to write so that literally anyone can understand it. In the last part, if you scroll down, there's why it's so hard. We talk about performance concerns a lot, but what is it exactly?
C
That's your question, right? There are many questions, so I summarized them there once more, and these are the things we need to tackle, that we need to resolve.
A
So let's start; let's just go one by one and see if we have any mitigations. Obviously this is an effort, and there's going to be some trade-off on development time, so we don't need to go into that; we know about that situation. The first one is a rough estimate: there will be between 1,500 and 2,000 threads in production for polling, and I assume this is the clients polling the server, asking "who am I, which flags do I get?"
A
So we had talked about different solutions. At the moment there are a few questions. One is: are 1,500 to 2,000 polling clients that bad? How many feature flags do we actually have right now, with active clients that are actually polling, where we can say this is a problem? That's question number one that we need to figure out. You don't have to answer it; I'm just raising the question. Question number two is what we figured out here.
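As a back-of-the-envelope check on that first question, under the assumption (not stated in the call) that each of those clients polls on a 15-second interval, the load works out roughly like this:

```ruby
# Rough arithmetic for the polling load mentioned above; the 15-second
# interval is an assumed default, not a number confirmed in the call.
clients = 2_000
poll_interval_seconds = 15.0

requests_per_second = clients / poll_interval_seconds
puts requests_per_second      # ~133 requests per second
puts requests_per_second * 60 # ~8,000 requests per minute
```

Whether that rate is actually a problem depends entirely on how cheap each individual request is to serve.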
A
Isn't the smartest thing to just have a set duration of polling? We talked about a few solutions. One of them was an incremental polling mechanism, so maybe it starts at 15 seconds, then goes to 30 seconds, and then to a minute, so you don't have everyone polling at the same time. And another option, which Chase had talked about, was WebSockets.
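The incremental mechanism described here could be sketched roughly like this. This is a hypothetical illustration, not actual client code; the class name, the 15s/60s bounds, and the jitter factor are all invented for the example:

```ruby
# A polling interval that starts at 15 seconds and doubles up to a
# 60-second ceiling, with a little random jitter so a fleet of clients
# does not poll in lockstep.
class BackoffPoller
  def initialize(initial: 15, max: 60)
    @initial = initial
    @max = max
    @interval = initial
  end

  # Returns the delay in seconds to sleep before the next poll,
  # then doubles the stored interval (capped at the maximum).
  def next_interval
    current = @interval
    @interval = [@interval * 2, @max].min
    current + rand(0.0..(current * 0.1)) # up to 10% jitter
  end

  # Call when the flag state changed, to resume eager polling.
  def reset!
    @interval = @initial
  end
end
```

A client loop would sleep for `next_interval` between polls and call `reset!` whenever a poll returns fresh data.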
D
WebSockets, or some other kind of mechanism that is not polling-based but more push-based.
D
It's like an alternative to that. I mean, there are other things we can do inside this particular problem, right? Caching, I think. But the thing that I'm kind of coming around to is: well, this is a big change to everything.
D
We just need to account for that effort, of how we're going to do this and which ones we support, and then we say "thank you for using GitLab feature flags, but you can only use the Python library now, because it's the only thing we've updated," or whatever. Or maybe there's a phased solution where you can have polling for a certain amount of time, but we deprecate the usage of those over time somehow. I don't know.
C
I just want to add a few more colors on this: Unleash is basically not maintained by us, and the Unleash clients as well.
C
So we don't have control here, because it's not owned by us.
B
Okay, yeah, a few thoughts from me on the Unleash backend. My initial thought was to use Rails because it was the easiest, but I always anticipated that Unleash's architecture is not performant, and that at some point we'd just move the API into something more performant, where the amount of requests and some kind of resiliency are not an issue anymore.
B
I'm not saying that we should be doing that, but I'm kind of thinking that one way to approach it is to have the Unleash API backend live as a separate service, a separate microservice that a customer can deploy and that uses our API.
B
I mean, it doesn't have to be; it could be our API, but have this kind of backend API for Unleash be separate from GitLab, right? If GitLab Rails is not performant, then a lot of these concerns about the amount of requests are not really concerns anymore, because if you deploy this microservice on your infrastructure, it can ask GitLab even every second, but it's going to be one request per second versus 20,000 requests, or one thousand requests per minute, because it would be caching; the content that you are storing is highly cacheable. But the second aspect is resiliency. Rails has a very strong dependency on the availability of GitLab.com, and now my question is: what happens on ops.gitlab.net during upgrades, when we restart Sidekiq, or when we restart Rails on GitLab.com?
B
What state is it going to use? Who holds the last state in this kind of highly distributed architecture? I think, whatever we choose, a microservice written in Go, probably, because that would be the easiest and the most performant, could be using some kind of cache, or it could hold this kind of persistent state; it would always have the latest state of the feature flags. It would be so small that it would be fast to respond to the Unleash API.
B
It would be able to multiplex, but it would be resilient to temporary unavailability of the feature flags API. And then the concern of whether we are polling, or whether we are pushing over WebSockets, and how polling is maintained, I think is not really the problem. It's not really a problem that we fire 2,000 requests every 15 seconds.
B
If you are serving cached data, that's fine; the problem is if we have to generate all this data over and over and over and over, then it's very non-performant. But this microservice, since it would be living separately from GitLab, could be managed by the ops team and could provide all the feature flags basically in cached form. So I think my main point is: polling sounds very terrible, but polling from a local, highly optimized service is not that terrible anymore, because the cost per request is very minimal, since you are mostly serving cached data. But the second aspect is resiliency. Flipper today gives us resiliency: the data from Flipper is stored in the database, the same database that GitLab is serving data from, which means that if the databases are unavailable, GitLab will not function either. But what happens if your storage for the feature flags is unavailable? So initially it could work, but Flipper has a completely different resiliency approach that we don't currently support with Unleash, and, coincidentally, we could solve resiliency for Unleash by solving the performance problem as well, by making this more local to your installation.
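A minimal sketch of that multiplexing-and-resiliency idea. The names are hypothetical and the upstream is a stand-in object, not GitLab's actual interface; the point is only the shape of the mechanism:

```ruby
# A local proxy that holds the last known flag state and refreshes it
# from the upstream API on its own schedule. N application nodes polling
# the proxy produce at most one upstream request per refresh interval,
# and if the upstream is down, the last known state keeps being served.
class FlagProxy
  def initialize(upstream, refresh_interval: 1)
    @upstream = upstream # any object responding to #fetch_flags
    @refresh_interval = refresh_interval
    @cached = nil
    @fetched_at = nil
    @upstream_requests = 0
  end

  attr_reader :upstream_requests

  def flags(now: Time.now)
    if @cached.nil? || now - @fetched_at >= @refresh_interval
      begin
        @cached = @upstream.fetch_flags
        @upstream_requests += 1
        @fetched_at = now
      rescue StandardError
        # upstream unavailable: keep serving the last known state
      end
    end
    @cached
  end
end
```

This is the "one request per second versus 20,000" trade-off from above: every client call hits the proxy's cache, and only the proxy's own refresh timer reaches the upstream.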
B
So these are my random thoughts. My overall thinking is: we may solve polling.
B
We may solve WebSockets, but we're not going to solve resiliency in that model, and I think that is the most important aspect to cover with this feature: that applications can function in a fully distributed architecture when the feature flags API is not functioning for various reasons. There's going to be an outage, or ops.gitlab.net is going to be taken down for maintenance, and everything is going to explode, and if we now make GitLab.com strongly dependent on it, it's going to be very bad for the stability of the whole system.
D
So if I understand correctly, the client libraries for Unleash have some sort of caching mechanism, so even if the API that they're calling, to get a refreshed state of what the flags are, is unavailable, they will still continue to use whatever internal state they have in memory?
B
So the client libraries persist the state differently; they implement it differently. But there's a structural difference between the client libraries and Flipper: Flipper allows you to model a common state, a shared state, but the Unleash clients, the last time I looked, rather store their state individually. So if you have 100 nodes, each of these 100 nodes is going to have its own state.
C
Yeah, so actually it persists the file, the backup file, onto your own instance, so we need a file system, and it persists into /tmp or some such place, and this file could be gone anytime, for example in a cloud-native architecture. It's risky to rely on this file, I would say, to rely on this locally cached file.
B
It's super risky, because each redeploy of the application in cloud-native results in a completely new workspace that you are working on, and you lose this kind of cached data. Persisting the cache as a file in an ephemeral container is subject to being broken.
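The backup-file behavior being described, and its fragility, can be illustrated with a generic sketch. This shows the pattern, not the actual Unleash client code; the class name and file path are invented:

```ruby
require "json"
require "tmpdir"

# Flags fetched from the API are mirrored to a local file; on startup,
# the file seeds the client when the API is unreachable. In an
# ephemeral container that file can vanish on every redeploy, which is
# exactly the risk discussed above.
class FileBackedFlags
  def initialize(backup_path)
    @backup_path = backup_path
  end

  # Mirror the latest known flag state to disk.
  def store(flags)
    File.write(@backup_path, JSON.generate(flags))
  end

  # Returns the persisted flags, or nil when the file is gone
  # (e.g. a fresh container with an empty /tmp).
  def restore
    return nil unless File.exist?(@backup_path)
    JSON.parse(File.read(@backup_path))
  end
end
```

On a long-lived VM the restore step papers over API outages; in a container whose /tmp is recreated on every deploy, the file, and with it the fallback, silently disappears.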
D
I'm trying to reconcile; I think you and I may be talking about sort of a similar path, in that it may not be Flipper and it may not be Unleash, but in order to do this properly, and to do this in the best possible way for ourselves and for our customers, it needs to be...
D
Something
potentially
something
that
we
like.
Maybe
we
take,
take
some
from
unleashing
some
from
flipper
some
ideas,
but
we
put
them.
You
know
we
own
the
solution
and
it's
not
just
something
off
the
shelf
right,
like
here's,
this
micro
service,
that
we
are
going
to
build
to
to
manage
and
maintain
all
of
these,
the
the
feature
flags
for
ourselves
internally
and
for
external
customers
through
the
same
solution.
D
But here it is as a standalone, separate thing. Maybe you were talking before about writing it in Go or something else. While I think I agree, and I'm thinking that that is maybe the end state, that seems like a year away; it seems like a long time to get to that place.
D
I'm wondering if there's something more we can do incrementally, to get ourselves from here into whatever the next stage is, where we can see more internal customers being able to use Unleash, or being able to use feature flags in the way that the Unleash implementation works, or something.
B
It's not our default path, because it requires different competencies and it takes a lot of attention. For example, we could maybe align Unleash to behave more like Flipper in the architecture, so that it provides this kind of caching layer and the shared state.
B
So I guess there are a lot of unknowns about how to handle that, and at least in the other parts we very rarely decide to do it. We may decide to contribute very small additions, but we don't really do structural improvements, and this would be a kind of structural improvement that changes the whole philosophy of how the library works, basically, or how it interfaces with the external system.
A
I
understand
so
so
a
question
that
I
have
is
if
we
do
go
the
way
that
chase
had
mentioned
and
do
it
iteratively
taken
leash
as
it
is
today,
and
then
we-
and
I
think
we
discussed
this
also
when
we
were
trying
to
do
the
minimal,
ruby,
client
and
and
we
kind
of
went
off
that
path-
make
some
kind
of
separate
area
separate
server.
If
you
want
to
think
about
it
like
that,
that
will
do
the
api
calls.
A
That could be the architecture that we recommend to self-managed users, as well as using it ourselves while we learn. It could be managed by ops or someone else, and then having it not be part of the GitLab solution itself would mean that dependencies on this server or on that server wouldn't matter in case something goes down. Does that make any sense?
B
In any case, there is also one other important aspect to remember: whatever we do with the feature flags dogfooding, and how we use feature flags internally (it could be only Flipper or whatever), it has to include support for on-premise installations, because this is how we work: we ship a feature, and our customers and our support team extensively use the feature flags to disable features that may or may not be broken.
B
The
current
architecture
of
the
unleash
doesn't
offer
any
of
those
reading.
I
mean
it
could
probably
offer
if
you
deploy
the
only
server
like,
maybe
somewhere
in
the
infrastructure,
but
it's
not
like
making
solution
as
it
is
today.
It
kind
of
requires
an
additional
component.
B
But how does this affect our on-premise customers? They are using the same GitLab, and they need to have a way to toggle these flags.
A
I think it is supported for self-managed today, and it basically uses the Unleash server libraries as well. I'm not even sure that we have a manual telling the customers what to do; we probably rely on Unleash itself. When Amy was working on this, I think she mentioned using a separate server entirely for it.
A
Customers use Unleash; they manage the flags themselves through GitLab, and the code is stored in GitLab source control and everything, but the URL from which the clients ask for the flags themselves is on a different server.
B
I'm kind of thinking that, whatever we do with the dogfooding, we should aim to have the same single solution, the same single framework, and that kind of implies that we need to support GitLab.com, staging.gitlab.com, dev.gitlab.org, our QA environments, and our on-premise customers being able to toggle feature flags, because this is what we are doing.
B
This
is
what
our
support
team
is
doing
to
overcome
some
problems,
and
it's
also
like
to
add
a
little
more
complication
like
in
the
long
term
like
like
these
people
might
have
on
the
three-person
configuration
today.
B
So I'm kind of having random questions, like: we had this discussion about providing a Flipper-compatible API. What happened with that?
A
With that, we're kind of trying to figure out where to put our efforts right now, because on the one hand we have spent months now developing Unleash, and yet we're having problems convincing our own developers to use it. So that's where we're at right now. I'm not sure yet that I want to add Flipper into the solution; I would first like to know that we checked all our possibilities to support what we already developed, before moving on to a different solution.
A
It could be that we'll get to "Unleash is not performant and we need to switch to something else," but I'm not sure we're there yet. I understand the concern that we don't own the code and that we need to figure it out and manage the architecture, but I think we need to do that anyway, regardless of whether it's Unleash or Flipper or a GitLab-proprietary feature flag.
B
Because at least the architecture and the flexibility of the Flipper library kind of overcome a lot of the problems mentioned. And my idea in the past was that maybe the easiest way, instead of writing this proxy (because we talked only about Unleash), is to provide two APIs, one for Flipper and one for Unleash, and basically make them our feature flags APIs. Then we don't really have the problem with the API, and probably not with the resiliency that much either; maybe slightly. There is probably something that we should improve, like the expiry times of the flags, but Flipper...
B
Flipper has multiple layers of caching, and it can use a shared third-level cache; it's something that we use today as well. It uses the same polling mechanism, but the frequency of polling is significantly lower, due to the caching in the framework, which you can very easily use and which we are using today.
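The layered lookup being described might look roughly like this. This is an illustrative sketch of the general pattern; the layer names and interfaces are invented, not Flipper's real internals:

```ruby
# Check an in-process memory cache, then a shared cache, and only fall
# through to the database (the most expensive step) on a double miss.
class LayeredFlagStore
  def initialize(shared_cache, database)
    @memory = {}           # per-process, fastest layer
    @shared = shared_cache # e.g. a Redis-like store; a Hash here
    @db = database         # any object responding to #[]
  end

  def enabled?(flag)
    return @memory[flag] if @memory.key?(flag)
    if @shared.key?(flag)
      return @memory[flag] = @shared[flag]
    end
    value = @db[flag]      # the one expensive read
    @shared[flag] = value
    @memory[flag] = value
  end
end
```

The point of the ordering is that the database read happens only on a miss at both cache layers, so the effective query frequency against the database drops sharply even though clients keep polling.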
D
Some sort of bridging between the two. However long-term that resolution might be, it's fine for the short term; if that's how we get to a place where more people can use this internally and vet the solution, then maybe that's fine.
D
I
mean
like
I
it
and
it's
like
the
same
thing
if
we
went
the
other
way
right,
if
flipper
has
all
like
the
performance
gains
and
all
the
things
that
we
need.
It
just
lacks
all
of
this
clan
support,
but
we
can
then
create
a
second
adapter
or
some
other
http
layer,
2
flipper.
That
has
that
then
we
can
build
out
all
the
client
libraries
in
such
a
way
that
works
with
that
solution
great
like,
but
then
we
own
that.
D
But
then
it's
the
same
thing
we
talked
about
before,
where
it's
we're
owning
that
solution
and
those
client
libraries
are
like,
then
the
this
is
what
you
now
must
use.
You
must
migrate
away
from
using
this
library
to
this
library
and
how
like
what
that
transition
looks
like,
and
how
can
we
make
that
happen
and
what
level
of
effort
are
we
willing
to
extend
to
to
get
there
right?
D
I
think
it's
that's.
I
think
that
is
probably
part
of
this
is
the
difficult
thing,
at
least
for
me,
it's
like!
Well,
yes,
it's
like
yes
to
this
way.
Yes
to
this
way,
sure,
but,
like
I
think,
to
a
reit's
point
before
like,
are
we
there
yet
and
are
we
willing
to
abandon
six
months
worth
of
work
or
longer
to
to
like
do
it
about
face
to
go
a
different
direction
and
rebuild
from
a
different
place?
A
It
I
think
that,
and
we
we're
really
out
of
time-
and
I
really
need
to
go,
but
I
think
that
we
need
to
figure
out
the
architecture
of
what
the
engine
is
for
frying
itself,
because
at
the
end
of
the
day,
when
we're
talking
about
a
sas
solution,
we
need
to
serve
many.
Many
many
client
requests
by
an
amount
of
projects
and
the
amount
of
developers
and
environments
that
we
serve
today
and
it's
a
very
big
number.
A
So
I'm
very
concerned
about
performance,
and
I
think
we
need
to
nail
that
down
regardless
of
flipper,
unleash
whatever
it
doesn't
really
matter
again.
We
need
to
find
out,
where
is
the
right
place
to
put
in
the
product
who
needs
to
manage
it
and
what
are
the
requirements
for
that?
So
I
think
that's
the
key
in
any.
B
So, at any rate, I think you're not going to avoid building some kind of proxy, or using some kind of proxy.
B
But maybe a customer doesn't need to deploy this microservice yet, because they only have two application nodes and it's good enough for testing; it's really up to them. But they have the tools to make it performant, and we have very minimal maintenance on our side, because we use the same APIs we have; we just provide something that is a caching layer and is not so susceptible to rate limits, because we don't have a problem with rate limits when we multiplex requests through that proxy service.
B
So
I
think,
like
it's
unavoidable,
like
to
to
have
something
in
between
to
make
it
performance
like
two
installations
having
a
thousand
nodes.
Basically
we're
not
gonna,
be
able
to
like
serve,
let's
say
100
weeks
from
one
client
per
60
seconds.
It's
like
it's
not
visible
from
the
cpu
time
database
time
anything
else,
but
on
the
other
hand,
as
rewriting
the
team
to
go,
I'm
not
very
convinced
that
this
makes
a
lot
of
sense.
B
It
seems
like
the
business
logic
is
easier
if
it's
in
the
ruby,
but
also
like
what
we
are
serving
is
not
highly
changing
information
so,
and
there
is
pretty
big
markup
price
that
we
pay
on
serving
from
the
race
which
we
don't
really
have
to.
There
is
something
like
feature
aware
and
can
catch
that
efficiently.
I
mean
this
is
exactly
what
we
do
with
flipper
right.
We
have
free
three
layers
of
the
caching
between
engine.
I
mean
you,
you
like.
B
We
have
three
layers
for
between
you
requesting
a
feature
flag
and
us
checking
the
database,
which
is
like
the
most
expensive
operation
to
perform
and
like
like
what
I'm
saying
it
doesn't
differ
at
all
from
that.
It's
just
moves
that
away
into
slightly
different
component
that
I
hopefully
would
make
it
let's
say
fairly
easy
to
maintain,
because
it
would
be
coherent
for
all
frameworks
for
different
languages
that
we
support,
but
we
do
not
maintain
them
where
we
kind
of
provide
this
the
app
on
the
server
side.