From YouTube: OpenFeature - Community Meeting, June 8, 2023
Description
Meeting minutes https://docs.google.com/document/d/1pp6t2giTcdEdVAri_2B1Z6Mv8mHhvtZT1AmkPV9K7xQ
OpenFeature website: https://openfeature.dev/
A: Okay, so, yeah, I have a couple agenda items, but I guess first we can welcome the new attendee. If you don't mind saying hi real quick, Dominic, and introducing yourself.
B: Hi, I'm new to this community. I work at Google, on Google Cloud in particular, and my team is working on experiments, which means feature flags. So that's why I'm here, because I think it can be beneficial for us to cooperate and work on that.
B: Google, of course, has some internal solutions, but I think we, like everybody, are trying to participate in open source.
A: Oh, perfect, yeah, welcome. All right, so I'll just rip through my agenda; I don't think the items will take super long, so please feel free to add anything you have or want to cover in more detail. I'm just going to give a quick update: we released spec 0.6.0 about two weeks ago or so, and basically work has been done to implement it.
A: I will be pushing one more update, but it's a pure documentation change, not a code change at all. Java is underway: Todd is basically working on events, and he said when he gets back from New York today he should be able to finish that up. That's the last part of the spec that's missing.
A: Everything else is ready to rock, and then hopefully we'll be able to release it, which obviously adds quite a few new and pretty cool features. Work has also started on Go and .NET: the issues are created and some of the spec has been implemented, but it's not fully feature complete.
A: So if anything there is interesting to anyone, feel free to claim some of the issues if you want to help implement any of that. I would assume the Go one especially could be knocked out over the next couple weeks quite easily. Any questions on that? Cool, yeah.
A: So hopefully we can wrap that up for the 1.0 SDKs fairly soon; it seems like the work has been divvied up and people are working on it right now, so all good there. The next item: I'm working on an ecosystem page. This has been a long-standing request. Basically, it's like...
A: If you go to the website, it's quite difficult to know, to use Flagsmith as an example, which providers are available for Flagsmith, or Go Feature Flag, or whatever. Right now you'd have to go to each SDK and look through the list. What I'm building is based on the Algolia interface, but I'm not using Algolia; it's basically a little search page where you can apply facet filters and things like that. Hopefully I can have something available for feedback later this week; it's pretty close, like, it works.
A: If I can get it a little closer, then I'll open a draft PR, work through that, and certainly try to address any feedback. The nice part is that we can then link to it from all over the docs and point people specifically to that page to see what's available, in which languages, and so on. So I think it can be a really nice place on the website.
A: It's just a matter of getting it fully working, so as soon as it's ready I'll post something in the channel to try to get some feedback. Really, it's basically just a very simple search page with facet filters and things like that.
A: The next one is not a new concept, of course, but it's a new area where I want to float an idea and gauge interest. There's this concept of progressive delivery. It's been pushed relatively heavily over the last couple of years, and it fits very nicely into feature flagging, so I'm gauging interest to see, from an OpenFeature...
A: ...community, and possibly even specification, standpoint, what we could do to come up with a workflow for progressive delivery. Essentially: what would it look like, between a flag management system and a monitoring tool, to do this? I have some ideas, but I'm just floating it out there. Is there broad community interest, or is it basically a couple of us who find it interesting coming together to possibly spec it out?
A: You can typically do different rollouts, or targeted rollouts: percentage-based, or based on geo, or user persona, or whatever. Even in the more elaborate systems, you would have to build out stages and say: I'm going to roll this out to two percent of users, let it sit there for some amount of time, and validate that it's working as intended before...
A: ...moving on to the next rollout phase. I'm trying to figure out what it would take to spec out the communication between the two tools and what would be expected, especially on the observability vendor side; I think it's pretty straightforward on the feature management and rollout side. What would the feature management tool send to a tool like Dynatrace, for example, and what would the expected results be to say: yes, we're all good here to move on to the next...
A: ...phase of the rollout. You can kind of do it already, but it's pretty primitive for the most part: you would ask, what's the response time on this service? I'm looking at doing it more granularly: what was the response time for users hitting this particular feature flag? Now that query engines are becoming more powerful on traces, that becomes possible, and I'd like to think about what we could do, and maybe spec that out, because it does affect OpenFeature in some ways. It would be...
A: ...these are the types of values we need to capture in our SDKs and make available to hooks, so that we can have OpenTelemetry hooks that add the correct metadata to the trace to do this. It basically already works, I've tested it out, but we would also want to standardize parts of it where possible. So that's it, in kind of a nutshell.
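[As an aside, the hook-plus-span-event idea described above can be sketched roughly like this. This is a minimal stand-in, not the real OpenFeature or OpenTelemetry API: in practice a hook would call `add_event` on `opentelemetry.trace.get_current_span()` from inside an OpenFeature SDK hook, and the attribute names here follow OpenTelemetry's (still-incubating) feature-flag semantic conventions.]

```python
# Sketch of an OpenFeature-style "after" hook that records a flag evaluation
# on the active trace as a "feature_flag" span event, so trace queries can
# slice metrics (latency, errors) by flag and variant. The Span class below
# is a toy stand-in for an OpenTelemetry span.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Span:
    """Stand-in for an OpenTelemetry span: just collects events."""
    events: list = field(default_factory=list)

    def add_event(self, name: str, attributes: dict) -> None:
        self.events.append({"name": name, "attributes": attributes})

def after_evaluation_hook(span: Span, flag_key: str, provider: str, variant: Any) -> None:
    """Attach the evaluated flag to the current span."""
    span.add_event(
        "feature_flag",
        {
            "feature_flag.key": flag_key,
            "feature_flag.provider_name": provider,
            "feature_flag.variant": str(variant),
        },
    )

span = Span()
after_evaluation_hook(span, "new-checkout", "flagd", "treatment")
print(span.events[0]["attributes"]["feature_flag.key"])  # prints: new-checkout
```

[The flag key, provider name, and variant above are invented for illustration.]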
A: It's still a bit theoretical from my perspective, though. I'm still trying to think through who would be responsible for which part of this flow, what the payloads would look like, what this contract would essentially look like, and whether it even makes sense to live in OpenFeature, because it's not necessarily SDK spec; it's essentially a special interest. Or...
D: Do you expect some flag configuration changes based on this? Like, switching off the flag automatically, or that kind of stuff, basically?
A: What's the failure rate, things like that. If there's a 100% failure rate on that one percent, we can abort basically immediately; there's no chance that's going to be a good release. And then you've completely minimized the blast radius of that bad feature, essentially automatically.
A: You couldn't really do that traditionally without trace-based metrics generation, because it would be lost in the noise. If you looked at the response time, or the failure rate, for one percent of requests, or a subset of that one percent, like 0.1 percent or something, you'd probably never see it; it would never register as an anomaly. So that's kind of the core idea: what we could do at these stages.
A: You keep watching those, but possibly reduce the frequency as we get closer to full release. So that's the idea there. I think it could be a cool way to get people to come together around this use case, and if we could standardize it, it could work across a lot of flag vendors, but also across a lot of observability tools that are capable of that granularity.
B: This is exactly as I understand it: you have this very minimal progression of the rollout of your feature, and you want to detect very quickly if something is really, really bad. So this is kind of canary analysis: you compare the instances that have the flag enabled with the ones that don't, and then you basically see.
B: Sometimes there is an improvement, and the metric is moving in some direction and you know it, but a big change is more often a symptom of something bad. From the runtime perspective, as I understand it, it's important to report those metrics; but on the other side, the vendor side, it's pretty complex, because you can imagine a system where you have hundreds of flags and you are rolling them out in different...
B: ...aspects, and if you have that mechanism in the background, in the back end, sorry, then it's really much better for DevOps or SREs to find the root cause of the problem.
A: It'll almost be like the next phase, I would say. At first, when you first release it, it'd be very aggressive about, like, RED metrics and the important health signals of the feature, and then, as soon as it's deemed healthy at a certain point, you can move into some kind of experimentation.
A: Or whatever you want to call it, yeah. But I think it's just an interesting idea, so I wanted to float it out there and get people thinking about it. It's something I'm thinking about quite a bit now, and we'll see if it makes sense to spin off a special interest group or something as part of OpenFeature, to see if we can spec something out, or, you know...
A: If people aren't interested, then we won't do it, but I think there is going to be some interest in this, and I think from Dynatrace's perspective we can help make modifications to the OpenTelemetry spec, or at least push for that, which then makes it very easy, or easy-ish, to consume from a querying standpoint; that's where it becomes way more powerful than traditional metrics. And then on the flag vendor side, they're already doing a lot of the controls.
A: They already have targeted releases, so what does that communication look like? Then we can have really interesting, lower-risk feature flag releases; essentially lower risk, because you're never eliminating risk.
C: So yeah, Mike and I had a very brief chat about this at the beginning of the week, and I sat with some of the members of the Flagsmith team and just sort of put together a super high-level idea of how it might work.
C: This just seemed like probably the simplest workflow. One of the things that's missing here, which I've just realized hearing you talk, is that there's an aspect around the telemetry as well: having Dynatrace, or an equivalent, be able to say, this is the P95 latency for people with this flag turned off, and this is the P95 with the flag turned on, with that flag being the thing that's changing, so that they can...
C: ...you know, so you're not lost in the noise, kind of thing. There are different ways of solving that problem as well: obviously you could use OpenTelemetry or something like that, and some of the OpenFeature SDKs have that already, right? So that's something that's actually missing from here that would be important, I think, but the...
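[To make the cohort comparison concrete, here is a toy sketch: split request latencies by whether the flag was on or off and compute the P95 for each cohort. All numbers are invented; in practice the observability tool would run this as a trace query keyed on the flag metadata on each span.]

```python
# Per-cohort P95 latency: the flag-on vs. flag-off comparison from the
# discussion above, computed with a simple nearest-rank percentile.
import math

def p95(samples):
    """Nearest-rank 95th percentile of a non-empty list of samples."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return ordered[rank]

requests = [
    # (latency_ms, flag_enabled) — invented data
    (110, False), (120, False), (105, False), (130, False), (118, False),
    (240, True), (260, True), (255, True), (300, True), (270, True),
]

flag_off = [ms for ms, enabled in requests if not enabled]
flag_on = [ms for ms, enabled in requests if enabled]
print(p95(flag_off), p95(flag_on))  # prints: 130 300
```

[With the cohorts separated like this, a regression confined to the flag-on population stands out instead of being averaged into the service-wide number.]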
A: Actually, it's a span event at the moment, so tools that are able to query spans and generate metrics from them, like Dynatrace, can already grab, or essentially generate, time series on demand for requests that had a particular feature flag evaluated. So it's basically a matter of continuing to vet that out and figuring out which metrics make the most sense; there are some pretty obvious ones, but maybe some less obvious ones too. And then: at what frequency would we want to check them?
A: What kinds of rollout strategies would we want to support? There are a couple of different ones, and maybe you do, you know, the top three: a percentage-based one, possibly user-persona-based ones, and geo, or something like that. I don't know, that's hypothetical, but you come up with those groupings, and then it's: which metrics do we want to check, and at what frequency?
A: What would we determine to be a healthy stage before the system could proceed automatically? Because I really don't like the idea of a system, I mean it's fine, but a system that turns a flag on for one percent for, say, two days and then moves to the next stage automatically. It needs to have a quality gate of some sort and, probably more importantly, to abort extremely quickly.
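[A minimal sketch of the control loop being described: a staged rollout with a quality gate and a fast abort. Everything here is illustrative, the stage percentages, the thresholds, and the `check_failure_rate` callback (which stands in for a query against the observability tool) are made-up names and numbers, not anything from the OpenFeature spec.]

```python
# Staged rollout with a quality gate: advance through percentage stages only
# while the observed failure rate stays healthy, pause for a human when it is
# merely elevated, and abort immediately on a catastrophic rate so the blast
# radius never grows past the current stage.

STAGES = [1, 5, 25, 50, 100]  # percent of users per stage (illustrative)
ABORT_THRESHOLD = 0.5         # abort instantly above a 50% failure rate
GATE_THRESHOLD = 0.01         # a stage is "healthy" under a 1% failure rate

def run_rollout(check_failure_rate):
    """check_failure_rate(stage_percent) -> observed failure rate in [0, 1].
    Returns ('released', 100), ('paused', stage), or ('aborted', stage)."""
    for stage in STAGES:
        rate = check_failure_rate(stage)
        if rate >= ABORT_THRESHOLD:
            return ("aborted", stage)  # no chance this is a good release
        if rate > GATE_THRESHOLD:
            return ("paused", stage)   # quality gate failed; don't advance
    return ("released", 100)

# A healthy release passes every gate:
print(run_rollout(lambda stage: 0.001))  # prints: ('released', 100)
# A completely broken feature is caught at the very first 1% stage:
print(run_rollout(lambda stage: 1.0))    # prints: ('aborted', 1)
```

[A real system would also wait a dwell time at each stage and recheck at some frequency, which is exactly the part the discussion above leaves open.]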
A: If we detect that everything's failing, there's no reason to continue. So yeah, that's it in a nutshell. I don't have all the answers yet; I'm just starting to think about it right now, and I wanted to float the idea out there, and if there's interest, you know, we spin up...
A: ...maybe a dedicated Slack channel or something, and a couple of other things, just to talk specifically about that, or we could do it in the existing one, but let's see whether we want to trend in that direction. If nothing else, I'm working on that basically, and if there is interest outside of even OpenFeature, if we decide it's not appropriate in the spec or something, we can maybe work on it there. So that's basically it. Any questions or thoughts on that before we move on?
A: Cool, all right.
E: Hey everyone, yeah, I just wanted to create another newsletter, and I want to keep this on a rolling basis, trying to publish a newsletter every month; I'm thinking of doing it mid-month. And I'm wondering what the best way is to gather all of the updates and feedback from everybody in the community, rather than just me trying to gather everything that I think is important. So I was thinking...
A: Yeah, I mean, we tried something similar; it worked okay, I would say. I think it's still... it's not going to fix all your problems, I suppose, in terms of...
A: All right, yeah. Any other topics or questions or concerns or anything? I think I posted the notes again one more time, so if there's anything, feel free to add it; if not, we'll wrap up, and you guys can think about progressive delivery workflows.
A: All right, cool, yeah, well, I appreciate everyone's time. It was good talking to you again, and yeah, have a nice day.