From YouTube: Kubernetes Federation WG sync 20180418
B
So the update is that, in a thread with the steering committee, we have worked through the due diligence on what the process should be for donating existing repos to the kubernetes-sigs org. If anybody doesn't remember, or is watching this later: we have a project called Federation v2 that we arrived at consensus to try to donate to the kubernetes-sigs GitHub org. We donated it, and then some folks on the steering committee felt that we were a little premature, because that process was primarily intended for new repositories, not donations.
B
The third requirement is that you have to demonstrate that the dependencies are acceptable, meaning the licenses of the dependencies are acceptable. I actually have a lengthy gist where I typed out an email that I'll send to the steering committee later that articulates all of this. I was going to CC the SIG Multicluster mailing list on the thread so that everybody would be aware.
C
This particular feature existed for quite a long time in our existing Federation code, and I think it has also been demoed in multiple places, so I'm just going to touch on this briefly. There's a mechanism right now for discovery of a federated service. It basically happens through DNS, so anybody can discover a service across clusters. It follows a naming scheme.
C
Actually, the naming is like this: the service name, the namespace, and the federation name, followed by "svc", then the zone name and the region, and then the federation's DNS domain name. This is the naming scheme used to discover a federated service, and it will ideally be resolved to the nearest service shard.
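For reference, a minimal sketch of the v1 naming scheme just described; the concrete service, federation, and domain names here are hypothetical examples.

```go
package main

import "fmt"

// buildFederatedDNSNames sketches the Federation v1 naming scheme described
// above. The three names (zone-, region-, and global-level) are what get
// programmed for every federated service.
func buildFederatedDNSNames(service, namespace, federation, zone, region, domain string) []string {
	return []string{
		// zone-level name, e.g. nginx.ns.myfed.svc.us-central1-a.us-central1.example.com
		fmt.Sprintf("%s.%s.%s.svc.%s.%s.%s", service, namespace, federation, zone, region, domain),
		// region-level name, e.g. nginx.ns.myfed.svc.us-central1.example.com
		fmt.Sprintf("%s.%s.%s.svc.%s.%s", service, namespace, federation, region, domain),
		// global name, e.g. nginx.ns.myfed.svc.example.com
		fmt.Sprintf("%s.%s.%s.svc.%s", service, namespace, federation, domain),
	}
}

func main() {
	for _, n := range buildFederatedDNSNames("nginx", "ns", "myfed", "us-central1-a", "us-central1", "example.com") {
		fmt.Println(n)
	}
}
```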
C
This is achieved by three or four CNAME records. If there are any endpoints existing within that zone, the zone-level name will be written with A records for that particular load balancer's IPs; and if there are none, if no targets of the service are resolvable within that zone, then a CNAME record will be written pointing to the region-level name, and the same thing happens at the region-level DNS name. So for any service, three DNS names are written: a zone-level name, a region-level name, and a global name.
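A minimal sketch of that fallback chain, under the assumption (with hypothetical names) that the load-balancer IPs per zone are already known:

```go
package main

import "fmt"

// record is a simplified DNS resource record, for illustration only.
type record struct {
	Name, Type string
	Targets    []string
}

// recordsForZone sketches the fallback chain described above: write A records
// at the zone-level name when the zone has load-balancer IPs, otherwise write
// a CNAME that falls back to the region-level name (which in turn falls back
// to the global name when the region has no endpoints).
func recordsForZone(zoneName, regionName string, zoneIPs []string) record {
	if len(zoneIPs) > 0 {
		return record{Name: zoneName, Type: "A", Targets: zoneIPs}
	}
	return record{Name: zoneName, Type: "CNAME", Targets: []string{regionName}}
}

func main() {
	// A zone with endpoints gets A records...
	fmt.Println(recordsForZone("s.ns.fed.svc.z1.r1.example.com", "s.ns.fed.svc.r1.example.com", []string{"10.0.0.1"}))
	// ...and an empty zone falls back to the region-level name.
	fmt.Println(recordsForZone("s.ns.fed.svc.z2.r1.example.com", "s.ns.fed.svc.r1.example.com", nil))
}
```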
C
So this is exactly how it was happening for internal clients; for external clients, they had to give the complete DNS name, I think. This is how it was happening previously. I think most probably we are going to go ahead with similar stuff. Do you agree with this, or do we need to change anything? Do we have any alternative? That's my point: is this good enough to go out, or do you want to change something?
C
There are slightly confusing terms here: it's talking about the zone, but these are availability zones. Each cluster has attributes like the region and zone it belongs to, so whenever a cluster is provisioned, it is annotated with the region and a zone. We are trying to construct the DNS names using those zones and regions.
D
I mean, I don't think there's really much concern about going ahead with the previous functionality. My main concern, as expressed in the last meeting, is making sure that we externalize the programming of DNS providers: not kube-dns necessarily, but Google Cloud DNS, Route 53 and everything. Basically, the goal being, I mean, you'd really need Paul to sort of address it to maybe understand the broad strokes better than me just saying words, but it's more about how it's implemented, yeah.
C
Yeah, we can jump on to that particular part, but there are small issues we ran into during this. One of them is about the naming scheme: it seems we are clashing with another naming scheme related to StatefulSets, so I think we might need to change the naming scheme a bit. Also, there seem to be more dots, which can be a problem for the DNS servers, it seems. So I think we need to take another look at this.
C
Another issue is around AWS: when it provisions the load balancers, it returns the load balancer as a hostname instead of IPs. In Federation v1 we were resolving the IP and writing multiple RRDATAs within the DNS names, and those used to go stale, maybe within a day, and there was no way to update those DNS records. That was one of the problems which existed in the previous implementation.
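To illustrate the staleness problem, a minimal sketch (the ELB hostname is a hypothetical example): resolving the load-balancer hostname and writing the resulting IPs as A records freezes a snapshot that AWS may rotate away within a day, whereas a CNAME to the hostname keeps following it.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Hypothetical ELB hostname; AWS hands back a hostname, not stable IPs.
	elb := "my-lb-1234567890.us-east-1.elb.amazonaws.com"

	// What Federation v1 effectively did: snapshot the IPs and write A records.
	// These IPs can rotate within a day, leaving the A records stale.
	ips, err := net.LookupHost(elb)
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("A records (snapshot, goes stale):", ips)

	// The alternative: write a CNAME to the hostname, so resolution always
	// follows whatever IPs AWS currently assigns.
	fmt.Println("CNAME record (stays fresh):", elb)
}
```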
C
Right, and so we are going to have a new API object which will collect the status of the load balancer in each cluster, and we'll create an object something like this. We can debate a few of these fields: whether we need to put this in status or in spec, and I'm not sure whether we need the clusters part.
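As a rough illustration only, not the actual proposal, such an object might look like the sketch below; all type and field names here are hypothetical.

```go
// Hypothetical sketch of the API object being discussed: a per-service
// record of load-balancer status collected from each member cluster,
// which a DNS controller can then consume.
package federation

// ClusterDNSTarget captures where one cluster's shard of the service can be
// reached, plus the zone/region the cluster is annotated with.
type ClusterDNSTarget struct {
	Cluster string   // member cluster name
	Zone    string   // availability zone the cluster belongs to
	Region  string   // region the cluster belongs to
	IPs     []string // load-balancer IPs, when the provider returns IPs
	Hosts   []string // load-balancer hostnames, e.g. on AWS ELB
}

// FederatedServiceDNS aggregates the targets for one federated service.
// Whether this belongs in spec or status is exactly the open question
// raised in the meeting.
type FederatedServiceDNS struct {
	Service   string
	Namespace string
	Targets   []ClusterDNSTarget
}
```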
D
I think I have kind of the same thought in my mind. This is a resource that Federation uses to program an external DNS configuration mechanism, so to me it more properly belongs in the spec. The fact that it's machine generated doesn't really, I think, detract from the fact that some mechanism is going to use this as a spec to program the DNS.
B
I agree: I think that name-based correlation seems like a very powerful mechanism to use to make things super clear and avoid the unnecessary referencing that we would otherwise have to do. Just for context, I was writing this up and kind of playing around with it, and had planned to open an issue in Federation v2. I'll still do that, unless we get through this call and everybody is like, "Paul, you should have just written that down; it was a total waste of time."
C
Okay, so this is how it looks: we are collecting all the information related to what is needed to program DNS for this federated service discovery. The only slight additional thing, I think, is that each cluster is currently associated with the zone and region information, which needs to be outlined for each of these clusters.
A
Yeah, what I wanted to specify before going into more details: there is some portion of this federated service discovery functionality actually written down in kube-dns, which kicks in only if a particular service is considered federated, and a service's DNS record is considered federated based on the naming scheme. So our goal ideally should be to reuse that naming scheme, to reuse that particular functionality which is already implemented, and we ideally should not need to update anything in there in the future.
E
Actually, just to say, I actually made that mistake before. The problem is, if you use a hyphen as a separator and one of the names itself has a hyphen in it, it then becomes ambiguous: imagine "a-b" could be the single name "a-b" or the two names "a" and "b", which is confusing. Whereas dots are not allowed in the names, so they are unambiguous separators. So yeah, right.
F
I was going to suggest that one approach would be to just get something basic working in the alpha version, and in parallel we can try to figure out what the actual names should be, because the amount of implementation that depends on the actual naming scheme chosen is fairly minor. So we could just go with some reasonable naming scheme in alpha, and then in parallel, between alpha and beta, we could figure out whether there is a better naming scheme. You know, we thought about this fairly hard before.
F
I don't actually think there is a good one. Well, put it this way: between Tim Hockin and myself, and other people who know much more about this than me, we struggled very hard and didn't come up with a particularly good outcome. So rather than hold things up on that basis, I would just go ahead with a semi-reasonable scheme, and we can change that between alpha and beta.
C
Coming to the new implementation: there is a new project initiated by Justin called external-dns. It seems to support quite a lot of DNS providers out of the box right now, and the design is good enough that we can plug in the information we have to program the DNS. This is a thought I am having; I may be wrong, it is just a thought, and I need to prototype further. But just to give a brief overview of that design.
C
I think it is more or less this; this is how it looks. There is a Source: an interface which basically provides the DNS endpoints that need to be programmed, and a controller will be getting these endpoints. And there is a Registry interface, which is similar to our Federation DNS provider interface but slightly different: it has only a couple of methods, Records and ApplyChanges. The Records calls go to the actual provider to get the existing records.
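Roughly, the external-dns design being described looks like the sketch below. This is simplified from the project at the time, so treat the types as a sketch rather than the exact signatures.

```go
// Simplified sketch of the external-dns design described above; the real
// types live in the external-dns project itself.
package dns

// Endpoint is one desired DNS record (name -> targets).
type Endpoint struct {
	DNSName    string
	Targets    []string
	RecordType string // "A", "CNAME", ...
}

// Changes is the diff the controller computes between desired and actual.
type Changes struct {
	Create, Delete       []*Endpoint
	UpdateOld, UpdateNew []*Endpoint
}

// Source supplies the endpoints that need to be programmed; services,
// ingresses (or Federation) would each implement this.
type Source interface {
	Endpoints() ([]*Endpoint, error)
}

// Registry fronts the actual DNS provider: read current records, apply a diff.
type Registry interface {
	Records() ([]*Endpoint, error)
	ApplyChanges(changes *Changes) error
}
```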
C
The controller will then calculate the desired records versus the actual records and create a plan, a change list, and that is what is submitted to the actual DNS server. So I think we could pretty much reuse most of this, except, I think, the controller and the source. If we change the source part and the controller part, we can get all the provider implementations out of the box.
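Continuing the sketch above, the controller loop being described would be roughly the following; computeChanges is a hypothetical stand-in for external-dns's plan calculation.

```go
// reconcile sketches the loop described in the meeting: read desired
// endpoints from the Source, read actual records from the Registry,
// diff them into a change list, and submit it.
func reconcile(src Source, reg Registry) error {
	desired, err := src.Endpoints() // what the Source says should exist
	if err != nil {
		return err
	}
	actual, err := reg.Records() // what the provider currently has
	if err != nil {
		return err
	}
	return reg.ApplyChanges(computeChanges(desired, actual))
}

// computeChanges naively diffs by DNS name; the real plan logic also
// handles updates, ownership, and record types.
func computeChanges(desired, actual []*Endpoint) *Changes {
	have := map[string]bool{}
	for _, e := range actual {
		have[e.DNSName] = true
	}
	want := map[string]bool{}
	for _, e := range desired {
		want[e.DNSName] = true
	}
	c := &Changes{}
	for _, e := range desired {
		if !have[e.DNSName] {
			c.Create = append(c.Create, e)
		}
	}
	for _, e := range actual {
		if !want[e.DNSName] {
			c.Delete = append(c.Delete, e)
		}
	}
	return c
}
```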
E
I think that sounds great. Zalando are the people that are really driving most of the external-dns work; it was sort of a union of three or so DNS projects, and Zalando is certainly doing 99% of the work in that external-dns project. I would hope you could just add another source, so that someone could install the external-dns controller and it would work with their other DNS as well.
E
I
did
I
didn't
actually
want
Netta
suggestion,
which
is
in
the
consuming
in
the
in
the
cluster,
which
is
consuming
a
federated
service.
If
we
exposed
that
consumption
as
a
kubernetes
service
today
that
can
be
of
type
external
DNS
and
it
can
point
to
whatever
DNS
name.
We
we
decide,
but
in
future
it
could
also
do
like
a
more
direct
path.
E
So,
for
example,
on
some
a
diversity
and
I
provider,
some
GCE
configurations-
you
can
do
pod
2
pod
networking
across
clusters-
and
you
could
imagine,
having
you
know
like
maybe
sto
and
future-
does
some
cross
cluster
stuff.
So
it
might
be
that
if
we
had
a
service
in
the
consuming
clusters
that
we
could
swap
out
the
underlying
mechanism
in
the
transport
in
future
I,
so
it
might
not
be
dns-based,
but
no
one
would
be
any
the
wiser
as
it
were,
but.
D
Maybe call it internal, even if it's between clusters; but then there's the question of how I get traffic into clusters from elsewhere, where I may not have control over DNS. I just remember having this discussion before, and if there's not clarity around those two use cases and they're kind of conflated, some things, I think, get a bit messy. So that's my sense, yes.
F
I just seem to remember that one of the main problems was that if a pod looks up a service, it needs to get different IP addresses depending on what the current status of the external services is. So if there is a service in that cluster, it needs to get the local service IP; and if there is a service nearby that fulfills that name, basically, then it needs to get whatever that resolves to.
E
I
am
coming
back
to
this
after
a
while
I
think
it'll
be
great.
Maybe
a
controller
that
sits
in
each
cluster
could
reprogram
the
surfaces
dynamically
and
do
the
failover
in
that
way.
But
yes,
it
certainly
like,
as
no
pointed
out
definitely
only
for
internal
that
I'm
talking
about
and
I
just
want
to
make.
My
goal
here
is
to
try
to
see
whether
there's
a
for
internal
across
cluster
services,
a
configuration,
let's
go
direct
pod
to
pod
without
DNS,
without
a
tie.
D
I'd rather not implement that separately; just reuse it and get that sort of support for free, relatively speaking. How does the integration work? We talked about a source: does it have to be built in? How will the source communicate with the controller if it's not a built-in component?
D
I guess my takeaway from Paul's sort of thought exercise and that gist was that it's very useful to be able to communicate with the controller with data, not with code. So in this example, being able to provide enough data that a controller could ingest, like having a common API resource that a controller could ingest, is what I'm trying to get at, I guess, and I'm wondering.
D
Any
talk
of
doing
that
so
that,
rather
than
having
a
compiled
in
source,
you
would
have
a
common.
You
know
API
resource
or
resources
that
the
controller
ingested
and
then
a
number
of
sources
could
ingest,
like
you
know
a
bunch
of
data
and
create
that
API,
because
API
resource
or
resources
that
the
controller
would
then
consume.
Does
that
make
sense
it
does
think
it
do
you'd
like
to
me
look.
E
No, and I think I would raise it with them. For the cases of an ingress or an annotated service, I think for an ingress in particular, we want an easy user experience, so you don't want a user to have to create a second DNS record. But it has value in your scenario, yeah.
B
I
think
it
has
value
for
us,
because
it
would
be
nice
to
just
be
able
to
spit
out
some
resource
and
have
something
that
we
could
plug
into
that
resource
instead
of
a
tight
coupling
between
Federation
and
external
DNS.
It
sounds
like
this
is.
This
is
an
issue
for
the
external
DNS
folks
to
me,
and
it.
D
I mean, the idea that you have a built-in source is just a convenience thing; implemented that way, it's hard to extend, and that's kind of something we've been fighting with in rebooting Federation. In an ideal sense, I would expect that there'd be a source that would ingest these services the way the currently compiled-in one does, generating this intermediate format, and have the controller consume that instead. That opens the door for anybody to implement their own source of those API resources out of tree.
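As a hypothetical sketch of that decoupling (the names below are illustrative, not an existing API):

```go
// Hypothetical sketch of the decoupling being proposed: sources write a
// generic DNS-endpoint API resource, and the external-dns controller watches
// that resource instead of compiling each source in.
package dns

// DNSEndpointSpec is the intermediate format a source (Federation, an
// ingress controller, anything out of tree) would generate. It reuses the
// Endpoint sketch from earlier.
type DNSEndpointSpec struct {
	Endpoints []*Endpoint
}

// DNSEndpoint is the API resource the controller would watch and reconcile
// against the Registry, exactly as if it came from a compiled-in source.
type DNSEndpoint struct {
	Name      string
	Namespace string
	Spec      DNSEndpointSpec
}
```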
E
Anyway,
I'm
just
talking
I'll
spawn
I,
don't
speak
for
them,
but
I
can
certainly
imagine
that
if
it
was
me-
and
you
came
to
me
with
two
proposals-
one
of
which
was
to
create
a
federated
cluster
record
and
one
of
the
create
a
DNS
record
that
was
very
generic
I-
would
obviously
lean
towards
you
know:
I'll
take
the
generic
one.
Thank
you
very
much
so
yeah
yeah,
I,
guess
we'll
see
how
it
goes.
I
guess
I
had.
F
A
quick
question
about
this
external
DNS
does:
what
is
the
relationship
between
external
DNS
and
our
current
DNS
provider
package?
Is
there
any
do
they
reuse,
DNS
provider,
or
should
we
deprecated
that
completely
and
say
we
don't
need
it
anymore,
because
people
should
use
external
DNS
for
how
do
those
two
projects
relate.
E
The
in
they
they
don't
use
the
DNS
provider
code.
The
cops
is
trying
to
get
onto
the
external
DNS
project,
but
it's
still,
you
know
it's.
It's
still
some
differences,
so
I
think
the
DNS
Pro
header
code.
We
pulled
that.
We
that
from
KK
into
cops
I,
think
it
since
been
deleted
from
KK
I.
Don't
know
if
you
have
another
copy,
that's
somewhere
else.
E
If you can use external-dns, that would be great. I think the big gotcha you should be aware of is that, in order to tag resources, it creates TXT records to create an association with the sort of source resource, so that it knows who owns things, which is important for cleanup. But yeah.
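For illustration, the ownership convention works roughly like the sketch below; the exact payload format and owner identifier are the project's convention and configuration, so treat this as an approximation.

```go
// Sketch of the TXT ownership convention described above: alongside each
// record it manages, external-dns writes a TXT record naming the owner, so
// that cleanup only touches records it created.
package dns

import "fmt"

// ownershipTXT returns the TXT value written next to a managed record,
// roughly "heritage=external-dns,external-dns/owner=<owner-id>".
func ownershipTXT(ownerID string) string {
	return fmt.Sprintf("heritage=external-dns,external-dns/owner=%s", ownerID)
}
```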
C
Okay, so are there any other points to be discussed?
F
It sounds like, sorry, Justin, I'm going to answer on his behalf: it sounds like they had existing code that was essentially a duplicate of DNS provider, and it's not that they considered DNS provider and excluded it for any reason; they just already had implementations of all the stuff, so they used that.
A
But why do you think there might be confusion? We can say this is the v2, and let's talk about that; this is the v1, and that's how you differentiate.
D
Eventually,
we
don't
want
to
converge
on
cute
fed
being
named
when
we
formally
deprecated
v1
and
there
isn't
really
room
for
confusion,
but
I
guess
the
thinking
was
that
in
the
near
term,
calling
it
something
different,
allow
someone
to
go.
Oh
you
mean
v2,
okay
or
or
call
it
cube,
fed
I
mean
I.
Just
think
that
having
the
name
be
distinct
might
be
useful
to
make
sure
that
nobody
gets
yes
but
I.
That's
just
my
opinion.
H
Yeah
I
think
that
if
we
want
to
name
a
cube
Fed
for
immediately,
then
I
think
we
should
probably
tack
on
a
two
just
to
make
them
distinct.
The
way
we
are
with
the
Federation
would
be
to
stuff
at
least
until
kind
of
the
message
propagates
out,
and
we
feel
that
there'd
be
no
more
confusion.
Of
course,
once
you
name
it
cube
head,
then
you
backtrack
to
name
a
cube
Fed.
That
could
also
introduce
some
confusion.
F
One
of
the
thoughts
I
had
I
thought
about
this
little
bit
in
the
past
as
well.
Wouldn't
it
be
better
to
just
keep
the
cube
head
tool
and
have
it
support
v1
or
v2
with
a
flag
that
says
you
know,
set
up
a
v2
cluster,
or
rather
a
v2,
Federation
or
b1
Federation
just
come
online
flag,
so
we
thought
technically.
D
The only real thing that kubefed is going to be doing, at least in the near term, is joining, so they aren't really the same, and it's not really doing things in the same way it was with kubefed v1, because the deployment scenario is very different. So it's not just "I have a tool, and it does the same thing for two different things"; it actually does quite different things.
D
In Federation v2 we're currently working with aggregation, and we're using apiserver-builder, basically, as the deployment mechanism for that. Going forward, that may even change if we move towards CRDs, in which case it'll be more like installation of CRDs. We've already stepped down from deploying etcd, deploying an API server, and running controllers, to just running controllers and then configuring aggregation; and maybe in the future it'll be just running controllers.
D
So the only thing we need to do is the join. I'm not saying there doesn't need to be automation around it; there's going to be documentation, because first it should be Kubernetes, then you have to deploy the cluster registry, then come the Federation API and the controllers, so three separate steps. There may be room for automating that, but it'll be kind of weird to have apiserver-builder and then layer stuff on top of that.
F
Makes sense. So the question then becomes: if all we're doing is "kubefed join", or whatever we call this thing that joins clusters to a Federation, and all that is really doing is creating CRDs essentially, is there a generic tool for creating CRDs, or is every CRD vendor expected to build their own kubectl equivalent to handle their kinds of CRDs?
B
You can use, for example, kubectl edit with a CRD, and you can use the basics of kubectl there. There is no custom generator command that will generate a new one, like "kubectl create my-crd", but they have very similar features to what you can do generically with resources in kubectl.
A
I have two points. One is, in the previous week we had this discussion that we think the future path for us is converting whatever we are writing right now into CRDs, but there is a possibility that CRDs don't mature as well as we hope, or we might face issues, like the issues we have with versioning in the API, in which case we retain the current implementation, which is using a dedicated, aggregated API.
A
So
in
this
case,
in
this
case,
what
we
talked
at
deployment
of
the
controller
and
API
server
is
handled
by
the
API
server
builder
command.
Do
we
expect
to
package
that,
if
and
when
we
sort
of
create
our
first
package
as
part
of
our
Deniz,
or
it
makes
sense
to
have
a
tool
just
like
you
back
now,
the
cupid
that
we
have
had
earlier
now
to
do
that
job.
A
The
v1
actually
has
gone
through
this
path,
so
it
started
as
just
the
ml
files
which
people
can
deploy
for
controlling
for
API
server
that
devolve
into
a
tool,
because
that
was
a
little
hard
and
hell
implementation
has
one
drawback
that
is
I,
think
to
do
with
odd
vegetable
or
something
I.
Don't
you.
D
That goes away when we talk about aggregation, because now we're just piggybacking on a kube API server that already has SSL configured, and there's no requirement to do separate SSL. So effectively, it should be possible to do a Helm chart; that's what I'm trying to say. So then, having a tool to join a cluster may make sense, I mean, it's kind of a nicety in terms of creating the service account and all that other stuff, but I don't think we need it for an initter anymore.
A
Okay, so does naming of this tool still remain open, or should we conclude? I'm sort of okay with letting it not be named kubefed right now, for some time, because the main concern seems to be people differentiating between this and the existing v1, and probably in future we can merge back into that name.
D
Of
a
Kurtz
mean
like
yeah,
I'm
I
think
it's
fine
to
have
like
half
a
placeholder
name
and
definitely
like
until
we
figure
out
what
we
want
to
do.
I
think
for
me
this.
This
whole
discussion
has
raised
the
the
need
at
some
point,
to
have
a
transition
plan
and
I'm,
not
saying
we
need
to
discuss
to
here,
but
like
okay,
we're
gonna
have
something
we're
gonna,
Federation
v2
and
it's
gonna
I
mean
oh
I.
Remember
it
was
so
we
need
to
set
the
versions
at
some
point
like
right
now.
D
The
federation
of
ET
repo
we're
just
using
like
point
zero
one
or
something
like
whatever
the
default
lowest
value
for
versioning
is
probably
at
some
point.
We're
gonna
want
to
synchronize
with
cube,
certainly
before
we
do
a
release,
and
maybe
that
can
be
the
cutover
like
once
we
actually
do
a
release.
It
becomes
like
cuvette
version,
one
point,
twelve
or
whatever
it'll
be.
When
let
me
do
this
release
and
then
they'll
be
it'll.
Be
easy.
It's
like
well,
don't
want
cube,
fed.
There's
no
version.
D
I have one sort of procedural thing that I'd like to raise. I have grown increasingly frustrated with using GitHub reviews, just in reviewing some of the stuff that's been going on around getting join in, and it occurred to me that, since we're no longer in k/k, we no longer have this requirement to be universal and use, basically, the lowest common denominator, which is GitHub review. So for me, for any complicated review that I do, I would want to use Reviewable.
D
I
am
happy
to
serve
as
a
resource
for
anybody,
who's
curious
and
it's
like
to
also
get
started
with
review
ball
I
mean
clearly
if
you're
fixated
on
using
reviews
and
you
hate
reviewable,
then
I'll
have
to
figure
out
a
strategy
for
dealing
with
that.
But
if
you
like
quality
reviews
for
me,
maybe
you'll
have
to
tolerate
reviewable.
At
least
in
the
near
term.
Till
github
means
every
sound.
Does
that
make
sense?
It.
D
I mean, it's a polarizing issue. I think GitHub review is fine unless you've used good tooling, like the tooling that's internal to Google, or Gerrit, and other tooling like that, and you're serious about wanting to keep track of what you're doing.
D
Derek Carr, and maybe Philips, at KubeCon, I want to say 2016, had a developer thing, and Derek really didn't like Reviewable; he thought the UI was terrible. When you want to do reviews, especially on his phone, it was just impossible, and GitHub review was not impossible. So it depends on your use case, I think.
D
Think,
and
it
depends
how
close
that
you
are
working
with
the
people
who
are
writing
the
code
and
the
complexity
of
the
code
you're
reviewing
in
terms
of
the
size
of
the
PR,
so
it's
really
dependent
on
the
on
the
person.
I
mean
some
people
need
full-blown
IDs
and
other
people
are
happy
with
them.
So
I
can't
say
this
is
a
one-size-fits-all
tool.
It's
more
on
the
full-blown
idea
of
code
review.
D
If
that
doesn't
suit
you
then
I'm
not
gonna
force
you
to
use
it,
and
you
know
we
can
work
things
out,
but
it
may
mean
that
you
know
your
PRS
need
to
be
smaller,
so
I
can
actually
digest
them.
That's
me
personally,
I
mean
if,
if
anybody
wants
to
use
different
tooling,
if
they
want
gonna
use,
github
reviews
I'm
not
preventing
that
I'm,
just
stating
a
preference
for
me
to
start
using
reviewable,
more
and
I
will
help
analytic,
serious
or
one
judgment.
D
And
I'm
happy
with
that
approach.
I
think
that's
reasonable,
but
I
just
wanted
to
raise
the
visibility
of
this,
because
I
I've
started
using
a
review
all
this
week
and
I've
been
super
happy
and
how
it's
a
lot
of
made
a
few
more
productive
as
a
reviewer.
So,
as
we
pick
up
speed,
I'm
expecting
I'm
hoping
to
use
it
a
lot
more.