From YouTube: Community Meeting, February 22, 2022
A: I just started it. Stefan, do you want to moderate?
C: All right, so is it recording already? Yeah? All right. So welcome, everybody, to the February 22nd community meeting of kcp. We have two topics on the agenda. I think the first one is Paul, about issues. Paul, if you're here?
A: He is under the weather today, but he and I spoke about this, so I can comment. And you're not sharing at the moment, I don't think, Stefan? No? Okay, do you want to share? All right.
A: Let me... okay, while you're doing that: we do have a lot of issues and PRs open in the project. Some of them are old and maybe have been fixed or are no longer relevant as the project has advanced, and some of them certainly still are. So this was just a reminder to periodically go through and triage or retire issues and make sure that things aren't getting stale. We don't have Prow enabled in the org or on the repo, so we don't have any time-based changes like the stale and rotten labels.
A: So I think part of this is around talking about what sort of labels we want to have and what sort of triaging we want to do.
A: Does it make sense to try and do it periodically through these community meetings, maybe not weekly but periodically, or just do it async as a community? That's really what this is about. I'm sure if there's any discussion it would be very welcome, and we can talk about that now. If folks want to think about it and have some feelings that they want to comment on async, that's cool too.
D: Okay, do we have a work item for it captured anywhere, like "we need to think about this when we reach this time horizon"? I don't know if we'd actually put that down somewhere.
E: Go ahead. Just for whatever it's worth, Knative went through this process a lot, as I imagine Kubernetes did as well, and the thing that actually moved the needle was using GitHub Projects, especially the new version of it.
E: So I would suggest that, whatever way you want to settle on triaging, at least try to have a visual aid through GitHub Projects. That's usually the easiest way, because all the new issues that are not yet triaged end up visible there, and whatever you're working on is in the state where you have your focus. There are always going to be a lot of issues that are new and/or stale, especially without stale bots, so you want to have that out of your visual focus.
A: Yeah, I like that idea. I know Stefan had set up a project for kcp. It's in the org, not in the repo itself, but it is there in GitHub. I would say we probably haven't been doing a great job at actively maintaining it, so coming up with a process for doing triage repeatedly through a project would be good, and yeah, just making sure that we have it set up in a way that works for us too.
A: Yeah, I can take a stab at it at some point. Unfortunately everybody's super busy, and this sort of thing may fall through the cracks a little bit and get lower priority, but it is important that we don't just have an ever-increasing number of issues.
C: Also, sorry, go ahead. One thing I started: I added a prototype milestone. Maybe we could use that to track project work.
C: Yeah, we can talk offline, Andy. I like the idea of having something we do at the end of the call, like reviewing new items.
A: Yeah, we did that when I was working on Cluster API, and it at least got visibility in every meeting for the stuff that had come in. Even if it was just "we don't really know what to do with this, let's put it in a needs-more-investigation category", that's better than nothing.
F: Okay, so, per the description below, to get started: when we create a service account on a physical cluster, what Kubernetes does is install the relevant secrets for it.
F: That's what I'm trying to do when installing Argo CD on kcp. I'm pulling in the manifests of Argo CD on kcp, and I'm not trying to worry about what it is performing on my physical clusters, but I'm trying to compare against what an ideal situation would look like: when I directly deploy Argo CD on my normal Kubernetes cluster versus when I do it with kcp.
F: I found out that one of the pods was running perfectly fine, but the other three were not, and the main reason was the service account token not being delivered to the physical clusters. So that's the thing that I wanted to raise.
A: So the funny thing about the code is in the Go portion of it, not the command-line bit. The Go portion would work with a client cert if you had one, but the command line requires that there be a secret. Let me... is it, but is it...?
D: There are two APIs, right? If you create a service account today in kube, a secret gets created. That will not be the case forever; kube is changing away from that. But if you create a secret and say that you want that secret for a service account, a token will be created. That's our public API. Is this doing the former or the latter?
D: So probably this is going into the bucket of... if I trust my gut here, the right thing for us to do is to never support this in kcp. First off, this is an operator, and operators don't fit the "workloads just work by default" capability, especially for something that, in base kube, we're trying to turn off. That's my reasoning.
A: What's the new workflow going to be?
D: There's what we always supported, which is: if you want a secret for a service account, you shouldn't be using the service account's own secret. You should create a secret, set the annotation that says which service account you want the secret for, and a controller should create the token. We should probably support that. I don't see a reason not to, because there are operators that do that, and that is the official way in kube. Reading the service account token directly is not actually supported.
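The annotated-secret flow described here is the documented Kubernetes mechanism for requesting a long-lived token; a minimal sketch (the names are illustrative, not from the meeting):

```yaml
# Illustrative manifest: ask the token controller to populate a token
# for an existing ServiceAccount named "argocd-server".
apiVersion: v1
kind: Secret
metadata:
  name: argocd-server-token
  annotations:
    kubernetes.io/service-account.name: argocd-server
type: kubernetes.io/service-account-token
```

Once applied, the token controller fills in the `data.token` field, which an operator can then read.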
D: In that sense, there's no public API contract for it. It just happens to work, and we are talking about removing that in kube in general, in the long run. But the other one is TokenRequest. TokenRequest, long term, is the best possible interface for a controller that needs a service account token of something to ask for.
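For reference, TokenRequest is served as the `token` subresource on a ServiceAccount; a sketch of the request body (audience and lifetime here are illustrative assumptions):

```yaml
# POSTed to /api/v1/namespaces/<ns>/serviceaccounts/<name>/token
apiVersion: authentication.k8s.io/v1
kind: TokenRequest
spec:
  audiences:
    - https://kubernetes.default.svc
  expirationSeconds: 3600
```

The response carries a short-lived, audience-bound token, which is the property that makes it preferable to stored secrets.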
D: If I had the choice between those two, I'd probably lean towards TokenRequest, but I don't know which we'd actually prefer. So the mindset is: the basic scenario described here we may never want to support, and we need to document the workaround. Probably in some of these cases creating the secret is a little bit easier: I create the secret, set the annotation, and I'll link in a second where that's documented. And then the question is how often this is going to come up for non-operator workloads versus operator workloads.
D: I would guess a lot of these interactive-style CLIs are probably doing this; more formally deployed controllers probably aren't. So you have an alternative, you just don't get the easy flow. This one's kind of an interesting one, but yeah, I don't know that this is...
D: This behavior is something we have kind of said, even in SIG Architecture, isn't part of the actual supported API of kube, even though people rely on it. So there will be clusters in the future that do not support this, and then this starts to cross into: we're probably going to break some people coming over. Is the break significant enough, and do we have enough reasoning that someone could figure out why the break is there? And then, you know, have that architectural principle captured.
A: So I think there's two action items here. One: Samyak, if you pre-create a service account and pre-create a secret with the right annotation, which will get linked in from the documentation, you should be able to make some progress. You may hit other stumbling blocks, but you should be able to get past this issue.
D: This is an important one. This probably should go in one of our core workload docs. In the transparent multi-cluster doc, Jason had called out parts of this, but this is an implication of it: transparent multi-cluster is not trying to transparently support operators talking to the underlying cluster. So, now that we have a concrete example of it failing, we should probably clarify how we would reason about it.
D: There may be a bug in Argo CD here, and technically there is. Jordan has some comment somewhere that says we're going to stop doing this, because SIG Auth has been talking about this for a while. We're four or five years into pod identity, and the original goal of pod identity was to nuke service account tokens out of secrets. So this is a great example of something SIG Auth should already be doing, and may already have started; someone may actually be going and doing some canvassing.
D: We should be able to find a SIG Auth issue which says: hey, that's what we recommend, here's what you should think about. And then this may be input to the SIG Auth deprecation of this flow, because TokenRequest, ultimately, is the thing that replaces it, and the clouds are already doing that. In a cloud, you won't be able to get some types of secrets like this that would let you act as a principal. I don't know that Argo cares about it, but, you know, the tokens generated...
D: And this is great because, I think, is there a SIG Auth meeting this week? I can take it to SIG Auth; if it's next week, I'll take it to the next SIG Auth meeting as a concrete follow-up there.
G: Oh hey guys, so I was facing another issue that's very closely related to the one that's been talked about now. My use case is basically running Pipelines on kcp and Triggers on kcp. That is successfully running, but once the setup is there and I actually trigger a pull request, for instance, where I've set up the triggers to listen to the pull request and run some TaskRuns, it's actually looking for a service account.
G: But of course that's not synced with the physical cluster, and that's where it's saying: hey, I'm not able to find the service account. So I just want to confirm, based on what Clayton was just saying: can I use secrets here as well?
G: If so, there is this dependency problem. I'll just borrow the things you have said on Slack, on the thread where Samyak pointed out that if we have to create secrets, then, in the case of multiple clusters, do we continue creating secrets on every cluster? And suppose the secret is changed or needs to be updated or deleted: do we again have to do this manual work of updating it on every cluster in the multi-cluster setup?
D: We basically said we weren't going to automatically propagate service accounts. That's not part of the transparent multi-cluster contract, because if you automatically propagate the service account down, and the secrets down, and you use that, it means you're tied to that cluster.
D: But we would want use cases where you've set up something consistent across those clusters, so that something shows up in the workspace and you can use it. I think there's a couple of pieces here, and I think I missed the actual use case for the secret. So can you walk me through it one more time: what, in the physical cluster, is specific to that physical cluster?
D: And what's the generic part of the use case? Being able to get a token that lets you talk to the service account in the logical cluster, in the workspace, is something we want to support; being able to access the service account on the physical cluster is not. So which one is it? Is it accessing the workspace service account?
G: I think a token should work, so I'll just reiterate. There's this Trigger setup, and then there's an EventListener set up on the kcp cluster. Now, if I trigger a GitHub PR, the event listener is able to listen to this and get the pull request, but when it's supposed to actually do some action on it, that's where it's searching for the service account, and that's where it's failing. So I'm not sure if accessing the token would fix this problem or not.
D: Yeah, this is a pretty deep one. Maybe this is one where, if we want to take the time now, or if we have time at the end, we can go through it. I think we'd probably want to break the use case down and figure out the bits. I don't know that I have the full model in my head, and without Jason here...
D: How is the prototype 2 overall experience going? Do we have some things we're ready to start getting feedback on and having people critique?
A: So we merged my PR that fleshed out the bulk of the remaining demo script work. Joakim has graciously, and with a lot of work, updated his PR to get the ingress controller in tree. I am gonna do one final pass as soon as I am no longer in meetings, which will be right after this, and hopefully we can get that merged today or tomorrow. We need somebody to record the demo.
A: That's on a Linux host, because trying to get ingress working on a Mac, through a VM running Docker or Podman, is not for the faint of heart. So maybe Stefan or somebody else could record that. And we need to update the content, like the README and any other collateral, to highlight the new value props that are coming in. So which format is it, is it a video?
A: I just can't test the ingress part, but everything else worked for me. Okay.
H: It's not there yet. I mean, we need to merge the ongoing PR for the ingress controller, and I will add some parts to actually do something with the ingress controller in the demo. Right now it's not doing anything otherwise.
D: I looked at it the other day but then got distracted. Is there a new demo script? Where's the new demo script?
A: I mean, it's been in there for a while, it just wasn't fleshed out. Okay, okay.
D: So that's one where a call for feedback on kcp-dev would actually be really good once it's in a state for that. When this goes in, do you think it's in a state for people to kick the tires, or is there another follow-up to it?
D: And then the Mac comment was kind of interesting. Part of my head was like, okay, what is the best foot forward? Are we going to assume people are going to be on Macs? Is there a step below the ingress functionality, or a partial ingress functionality? What's the minimum flow that would still allow someone on a Mac to see the basic potential? Is it this, or is it something else?
A: Yeah, I struggle with that one, because we have very aggressive timelines for all of our successive prototypes, and the effort to try and get ingress plumbed through on a Mac is purely for the local kick-the-tires experience and testing work. Nobody's going to deploy kcp on a Mac in production and expect that ingress is going to work.
A: It's only going to be on a Linux or maybe Windows host, right? So I just don't know that we can justify the time right now.
A: Cluster failover is visible looking at the API objects: you can see my deployment was on cluster one, I killed cluster one, now it's on cluster two. It would be icing on the cake if you could just keep going to one URL and have that continue to function, none the wiser that the physical compute transitioned, but I don't know that it's critical.
D: This is the eternal tension. We're hunting for two things. We're hunting for product-market fit, right, which is: can you concisely and clearly demonstrate the ideas to your audience? And every potential user that you exclude is someone who might actually be part of your audience.
D: Conversely, the time you spend on that has to be well justified. I'm hearing the time-spent consideration, and then I think we don't really have an advocate for the audience yet, so finding an advocate and looking for what that minimal thing would be... I agree, Andy, this is a tough one. I struggle with this too.
D: If you could hit three times as many users in the pitch, are you going to have three times as many potential long-term users? Does that change your growth curve? That's kind of the trade-off we're trying to figure out here. For P2, I'd probably say I can buy the argument which is: hey, ingress is still this slightly more complex thing on Mac, here's a set of instructions.
D: Here's how you would get to it yourself, we don't have it automated, but here's what you would see. On a Mac, the minimum step is probably that you can see failover, and then the question for P3 is: is the audience we're reaching broad enough? So maybe we can follow up on that later. This is a hard set of trade-offs.
D: And this is a little bit like kind versus MicroK8s versus kube core versus k3s, even, where in the early days kube tried to reach an audience, and some of these additional projects were a lot of effort. Don't get me wrong, kind is, you know, 10 to 15 person-years of work at this point. Would that have made kube more successful in the early days?
D: We don't know. Kube kind of had that natural gravity to it, but it was a missing gap in the ecosystem, and we're just trying to figure out what the same lesson learned from Kubernetes is. So we'll get to that; I'll do that as a follow-up with you guys, and we can talk through some of the options as well, maybe for prototype three.
A: There was a question in chat: do we have to do something different to set up ingress using kcp, or should it be similar to what we usually do? I think the answer is you don't have to do anything differently, but Joakim, I'll defer to you.
H: Well, no, you don't have to do anything differently. I mean, yeah, you will need an ingress controller in the physical clusters. It's more about how you sync the ingress from kcp into the physical clusters, and that's what the ingress controller is taking care of. It's a similar approach to the deployment splitter, which is doing something similar but for ingresses.
H: No, I mean the ingress controller exposes an Envoy control plane just for local development and testing. So, instead of relying on hosts, you know, on DNS, which is something that can be messy to configure locally, we use an Envoy just to have an endpoint you can hit, and that will take care of it. The ingress controller that we have right now can run without Envoy and basically propagate the ingresses between physical clusters.
D: It's interesting too, because I know Craig and them are into the DNS programming. Certainly you can imagine a localhost entry, programming DNS via /etc/hosts on a Mac. I know there are projects out there that tried that for some of the multi-cluster stuff previously. The problem, ultimately, is that all local development sucks, and some of it just sucks less than others.
D: What is the core concept we're trying to get across? Is it that ingress is possible, or that someone locally would do this day to day? You could program localhost, you could have the proxy running on the Mac, you could have a more rigorous solution that looks like a smaller version of what you might use in a production environment.
D: Kind of what you have, Joakim, is what I would call the bottom of a production setup, and then, you know, we've imagined hypothetical higher-level things on a single host or multiple hosts. None of them are going to be identical; there are going to be six different approaches. Maybe even just describing that concept clearly somewhere in our docs: now that you have this multi-cluster stuff, there are these multiple levels, and here's how you might approach each.
D: You could use /etc/hosts for the simple case; you could imagine these kinds of ecosystem things existing; here's where we have an example; and then you can imagine the others. Even that clarification in the README demo flows might give someone, you know, the two or three levels: here's how you would set this up on a Linux box, here's how you could do something similar on a Mac. That might even be enough in the short run.
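The /etc/hosts approach mentioned here is just a static name mapping; a sketch, assuming a local proxy is listening on the loopback address (hostname is illustrative):

```
# /etc/hosts entry pointing a demo ingress hostname at a local proxy
127.0.0.1   myapp.kcp.local
```

This avoids configuring real DNS for local kick-the-tires testing, at the cost of editing the file per hostname.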
A: Yeah, I mean, I think that's a good idea. We could fairly quickly and easily describe the IP and networking issues when you've got a VM in the middle, as you do on a Mac.
J: That's actually a great analogy. And yeah, the question is how we are going to tunnel that, and I don't know when we want to handle that case, but I imagine there's some future iteration of kcp where we're going to want to support it.
D: We know enough to describe what you need to do, but we can't do it ourselves, because we're focused on these other aspects: call for participation, motivated people. I mean, Raphael actually was pinging me in the background. I don't think he's in the meeting today, but this is kind of his bread and butter, getting stuff working in various environments. He did the global load balancer operator early on.
D: ...to work on. And actually, maybe that's even a fundamental principle for the project going forward, as we transition from prototype to project phase: do we want to make sure that we just always have three or four good first issues? Jason kind of did that early on, and it's always one of those hard ones. I'm not great at it; I know I generate ideas and then I don't have time to follow up on them.
D: Yeah, and actually, going even further: ideas for people that we'd love to see tried, in, like, a README, as a way of inviting ideas in the ecosystem. These are kind of the evolving ones. It might make sense to define some areas where we say: these would be really cool, we're not focused on them ourselves. As a non-goals section, or as "not within the scope of whatever becomes the project, but we would really want to enable the ecosystem side".
D: It's almost halfway between a good first issue and "here's some stuff that we just don't have time to chase; if you can think about it, let's document it here". It's a little bit like when people document the users of their GitHub open-source project: here's who is using it and what they're using it for. This is almost the second meta level of that: here's some areas where you could really think about what we have.
D: And even in defining what the project is: what is kcp, and what is kcp not? kcp is not a full system. We are not going to build a project that does ingress at scale to multi-clusters, but we think that's a really great place for people, and here's, like, three ideas or three places, and here's folks who have also started looking at them. That's a place for people to say: hey, I want to share what I've done in the ecosystem.
D: Kube did this early on as well, like the Kubernetes 101 blog. Some of it is just calling out to people who want to go chase ideas, and giving people ideas to chase, and there are three levels of it. There's orienting them at the first place you see it, like in a project README or a sub-document of roadmap versus not-roadmap, or what the project isn't; kube did that: "kube is in these paths, here's some people going and doing that". Then there are the issues, which are bigger ideas that people have but we're never going to get to, and then there are the good first issues, which are the much more approachable, valuable part of the project.
A: Yes, so we are using klog for logging, and klog only logs the file name, not the full path, when it prints out a log message. We have a bunch of files named, for example, controller.go, and when you see a log message it's hard to tell what controller it came from. So we either can switch to a different logging library, which could log full paths, or we can just rename our files so that they have unique file names. So, as a first step, I think we'll rename the files, and this is a good first issue and help wanted, because it can be done per file.
A: And I would expect and hope that, if there are multiple people interested, you could pick a file at a time, or a couple of files at a time, and open up separate PRs for each of these.
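A quick way to find the files that need renaming is to scan the tree for duplicated Go base names; a minimal sketch (the repository path passed in is up to the caller):

```python
import os
from collections import defaultdict

def duplicate_basenames(root):
    """Map each Go file basename to the directories containing it,
    keeping only names that appear in more than one directory."""
    seen = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".go"):
                seen[name].append(dirpath)
    return {name: dirs for name, dirs in seen.items() if len(dirs) > 1}

# Example: duplicate_basenames("kcp/pkg") might report that
# controller.go appears in several reconciler packages.
```

Each key in the result is a candidate for a per-file rename PR, as suggested above.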
A: And I was thinking, I don't know, in my subconscious: I don't know that we want to include the e2e tests, or that we care so much about those, because it's really just for klog logging. But I don't know.
A: Yeah, so this one depends on getting the API type changes merged, which is PR 524, but the idea is: I want a command-line command to be able to mark a physical cluster as unschedulable, which is cordoning it; I want to drain any workloads off the physical cluster, which basically means getting them reassigned to another physical cluster; and then I want to uncordon, and mark a cluster schedulable again.
C: Sounds good. In a similar direction, there was a discussion about login plugins.
C: We can work around that in kcp, obviously, by overlaying our OpenAPI specs in some way: where we know kube's are incomplete, we add something to the spec. This is, of course, a workaround, but we want to be generic. We don't want those overlays, so the correct way is to fix it upstream.
A: I'm curious to dig deeper into the Tekton Triggers architecture with the little bit of time we've got left.
G: Yeah, yeah. Can I present my screen and share my terminal?
G: So yeah, this is what I've been trying to do. I have my kcp running in a pod, and then Pipelines, and then Triggers, so all of them are running in individual pods, and I'm trying to set up an event listener so that it can listen to any GitHub requests. So this is the first barrier I'm facing: I've just set up my event listener now, and immediately...
G: I get this error. The event listener that I'm setting up in the kcp cluster is actually looking for a service account in one of the namespaces that kcp creates.
G: So I just listed those clusters here. It's this cluster, and it's looking inside that. I was actually quite confused at this step, but I tried with secrets as my answer, and so yeah, here's the event listener I'm talking about, which is currently failing where it's looking for this service account. This is what is defined.
D: No, the underlying... so this is the previous discussion, yeah, the physical cluster. So this is high-level and conceptual, and this is documented somewhere, and I can't find it now. This is driving me crazy; I was frantically going through it and getting nerd-sniped. Okay, so somewhere, Jason and I had a really long discussion and worked through the scenario for it. It's documented somewhere.
D
There
may
be
a
need
to
support
that,
but
it
would
not
happen
by
default.
There
would
have
to
be
something
explicit,
so
service
account.
Token
injection
does
not
happen
by
default
to
the
underlying
physical
cluster.
In
fact,
from
a
workload
perspective,
we
don't
want
workloads
to
have
service
accounts
in
the
underlying
physical
cluster,
except
in
a
few
specific
cases
and
there's,
like
some
examples,
are
in
the
very
early
days
of
cube.
We
were
like
oh
there's.
Actually
it's
really
awesome
that
workloads
can
become
aware
of
the
underlying
cluster,
and
so
we
were
like.
D
That
says,
hey
by
the
way,
I'm
willing
to
break
the
idea
of
encapsulation.
So
concretely,
we
haven't
done
any
of
that.
I
really
wish
I
could
find
this
dot.
This
is
going
to
drive
you
crazy,
because
we
actually
specked
out
the
high
level
and
then
we
need
to
go
add
to
that.
Okay,
here's
a
concrete
use
case
in
this
particular
scenario.
A: So we have a pod that Tekton's trying to create that is referencing a service account slash secret in the physical cluster that doesn't exist. So my question, I guess, would be: what is Tekton doing in this pod, and what API calls is it making?
G: Yeah, yeah, so, as I was saying, this is the first problem, and I'll just quickly finish on this point. Because, obviously, for all the reasons discussed, the service account is not synced, what I did instead was create a secret here, with the kubeconfig of the kcp cluster.
G: Right, I think the use case is: when an event is there, when a GitHub pull request happens, the event listener is supposed to create a deployment. Sorry, it's supposed to create a TaskRun to do some action.
D: So in that case, TaskRun is a high-level API, not a low-level API; a logical API, not a physical API, just distinguishing between where those would be. There are use cases I could imagine where you actually wanted to create TaskRuns on the underlying cluster, but that depends on: is that the goal of what you're trying to do as a service?
D: But if you do that, then inherently all of that logic is local. That's a fundamental trade-off, right? You're making the trade-off: you want to distribute all the high-level work down to a lot of clusters and then summarize the high-level work back. That's a little bit like deployments in kcp.
D: A separate mindset would be: you're running pods, but there's nothing tied to the cluster for your workload, and, as we're kind of talking about, it happens in the workspace. Tekton becomes a completely high-level thing. But that does mean that if that physical cluster can't talk to the logical cluster, because there's an outage or a connectivity problem...
D
That
means
that
event
listener
pauses
and
does
nothing,
which
means
you
don't
get
reliability.
This
is
you're
distributing
the
problem
or
you're
centralizing
the
problem,
and
it's
just
really
important,
like
we
don't
try
to.
If
we're
designing
for
centralized,
we
should
say
we're
designing
for
centralized,
not
distribution
and
here's
the
trade-off
we're
going
to
get
from
it.
I'd
probably
jason
if
you
were
here,
would
like
be
able
to
like
orient
on
where
we
are
to
make
sure.
D
That's
like
something
we
discuss
so
in
that,
in
this
case,
like
it
sounds
like
the
problem
is,
is
that
we
have
not
implemented
the
service
account
token
distribution
for
implicitly
mounted
service
accounts
for
workspace
service
accounts
as
part
of
the
syncer.
That
is,
I
think
this
is
actually
captured
in
the
design
document.
D
It
so
there's
a
section
for
it
in
the
design
under
workload
strategies,
but
it's
not
been
filled
out.
So
it's
secrets
and
config
maps
I'll
put
a
thing
in
here:
access
or
shoot.
I
need
to
come
up
with
a
name.
It's
like.
G: Okay, given that we're running out of time: what would be your suggestion? In which direction should I go forward to sort of figure out how to run these triggers?
G: Yeah, yeah, so that's what I was getting at. Once I do this, the event listener is active, so the secret is indeed working, but then what happens is, when I actually trigger a pull request...
G
The
deployment
right,
which
is
supposed
to
get
created
that
again,
is
looking
for
the
same
service
account
so
that
that
that
is
a
whole.
You
know
who.
D: Service accounts should be copied down, but we can't copy service accounts down until we do the mapping that changes them. If we pass service accounts down today, they would get a service account on the underlying cluster; we want service accounts in the deployment.
D
I
think
this
is
a
not
yet
implemented
key
syncer
functionality,
and
so
the
workaround
is,
you
have
to
use
either
default
service
account
or
we
copy
the
service
account
and
tell
people
that
and
and
specifically
not
copy
role
bindings,
because
if
we
copy
role
bindings
today
technically
like
you
could
ask
for
those
privileges
we
copy
them
down,
we
need
to
actually
make
sure
we're
not
doing
half
of
that.
We
need
to
do
both
parts
of
it.
G: Yeah, yeah, thank you so much for that. I'll try to spend more time on this and try a couple of things, and the kcp forum is always there, I hope.