A: Hello everyone, this is the Cluster API office hours meeting. Today is Wednesday, the 17th of March.
A: This meeting abides by the CNCF Code of Conduct, so please be kind with each other. We also have a meeting netiquette: if you want to speak, please use the raise hand feature, which is where the reactions are in the latest upgrade of Zoom. We can start with today's agenda. I have two quick PSAs.
A: First of all, SIG Contributor Experience has started the annual contributor survey. I added the link to the meeting notes. Please participate: it is always a good opportunity to provide your feedback and to help ContribEx shape the work for all the contributors.
A: The deadline for the Doodle is tomorrow, end of day. On Friday I will pick the most voted slot, so next week we can have this session.
C: Yeah, I was just going to summarize that. I think the question is about whether or not kube-rbac-proxy is needed. It's owned — in his personal GitHub account — by Frederic Branczyk, who I believe was at Red Hat, and there's a question about the secure software supply chain, which is reasonable. I think the question is: do we really need it?
C: Obviously, we've got kube-rbac-proxy because it's been included in kubebuilder, which is why we're using it, and we're only really using it for metrics. So I think a fairly confident answer is: no, you don't need it, but it is an infrastructure component. Finally, there's an ask: maybe we can transfer this to a Kubernetes SIG-owned project, which should resolve the software supply chain issue.
B: Yeah, while that happens, let's open an issue in kubebuilder as well, because this is just for metrics and nothing else, and we don't have metrics in Cluster API — or at least no useful metrics right now. So we might just want the option to disable it, or remove it altogether for v0.4, given that the project has some security concerns as well.
B: Let's understand the impact first. If we are planning to add metrics, we need to find an alternative solution. If we're not, let's look at how it's used, for instance in CAPA, and whether there are other ways to authenticate. We can keep using it, but it would be better if this project was owned by a SIG.
A: Okay, I saw on the issue that there is a general consensus to try to get this owned by a SIG. Does someone happen to know if that process was already started, or if there is an issue for it?
D: Yeah, so I had a conversation with Frederic a long time ago about converting kube-rbac-proxy into more of a library, so it would end up being an HTTP middleware kind of pattern. He was definitely supportive of that when I originally had the conversation, but it never happened as far as I'm aware.
D: Would making it more of a library make this concern less, or is it still a big concern if it's just integrated as a library into Cluster API?
B: I wouldn't wire the library into Cluster API. It could live in controller-tools or controller-runtime, which would make a little more sense given the metrics; I think the metrics server is exposed in controller-runtime right now. What I would like to understand is two things. One: the primary use case for this library — let's confirm it's only for metrics.
B: If it's only for metrics, what are we trying to use it for? Someone mentioned we might use Kubernetes libraries directly to achieve a similar thing. If that's the case, that would probably be the preferred path.
B: The other path I could think about is to talk with the controller-runtime maintainers, to see if we want to have a subset of this project as a library to protect the metrics server.
D: I think Naadir mentioned that there were some customers asking as well, so I started fleshing out a proposal in the HackMD. It's very rough at the moment; I'm trying to gather just the high-level requirements and use cases, because I know there's already the existing load balancer proposal from Jason that's sort of in flight as well, and what I don't want to do is end up with two proposals,
D: both saying "hey, let's change all of the load balancer stuff at once", because that's just going to get confusing for everyone. That's why it's very deliberately vague on any sort of implementation detail at the moment: I want to let Jason's proposal get through first and then build this on top of that.
D: That said, I would still like feedback on the use cases and requirements I'm gathering in there, and the explanations of those. One question I brought up in Slack, which I thought probably warranted some extra discussion, was around the traffic sources. Part of this problem is that there are, as far as I can see, three different sources of traffic in a cluster at the moment. You've got the end user, who gets their kubeconfig;
D: you've got the in-cluster stuff — the kubelet, for instance, in the workload clusters — and then Cluster API: things like the machine controller or the MachineHealthCheck controller, which also interact with the workload cluster.
D: If you separate traffic into internal and external load balancers, external would then be for the end users. Internal makes sense for things like the kubelet, but CAPI could be either, depending on how you've got it set up. If you've got your management cluster in a VPC that's peered with the workload cluster, you probably want that traffic to be internal and go over the private networking, whereas if they're not peered, or they're on different cloud providers, you want that to be public.
D: So basically, what I've got in there at the moment is adding some option to determine where CAPI should go — whether via the external or the internal endpoint. I've suggested initially that it should go on the Cluster object, but that maybe isn't the right place.
D: My thought for it initially going on the Cluster object was that this is something we're going to want to be a common feature among all clusters, and the Cluster kind of seems to be the source of truth for all of the other components. Does that make sense to other people? Do people have better suggestions or concerns about that in particular?
A: I can give my opinion on that. It goes back to your comment: it is kind of difficult right now to validate against one user. In terms of use cases, I got your point and I kind of agree. In terms of high-level requirements, I think what we are proposing makes sense. In terms of solutions — so the user experience, basically the API:
A
We
are
talking
about
load
balancer,
so
what
really
cares
I
care
is
that
at
the
end,
we
provide
a
single
place
for
the
user
for
defining
the
balancer
and
their
properties
and
a
single
way
to
do
so
and
yeah.
This
is
where,
unfortunately,
the
two
the
two
works
kind
of
collide
and
they
and
you
really
get
on
my
second
question.
So
what
is
your
respected
deadline
for
this
proposal?
So
it's
something
that
that
you
would
like
to
get
in
v1,
alpha
4
as
we
I
assume
we
are
doing
for
the
json
works
or
not.
D: For me, ideally the sooner the better, but if it doesn't get into v1alpha4 I'm not going to lose sleep. I think this is important long term for Cluster API, but the separation of the load balancer stuff obviously takes precedence, and I don't want to block any of that by saying "oh, we need to do this at the same time".
D: To me this is kind of an optimization, right? It's not required, but it will make a lot of topologies easier, and it will help external parties add support for Cluster API using the external cluster infrastructure proposal. So, whenever — but ideally in the next four or five months.
B: I'm curious to check with some folks here, like Cecile or Naadir — I'm not sure who we have on the vSphere side right now — but have we gotten into this use case before, or have we suggested other solutions to it? Or is this completely new, and we should think about it from scratch?
G: It's new for CAPZ. Right now it's internal or external: we have an API server load balancer spec, and you just specify what type of load balancer you want to use for the API server, but it's one or the other — you can't get both. Joel was actually the first person to bring that up.
C: Yes, I mean the use case makes a lot of sense in the AWS environment because of costs: today, kubelet connectivity back to the API server, when you have an external-facing load balancer, means traffic goes out to the public internet and comes back in, so you're paying egress costs.
C: I suspect people probably have asks around sorting that out; the cost starts to go away once you're all internal, so I think just having the option is definitely useful. I've also seen it in on-premise environments and some other deployments, like on OpenStack, where you have internal cluster communication going to a VIP, and then the cloud ops team will overlay a sort of DNS entry and load balancer for the external consumers to access.
D: I think it should be a control plane endpoint, and then if you want to have a second one, you should specify the second as the internal or private one. I've been trying to put in the document what it means if you have one or both, where the traffic's going and how it's flowing. One change I was thinking about, to make it easier for Cluster API components — obviously with this potential switch between internal and external —
D: is whether we should require the control plane providers to create two kubeconfigs rather than one: one for the end users, and one for Cluster API components. The Cluster API one could switch between the internal and external endpoint, depending on whatever the value of this option ends up being. Obviously, that would be a bit of a change from now, where — and correct me if I'm wrong — they all just consume the same kubeconfig.
B: From my perspective, given that this goes into load balancers and the load balancer proposals are still in flight, I would ask you and Jason to collaborate on having this as a use case — if that's okay with you all — so that we have one cohesive story all around, rather than two going down different paths. I'm still fuzzy about the details:
B
If,
like
we
should
actually
say
this
is
internal
or
external,
or
just
say
that
you
can
have
multiple
advantages
and
that's
it
without
saying
this
can
and
like
having
some
sort
of
priority
list
so
like
the
first
one
wins
over
depending
on
like
who
the
target
audience
is,
but
yeah
those
details
should
probably
go
in
the
proposal.
D: Yeah, I did try to think about whether there was a use case for three or more, and I haven't seen one. But if anyone has any ideas about why you'd want three, then please do come my way, because that would definitely change the way I'm looking at this. I also wasn't sure whether you want to use internal/external or public/private — what the wording wants to be on any of this.
D: I think that's something we probably want to vote on, or have some decision about. But yeah, I'll try to sync up with Jason at some point, and also try to help out with his existing proposal where I can.
G: Fabrizio, I think there's a quick question in the chat.
E: Hey, this is Micah. I'm sorry, I showed up a little bit late and just got the very tail end of the item that I added to the beginning of the discussion — thanks for taking notes. I just had a quick question after reading those notes, and wondered what the status was. It sounds like there are just some questions about whether kube-rbac-proxy is needed?
A: Okay, so this is one part of the story: we are trying to understand if there is an issue, and what the SIG Auth opinion is about taking care of this project. The second part is that, okay, we are using kube-rbac-proxy in order to protect the metrics server. This is our current assumption; we have to do a little more investigation, but we are pretty sure this is the case.
A: Currently in Cluster API we don't have metrics, so eventually it is not even necessary, but we are getting this because we are basically following what kubebuilder is doing. So it will be interesting to move the issue to the kubebuilder repo and try to understand there whether it makes sense to continue with this approach, or to explore other approaches based on, for instance, a library or methods that exist in main Kubernetes.
A: Questions? My pleasure. Okay, the next one is again on Joel: anything needed to move the external management cluster infrastructure proposal along? Please go on, Joel.
D: Yeah, so I just wanted to chase up on this one. There hasn't been much feedback on the proposal for the last couple of weeks. I know Cecile added some concerns about how this might interact with the load balancer proposal, so I tried to evaluate that and posted some comments back there. I don't think there's too much interaction.
D: I don't think it'd be a problem to go ahead with this as is at the moment, but if anyone has some time, it'd be great if we could just get some final checks on it and hopefully get it merged. I did also add a proof of concept for the implementation of what's required in the cluster-api repo, and I'm going to try to do a CAPA demo for it over the next week or so — I might demo that next week if I get it done.
A: Okay, I see that Naadir pointed out that there is some work ongoing in API Machinery, if I'm not wrong, with regards to allowing kubectl to basically patch status. Naadir, do you want to add more color on this?

C: Yeah, it's basically what I just said there.
C: Nikita did a PoC on adding subresource support, so that would let you set status or scale from the command line. There are still some issues to work out — there are some internal bits of kubectl that would need changing before it ships, and it needs to become a KEP. We're interested in use cases, and this seems like a perfect fit, going from what we were talking about a few weeks ago.
F: Yeah, thanks Fabrizio. I've been doing a lot of testing recently with the autoscaler, Cluster API, and the CAPD provider, because that's easiest for me, and a lot of this is around the scale-from-zero implementation I'm trying to put together. One of the things I'm noticing — and I know we changed some of the CAPD provider several months ago — is that things are behaving a little bit differently with the autoscaler, and I'm curious if there's a way to control what the kubelet thinks is the size of the node, or the machine, that it's on.
F: It would be really useful for testing if we could create CAPD clusters where we could define what it thinks the size of the node is inside the container it's running in. The cluster autoscaler reports, from the node stats, what it thinks the CPU and memory resources for each node are, but we want to be able to tell it
F: to use fewer resources, or to be able to control how it expands. Maybe this is something we could talk about during Fabrizio's session, but it's something I've been running into, so I thought I'd bring it up here. Thank you.
A: I'll try to answer this. In CAPD we don't have an option to set memory and CPU, and I'm not aware of such flags in kind either, but what we could probably do is add flags to our Docker types, and then pass them through to the docker exec that we are using to create nodes.
A: What I'm not sure about is whether the kubelet, being executed inside a container with memory limits, will recognize those limits. This is something we can test out. If you want to make a quick test, I can point you to where to make the changes, and then we can do a quick test to see if it works.
A: Great. Daniel, up to you.
H: Thanks, Fabrizio. It's something I was playing around with a few weeks ago, and I was wondering if anybody's interested. I had some troubles with it, but basically what I did was take the Tilt configuration, add the Delve debugger to the various containers, and then set up VS Code,
H: so it was attaching to the debuggers, and I could create a resource and step through one controller, and then once that was done with the reconcile —
H: you know, set breakpoints in the different reconcile loops and get a guided tour through the various controllers, which I thought was neat. I did run into some issues, but I was just wondering if anybody was interested in that; I could come back and work on it a little more.
A: Thank you very much, this sounds awesome. I think that kind of thing is something everyone is interested in, because it is part of most developers' jobs. So I will be happy to see the demo, at least.
H: All right, yeah, I'll open up a PR, because I might have some questions — some issues that I ran into where it would be great to get some help. But yeah, I'll create a PR and figure out when to demo; maybe I'll just pre-record something.
G: This is Cecile. I just want to take this opportunity to say: if anyone has cool things they're working on and you just want to show them off, feel free to sign up in the agenda for office hours and do a quick demo. It doesn't have to be very prepared or very formal, but I'm sure lots of people would appreciate learning from what you've done and what you ran into.
A: Thank you, Cecile. I'm seeing a question in the chat from Jerry. I don't know if you want to repeat the question to the audience?
B: I was asking whether Cluster API is able to connect to an existing Kubernetes cluster — like an AKS cluster — without having to recreate it, just using Cluster API as a management cluster.
I
So
I
see
that
the
this
feature
has
a
was
like
an
experiment
like
a
feature,
so
I
was
wondering
what
this
one
was
mature
enough
to
use
in
the
production
environment.
Anybody
have
any
like
experience
of
that.
G
I
can
take
this
one
and
then
ace,
I
know
you're
in
here.
If
you
want
to
add
anything,
but
I
yeah
so
the
it
does.
So
that's
a
cab,
z-specific
question
capzi
has
support
experimental
support
for
creating
and
managing
the
life
cycle
of
aks
clusters.
It
does
not
support
adopting
or
managing
existing
aks
clusters,
although
I
know
some
folks
from
giant
swarm
were
interested
in
adding
that
functionality,
but
that's
just
a
proposal
issue
at
the
moment.
So
it's
not
supported
in
terms
of
like
production
support.
G: it is an experimental feature, which means it can change — including breaking changes — at any point, and we can decide to remove it altogether if we don't see a need to continue supporting it. So "use with caution" is the summary.
J: Yeah, just to add a little bit to that: the flip side is that the CAPI interface to it is probably not as stable as it could be, but obviously underneath it's just AKS, so that's not necessarily going to be changing a lot underneath you. Not all the features are exposed, though, so that's one thing to think about. But try it out and let us know — open issues if you find problems.
A: Great, thank you, ladies and gentlemen, for answering this question. So we are at the end of today's agenda. Are there any last-minute topics or issues that we want to discuss?