From YouTube: kubeadm office hours 2020-10-14
A: All right, we have new meeting participants. I've seen Prakash before. John, do you want to present?
B: Yes, I'm an engineer at Dell. I work on Kubernetes designs.
A: So I will proceed with a couple of quick PSAs here. The first one is that the rename of the master label and taints PR is on hold, but it's still probably going to happen in a first stage for 1.20.
A: As you may know, there is a new working group in Kubernetes called Working Group Naming that recently had a meeting, and Justin Santa Barbara joined this meeting and asked the question: hey, are you sure that master to control plane is the actual rename we are looking for? Everybody said yes, but basically Justin is looking for something that is more like an official statement on this topic.
A: The second PSA is that the insecure serving of the core components, the control plane components such as kube-apiserver, kube-scheduler and kube-controller-manager, is being removed as a functionality. Basically, these components have a particular flag; it differs between the components, but it can be --port or --insecure-port.

A: The insecure port functionality itself was deprecated for a very long time, and 1.20 appears to be the release where we are going to remove the functionality. The flags themselves are planned to remain dormant, or no-op, for a few more releases, so that they don't break user infrastructure immediately.

A: But we expect to remove the flags as well, maybe in a couple of releases. For more information check the PR and the linked issue inside the PR.
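(To illustrate what "dormant or no-op" means here: a minimal, generic sketch using the spf13/pflag library that the core components rely on. This is not the actual component code; the flag name and messages below are only examples.)

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("some-component", pflag.ExitOnError)

	// The flag stays registered so existing manifests and scripts keep parsing,
	// but its value is no longer wired into any insecure listener.
	insecurePort := fs.Int("insecure-port", 0,
		"DEPRECATED: insecure serving has been removed; this flag is a no-op")
	_ = fs.MarkDeprecated("insecure-port",
		"insecure serving has been removed; this flag has no effect and will be removed in a future release")

	_ = fs.Parse([]string{"--insecure-port=8080"})

	// Parsing succeeds and only a deprecation warning is printed; nothing listens on the port.
	fmt.Printf("parsed --insecure-port=%d, but no insecure listener is started\n", *insecurePort)
}
```

The point of keeping the flag registered for a while is exactly what is described above: existing invocations keep working, while the value simply stops having any effect.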
A: Basically, I cannot open the discussion from this browser, but I can show you what the user is talking about. By the way, this is pretty much the latest iteration of the docs, written by me and Fabrizio. So basically, the user is talking about that.
A: They are in a way correct, because minor-release kubelet upgrades require that you normally evict all the pods from the node. The idea is to restart the kubelet, and the kubelet basically requires that nothing is running on the node, except maybe static pods, and this is in the documentation somewhere in the kubeadm docs. Now I should try to somehow open my response... anyway.
A: Ideally, we would want these particular nodes to enter maintenance mode, which is, you know, drain and cordon is what they are doing. But if we don't do that around the static pod manifests, and the node fails completely and is scrapped for some reason, if the node is completely removed, this means that CoreDNS or other critical pods will be stuck on the node for the timeout that is defined somewhere in the scheduler, which is five minutes by default.
A: The other option is to not drain around the static pods. Like I explained, this is problematic in some cases, and the other alternative is not acceptable either, just...
A: Draining, sorry, uncordoning only after a particular node is upgraded completely, which means that we could end up with control plane nodes, for example three at a time, not being schedulable, which is not great.
A: So I think a compromise here is to go with the suggestion the user is giving, which is basically to drain and uncordon only around the kubelet upgrade, like we are doing for the worker nodes, but instruct the users that, in case they see a problem with critical pods such as CoreDNS...
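(For reference, the cordon and uncordon being discussed are, mechanically, just a toggle of the node's spec.unschedulable field; draining additionally evicts the pods. A minimal client-go sketch of that toggle, assuming a hypothetical node name and the default admin kubeconfig path:)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// setUnschedulable cordons (true) or uncordons (false) a node by name.
func setUnschedulable(ctx context.Context, cs kubernetes.Interface, nodeName string, cordon bool) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	node.Spec.Unschedulable = cordon
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Cordon before the kubelet upgrade, then uncordon right after it finishes.
	if err := setUnschedulable(context.TODO(), cs, "cp-node-1", true); err != nil {
		panic(err)
	}
	fmt.Println("node cordoned; upgrade the kubelet, then uncordon")
	if err := setUnschedulable(context.TODO(), cs, "cp-node-1", false); err != nil {
		panic(err)
	}
}
```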
A: Yeah, it's a bit of a, how do I say it, a bit of a stretch, this problem. If you drain around the static pod upgrade you don't gain anything, because for the static pod upgrades you don't have to drain, right? But if the static pod upgrade fails and the user cannot recover from the backup for some reason, then critical pods can be stuck on this node, and the user might decide to scrap the node, like I explained, and then the critical services can be lost for a period of time.
C: Yeah, this could only be a problem if I'm on a single-node control plane and the upgrade basically fails to create a new version of the API server and etcd and so on. But for this we have rollback during upgrades, so yeah.
A: It's like the only element we described in this guide, which is...
B: I get that. So the point of criticality in updating the kubelet is the continuity of service of any of the non-statically configured pods or containers running on that node. Is that correct as well?
B: Yes, but I was thinking in terms of the actual update of the kubelet on the worker node. There's a transition point, obviously, when you install the new instance of the new version of the kubelet, and the old one, the old version of the kubelet, has to die, and at that point we run the risk of losing a pod that might be executing. Am I correct there, if we do this live?
A: So I think, first, I agree with you; we should proceed with the change the user is suggesting.
A: I'm wondering what the wording of the note that we should add here should be. Like, should we even add a note that explains: hey, if the node is not recoverable, you should try to evict all the pods to other control plane nodes, something like that?
C: My opinion is that we should not add the note, because if the node is not recoverable, how can you evict anything from it? And also...
A: Yeah, the kubelet... we assume that the kubelets will continue to run and manage these critical pods. But imagine that you have high-level infrastructure that is watching the health of the control plane node in general. Say you see that the node is failing, but not on the kubelet side of the node; the API server pod, for instance, is not running there.
A: You might tell this high-level infrastructure to completely scrap the node, right? And if you're scrapping the node, then with this high-level infrastructure you should, I guess, also evict. So...
A: Yeah, like I explained, this is a bit of a stretch. I'm just trying to cover as much detail as possible while we're making a decision. So we shouldn't add any notes to this; just move the drain.
B: I can see one situation where an update of Kubernetes may be disruptive to the workload that's running on that node, and that is a distinct possibility going forward.
B: I believe that an effort is now being made to review the use of metadata in the pod spec and in the node spec. If we discontinue the use of some of that metadata in a later version, and we have a pod that is currently running on a node that we are updating...
B: ...and now, when the kubelet comes up, the infrastructure, the API, no longer knows of a particular piece of metadata.
A: Yes, this is quite plausible, and from the side of kubeadm we don't care much about such a problem, at least in terms of our static pods.
A: For any critical workload that you are managing in your cluster, for example some addon or, you know, a controller that you are installing with this metadata, we are hoping that other groups, like API Machinery and Scheduling, will basically make it possible for us not to encounter this problem in deployers like kubeadm.
B: The process of what, going from one version of Kubernetes to another?
A: Well, basically, like I show in this document, the upgrade process is quite complex, with many steps: we first look at the API server, the control plane update. So, from what I'm understanding of your description of the metadata problem...
A: Yeah, at this point, do you have some sort of a link to this particular discussion that you saw?
B: Right now there's a request out for information. There is no decision at this point on any action. They were just simply trying to identify what sort of cruft has been generated through the successive updates of Kubernetes and what metadata has been obsoleted, trying to identify if anyone is still using the obsoleted metadata.
A: Yeah, in general, metadata, labels and annotations are notoriously difficult to deprecate and remove.
B: And I believe that it is that transition that actually triggered the call for a review of all of the metadata that's currently being used. Possibly, possibly.
A: I will be on the lookout for similar changes, and I'm hoping we're not going to be that affected, kubeadm users as well. Thank you.
D: Yeah, I think I've seen a concrete example with the beta OS label, right? So say, for example, someone is installing an older version of Calico, a CNI that targets the beta annotations, and then they're no longer existent; then that CNI is broken. I think that's more in scope for the cluster add-ons operator rather than kubeadm, so those kinds of dynamic pods or dynamic core services are very much in scope for the cluster add-ons project, which Kubernetes may eventually consume.
A: It's an understandable thing, yeah. I haven't seen these efforts, but basically any time you create something that is alpha and it's changing and things like that, if people start actively consuming it, especially in production, and you break them, nobody is going to be happy. What we try to do in Kubernetes and in projects like kubeadm is to make the transition as small as possible, but sometimes it's a bit difficult. So...
A: Okay, so this is hopefully going to be a quick one. Basically, I saw this in kubernetes/kubeadm and asked the user to move the ticket to kubernetes, because I think it's an actual problem. Basically, they are using unicast IPv4 on the loopback and IPv6 link-local on the interfaces.
A: So I hadn't seen such a use case before, and the user also said they are using BGP. I don't know how to pronounce this; you may know the standard, but they have big gaps in the standard. So basically the user is trying a very weird case to connect the worker node to the cluster, and what they're seeing is that our code, borrowed from API Machinery, that detects a particular IP, gets an IP from an interface and says: okay, I saw that you have this IP.
A: Maybe this is the IP you want for the worker node to join, and then it fails. But then I started digging into the kubeadm join code, and I saw that we are fetching the init configuration from the cluster on worker nodes, on worker join.
A: We apply dynamic defaults to the advertise address, which is not needed on worker nodes, and this is where it fails. Basically, this is the logic that tries to dynamically default the API server advertise address on worker nodes, and, like I said there, I don't think we should be fetching the init configuration at all on worker nodes.
A: I don't see why we're doing that. I also asked a couple of questions, like: can you mitigate this by skipping preflight completely, because this is part of preflight? And I also asked, if you have an HA setup, when you join a control plane node, can you confirm that it's not a problem if you explicitly define the advertise address of that control plane instance of the API server via the JSON/YAML path? The user hasn't responded yet, but I'm basically bringing this topic to this meeting to ask...
A: Yeah, basically these are artifacts, problems of the way we organized the phases, and also the way we organize the whole construction of the configuration object during runtime.
A: In kubeadm, for any command like join, sorry, for join in particular, we always fetch the init configuration from the cluster and construct an InitConfiguration object that also stores the ClusterConfiguration object and all the component configs in there. The discussion from last time was that we shouldn't fetch the component config, sorry, the kube-proxy component config, and store it in this ephemeral InitConfiguration.
A: And I think for workers in particular we are calling this init config function during preflight, which results in the fetch. I think we should probably add a flag somehow here, so that we skip this during non-control-plane joins, pretty much. I think this is what we should do.
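(A purely hypothetical sketch of the shape of that change, not actual kubeadm code; all names below are invented for illustration. The idea is that the fetch and dynamic defaulting of the InitConfiguration, including the API server advertise address, would only run for control plane joins:)

```go
package main

import "fmt"

// joinData is a stand-in for the data kubeadm join carries around.
type joinData struct {
	controlPlane bool
}

// initCfgFromCluster stands in for the fetch plus dynamic defaulting that
// currently also runs on worker joins and fails in this environment.
func initCfgFromCluster() (string, error) {
	return "InitConfiguration{advertiseAddress: <defaulted>}", nil
}

func prepareJoin(d joinData) error {
	if !d.controlPlane {
		// Worker join: nothing here needs the API server advertise address,
		// so skip the fetch entirely, as suggested in the discussion.
		fmt.Println("worker join: skipping InitConfiguration fetch")
		return nil
	}
	cfg, err := initCfgFromCluster()
	if err != nil {
		return err
	}
	fmt.Println("control-plane join: fetched", cfg)
	return nil
}

func main() {
	_ = prepareJoin(joinData{controlPlane: false})
	_ = prepareJoin(joinData{controlPlane: true})
}
```

Whether that is the right fix is exactly what gets debated next: it avoids the unnecessary fetch, but it does not address the underlying address-detection problem.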
C: It is not that simple, and I would like to understand if this is a problem that happens only on worker nodes or also on master nodes, because if it happens also on master nodes, then the solution of not fetching is not a solution.
C: We would not be solving the problem. And with regards to fetching, it is definitely required for control plane nodes.
A: I'm pretty sure that if I tell the user to compile a custom kubeadm binary and skip this particular action, it is going to work for them, because there is nothing to prevent a kubelet from joining the cluster if you skip this particular step, the control plane check that we have here.
C: Yeah, I'm not sure I can follow. I think that we are talking about two problems, and my gut feeling is that we have to solve them separately. One is to fix the address resolution, and the other one is to avoid reading what is not necessary when joining a node, but they are two separate problems, in my opinion.
A: Okay, so you're suggesting that... yeah, I think you're correct in a way. Yeah, you're basically suggesting that... let me try to dig into this source code quickly.
D: Yeah, because what I think is happening is that client-go is trying to select an outbound interface to reach the API server, and it's trying to filter the interfaces on the available addresses. So even if we get kubeadm to pass, the kubelet is going to fail at some point anyway, I think. So I think this is really about IPv6 support.
D: No, so it's... yeah.

D: Yeah, I think it's going to fail, because the machine doesn't have... basically, we're doing a check that it is either a valid private or public IP address, or it has to be a global IPv6 address, but the machine doesn't have any; it only has the link-local address, which is okay. If it's on the same subnet, we should allow that connection.
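(To make that filtering concrete, here is a rough standalone sketch, not the actual apimachinery or client-go code, of address selection that skips loopback and link-local addresses and only accepts global unicast ones; on a machine that has only link-local addresses it selects nothing, which matches the failure being described:)

```go
package main

import (
	"fmt"
	"net"
)

// firstUsableIP walks the interfaces and returns the first global unicast
// address, skipping loopback and link-local addresses. A box that only has
// link-local addresses (plus loopback) ends up with no result.
func firstUsableIP() (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		addrs, err := iface.Addrs()
		if err != nil {
			continue
		}
		for _, addr := range addrs {
			ipNet, ok := addr.(*net.IPNet)
			if !ok {
				continue
			}
			ip := ipNet.IP
			if ip.IsLoopback() || ip.IsLinkLocalUnicast() || ip.IsLinkLocalMulticast() {
				continue
			}
			if ip.IsGlobalUnicast() {
				return ip, nil
			}
		}
	}
	return nil, fmt.Errorf("no global unicast address found on any interface")
}

func main() {
	ip, err := firstUsableIP()
	if err != nil {
		fmt.Println("selection failed:", err)
		return
	}
	fmt.Println("selected:", ip)
}
```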
A: This is true. Here again, even if the kubelet doesn't work, should we apply some sort of a fix? Although I'd say...
D: Because I don't think that we... I don't think so. I think we should not be downloading the init configuration, but I think we're only seeing that it is a problem because client-go is failing, or API Machinery is failing. So this is really an API Machinery fix, to support this type of networking, and, independently, we probably shouldn't be downloading the init config. So we can make it pass there, but it's just going to fail at the next step, I think, at the next connection to the API server.
A: Okay, if you comment on the ticket, I can then create a separate ticket for that, I guess. Yeah.
E: Yeah, I just agree, exactly: this is their problem, not ours. It has to be handled from the top down, and the API server should handle that. So an issue against the API server, to ensure that IP forwarding is taken care of and routing is taken care of; whichever way they do it, it is theirs to take care of, not us. We should not change anything here, I agree.
A: Well, kubeadm is failing on a validation step; this is what the user is seeing. The validation step is performed by code that is in API Machinery, but what I'm talking about is that API Machinery is a library here. API Machinery is a library; it's not the API server that is failing. We are just consuming some functions to perform the validation, sorry, to check what IP address to use.
E: If it is DHCP, then you get a dynamic IP; if you use static, then... So there are many issues which are related to IP, even link-local not being routable or forwardable, but that is not a problem internal to the infrastructure or the pod level. It is more to do with...
D: We're not talking about the service; so, API Machinery: we're not talking about anything that kubeadm is doing around setting up a cluster. This is a client; this is the client code of Kubernetes itself. It doesn't support this networking environment. So we need to change that low-level client library, which is used in lots of places: it's used in the kubelet, it's used in kubeadm, it's used in Cluster API. All of these things won't work in this environment.
D: So we need to change that underlying library, which is API Machinery, and then everything will start working. But this also highlights that we're doing something we don't need to do in kubeadm, which is that we don't need to download the init configuration. It would have failed anyway; the kubelet would have broken at some point. But it also highlighted that we're doing something stupid: we can clean that code up, and, separately, we need to deal with this API Machinery issue.
C: I just want to add that I'm not really sure this is a kube-apiserver responsibility, because basically, if I got it right, the logic that is raising the error there is Kubernetes logic that basically tries to automatically find out the API server to use, or rather the IP address to use for the API server.
C: Okay, and all of this... this code should not fail, but it is acceptable that the logic basically does not resolve to anything, because, as we said, over time we are adding more knowledge to this piece of code and we are becoming able to detect new and different types of network configurations. But there will always be some network configuration that we cannot detect, and so, in that case, Kubernetes should aggressively tell the user: sorry, my internal logic for detecting the IP address failed.
C: The first thing that I will look at is why it is failing by throwing an exception; at least this was the impression that I got. Second, I will look at whether we are able to... So I think that returning "unable to select the IP" to the user is something that is acceptable. That means that our logic simply does not handle this type of networking.
A: Yeah, sure, but even if they provide this address explicitly, it's going to be applied to the API server on a worker node, and that doesn't exist; the API server doesn't exist on a worker node. This is what we are seeing here. So, in any case, let's move to the next topic. Just briefly: if you want to, comment on the ticket as well. I would like to see us stop fetching the kube-proxy configuration and the init configuration from the cluster for workers.
C: That is a completely separate problem and a nice cleanup, but I will open...
A: Yeah, log a ticket, and we can close this one and ask the user to go back to k/k for this problem.
A: All right. Over to you for this topic about the library.
C: During the last week I spent a little bit of time writing down what the priorities are, and I also wrote down a first brainstorming doc about a possible approach to start developing the library. This approach is basically highly conditioned by the fact that we cannot move kubeadm out of k/k, and by the fact that k/k basically has a strong limitation on importing external packages back in. So basically I came up with a proposal that is essentially to develop the library in k/k and then to mirror the library into another repository.
C: The document is open for comments to the SIG Cluster Lifecycle mailing list, so...
C: I didn't want to spam, but if you think that it should be shared, feel free to send it to the main list, or just to give the link in the channels.
A: I really don't want to create a mess if we have yet another repository. If we have the kubeadm library repository, this means that we have to version it independently. Then kubeadm, maybe one day, is going to be in k/kubeadm and it has to consume this library. This sounds great, but we are doing it before kubeadm is moved out and before we have decided whether kubeadm is going to follow the k/k cadence or not follow the k/k cadence. SIG Architecture are currently saying we should not follow the k/k cadence.
A: So if we are not going to follow the k/k cadence, this means that we can have the library in k/kubeadm, but we also have kinder there and we also have the operator there; it's really messy.
A: The k/k cadence means the Kubernetes release cycle. k/k is the Kubernetes repository, so basically the release cycle of Kubernetes itself.
E: Okay, so in spite of all those cadences, doesn't the library implementation require additional feedback, as you said? So why not expose it? Which Slack is it in? Is that the admin Slack, sorry, the kubeadm Slack, or is it in the Cluster API one? Where do you put this?
A: You should look at the agenda document that we have for today... okay, yeah, there is an issue, and at the bottom of the issue you can see the link to this document. Okay, I will take a look. Oh, and if you have comments, please add them.
C: Shash, to be honest, it is something doable, given that our main target is cluster... our initial target is Cluster API, and we control what happens in Cluster API, or there is a good collaboration between the two teams. But, to be honest, I like having releases and release notes, and so on.
A: Yeah, I think for etcd in k/k we're still importing at a SHA, because they don't have tags and nobody cares, so yeah. I think some people care about it, but yeah, I think the separate repository is probably a good idea.
A: So, do you have a formalization of the proposal? I'm trying to understand, because we had a couple of solutions here.
C: Yeah, but up to now it is not a proposal; it is a brainstorming.
A: Yeah, okay. If you can get the attention of Andy and Vince to comment on this idea...
A: This is a nice addition. In particular, I really don't like some of the floating discussions around exposing parts of the join process itself to be able to retry it. I don't know how we're going to manage that.
A: We have to have good evidence for this use case. I saw a CAPZ maintainer requesting it yesterday.
A: We have the main kubeadm process, and now we expose the same thing as a library; we are breaking this project into so many fractions that I don't see how easy it is going to be for us to make changes in the future. Everything is going to break, because everything is exposed. Yeah, it's an interesting discussion.
A: All right, please comment on this doc. The next topic is the topic of add-ons, and we have only two minutes, however...
C: Yeah, just a quick one: last meeting, when we discussed the items in the roadmap, I told you that I had in mind an idea for making pluggable add-ons possible, and I've written down this idea.
C: Lubomir already gave some positive feedback, but we need other feedback. So maybe we should bring this one to the cluster add-ons project's attention.
A: Yeah, I was there yesterday, but I was preparing the slides for KubeCon, so I couldn't, or rather I forgot about this topic. So, if I have to provide a public statement on this topic: I like the pluggable add-ons idea, but I see that everybody is doing their own thing. The cluster add-ons project is doing something, some companies like Red Hat have solutions...
A: Google still uses the legacy addon manager; everybody has a different solution for the add-on management problem, and if we enable this plugin binary idea we are essentially just adding yet another solution, even if I think it's one of the better ones. There are also some caveats, like how you manage plugins on nodes that don't need the plugin binaries.
A: Basically, again, I would like to see more feedback from people: how do we install this, what do they do, what do they think about this solution?
C: Yeah, and just the TL;DR is that the plugin approach basically is not in contrast with any of the add-on initiatives that I'm aware of; it's just a way to make this add-on approach integrated with the kubeadm experience. So that is the TL;DR.
A: Yeah, I can share this idea with the add-ons project, but maybe we should have a separate meeting about it; maybe we can have a separate meeting about the library as well. Again, before I share it with the cluster add-ons project, I would like us to discuss the idea more. Maybe that's...
A: All right, let's leave the room for the Cluster API folks. Thanks everybody, and see you again in a couple of weeks.