A
Hello, everyone. Today is June 10th, and this is the Cluster API Provider Azure office hours. Cluster API Provider Azure is a subproject of SIG Cluster Lifecycle. As always, please make sure you follow the CNCF code of conduct: be respectful to everyone else, and if you'd like to speak, please raise your hand and I'll make sure you get called on.
A
If you have any topics that you'd like to discuss, please add them to the agenda on the Google Doc, and if you can, please also add your name to the attendee list so we can keep track of who's coming to these. All right, so let's get started.
A
And let's go straight into the milestone review.
A
Okay, so we're getting really close. There were a few still remaining that got closed since last time, mostly the Windows rename of nodes, so that's done. Then I think we have support for AKS, which is close to the finish line: there's a PR that Cheyenne has been working on that's currently being reviewed, and there are end-to-end tests that Cheyenne added, so that's great. And then I think the machine pool machine PR merged yesterday.
A
I know David added an agenda item to discuss it afterwards, so let's hold until then, but I think we're in a good place, at least in terms of the release. And then the last two don't have any active PRs, I don't think. This one is almost done, except for the AKS service, the managed cluster service, so I think those two are probably not going to make it in before the release. But I think that's okay.
B
A
So actually, this one is currently being worked on; there's a PR. Okay, let's go from the top, sorry. Secure sensitive bootstrap data: that one we decided to punt on, based on the work that's going on upstream with the secure kubelet proposal in Cluster API.
A
So that will happen as part of 0.5.x, but it won't be in the v1alpha4 release; it's still at the proposal stage. Allow multiple subnets for nodes: that one has a PR open. It's still a work in progress, I believe, so it's not quite ready to merge, but it's an additive feature, so it should be okay to merge after the release. Decouple infrastructure resource from machine controller: oh, that actually is done.
A
So I should close that. That's the one that was implemented in Cluster API; it's called externally managed infrastructure. We were able to take it in as part of the Cluster API dependency updates, and we were able to add the externally managed predicate to the controller so that we skip over any externally managed clusters in our reconciler.
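A minimal sketch of what such a predicate can look like with controller-runtime; the cluster.x-k8s.io/managed-by annotation name is an assumption based on the Cluster API externally managed infrastructure convention, and the actual CAPZ code may differ:

```go
package controllers

import (
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// Assumed annotation marking infrastructure that is reconciled by an
// external controller rather than by this provider.
const managedByAnnotation = "cluster.x-k8s.io/managed-by"

func externallyManaged(o client.Object) bool {
	_, ok := o.GetAnnotations()[managedByAnnotation]
	return ok
}

// SkipExternallyManaged filters out events for externally managed objects
// so the reconciler never sees them.
func SkipExternallyManaged() predicate.Funcs {
	return predicate.Funcs{
		CreateFunc:  func(e event.CreateEvent) bool { return !externallyManaged(e.Object) },
		UpdateFunc:  func(e event.UpdateEvent) bool { return !externallyManaged(e.ObjectNew) },
		DeleteFunc:  func(e event.DeleteEvent) bool { return !externallyManaged(e.Object) },
		GenericFunc: func(e event.GenericEvent) bool { return !externallyManaged(e.Object) },
	}
}
```

A predicate like this would be registered with the controller builder, so filtered objects never reach the reconcile loop at all.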
A
Next, parallel reconciliation of Azure resources: I don't know the status on that one. Do you want to give an update?
C
I haven't made any changes after the discussion we had during last office hours, so yeah.
A
Yeah, I think this one we actually want to get in after 0.5.0, just because it's going to be a big refactoring and restructuring of the code, and I'd rather not merge something that big this close to the release. So that's fine for now. Any questions or opinions on this, or anything so far?
A
Okay, change default branch to main: for that one I propose we take care of it as soon as we cut the release. So right after we have the release, we change the branch over. That's just so it doesn't cause any issues in terms of tests, because we know there are probably going to be some tests that break. Hopefully we get all of them the first time, but it's possible that we don't, so this is just to limit the risk of getting our test signal corrupted right before the release.
B
Yes, okay, I can mention it now. So, after I did a PR: basically we found (David found, and I tested) that if you're using a gen 2 image, you don't have to do anything, because Azure will take care of it and create a gen 2 VM. So I closed that PR, but I didn't close the issue, in case...
B
...we want to later add some validation. Right now we don't publish gen 2 images, so there's nothing to use there. We did a PR in image-builder to fix a couple of small things so we can publish them, because it wasn't working. But once we've published that, we might need a new PR here, to maybe make that the default, or if we want to add some kind of validation or stuff like that.
A
Okay, let's maybe circle back on the whole gen 2 topic, but I think for now, in terms of milestone planning, we're good. This is good here, right? Yes, okay. And then this one, actually, I will assign myself and check on it, because I don't think we've taken that in yet. This was just a preventative issue, because we were told there was going to be a breaking change in this network API version in the Azure SDK, so we just need to check back and see where we're at on that.
A
And then the last one is this issue that's been around for a while, and actually, I think there's nothing else to be done here. For context, for anyone who's watching who doesn't know what this is: this was an issue with the Azure cloud provider, where the cloud provider was going through the wrong code path when trying to provision load balancers for services in an all-VMs cluster. It was not refreshing...
A
...the cache of the VMs in the cluster, in the resource group, which was causing it to not see the VMs for the nodes and to go and treat them like a VMSS, which they weren't. So this has been an issue for a while. We actually made a fix in the cloud provider code, which merged, and it was later backported to the in-tree cloud provider. So the fix was made in the out-of-tree cloud provider, and it was backported to the in-tree cloud provider; that PR has merged.
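A purely illustrative sketch of the stale-cache failure mode being described, assuming a hypothetical read-through VM cache (the real cloud provider code is more involved); the essence of the fix is to refresh on a miss before concluding the node is not a standalone VM:

```go
package vmcache

import (
	"sync"
	"time"
)

// vmCache caches the set of VM names in a resource group. The bug described
// above was essentially this cache going stale: on a miss, the provider
// concluded the node was not a VM and fell through to the VMSS code path.
type vmCache struct {
	mu      sync.Mutex
	vms     map[string]bool
	fetched time.Time
	ttl     time.Duration
	list    func() map[string]bool // lists VMs in the resource group
}

func (c *vmCache) Has(name string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.vms == nil || time.Since(c.fetched) > c.ttl {
		c.refreshLocked()
	}
	if c.vms[name] {
		return true
	}
	// Cache miss: the VM may have been created after the last refresh,
	// so refresh once more before declaring it absent.
	c.refreshLocked()
	return c.vms[name]
}

func (c *vmCache) refreshLocked() {
	c.vms = c.list()
	c.fetched = time.Now()
}
```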
A
Unfortunately, it did not qualify as a cherry-pick, so right now it's in the pipeline to get released with the next minor release, which is going to be 1.22. So that will be completely fixed in versions 1.22 and above, and it's already fixed in the external cloud provider, for the version that is currently being used in the templates.
D
A
Yeah, I think that's been commented on. Okay, but maybe I can add a clearer note in the description. It's at the top, but yeah.
A
Yeah, I'll update that after the meeting. Okay, cool: so update this one, and then close this one. Anything else, any other issues that we're tracking anywhere, or that we need to be paying attention to?
A
Okay, I think we're in a pretty good place for the release. Honestly, we're just waiting for Cluster API to release at this point. Also, we just merged the 0.4.0-beta.0 Cluster API dependency yesterday, so the tests are currently running with that beta release.
A
Cool. Maybe we should take a brief look at the TestGrid dashboards, just to make sure. I did see some flakiness in the full end-to-end test suite, but in terms of the v1alpha4 / main branch periodic tests, the regular conformance is looking good.
A
The conformance on Kubernetes main had a bit of an issue here, because of the Viper config thing that got removed from the Kubernetes repo, and I think there are two provisioning flakes here. I think those were quota issues, but I need to double check; those were cases where the cluster didn't provision. In terms of conformance, the recent runs were all passing. And then, I think, the periodic CAPI e2e... yeah, so this is the CAPI e2e test, so some flakes here.
A
So I think we should investigate, especially this machine health checks one, which has been flaky. But I think it's also flaky in CAPI, so it might be just the test itself; we should double check. And then e2e full: this one has also been very flaky.
A
Any... oh, is this one a pull job? Oh, James, are you on the call? Do you know if there's a plan to make this into a periodic job, now that the PR has merged, the Windows upstream tests?
F
Yeah, so I'm working on getting the CI builds. I think we'll turn those into periodic jobs once we're complete there.
A
Okay, sounds good, cool. Any questions on tests, or comments, concerns?
B
A
Yeah, I can also help. Maybe let's start by triaging and seeing what's provisioning and what's not, and then maybe opening issues for the ones that have recurrent flakes that we haven't seen before, and then I can help you investigate those. Okay, thanks. Cool, thanks for bringing that up, Craig. All right.
B
Not exactly related to the release, but I know there was a new CAPI 0.3.19 and stuff. Do we need to get that into our 0.4?
A
Yeah, that's actually a good point; we do. So 0.3.18 has a few different fixes, so we definitely want that, and then 0.3.19 was released shortly after, because there was a small issue in the updates to the conformance test suite in the test framework, the ones dealing with that Viper config being removed: it actually had a missing pointer reference.
A
Okay, this one, yeah. So this has not been running for a while, and I suspect there's something else going on, that it's not just that. I don't know if that's going to be enough to fix it, but yes, we can't even investigate until we at least fix that, because it's not running the conformance with the right flags.
A
We need to get 0.3.19 in and then check if that fixes it, and if not, investigate why it's not provisioning. My guess is that something in 0.3.18 is probably going to fix it, because this is the alpha 3 branch running on Kubernetes master, and that's been failing.
A
Cool, let's move on. So: gen 2 images.
B
I just wanted to ask if we want to start publishing those along with our images. I guess at some point we need to do that; I don't know what the priority on that is.
E
A
Yeah, I think I'm with Craig here that we should probably just switch over at some point. I don't know if publishing both side by side is a good idea in terms of maintenance cost, unless there's a really good reason that some users are going to want gen 1 and some are going to want gen 2. Especially for testing and for, like...
B
A
We should do a quick check: if we were to switch all our images to gen 2 today, is there anything we're doing in the tests, in the CI suite, that would break? Any special VM sizes we're using that wouldn't work, like GPU, for example, things like that. I think maybe that's a good first step, to check that. And then, if we're confident that we're ready to support gen 2 today, maybe we can start working on that.
F
I don't think we've done the work in image-builder for Windows to do gen 2, so...
H
D
Yeah, I was just going to say we should make sure that if someone uses some oddball VM size but uses our future default gen 2 images, they get a nice, clear error so that they know what to do. I would hope so, but you never know with Azure stuff.
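A minimal sketch of what that validation could look like. The helper below is hypothetical; the real check would read the size's capabilities from Azure's resource SKUs API, which reports a "HyperVGenerations" capability such as "V1,V2":

```go
package validate

import (
	"fmt"
	"strings"
)

// supportsGen2 reports whether a VM size can host a generation 2 image,
// based on the "HyperVGenerations" capability (e.g. "V1" or "V1,V2") that
// Azure's resource SKUs API reports for the size.
func supportsGen2(capabilities map[string]string) bool {
	for _, g := range strings.Split(capabilities["HyperVGenerations"], ",") {
		if strings.EqualFold(strings.TrimSpace(g), "V2") {
			return true
		}
	}
	return false
}

// validateVMSize fails early with an actionable message instead of letting
// the deployment error out deep inside Azure.
func validateVMSize(size string, capabilities map[string]string) error {
	if !supportsGen2(capabilities) {
		return fmt.Errorf("vm size %q does not support generation 2 images; choose a V2-capable size or specify a gen 1 image explicitly", size)
	}
	return nil
}
```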
A
What I was going to say earlier is, if we're not going to do this right away, we should at least have some mention of gen 2 in the docs, so that users know they can do this on their own: if they want to publish a gen 2 image, it should just work. And then maybe we should publish at least one for testing. I'm not sure if that makes sense; or we just wait for the cutover, that might be easier. Okay.
E
That's my take; it just seems like an unnecessary expense.
A
Okay, sounds good. All right, let's talk about machine pools. David?
H
Hey, everybody. So the PR that's linked there has landed, thankfully. Thank you for all the reviews and feedback, and I promise I will never put one out that big again. So, quick recap: the feature that landed in there is AzureMachinePoolMachines. AzureMachinePools are now composed of AzureMachinePoolMachines, which have their own state and their own lifecycle.
H
When that one is gone, the new machine pool machine will be created and you'll have three again, and then the final one will be removed from the list, and you'll be left with, say, the two that were there. So that functionality is there, it's ready to go. I hope people give it a try and put some miles on it before we get to the release.
H
There are two items that need to be completed before this is, like, totally awesome. We need to put in (or I'll open up a PR to do) cordon and drain. Right now we are not doing cordon and drain; we are simply just deleting that node. So that's not safe; we'll get cordon and drain in. And then there's also an issue open for documentation.
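For anyone who wants to experiment before that PR lands, here is a rough sketch of the cordon-and-drain step using the upstream kubectl drain package. This is not the CAPZ implementation, just an illustration of the mechanics:

```go
package drainer

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/kubectl/pkg/drain"
)

// CordonAndDrain evicts pods from a node before the backing VM is deleted,
// so workloads move off gracefully instead of disappearing with the instance.
func CordonAndDrain(client kubernetes.Interface, nodeName string) error {
	helper := &drain.Helper{
		Ctx:                 context.TODO(),
		Client:              client,
		Force:               true, // also evict pods without a controller
		IgnoreAllDaemonSets: true, // daemonset pods would just be recreated
		GracePeriodSeconds:  -1,   // respect each pod's own grace period
		Timeout:             5 * time.Minute,
		Out:                 os.Stdout,
		ErrOut:              os.Stderr,
	}

	node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Mark the node unschedulable first, then evict its pods.
	if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
		return fmt.Errorf("cordon %s: %w", nodeName, err)
	}
	return drain.RunNodeDrain(helper, nodeName)
}
```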
A
In terms of next steps: the big thing that's still missing from machine pools today is cluster autoscaler integration. We talked a little bit in the proposal about how to get this back into CAPI at some point. Have your thoughts on that changed, or what's your current evaluation of what needs to be done?
H
Thoughts haven't changed; we're just close to releasing CAPI, so I didn't feel compelled to go open up the CAEP yet. It's definitely something that should go in. I don't see anything that has changed that would lead me to think we shouldn't do it. Do you?
A
No, but I guess if it's going to need a CAEP, that means we're not going to get it until the next minor release, which is not going to be for a while. Although it is experimental, so we might be able to negotiate getting it in earlier, but yeah. I think that's the missing thing before moving machine pools out of experimental, I agree. And then I guess we should also... I don't think it's going to work, but just in case, we should probably look into it.
H
Seems reasonable, especially just to prove the point. Even if we were just going to do it as a POC, to show how this would work, I think that seems like a totally reasonable path.
A
Sure. So, as I think we just said, another alternative to look into is to add a provider contract instead of a different CRD in CAPI, so that we reuse the infra CRD across providers. But no matter what we do, we'll have to get some sort of agreement across providers, because right now we're doing this alone, and we haven't...
A
I mean, we've talked to the other folks who support machine pools, like in CAPA and everything, but we still need some more official agreement on how this is supposed to work, so that we have consistency. Because without consistency, we won't be able to have a single cluster autoscaler approach.
D
E
And it could be, you know, useful to bring it in as an experimental feature in CAPZ, but it really won't move beyond experimental until the CAEP is finalized.
A
B
Yes, so I've had this issue for a little bit of time. Let me give some context. I have this PR that is trying to use...
B
The special thing about this test is that it's the only test that uses a management cluster running on Azure, not on kind: it creates a workload cluster, makes that the management cluster, and then uses that to create a private cluster, because it has to create the private cluster in the same VNet, since it's private. So we had to make the management cluster run on Azure. Without this PR it works, because it's not using NMI, so it's just able to create the cluster.
B
If it has a service principal it works, but when it's using NMI, it's not able to authenticate when it's trying to create the private cluster, because the NMI pod is supposed to be using host network. And when you're using host network...
B
...there's a bug in Calico, with host network plus Calico VXLAN and the kernel version that we're using, that makes DNS not work. If it's not using host network, it works fine; the rest of the cluster works. I have a link to the issue in Calico. I tried the workaround they mentioned; it didn't seem to make it work.
B
I know the kernel version that we have in our images is pretty old; it's like 5.4. This is supposed to work on 5.7 and above, so I don't know why we're using this kernel version and not a newer one, or if that's something we can change.
B
I know Sham had an investigation recently about some other kernel bug, and he tried to regenerate with a newer kernel. The other bug I can't remember now, but it was also related to networking. Anyway, this is the background. As for the fix that is coming in Calico: I updated all the details in the issue, and there's a fix coming, but it sounds like it will land in 3.20, which is coming in the next few weeks, so it's not released yet.
A
To answer the kernel version question: the kernel version is whatever was the latest at the time the image was built, and in this case, I think the tests haven't had an updated image for a while, because we're still using an old Kubernetes version. One of the reasons for that is that, first of all, we should probably update to the latest patch release of 1.19, because we're still on 1.19.7 and I think the latest is 1.19.11, and I think one of the open PRs...
A
I think it was the managed cluster identity one that was doing that; we should probably do that separately and get it in sooner. And then the other thing is, we haven't moved to 1.20/1.21 because there's an issue with kubeadm in 1.20+, where it tries to set the ephemeral storage requirements on the etcd pod, which causes really bad flakes for multi-control-plane clusters, because kubeadm join fails due to a kubelet race condition.
A
That's the thing that Jack Francis brought up at the Cluster API office hours yesterday. So we could publish a new image with the same Kubernetes version, just to get a fresher kernel. We could also add some preKubeadm or postKubeadm command that updates the kernel; that is also a solution in that test, if you want to try it just to see if it would fix your issue.
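A sketch of that suggestion, using the Cluster API kubeadm bootstrap types. The package name to install is an assumption that depends on the base image, and note that a new kernel generally only takes effect after a reboot, so a freshly built image is the cleaner fix:

```go
package config

import (
	bootstrapv1 "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1alpha4"
)

// kernelUpdateSpec runs a kernel package upgrade before kubeadm executes.
// Illustrative only: "linux-azure" is assumed for an Ubuntu base image, and
// the upgraded kernel is only used after the node reboots.
var kernelUpdateSpec = bootstrapv1.KubeadmConfigSpec{
	PreKubeadmCommands: []string{
		"apt-get update",
		"DEBIAN_FRONTEND=noninteractive apt-get install -y linux-azure",
	},
}
```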
A
Okay, I can try that. And then, yeah: Azure Stack HCI. Zac, you want to talk about that? What's the workaround?
I
Oh, I'm sure it was the same: use, like, ethtool to do something, right, more granular. I'm trying to look it up. But yeah, we were working with Tigera...
I
Jocelyn on my team was working with them for a while, but yeah, we were hitting an issue with no DNS resolution for our host-networking pods.
B
The Calico issue says that anything before 5.7 has that problem.
A
B
They have a workaround to work with older kernels, by setting a flag that will kind of skip the problem.
A
Okay, then yeah, let's just try with a newer kernel and see what happens. So you could either update it with a command in cloud-init, or you could also just use a newer Kubernetes version in your PR, just to see if that fixes the test. It might introduce other problems, which we're working on fixing separately, but at least to see...
A
...if that's the solution. I think the latest 1.19 will probably have what you need, but if not, you can try the latest Kubernetes version that was published. Since they all come in batches (all three patches usually get released on the same day, and we build them on the same day), the latest 1.19, 1.20, and 1.21 patches will probably have the same kernel version, because they were built on the same day, so they took whatever was latest.
A
Okay, let me copy that to make sure we don't lose it. Oh, someone already did. Cool, okay.
B
I mean, I don't know; sure, it's up to you. If you want this to happen before the release, we can. I mean, I don't think it's related to the release, so I don't know.
A
Okay, well, that actually brings me to my next point; you reminded me by mentioning the AzureClusterIdentity thing. We talked at some point about multi-tenancy, like having the optional credential variables in alpha 4 and then completely removing them in alpha 5 or whatever comes after. Do we still want to do that? Then we should do it soon. That means updating the cluster components template to have that... actually, wait, do we have multi-tenancy support in the cluster components?
G
B
A
Either it's there or it's not, so okay, I will check that.
A
I think, yeah, so in terms of the testing of that, I was just thinking it'd be nice if we could at least get some test data on AzureClusterIdentity and not hold it for too long. But if you think we can get the private cluster one fixed soon-ish, I'm okay with waiting. It's just, you know, if it's blocked, we might as well get the other ones in there.
B
A
Oh, a note about the workaround: it needs to be executed on each node after the Calico pod is scheduled on said node. Okay, so that will be annoying indeed; that means we can't do it.
I
This is the workaround; you kind of have to go to each node and SSH in or something, because you can't even do, like, postKubeadm commands. Yeah.
I
You know, the timing was when that Calico pod gets scheduled.
A
Oh, okay. I'd be interested to know, Zac, if you can check after the meeting. I imagine this workaround was applied a while ago, while this issue was blocking you, but if indeed it's fixed in newer kernel versions, maybe your workaround isn't needed anymore. It'd be interesting to know if the issue still exists even with newer kernels, or...