A: All right, hello everyone. Today is Thursday, March 18th, and this is the Cluster API Provider Azure office hours. To start, we have a demo from Christian, who's going to show us some stuff with Azure CNI.
B: I will begin with an empty cluster, with just a standard workload cluster manifest generated with clusterctl config, and I will manually make the needed changes in order to make it work with Azure CNI. Then I will show you the changes I needed to make in the CRs to do the same thing, basically, automatically.
B: It restarts... there it is. Perfect. Another thing I need to do, and this comes from the documentation of Azure CNI: I need to set masquerading for the pod CIDR on all machines in the cluster, and that's needed for outgoing traffic, as far as I understood. So I will just do it. And the last step, what we are missing right now, is basically to initialize the CNI, and how we decided to do it. Let me quickly exit the SSH session on the machine.
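The masquerading step just described can be sketched as a single iptables rule. This is only a sketch under assumptions: the pod CIDR value (192.168.0.0/16) is a placeholder and must match the cluster's actual pod CIDR.

```shell
# Sketch: masquerade traffic that leaves the pod CIDR for destinations
# outside it, so outgoing pod traffic uses the node's own IP.
# 192.168.0.0/16 is a placeholder pod CIDR; run as root on every machine.
iptables -t nat -A POSTROUTING -s 192.168.0.0/16 ! -d 192.168.0.0/16 -j MASQUERADE
```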
B: Here we use Calico for that, and with Calico we disable the CNI part, so we use Calico just for IPAM and network policy, basically. I will show you the manifest in a second, but just let me apply it.
B: So, in order for this to work, the routing table has to be assigned to all subnets. But, as you can see, by default this routing table is assigned only to the node subnet and not to the control plane, and that's not good. I don't even know how the tests pass, to be honest; it should have failed. But anyway, one last thing I need to do here is to assign it to another subnet, which is the control plane subnet.
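The manual fix described above can be done with the Azure CLI. A sketch only: every resource name below (my-rg, my-vnet, and so on) is a hypothetical placeholder, not taken from the demo.

```shell
# Sketch: associate the cluster's route table with the control-plane
# subnet as well, not only the node subnet. All names are placeholders.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name my-cluster-controlplane-subnet \
  --route-table my-cluster-node-routetable
```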
B: Okay, as I promised, let's quickly have a look at the Calico manifest that I used. As you can see, there is the IPAM plugin in host-local mode, and that means that it uses the pod CIDR that is specified on every node in Kubernetes. So the controller manager defines a CIDR for every node, and Calico picks up that CIDR to assign IPs to pods. The other thing I want to show you here is that in the calico-node DaemonSet we have the init container that installs the CNI.
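The host-local IPAM mode described above corresponds to an ipam section like the following in the generated CNI config. This is a sketch; the exact conflist Calico renders may differ.

```json
{
  "ipam": {
    "type": "host-local",
    "subnet": "usePodCidr"
  }
}
```

With this setting, the host-local plugin allocates pod IPs out of the per-node podCIDR that the Kubernetes controller manager assigned, rather than out of a Calico IP pool.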
B: So this is not Calico doing CNI, right? It is just used for the initialization. Okay, so how to automate all this? On this screen, let me try to slightly increase the font, but I can't increase it too much. This is a diff, a two-column diff. On the left you have the vanilla manifest created with clusterctl, and on the right...
B: Then I flipped the flag for allocating node CIDRs on the controller manager. Next, here I created a systemd unit to run the iptables command I showed you, and I added a preKubeadmCommands entry just to enable and start the service at boot. And this is, of course, for the control plane.
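Put together, the control-plane changes just described (the controller-manager flag, the systemd unit file, and the preKubeadmCommands entry) might look roughly like the fragment below. This is a sketch, not the demo's exact manifest: the unit name, unit contents, and pod CIDR are assumptions.

```yaml
# Sketch of a KubeadmControlPlane fragment; values are assumptions.
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      controllerManager:
        extraArgs:
          allocate-node-cidrs: "true"   # controller manager assigns per-node pod CIDRs
    files:
      - path: /etc/systemd/system/pod-masquerade.service
        content: |
          [Unit]
          Description=Masquerade outgoing pod traffic
          [Service]
          Type=oneshot
          ExecStart=/usr/sbin/iptables -t nat -A POSTROUTING -s 192.168.0.0/16 ! -d 192.168.0.0/16 -j MASQUERADE
          [Install]
          WantedBy=multi-user.target
    preKubeadmCommands:
      - systemctl enable --now pod-masquerade.service
```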
B: I enabled IP forwarding, again for the control plane, with this flag. I did the same also for the machine deployment, and this is again the same systemd unit, for the machine deployment this time.
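The IP-forwarding flag mentioned for both the control plane and the machine deployment maps, if I understand the demo correctly, to a boolean on the CAPZ machine template. A sketch under that assumption; the name is a placeholder.

```yaml
# Sketch: enable IP forwarding on the VM NICs so routed pod traffic
# is not dropped by the Azure network fabric. Names are placeholders.
kind: AzureMachineTemplate
metadata:
  name: my-cluster-control-plane
spec:
  template:
    spec:
      enableIPForwarding: true
```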
B: So, as you can see, it's a very limited amount of changes. To speed up the process, I already spawned a cluster using the CRs on the right, so the ones with the modifications. Up here again you have the watch on the new cluster.
B: In theory, we can also automate the application of Calico with the CRs themselves. In general we don't need that, because we have our own way of deploying applications on top of Kubernetes clusters, so I didn't take care of that, to be honest. But yeah, there are a ton of ways of doing this, I mean, and they're already there.
B: Honestly, let's start with something: it's difficult to find, at least it was difficult for me to find, any comprehensive documentation about installing Azure CNI. To my understanding, the only thing needed to start was to run iptables to ensure that the routing tables are updated, and basically that's enough. When you install Calico, the Calico template initializes the CNI folder with the needed binaries, and that's enough to make all of this work.
C: I think I had the same question. I didn't think Azure CNI needed route tables to work when everything is on the same network. So, were you able to find the Azure CNI binary? I can't quite see, but are you able to find the azure-cni binary on the node?
B: To be honest, just to step back for a moment, I'm not a super expert. As you've seen, we've used it like this since a long time ago, and we always deployed it like this, and we just thought that was enough. So maybe we are missing some point here.
C: Yeah, so I think what's happening is that you're using Calico's local node IPAM instead of Azure CNI's. So instead of installing all of Calico's IPAM that lives up in the cluster, you're using host-local IPAM, and that's why you need to turn on IP forwarding and also enable those route tables. Whereas I'm pretty sure with Azure CNI...
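The per-node allocation described here can be inspected directly, assuming a working kubeconfig for the workload cluster. With the controller manager allocating node CIDRs, each node carries its own slice of the cluster pod CIDR.

```shell
# Show the pod CIDR the controller manager assigned to each node;
# host-local IPAM hands out pod IPs from these per-node ranges.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR
```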
A: Yeah, that seems right to me. Thank you, James. Because I would expect to see the azure-vnet and azure-vnet-ipam binaries for Azure CNI, if you were using the Azure CNI binaries underneath. Yeah, they...
B: They are not there, yeah. But the networking backend for Calico is disabled, that's 100%.
B
There
is
no
calico
ipip
or
ipx
land.
Nothing
like
that.
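The check being discussed (which CNI binaries are present and which config is active) can be sketched as below, assuming SSH access to a node; the paths are the conventional CNI locations, not confirmed from the demo.

```shell
# On a node: list installed CNI plugin binaries. With this setup you would
# expect calico and host-local, but not azure-vnet or azure-vnet-ipam.
ls /opt/cni/bin

# And show the active CNI configuration that kubelet will use.
cat /etc/cni/net.d/*.conflist
```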
A: Sounds good. Yeah, thanks for the demo, that was very, very clear. Does anyone have any questions?
A: All right, yeah, let's follow up, Christian. I'm very interested in this, so if you want, we can pair or something and try to see what's missing. Okay, all right! Let me share my screen.
A: And go back to the agenda. Still looking pretty empty, so this might be a quick one. But Nader, do you want to talk about v0.5?
D: Yeah, there's a milestone called 5.x, and I was going through it, and, like, 5.0... some of the things seemed like we would want them in the first release of the new 0.5, because, like, changing the...
A: That sounds good. I think, in general, anything that's a breaking change or an API change we should target for 5.0, and then anything else that's just in the next milestone but doesn't have to be in the .0 release can be in 5.x.
A: For example, I think the experimental stuff, we want to take a look at that; it's been a while. Yeah, definitely "redesign the user-facing API", this one. I think that one has a bunch of to-do items in it already, so we just need to go through them and decide which ones we still want to do.
A: The sensitive bootstrap data thing, that's going to go hand in hand with the proposal in Cluster API for node attestation, so I think that's already being handled, but that's probably not going to be in 5.0. Bootstrap failure detection, I think, is going to be, just because I already have a PR open, and then this one also is going to be in there.
A: I think... is there anything else that I'm missing from this list, that anyone is working on? And I'm sure there are other things that aren't even in this milestone right now. I think the parallel reconciliation, I would like to do this soon. It's not something that is related to the API or anything, though, so it doesn't have to be in the .0 release.
D: Do we want to also get in the new thing David was working on, like the machine thing?
A: Which machine thing? I think... oh, I thought that was already in there. Yeah, I think it's this thing, right? Is there anything in here that should not be in here, like the GPU operator? Is that really something that is a 5.0 release blocker?
C: Perhaps changing the branch?
D: Yeah, yeah. So I asked about that, and basically there are steps to do it. I put them in the issue; it's published by Kubernetes. But they said they recommend not doing it if you have too many PRs, because it's going to re-trigger all the jobs on all the PRs at the same time. And we have a lot, we have like 20-something, so we're just waiting for that to go down a bit, and then we can do it.
A: I think, in my opinion, that's more of a 5.x than a 5.0 thing, because we can do it at any time; we don't have to do it right after the release. But we should do it in this next cycle. And then I would like to see the refactor finally done. I don't know what people think, but the only ones that are left are the experimental ones.
A: Oh, this one. I mean... so this... oh, GitHub is crashing on me... this one: the load balancer takes 15 minutes when you have a single control plane. That's a cloud provider bug, and I actually have... I've already merged the PR to fix it in the external cloud provider, and right now I have a PR open to cherry-pick it to the internal cloud provider in kubernetes/kubernetes. That's being blocked by unrelated PR failures, and also waiting for someone to approve the cherry-pick.
A: So it's not in yet. And then, once it's in... I don't know, I'm not sure exactly how the cherry-picks for cloud providers work right now, but I assume it would probably make it into a future Kubernetes version, but wouldn't necessarily be backported to the existing minor releases. So this wouldn't really be something we fix in CAPZ; it's just a matter of having a new Kubernetes version that has the fix.
A: I think so. The only problem is that, because of the nature of the fix... it's not like a small bug fix... well, it is a small bug fix, but it's just that the cache wasn't working properly. So this is adding more API calls to refresh the cache when the data is stale, and the issue with that is that it has an impact by having, you know, more API calls. So I'm not sure it would be acceptable for a backport.
A: Yeah, let's do regular milestone planning, especially in this period coming up to the next release. We want to make sure we're tracking what's in there and what should be in there.
A: All right, anyone else have anything they want to bring up or talk about?