Description
Join Scott and Jay as we look at a way to run a multi-OS (Windows, Linux) workload cluster on VMware Tanzu. This involves having multiple Cluster API MachineDeployments, for Windows as well as Linux nodes... something not explicitly supported using the tanzu CLI, but easy enough to hack together if you're feeling adventurous.
B
C
Hey everyone, I'm Scott Rosenberg. I'm the practice leader for cloud and automation at TeraSky, a long-time TKG hacker, and glad to be here.
A
Let's see... so, okay, I can give a little intro to how we got here. So let me see here... oh, actually, it won't. Let me share my screen... so I guess, Scott, you could just take it away, because I'm having Chrome issues and may have to reload anyways.
A
Oh wait. No, that's not it. It's... where is it... it's the multi-OS issue that you filed, but it's not open.
C
It was on the PR... yeah, I'll just search "pr" and "mixed windows"... yeah, this one. So yeah, this started initially when I was working with Jay and Ameem to try and figure out how we could add support for mixed OSes, Windows and Linux, in a single Tanzu cluster. After I saw that Windows was being added, I wanted to get mixed cluster support and put together a PR which basically added in this support. In the end, there was a conversation being had where a lot of the core maintainers were saying there's a belief that mixed clusters are an anti-pattern, because of how easy it is to create clusters today in Cluster API and TKG.
C
With how easy it's become, the thinking is that we should have more purpose-built clusters. Also, with the idea of ClusterClass and Cluster API coming into Tanzu, it's going to become much easier to do things like this, and you won't need a lot of ytt hacking to get it to work. But in the end, this PR was closed and wasn't merged. Then TKG 1.5 came out and Windows was added.
C
I decided that even if my PR wasn't accepted, for myself I'm going to do mixed clusters. With some additional things I hadn't thought about in the PR, or that weren't in framework at the time, there was some more hackery that needed to be done, but I have mixed clusters working now in TKG. And today we're going to talk about the idea of mixed clusters, where it kind of comes from, and how we can actually hack framework to make it work.
A
Okay, cool. Yeah, so what have you got for us today? Should we... you know, 1.6 is out, Antrea 1.6 is out, and so I'm thinking, if it'll let me, I'll pull up that announcement.
A
Then we can... let's see here. Let me see if it lets me... oh yeah, you could pull it up, that's even easier. 1.6... so yeah, the Antrea 1.6 release is out. We should sort of just give that a little hand wave and then we should get into it, I think. And let's see, I'm interested to see what your end-to-end solution looks like for this, yeah.
C
Exactly. So I think one of the great ones is that Egress moved from alpha to beta in 1.6, so we can now get the egress IP pools and all of that; it's now beta, so enabled by default. They also made some really cool additions to the Antrea IPAM, where we can now get continuous IPs for StatefulSets, which is pretty awesome, and some other really cool things around multi-cluster support.
C
There's some really awesome stuff. This is a huge release and definitely worth looking at.
A
Oh, and they already added the skip CNI binaries, so that was one I asked for, three-four-five-four. That allows you to do things like, for example... Antrea bootstraps its own /opt/cni/bin binaries, but you might want to have different ones. For example, you might want a different version of Whereabouts, or DHCP for IPAM, advanced IPAM for the telco folks, or whatever. So now we can skip those. That'll be useful for some folks.
C
Moving visibility to Grafana as well, or adding visibility of the flow collector into Grafana with ClickHouse, is also pretty nice.
C
So that's another great addition that was added here; very excited about this release, for sure. Cool. I think one of the coolest ones is actually this idea of a service account selector in Antrea-native policies. I hadn't looked at this before, but it's a really cool idea: in Antrea network policies we can now actually select pods based off of the service account they're using.
C
So if you think about it, if a user has access to a specific service account, you could create network policies allowing traffic, let's say, between all pods across namespaces that utilize service accounts with specific names. Some really cool things, like tying RBAC in with network policies.
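As a rough sketch of the service-account selector idea described here: an Antrea-native policy can reference service accounts directly in its appliedTo and peer sections. The API version, tier, and names below are illustrative and should be checked against your Antrea release.

```yaml
# Illustrative sketch: allow pods running as the "frontend" service
# account (namespace "web") to reach pods running as the "backend"
# service account (namespace "prod"), regardless of pod labels.
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: allow-frontend-sa-to-backend-sa
spec:
  priority: 5
  tier: application
  appliedTo:
    - serviceAccount:
        name: backend
        namespace: prod
  ingress:
    - action: Allow
      from:
        - serviceAccount:
            name: frontend
            namespace: web
```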
A
Yeah, okay, okay, cool. So yeah, this is a big release then. And they've got more stuff: yesterday in the community meeting they did a thing about adding ICMP policies to Antrea, which was kind of interesting, how they're going to change the data structures around for how they do it. And I was kind of wondering, I don't know... what's the big use case, Jun Jin, for ICMP network policies? I didn't fully get it from the presentation; maybe I wasn't paying attention to the whole thing. But who uses...?
C
So, the way that... here, I can show this here, actually, because this will be easier. If we look in general at the build-out of Tanzu itself, the way the file layout happens on our machines is that under your home directory, in .config/tanzu/tkg/providers, we have this file called config_default.yaml, which basically configures all of the values that can be utilized within a cluster configuration.
C
We needed to configure certain controllers to run on control plane nodes and to tolerate those taints, because they can't run on Windows. There are a lot of things that need to change when we move to an only-Windows worker node cluster that required this type of setting to be made. And one of the things we talked about in one of the previous sessions, on the TKG customization show we did, was the multi-machine-deployment overlays that I built back in the day, which allow us to deploy MachineDeployments of different types into a cluster. And the idea here was: well, can't we do the same thing, where we would have a MachineDeployment for Windows and a MachineDeployment for Linux in the same cluster? Then we can get the best of all worlds. We can have things running on the appropriate OS and, at the same time, get service discovery and all of the benefits of running within a single cluster.
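Pinning a controller onto the Linux control plane nodes, as described above, is the usual nodeSelector-plus-toleration pattern. A minimal pod spec fragment, using the standard well-known labels and taints:

```yaml
# Fragment of a pod/deployment spec: keep the controller off Windows
# workers and let it schedule onto tainted control plane nodes.
spec:
  nodeSelector:
    kubernetes.io/os: linux
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master   # older taint name on pre-1.24 clusters
      operator: Exists
      effect: NoSchedule
```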
C
You know, very easily. So in order to do that, I basically needed to hack around a few files and add in a few additional data values. The number one thing that changed is under infrastructure-vsphere/ytt, where we have the mixed-cluster data values. So the first value is that we needed to know that this is a mixed cluster: just like the configuration needs to know whether it's a Windows cluster or not, we needed to know whether this is a mixed cluster or not.
C
Yeah, because I didn't want to hack all of the files. There are about 50 references to the "is Windows cluster" value across all of the tanzu-framework ytt files, and I didn't want to need to hack them all. So I just set "is Windows cluster" to true, and then "is mixed cluster" to true as well, and we'll add a Linux machine deployment.
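In cluster-config terms, the two flags look roughly like this. IS_WINDOWS_WORKLOAD_CLUSTER is the stock TKG variable; the mixed-cluster flag is the custom value these overlays introduce, so its exact name depends on the repo:

```yaml
# Stock TKG flag: render the Windows flavor of the templates.
IS_WINDOWS_WORKLOAD_CLUSTER: "true"
# Custom flag (name approximate): also stamp out a Linux
# MachineDeployment alongside the Windows one.
IS_MIX_CLUSTER: "true"
```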
C
Yeah, so that's awesome; I have someone else to blame, sweet. But beyond that, we just added in this additional data value, which, once it's configured anywhere within the folder structure of .config/tanzu/tkg/providers...
C
...we can utilize for our clusters in our cluster configs, by just adding any additional values here. So I've added in the is-mixed-cluster value, which by default is false, and then we also added in the ability to set things like CPU, disk size, memory size, machine count, autoscaler settings, and the datastores and networks, which you may want to differ between your Windows and Linux machine deployments.
C
So basically, what this does... the end goal will be that we'll have our KubeadmControlPlane, so our control plane nodes will be Linux. We'll have a machine deployment that is Windows-based, which will be md-0, the default machine deployment, and then md-1, the additional one we're adding, is going to be a standard Linux-based machine deployment.
A
C
So basically, we're just adding on top of what we get out of the box, not really changing anything, just adding on top of it. The overlay itself is actually a very simple overlay; there's not much to it. It's only 93 lines, and in general what we're doing here is just adding three new objects: a MachineDeployment, a VSphereMachineTemplate, and a KubeadmConfigTemplate for Linux.
C
These are basically copy-paste from what exists out of the box in a Linux cluster's configuration, under... where would that be... that's, I think, here, yeah. So it's like taking the VSphereMachineTemplate, taking the MachineDeployment, taking the KubeadmConfigTemplate, just taking these three objects and putting them into a separate overlay file. So it's a pretty simple one, with some defaulting and some fun ytt magic.
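Structurally, the overlay described here is just three extra documents guarded by the custom flag. A skeleton of that shape (names and fields abbreviated; this is not the actual 93-line file):

```yaml
#@ load("@ytt:data", "data")
#@ if data.values.IS_MIX_CLUSTER:
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: #@ "{}-md-1".format(data.values.CLUSTER_NAME)
spec: {} #! Linux worker pool, cloned from the stock Linux template
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  name: #@ "{}-md-1".format(data.values.CLUSTER_NAME)
spec: {} #! Linux VM sizing, fed by the extra CPU/memory/disk data values
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: #@ "{}-md-1".format(data.values.CLUSTER_NAME)
spec: {} #! standard Linux kubeadm join configuration
#@ end
```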
C
But it's relatively simple, and it runs with the conditional "if data values is mixed cluster". So if we have is-mixed-cluster set to true, we create these three additional resources, and this works. So just by doing this, without really hacking anything else, things almost work... except they don't. There are four places where the conditionals set in framework were too limiting for us to add this capability, so we had to change some if conditionals within the files to make this actually work.
C
So basically, where things needed to be changed: number one was actually Antrea-related. Under the ytt 03-customizations Windows file, we register the Antrea cleanup script and we prevent Windows updates, two things that are being done automatically for us in Windows clusters, because you don't really want Windows Update running on a worker node in Kubernetes. And so, originally, what we can see here is that the original line was... is this big enough on the screen, by the way? Should I zoom in?
C
I can see it all right. Awesome. So what we can see is that the original line was basically overlay-matching on KubeadmConfigTemplates: just take the KubeadmConfigTemplate and overlay onto it to add these PowerShell commands in the post-kubeadm commands. The issue is that we don't want this running on a Linux VM.
C
It's supposed to only run on Windows, so I've scoped it down even further and said: only if this is a KubeadmConfigTemplate with the name "{cluster name}-md-0-windows-containerd". Only in that case will this apply, and that then allows me to have Linux KubeadmConfigTemplates that won't be manipulated by this file.
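The scoping change amounts to tightening the ytt match from "any KubeadmConfigTemplate" down to the one Windows template, keyed off its generated name. Roughly (the template name is taken from the transcript; verify it against the real file):

```yaml
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")

#! Before: matched every KubeadmConfigTemplate, so the PowerShell
#! postKubeadmCommands leaked onto Linux nodes as well.
#! After: match only the Windows template by name.
#@overlay/match by=overlay.subset({"kind": "KubeadmConfigTemplate", "metadata": {"name": "{}-md-0-windows-containerd".format(data.values.CLUSTER_NAME)}})
---
spec: {} #! Windows-only PowerShell additions go here
```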
C
So that was change number one, and a similar change needed to happen with the register-antrea-cleanup part, which again matched just on KubeadmConfigTemplate. The script we're registering here doesn't work on Linux, so again we just scope down the overlay matching so that it only applies to our Windows machine deployment, right. So those are...
C
Those are the two changes that are really critical, because otherwise our cluster won't work. But there are an additional few changes I made, because certain things are disabled when I run a Windows cluster in TKG. Number one of those is the secretgen-controller: secretgen is not installed on Windows clusters, so instead of disabling it, now I can actually enable it.
A
C
Yeah, so secretgen is used heavily in Tanzu Application Platform. Tanzu Mission Control integrates with secretgen now, to allow you to create secrets from Tanzu Mission Control, push them to your clusters, and manage them centrally. Tanzu RabbitMQ utilizes the secretgen-controller.
A
C
It's more in the wider Tanzu ecosystem that this is being utilized. Okay, so the initial value that we had was: if the provider type in data values is not tkg-service-vsphere, so not running on vSphere with Tanzu, and the secretgen-controller is enabled, and it's not a Windows workload cluster. So if it's not vSphere with Tanzu, we have the secretgen-controller enabled (which is the default value), and it's not Windows, then add secretgen. But we needed to make this a bit more complex of a conditional.
C
So I just removed the Windows conditional, and then I added another condition saying: if it's a Windows workload cluster and it's a mixed cluster, or it's not a Windows cluster. Okay? So if it's mixed, awesome; if it's just Windows, no; and if it's just Linux, also awesome, yeah.
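Expressed as a ytt conditional, the reworked secretgen gate is roughly this (the variable names are approximated from the transcript, not the exact framework identifiers):

```yaml
#@ load("@ytt:data", "data")
#! Before: secretgen-controller was skipped on any Windows cluster.
#! After: skip it only when the cluster is Windows-only.
#@ if data.values.PROVIDER_TYPE != "tkg-service-vsphere" and data.values.ENABLE_SECRETGEN_CONTROLLER and ((data.values.IS_WINDOWS_WORKLOAD_CLUSTER and data.values.IS_MIX_CLUSTER) or not data.values.IS_WINDOWS_WORKLOAD_CLUSTER):
---
#! ... secretgen-controller resources ...
#@ end
```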
A
C
...an "or" conditional here, and that's basically the change we needed to make around secretgen in order to make it actually work.
A
C
C
A
Real quick, I want to look at Jun Jin's answer. So Jun Jin said, on our earlier question about what this ICMP thing is: people from upstream, and evidently customers, have asked for ICMP network policies, and I guess people use ICMP for things other than ping. I just don't know what that is. I don't know what people use ICMP for; I don't even know what ICMP stands for, okay. And then Ameem...
A
I asked you a question, if you're around, which is: what does that Antrea cleanup do? I think what it does is make sure you clean up OVS on reboots, but if you're around you can let us know. Anyways, go ahead, keep going, keep going, yeah.
C
Yeah, so basically the other change that we wanted to make was around CSI, which is currently not supported. vSphere CSI is not supported on Windows, at least in the versions that are supplied in TKG. It can work with csi-proxy; there are all these, you know, kind of hackery things that were added in. I think there's alpha support now in the latest version of the vSphere CSI driver, but TKG does not support CSI on Windows at this point, so CSI is disabled.
C
We don't actually install a CSI provider on Windows clusters in TKG, but the second I have a mixed cluster, there's no reason I shouldn't have CSI capabilities for my Linux VMs. So this is not a blocking issue; just like with secretgen, I could have not made these changes and just gotten a limited Tanzu cluster, with limited capabilities, out of the box.
C
Instead, I decided to add in the CSI by, again, just changing the conditional: from "if it's vSphere and not a Windows cluster" to "if it's vSphere, and either it's a Windows workload cluster and a mixed cluster, or it's not a Windows cluster".
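The CSI gate gets the same treatment; as a sketch, with the same caveat that the variable names are approximations:

```yaml
#@ load("@ytt:data", "data")
#! Before: vSphere CSI was installed only on non-Windows vSphere clusters.
#! After: also install it on mixed clusters, so the Linux
#! MachineDeployment gets persistent volume support.
#@ if data.values.PROVIDER_TYPE == "vsphere" and ((data.values.IS_WINDOWS_WORKLOAD_CLUSTER and data.values.IS_MIX_CLUSTER) or not data.values.IS_WINDOWS_WORKLOAD_CLUSTER):
---
#! ... vsphere-csi package resources ...
#@ end
```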
C
So basically, that's just adding in additional conditions here that allow us to say that in a mixed cluster, I want the CSI to work. And those are all the changes that actually needed to be made. All of this is up here on my GitHub repo, in the tkgm-customizations, exactly what changes need to be made, along with some examples. And to show this here...
C
What I actually have... let's see here... it'll be easier to show here: "tanzu cluster list", and we can see I have a cluster here called mix-cls01 with five worker nodes. And what I can actually do is, let's see, kubectl...
C
Yeah, we built this all out, which is an admin VM. We built one for TCE, for TKG, and for EKS Anywhere. Let me just log in here... oh, IDC stuff, that's cool, awesome. And then, yeah, so this is an OVA, basically, that you can deploy to vSphere.
C
It then has within it all of the CLIs that should exist on a machine you work with Kubernetes from: things like helm, things like kustomize, all the Carvel toolset. The tanzu CLI and kubectl are here as well, the Pinniped CLI is here, all of these CLIs, as well as a lot of kubectl plugins.
C
Exactly. I mean, it has Terraform, it has Ansible, it has Python. It has basically anything that you may want or need when you're dealing with Kubernetes or anything in the ecosystem around it. And we have a TCE one that installs the TCE version of the tanzu CLI.
C
We have a TKG one as well, and an EKS Anywhere one, and we're actually just in the final steps of running some scans on it and making sure there's nothing, you know, problematic, and then we're going to release this open source as well. That should hopefully be very soon, but it's a really easy thing, and it's completely automated with Ansible and Packer.
C
So TCE 0.11 was released yesterday, I believe, or two days ago, and today we built the OVA for it, with about one minute of just running the Ansible playbook. It took, you know, an hour to build with Packer and Ansible and everything, but it gets pushed up to a content library in an S3 bucket.
C
A
C
In terms of the framework capabilities and all of that, so yeah. And so, as we can see here, in this cluster we actually have our control plane nodes, we have our Windows worker node, and we also have our Linux worker nodes, all configured for us in a single cluster. And if I actually look here at my examples, we can see here the... it's not there, actually. This will be easier to show in VS Code.
C
C
So this is just VS Code in the browser, running on that SSH host, so you can access it from anywhere. You don't need to SSH into the node or anything like that. And yeah, this is that same VM, just from a VS Code perspective, which is really fun to play with. But so, if I look here at cluster-configs, this is the cluster configuration, basically, that we built out. So: regular cluster settings, control plane settings, and then these are the two values that we add for mixed clusters, set to true. From that point on, we have our Windows node settings. I've added the capability to add worker labels, so that we can also target things with specific labels if we wanted, for our Windows nodes and our Linux nodes. So I added that here, and added one for Linux as well. And this is the settings for our Linux nodes. The reason we added in the additional values is because you don't want the same resources for a Linux VM as you do for a Windows VM; you definitely need a bit more for Windows VMs in Kubernetes, so we allow you to set separate resources for each of them. This works perfectly with the AVI control plane HA provider, unlike Windows workload clusters. And one of the nice things with this, because I have the Linux worker nodes: for those that have dealt with Windows clusters in TKG 1.5, there are some workarounds you need to do after deploying the cluster, because AKO, the AVI Kubernetes Operator, as well as Pinniped, I believe, both have an issue where they don't tolerate the taint of the control plane nodes, and they have to run on Linux.
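Put together, a mixed-cluster config of the kind shown on screen might look something like this. The sizing and label variables for the two worker pools are the custom ones these overlays introduce, so treat every name outside the stock TKG ones as approximate:

```yaml
CLUSTER_NAME: mix-cls01
CLUSTER_PLAN: dev
IS_WINDOWS_WORKLOAD_CLUSTER: "true"   # stock TKG flag
IS_MIX_CLUSTER: "true"                # custom mixed-cluster flag

# Windows MachineDeployment (md-0): Windows VMs need more headroom.
WORKER_MACHINE_COUNT: 1
VSPHERE_WORKER_NUM_CPUS: 4
VSPHERE_WORKER_MEM_MIB: 8192
VSPHERE_WORKER_DISK_GIB: 80
WINDOWS_WORKER_LABELS: "os=windows"   # custom

# Linux MachineDeployment (md-1): custom values added by the overlays.
LINUX_WORKER_MACHINE_COUNT: 4
LINUX_WORKER_NUM_CPUS: 2
LINUX_WORKER_MEM_MIB: 4096
LINUX_WORKER_DISK_GIB: 40
LINUX_WORKER_LABELS: "os=linux"       # custom
```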
C
So instead, because we have Linux worker nodes, they just run there. And the interesting thing is that actually, the first distribution from VMware that supported Windows was TKGI, or what was previously called PKS, and in PKS there is actually no way to create an all-Windows cluster.
A
C
C
You could untaint the control plane, but you don't necessarily want to do that, especially if it's going to be a large cluster, in which case you're just going to be putting a lot of burden on the control plane nodes. And someone who has kubectl access and runs an nginx pod by mistake, instead of an IIS pod, is going to start taking resources from etcd, the API server, kube-scheduler, and the other components that you want to have their dedicated resources.
C
...that argument, exactly. And etcd should be on its own nodes. That's not supported right now in Cluster API, but we should have the ability to have separate etcd nodes, just due to how critical it is.
A
A
C
Awesome, it's a great excuse. But yeah, so basically that was the only change we needed to make, which is really awesome to think about: again, because TKGm is so customizable using ytt (down the road, ClusterClass), having all these customization capabilities really allows that, even if the product itself doesn't support mixed clusters, with about eight lines of changes in if conditionals, which is pretty simple to do, and adding in three resources, we can have mixed clusters with autoscaling and anything that we need.
C
We can really make it happen out of the box, and that's what's so amazing here, and what makes TKGm, I think, such an interesting distribution and such an awesome tool. And I even took it further in my other environment: if I go here and run kubectl... and "tanzu package installed list -A", and look at the packages I've installed on this mixed cluster...
C
Okay, I have Tanzu Application Platform installed, deploying Windows apps and Linux apps for me. So we're actually able to even install Tanzu Application Platform, the most, let's say, cutting-edge platform running in the Tanzu portfolio, which allows us to do some amazing things, all running on a mixed cluster. And because...
A
C
Because... this has things like Backstage for its UI, it's running with Tekton, all of this. We couldn't run this on Windows, but we get all those capabilities running on Linux. And the applications we build... because Tanzu Build Service, and really the back-end technologies of kpack and Cloud Native Buildpacks, which build our images, support building Windows images.
C
Here you can see that I have this demo Windows application that's been going through a few builds here. I've got my demo app up and running with .NET, and I've also got my Linux containers running here, and everything working with Tanzu Application Platform, with just a custom supply chain, basically a custom pipeline.
A
A
C
Yeah, so there, I'm in my container, if you'd like to play around with the .NET application... from my container that I just exec'd into, we've got Windows running here, right alongside our Linux containers. And, to go back to the question on whether this is an anti-pattern or not: I think that as the end goal, it may be, right?
C
We have this idea of multi-cluster, and how easy it is to bring up clusters, but in the end it can only be an anti-pattern if there is a solution to the challenges out there. Because there's a need to be able to have communication between Windows front ends and Linux back ends, or Windows back ends and Linux front ends; we're in that world.
A
So I'm wondering: are the Antrea folks here thinking that the actual solution, one alternative to this, would be to start to leverage those... like, you would create a Tanzu Windows cluster, and then you create a Tanzu Linux cluster, and then you create a service bridge, like for the Prometheus example, right? Like you...
D
A
C
C
A
C
Well, and it's really... you had mentioned the ICMP thing that came up in the Antrea community meeting, which was an awesome demo of, you know, where that's going. I think the other really interesting thing that came up at the Antrea community meeting was the data path proposal for multi-cluster. Basically, today, the multi-cluster support requires you to have routable pods.
C
In order for this to work, you need to have the ability, in the underlay network, for pods between clusters to be able to communicate with one another; otherwise it doesn't really work. And there's an idea of now adding a data path API to the multi-cluster support in Antrea.
C
That will allow us to basically have something like a gateway node in each cluster, so that traffic will be able to actually traverse through Geneve tunnels between Kubernetes clusters. But even the multi-cluster support that we have today, without the data path, doesn't support Windows clusters. A lot of the really cool functionalities, like the egress functionality in Antrea, don't support Windows yet. I'm sure those things are coming, hopefully, but they aren't there yet.
C
C
C
Yeah, so that's, like, one of the challenges that we need to deal with, right? And, talking about egress, that's another thing: currently, today, we have a version of Antrea in TKG that can support egress, but it's not actually exposed as a configuration value in the Antrea package. So you can't configure egress by default in TKG today. I actually have an overlay that I've built for that as well, and it works perfectly, and it works on the mixed clusters as well. Yeah, and egress...
C
...is in 1.6, so once 1.6 gets bumped in, unless TKG changes the defaults, it will be enabled by default. But I mean, there are things that are enabled now.
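The egress overlay mentioned here ultimately flips one feature gate in the Antrea agent and controller configuration; conceptually, the knob is this (upstream-style config; the TKG package wraps it differently):

```yaml
# Fragment of antrea-agent.conf / antrea-controller.conf:
featureGates:
  Egress: true   # alpha in earlier releases; beta (default on) as of Antrea 1.6
```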
A
C
A lot to celebrate, exactly, right. And so, it's really cool: if I do "get pods"... I think I have here... the really nice thing here is that I can have IIS and Apache... I already got the same thing. And just the idea of a mixed cluster, I think, until the tooling is there, right... there's great tooling out there that can do service mesh, as an example.
C
Consul from HashiCorp... there are great multi-cluster service meshes. That's another way of dealing with the multi-cluster situation: using a multi-cluster service mesh. You can use commercial offerings, you can use open source offerings. Personally, I've had the best experience using tools like either Tanzu Service Mesh or Consul, which is great, and Consul supports Windows.
A
C
Nomad still exists, I guess? It does. I don't know, I haven't dealt with Nomad in a while. I used to deal with Nomad a lot. It's actually a very cool technology, and I think it just never caught on, but it's really cool, because it's not just containers, right? It's a full orchestrator for whatever you want it to orchestrate.
C
Yeah, okay, yeah, Nomad is great. It's really easy to bring up, also; it's a single binary. I mean, it's so scalable; it's a really cool tool. It just never caught on like Kubernetes did, and, you know, we see the fact that Kubernetes has become the standard in this world. And I think the real reason that I got into this idea of a mixed cluster was really actually around...
C
I think, you know, I always like to look at things by taking things from different disciplines in my life and bringing them together, and when I read religious writings, how Jewish law was actually created, there's a famous rule that's brought down a lot of times. It says, when they're trying to figure out what the law is, the rabbis, back 2,000 years ago, said: go out and see what the people are actually doing.
C
C
Right, I like that rule, Scott, it's a great rule, right? And there's a lot of truth to that. And I think one of the things that we realize, when I'm out in the consultancy world, going and talking with customers every day, and seeing different environments and different needs, is that we see two things happening at once. On one hand, everyone wants to move to cloud native: move to microservices, move to Kubernetes. That's all great.
C
On the other hand, there are .NET houses running on Windows, using .NET Framework, and they're slowly moving over to .NET Core. And, to tell you... I do not think, personally (and there are people both watching this and on here with me right now that may agree with me or not), I don't think that Windows on Kubernetes is the end goal. I think the end goal is Linux.
C
But Windows on Kubernetes is a necessity that we need to make sure works very well, because there are things that are going to continue running on Windows. I still think Linux is more, let's say, native to the Kubernetes world, but Windows is a necessity, and we need to make that work well. I mean, and luckily we have people like Jay, Stewart, and Ameem that are helping that happen.
A
C
Exactly, there are the needs to do it, right? There are the use cases, and the biggest use case that we've seen is that people want to move to Kubernetes and gain the benefits, even with the 1.25-gigabyte tiny hello-world application, basically the smallest you can build with a Windows container running on Windows.
C
C
They keep it in that framework, because breaking up into services is an easier task: breaking up my application can be easier than rewriting my whole application.
C
C
I can move microservice by microservice over to Linux, increase the size of that machine deployment, shrink the size of my Windows machine deployment, or the other way around, whatever it ends up being. But we can play around with things within the same cluster and make everything just work, really simply and in a transparent way. And that's where this need for the idea of a mixed cluster has come up from, you know, our users and our customers.
A
A
C
Exactly. Listen, I'm still relatively young, but I started my career in mainframe. That's where I started my career, in mainframe, right? Mainframe was "obsolete" when I started working on mainframe, and...
C
When you start where I started dealing with technology, you don't really get the choice of where to work. When you start working in the army, you know, you're put where you're put.
A
C
Working on weapons... well, you know, there are things that need to be done. But so, when you think about it, mainframes still exist today, right? And everyone talked about mainframes being obsolete. Absolutely not; they aren't. Mainframes still exist and will continue to exist for years on years from now. Bare metal servers and not VMs: people said VMs are going to replace everything. No, they aren't. VMs are amazing, and they may replace 90, 95 percent, but legacy is still here to stay. There are use cases that the new technologies don't solve.
C
Not everything is meant to run in Kubernetes; there are things that should still run in virtual machines, right? Everything has its place, and it's the same thing with Windows and Linux. Even if we say that Kubernetes-native means running on Linux, there are use cases that will continue, 10 years from now, of people needing Windows containers and Windows technology, specifically in Kubernetes.
C
Exactly, and that's where this becomes really interesting, right? The idea of: how do we make this, you know, a single control plane? And I think that, you know, the idea of separate clusters will work when things like KubeFed, right, Kubernetes Cluster Federation, probably in version four, five, six, seven, eight, actually become a viable solution. The idea is great; it's just not there yet in v1 or v2.
A
I get the feeling... what's going on with the federated stuff? Because it's been around for a while, and I've had friends that were working on Kubernetes federation, you know, five years ago. People were doing this, like, within months of Kubernetes coming out; they had that working group for it. And I think that the problem they keep hitting is that there's no economic incentive for any company to really make it work, really.
A
Well, that's what it feels like, right? Like, if you're Amazon, why do you want... like, you just want... I don't know, like, you just want people to have some steady state of infrastructure churn or something, I guess? I don't know. That's the vibe I get, right? Like, no company...
C
A
C
A
A
C
But there's this idea that we can basically attach additional clusters into kcp, and then, from a single API, basically federate out to these additional clusters, or the other way around, right? It's actually, like, two-way syncing. There are some really cool things here, and so...
A
kcp is... you use kcp as a mirror for all your resources, and then you have a whole bunch of clusters underneath it, and then you can access it... you can facade into it through kcp. But you can have a scheduler that reads what's in kcp, and that scheduler could be running somewhere different, right?
C
A
C
A
C
I know that Fabrizio built a prototype of doing it with clusterctl, you know, upstream for Cluster API, which I think makes sense, and then the next step would be for framework, because it's a really cool idea. And then you don't need a Docker engine, right?
A
A
C
And I don't think it's just for testing, though, right? Because even most controllers today, you look in their logs and it says running with the in-cluster flag, right, or "running in in-cluster mode". Because almost all controllers, if you're building them with kubebuilder, can run as a binary outside of the cluster, on your machine. So you can do the same thing: just run out-of-cluster from kcp, just register that controller into kcp, and it'll work.
C
C
Anything that wants to have that, you know, type of an API service can utilize kcp within their own tooling, and that's where it becomes really cool. And this is the idea, Jay, that I think will make something like mixed clusters not necessary, right? This idea of the transparent multi-cluster topology, where we run kcp as a control plane in front of your physical compute clusters for workloads, and let kcp determine how to schedule your workloads to the physical compute clusters.
C
If you ran kcp ahead of a Windows cluster and a Linux cluster, it could schedule your containers accordingly, on the right cluster, and then with something like the Antrea multi-cluster data path, when that comes in, and when that supports Windows and all of these things, you could actually get separate clusters with these capabilities.
A
Yeah, and another thing, I guess, Antonin, what about this one, right, from an Antrea perspective? So, Scott's example, and then the other one that I'm thinking of is... you know, we had that whole thing with the Antrea controller extending the API server, and that's kind of a confusing model. Sometimes Prometheus does that, and, you know, I think Prometheus does it like... I've seen this with things before, where you extend the API server, and then you crash, and that crashes the API.
A
C
A
Kapp extends... kapp extends it, like, in a way that it doesn't... Antrea just consumes the API server as a library; I'm not sure kcp would bring anything.
D
A
So it ext... okay, so Antonin's saying that it's vendoring kcp instead of consuming the API server. So, I mean, I guess at the end of the day, people need... you need CRDs for Antrea, for sure, because I do... I do see somebody... sometimes you need to run a SaaS, and there's an existing framework you have to adopt that doesn't allow you to run a full cluster.
A
I think you do need to extend the API, so you do need to bootstrap and consume the API server and then install your own CRDs anyways. So in that case, I see what you're saying, Antonin, but yeah. So yeah, kapp is a... kapp is a tricky one, because it's, like, stateless. It's really confusing, like, if you kill... there's a bunch of endpoints when you start up kapp that are now available, and then they go away, and that always confuses me, right? Yeah, yeah.
C
A
A
A
A
C
C
Cool thing. Anyways, yeah, so this is, in general... anyone that wants to play around with this, it's up on GitHub. I haven't found any issues with it yet. I'm sure there are some, because I haven't battle-tested it enough, but the autoscaler works for Windows clusters, the autoscaler works for Linux clusters and mixed clusters, and you get all the capabilities of TKG with just a few edits. So, yeah.
D
A
C
A
Support... yeah, I think... didn't Antrea just add some new thing where you could just run it anywhere without a... thanks, Stewie, yeah. Thanks, Stewie; we never see you on the show, it's good to see you, man. So yeah, thanks, Scott, we'll catch up later. Thanks, everybody, for coming.