Description
Join @jayunit100, @luthermonson, and maybe a few other surprise guests this Wednesday for another episode where we'll explore parts of the broader Antrea story. We'll dig into Antrea on GKE, the Antrea Agent's startup on Windows, and look at whether or not we can install it on K3s as well.
- Luther is here again!
- K3s demo
- Test NetworkPolicies
- https://github.com/kubernetes/enhancements/pull/2975
- https://github.com/rancher/rke2/issues/2201
- https://github.com/kubernetes-sigs/sig-windows-dev-tools/blob/master/forked/0-antrea.ps1
A: What's up, Matt, where are you from? You're coming from Massachusetts, because you live in the same state as I do. Good, I'm glad you can hear me. And let's see, Luther's here from Rancher. We're going to look at K3s, we're going to look at Antrea running on different platforms, talk about different CNIs. Cody can't stay for the whole show, so Cody... Cody, hi. Cody, hi, Luke, yeah.
C: First off, congratulations on how far Antrea continues to push the boundaries. It's been really fun to watch, and really fun to reach you from a different place.
C: But now I'm over at Cisco, and I'm getting to, you know, play in an arena where we have some interesting things as we go to multi-cloud and to a lot deeper fabric integration: taking a lot of the things that are happening with containers and container networking and orchestrating that all the way down to the fabric. And so that's what I've been working on.
C: If any of you have Cisco infrastructure and are running Kubernetes on it, you may or may not have heard about a CNI called the ACI CNI, which basically extended Cisco ACI into Kubernetes. But Kubernetes being such a large community, it's all about choice and about interoperability.
C: We've really tried to pivot the position that we're taking at Cisco toward being able to leverage the value of our fabric and what we can do for Kubernetes. So I've been working extensively on a new vision and a new direction for how we do container networking at Cisco. Specifically, we're building, for lack of an official product name yet, an operator framework that allows our fabric to respond to intent that's specified anywhere along the stack.
C: So let's imagine a typical Kubernetes installation. At the bottom you've got a fabric, then some type of infrastructure, whether it be bare metal or a VM infrastructure, and then you've got different flavors of the distribution. You've got your CNI, maybe you've got a service mesh installed, maybe you're using something like Multus to containerize network functions and do some more advanced...
C: ...networking stuff with your containers. And so what we want to do is provide a framework so that wherever that intent is specified, whether it be up in the service mesh, whether it be in the CNI and Kubernetes, or maybe it came from the network operations team down in the fabric, we can see that intent, canonicalize it centrally within our Nexus Dashboard and a kind of Kubernetes operations framework, and be able to enforce that intent.
C: Man, we didn't test sharing my screen. Let me, yeah, probably. You know, I did a CAB meeting earlier today, so I think I could probably pull that slide up, if...
B: That's a lot of people. Oh, Genji! Okay, so we wanted to get Antrea running on K3s, and the basis of K3s is that it's kind of a little micro-distribution. It's really good for edge stuff; frankly, there's a lot of people in the community who just do Raspberry Pi stuff with it, so we get a lot of conversations with really cool people who are doing a bunch of weird, crazy ARM stuff. But the trick to K3s was removing etcd and replacing it with SQLite.
B
That
was
the
super
trick
to
make
that
thing,
nice
and
neat
and
tidy.
So
the
the
it's
also
opinionated.
So
we
actually
make
a
bunch
of
decisions
for
you.
So
it's
easy
to
boot,
but
you
can
turn
off
all
those
opinions
and
you
can
do
your
own
thing,
which
is
what
we're
gonna
do.
So
I
don't
need
to
show
that
demo,
this
exact
second,
if
cody's
ready,
but
that's
yeah,.
C: So, okay, I was talking about this idea that, as you know, we are ever more moving toward truly being able to declare intent. This whole idea of Kubernetes, which I love about it, is that you've got this model where you can declare the configuration that you want, and you've got decoupled pieces that can respond to the pieces of that model that concern them, right? And really, networking is no different.
C
We
we
have
different
intent
at
different
layers
in
the
stack
yeah.
You
know
you
get
your
app
developers,
they're
they're,
specifying
service
graphs
along
the
service
mesh
they
may
be
just
describing
you
know
the
deployment
mechanisms
and
like
how
things
are
going
to
be
load
balanced
and-
and
you
know
we
may
have
a
case
operator.
You
know
selecting
a
specific
cni
that
they're
comfortable,
implementing
and-
and
you
know,
at
cisco,
we're
we're
really
trying
to
just
move
away
from
the
you
know
to
work
with
cisco
fabric.
C
You
got
to
use
acic,
and
I
is
that
that's
not
the
the
messaging
anymore.
The
message
is,
you
want
to
use
calico
use
calico
if
you
want
to
use
antria
use
anterior.
If
you
want
to
use
you
know,
psyllium
musician,
that's
kind
of
the
idea
that
we're
that
we're
running
with
on
this,
and
what
we
want
to
do
is
is
that
intent
comes
in
right,
like
having
a
provider
specific
way
to
respond
to
it.
This
is
in
the
same
vein
that
we
saw
with
like
cluster
api,
for
example.
Right.
C
You
know,
cluster
api
has
these
various
infrastructure
providers
they're
kind
of
monolithic,
though
right,
because
they
cover
not
only
the
infrastructure
but
also
the
networking
of
that
infrastructure,
and
they
cover
a
lot
and
we're
kind
of
trying
to
break
that
down
a
little
bit
by
the
piece
of
the
stack
that
we
want
to
basically
plug
a
provider
into.
So
you
know,
if
I'm
a
antria
provider,
I
can
understand
some
of
the
features
and
capabilities
of
entry
and
translate
that
into
a
canonical
model
right
within
within
our
framework
and
expose
that
to
the
fabric.
C
So
the
fabric
can
understand.
You
know
hey
I've.
I've
got
this
container
network
function
out
here.
That
you
know
really
needs
to
it's
something
specialized
for
telco
and
it
needs
to
make
a
you
know
a
bgp
connection
or
have
this
vlan
available
to
it
right,
so
our
fabric
can
say:
hey.
I
know
where
that
got
scheduled
on
which
host.
C
I
know
what
what
leaf
switch
is
connected
to
that
host,
and
I
can
make
that
vlan
just
show
up
on
that
host
and
get
piped
into
into
that
container,
and
so
it's
it's
a
a
very
responsive
approach
to
container
networking
and
and
again
you
know,
keeping.
C: It's everybody. So, obviously, if you think about where we are at Cisco, we're concerned a lot with on-premises stuff, right? We've got the networking hardware underneath those types of installations, so I think there's gonna be a lot of value there. But we also want to make it just as easy if you're in Amazon, or Azure, or Google, or another public cloud, to be able to say: hey, I've got this cluster.
C
I've
got
these
security
policies,
but
they
still
may
need
to
talk
to
workloads
that
are
on
prem
right
and-
and
you
know,
maybe
you
know,
we've
got
an
sd-wan
set
up
and
we
want
to
prioritize
traffic
going
into
those
pods.
You
know
in
the
public
cloud
et
cetera.
How
do
we
provide
that
end-to-end
optimized
path
right
between
you
know,
workloads
that
are
on
prem
or
workloads
that
are
multiple
clouds
when
it
involves
that
many
different
layers
of
the
stack?
C
And
so
you
know
we
want
there
to
be
choice.
We
want
you
to
be
able
to
choose
a
service
provider.
You
want
distro,
you
want
a
cni
provider,
you
want.
You
know,
if
you're
using
our
hardware
for
for
networking,
we
want
to
make
it.
You
know
extremely
easy
for
that
hardware
to
respond
and
and
collaborate
you
know
or
make
sure
that
we,
you
know,
set
up
the
appropriate
services
to
interconnect
your
clusters.
C
You
know
wherever
they
may
be,
so
that
that's
the
approach
that
we're
taking
at
cisco
right
now
and
it's
it's
again
a
very
different
approach.
I
think
that
we've
taken
in
the
past-
and
you
know,
I'm
excited
to
as
we
get
more
of
the
framework
kind
of
built
out-
jay
we're
definitely
going
to
be
reaching
out
to
the
antria
team.
You
know
reaching
out
to
the
calico
team,
the
psyllium
team,
you
know
just
so
that
we
can
say
hey.
A: Let's see, so, okay, so we're going all polyglot today. This is real interesting; I'm glad that you explained this to us, because I never understood how the Cisco stuff plays into the cloud stuff and the Kubernetes stuff. So now this makes sense, because it sounds like you're all in a great position to provide the stuff that everything else runs on top of, you know.
C: Yeah, and it's interesting in regards to, if you think about the VM space and what Cisco did with ACI: we provided a way to segment those VMs. The challenge there, really, is that with the ACI model you could kind of only have a VM in, like, one security group.
C
If
you
will
right
and
with
containers
it's
it
kind
of
explodes
right
in
the
number
of
connections
you
can
have,
and
you
know
what
we're
really
trying
to
do
is
have
a
better
mapping
and-
and
also
you
know
where
that
intent
is
specified
right
instead
of
the
network
operators
having
to
drive
that,
and
we
really
want
that
to
be
in
the
hands
of
developers
and
and
us
to
be
able
to
delegate
it
in
a
safe
way
right.
A: Cool. Really quick, Cody: does that mean you all have, like, a whole public API site or something that you're building out?
C: We're building it; we haven't released it yet. Like I said, you're getting stuff hot off the press. This is stuff that we're building some initial POCs around right now, and next year we'll be full-blown.
A: I'm gonna put the show notes here, Cody, so if you wanna go in there and add links or whatever to all this stuff that you're doing, I'm sure there's people that use Cisco stuff that would find it interesting. Luther, we lost your screen just now.
B: Okay, so this is... I'm gonna make this a little bigger. There we go. Okay, so this is my Proxmox setup that's in my closet over here. I have a new box coming up here if I want it, but I'm going to start first off with this one that I created earlier. This is just an Ubuntu box, base install, super, super stupid-easy, and I just want to quickly show you the K3s stuff, because this is literally all you have to do to get K3s, to get a K3s server node.
B
All
embedded
too
yeah
so
rke2's
the
build
for
arcade
2,
the
sorry.
The
architecture
arcade
2
is
to
actually
spin
up
sub
processes,
so
it
just
kind
of
execs
them
out,
and
then
it
has
them
all
running
and
it's
generally
it's
the
upstream
stuff
from
kubernetes
themselves,
it'll
be
an
upstream
cubelet
upstream
proxy
et
cetera.
K3
says
it
all
embedded.
So
if
you
want
k3s
to
work,
you
have
to
it's,
you
have
to
sorry.
B: Yeah, so first off, if it's in a Helm chart to consume, it is much, much easier. So the better thing to do is to actually get it into RKE2. RKE2 is all driven by Helm charts and it's super easy to configure; it just runs those Helm charts on boot. It's really simple. K3s does something similar, but to turn off some of the stuff you have to do it by hand; RKE2 kind of extends that.
B
So
it's
kind
of
hard
to
explain,
but
this
is
the
gist
by
the
way
is
flannel
comes
it's
the
initial
pre-installed
and
to
turn
it
off.
You
have
to
grab
this
flannel
back
end,
specifically
final
black
and
none.
This
is
where
they're
taking
custom
cni's.
As
you
can
see
here,
and
then
you
can
set
this
in
what
they
call
an
x
key.
B: I did this yesterday, like, really fast, and it worked. PVP will probably be spamming the channel saying you're an idiot trying to get this going. Where is it... K3s, yeah. You can do it like that with extra params, all right. So let's try this out. This should be something along the lines of the K3s params.
C
We
are
we're
seeing
a
lot
of
arm
usage,
and
so
I
I
think
it's
I
just
ordered
my
first
arm
laptop
last
night.
I'm
fired
up
so
is.
C: Yeah, man, we're, like, on almost a month and a half.
B: This is what I was looking for... so this is the quick... this is actually a bad doc, somebody needs to fix that; it's actually now --flannel-backend=none. But what I was looking for is a basic example of how you use the environment variables and pass them in so that the install script actually works. So I'm gonna stop.
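The environment-variable pattern Luther is describing can be sketched roughly like this (flag and variable names per the k3s install docs; the Antrea manifest URL is illustrative, so check the Antrea releases page):

```shell
# Install a k3s server with the bundled flannel CNI disabled, so a
# different CNI (e.g. Antrea) can be layered on afterwards.
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_EXEC="server --flannel-backend=none --disable-network-policy" sh -

# Then install Antrea on top (example manifest URL -- verify the release):
# kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v1.2.0/antrea.yml
```

INSTALL_K3S_EXEC just appends arguments to the `k3s server` invocation that the install script writes into the systemd unit.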
A: And I know Jianjun and Antonin are both here, so that's cool, and I know Matt's here, and we've got some other people here. If folks have other questions that have nothing to do with what we're doing right now, that's totally fine, because this is a live show. So, like, you know, don't...
B: ...copy the comment from StreamYard into the other one... I don't know, you go closer and put it on YouTube. Okay, so that was it, it's done. So if you do k3s kubectl get nodes, you'll see we have a node there. It's still not ready yet; it's...
C: So some of the components just use the host networking, because they have to be able to talk to the other hosts: you've got the API server, things like that. But a lot of the components that you end up spinning up then use container networking, which is kind of this overlay. It may be an overlay, it may be a separately peered network, and until you put the CNI in place, those pods...
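What C describes is easy to observe on a cluster that has no CNI yet; a hedged sketch:

```shell
# On a fresh node with no CNI installed, host-network components
# (API server, kubelet) come up, but pod-network pods (e.g. CoreDNS)
# sit in Pending/ContainerCreating and the node reports NotReady.
kubectl get nodes                # NotReady: "network plugin not ready"
kubectl get pods -A -o wide      # CoreDNS stuck until a CNI is applied
```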
B: Yeah, so I Ctrl+C'd it, by the way, because I was actually going to run it the K3s way. And actually, if you do k3s... watch this: k3s actually has this stuff included. It has server and agent; those are the two. So K3s coalesces the database and the API server into what we call a server, and then all workers are agents, and the server also includes an agent. So it actually coalesces...
B: ...the three. And then you can add additional agents to the setup. So if you do k3s agent and you ask for some help, you'll see there's ways for you to connect this back to another one: you have to give it a server URL plus a token. And then, if you do k3s server --help, you can see all the configuration there to boot it up with the stuff that you care about, which, if you look up here, for instance, should include some of this networking stuff.
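Joining an extra node as an agent then looks roughly like this (the server address is a placeholder; the token is generated on the server node):

```shell
# On the server node, read the join token:
sudo cat /var/lib/rancher/k3s/server/node-token

# On the new worker, point the agent at the server with that token:
sudo k3s agent --server https://<server-ip>:6443 --token <node-token>
```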
B: It's all there. In the first versions of it, when Darren first built it out, he stripped out a bunch of stuff that he didn't care about, specific things...
B: ...betas and alphas, APIs, things like this. So he tried to make it as lean as possible. But then, when we actually got to the point of donating it to the CNCF, it was more like: it's gotta get in line with upstream; if it's a 1.20, it had better be 1.20-capable and have everything there. So that flipped: it was initially "how small can I make it," and then it turned into "how do we make this donatable," and that's what it had to come to, so, yeah.
B
There's
nothing!
That's
missing!
It's
a
fully
functional,
but
a
lot
of
people
use
it
for
driving
controllers.
Honestly,
that's
we
use
it
internally.
We
use
a
tool
called
tool
called
k3d,
and
if
I,
because
rancher
itself,
when
I,
when
I
boot
it,
I
have
to
have
a,
I
have
to
have
a
k3s
sorry
kubernetes
back
end
to
run
it
on,
because
we
have
our
quote-unquote
local
cluster
and
almost
every
dev
that
I
know
of.
B: Yeah, it's pretty slick. Okay, so we have this thing, and we also have this other thing, which is: Jay has tests. I didn't know Jay wrote tests, but Jay has tests.
A: ...for us sometime on this show. Okay, yeah, it's getting demoed on K3s.
A: Well, this is, yeah: so Matt has a version that automatically generates all the policies, Cody, and we used it to find bugs in other CNI providers. We found, like, one bug in Calico, it was a minor bug, and then we found, I don't know, a few bugs in Cilium, but I think they're working through them.
C: About these tests: these tests used to take forever to run. Are they using the new framework where you sped them up?
A: I don't know what Josh is talking about; we made it a lot faster. You know who helped me? Rajas helped me make it faster. He did a PR, Matt reviewed it, to make it way faster, and what we did was some stuff like consolidating... I forgot what we did. We got rid of the unnecessary containers: we were running, like, three containers for every pod; now we just... it's...
A: And I just found an issue: we need UDP policy probing. We added this for Windows clusters, but we didn't add the UDP functionality, so we need somebody to go into these tests and actually get them to probe UDP for Windows, which would be really cool. I'll put a link to that in the show notes.
A
So
we
see
the
tables
there,
let's
see
they're
almost
there.
So
what
it's
doing
right
now
is
it's
probing.
It's
created
a
bunch
of
pods
in
k-3s
and
andrea,
is
running,
and
then
what
it's
doing
is
it's
checking
to
see
whether
all
these
pods
can
talk
to
each
other
or
not.
According
to
the
network
policy
spec
that
is
running
right
now
and
then
we
should
see
if
andrea
is
correctly
running.
We
should
see
a
beautiful,
like
1990s
like
spreadsheet,
in
ascii
text.
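As an example of the kind of intent the truth table checks: a default-deny ingress policy in one namespace, after which every probe into that namespace should show up as blocked in the matrix (the namespace name here is illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: x
spec:
  podSelector: {}        # empty selector: applies to every pod in namespace x
  policyTypes:
  - Ingress              # no ingress rules listed, so all ingress is denied
EOF
```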
A: You've got to scroll up a little... yeah, there you go. Stay right there. Okay, so you can see we expect everything in that namespace to be blocked, on the far left, and then everything is blocked on the far left in what we observed, and so, oh god, the test passes.
C: No, it's no problem. I had a great time, and I'm excited to see the progress of Antrea. You guys keep it up, and I look forward to talking about some fun stuff that we could do together.
A: I'm interested in seeing if we can condense that information into one thing. So anyways, what else have we got? Okay, so we've got everything working on K3s. You know, it's already 4:30, so we're kind of... we're on time...
A: Time is now of the essence, and I don't know if we're going to have time to set up Antrea on GKE and on Windows, but at the very least we can walk through it. So why don't I share, yeah.
B: Wait, before we segue into what you're about to do: Yash asked about the back end for K3s, the SQLite back end. It's super stupid-easy; it's just a replacement for etcd. It basically uses one table, and it just turns the JSON storage into a table row. It's actually really simple, and there's a project called kine that does it all; I'm about to link it for you.
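In practice that looks like this (flags per the k3s server docs; the MySQL endpoint is just an example):

```shell
# Default: embedded SQLite via kine, state in a single file:
#   /var/lib/rancher/k3s/server/db/state.db

# Or point kine at an external SQL datastore instead:
k3s server --datastore-endpoint="mysql://user:pass@tcp(dbhost:3306)/k3s"
```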
A
Okay,
so
so,
if
I
wanted
to
run
andrea
on
gk,
so
pull
that
up
while
we
go
here
so
let
me
see-
I'm
assuming
folks
can
see
this.
I'm
doing
I'm
getting
better
about
making
my
screen
big
but
like
if
you
can't
just
yell
at
me,
you
just
couldn't
see
it.
It's
looking
good,
so
I'm
gonna
make
a
cluster
here.
I'm
gonna
use
a
static
release.
A
I
don't
know
what
this
is:
121,
that's
fine,
okay,
so
we'll
use
121
and
then
we'll
do
create,
and
then,
if
I
can
make
it
small
enough,
maybe
it'll
come
up
fast
enough
that
we
can
at
least
see
what
we
would
have
to
do.
So
I
think
the
way
this
works
and
I
think
the
reason
why
it
was
kind
of
an
interesting
thing
to
look
at
is
that,
like
you,
don't
deploy
this
regular
android
you
deploy
like
a
you,
deployed,
a
different
way
where
you
have
this.
A
This
init
ammo,
like
I've,
never
done
this
before
so
so
you
do
this
and
yeah
you
do
one
of
these
and
when
you
run
this
damon
that
this
daemon
set
runs
like
a
startup
script
and
that
startup
script
is
like
a
google
thing,
and
I
guess
the
idea
is
that
if
you
want
to
have
nodes,
do
something
when
they
start
in
gke,
there's
a
special
image,
inner
image
that
you
run
or
something
I
don't.
I
don't
know
so
like.
A: We do distribute it separately and we install it separately on Windows, so, you know. But on Linux, I don't know, I mean, I think it's in the container, like, and...
A: Yeah, so your July 1 timestamps are when that container image was built, and you can see all the OVS stuff in there, and all the other stuff is from some other time. So I think when the container image is built, it's packaging up all the OVS stuff somehow into the image itself. But then I thought OVS was like a kernel thing... Jianjun said it somewhere over here: OVS includes a kernel module for the datapath, which is part of the mainline...
A: ...and two user-space daemons for the control path. So those user-space daemons are probably what Antrea packages for us, and in fact we can see this when we run Antrea. It's easy to see this on Windows, because we run it as a raw process: when you run an Antrea Windows cluster, you've got ovs-vswitchd and ovsdb-server running as NSSM processes.
A: Antrea does run the OVS daemons in a container of the agent, so it runs the daemons inside of it, yeah, okay. So the OVSDB... I don't know what runs ovsdb-server there. But, so, back to this cluster: it's still coming up, right? I'm trying to get this GKE cluster up to show folks, and it's still coming up. So as that comes up, we've gotta kill some time here. So, there's a GKE startup script.
A: I don't know if anybody here knows what this is; maybe it's on GitHub, maybe we can look up the code for it, but GKE has a startup-script container, and somehow we leverage that in Antrea, and I'm trying to see... here we go. Well, here's a Stack Overflow question about it: "In GKE, when I go to the GCP interface, select the node, and view the metadata, I see the kubelet cannot start." You can't specify startup scripts on GKE nodes; the node has a built-in startup script, created by a DaemonSet, well...
A: This was Robert Bailey; he answered this a while back. So, I don't know. So evidently Antrea uses this thing called a GCE startup script. I don't know what this does, but we can maybe try to run it: docker pull this script and see what it does. But evidently what this thing is doing is it's going in and it just runs the environment variable that's...
A: It's like a root kit. So, okay, so you run this as root. You run a thing as root on your cluster, without saying that you're running something as root, and what does that thing do? Is that right, luthermonson? I can't believe my eyes here. What, it's that easy? Okay, so then it is doing this sed replace, and it's changing the network plugin, pointing it into /home/kubernetes/bin, okay.
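A minimal, runnable sketch of the kind of in-place edit the startup script performs (the file contents and flag values here are made up for illustration, not GKE's actual paths):

```shell
# Create a stand-in for a kubelet config file:
conf=$(mktemp)
echo 'KUBELET_ARGS="--network-plugin=kubenet"' > "$conf"

# sed -i edits the file in place; s/old/new/ swaps the network plugin flag:
sed -i 's/--network-plugin=kubenet/--network-plugin=cni/' "$conf"

cat "$conf"   # KUBELET_ARGS="--network-plugin=cni"
```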
A: And then, Luther said dash, I said colon; I don't know what sed with a colon does. I think that's where the root kit starts, is that what it is? If I grep that output, can I get anything? Dims showed me how to do this. Yes, yeah, I love this thing; you can grep anything in the whole world.
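For the curious: the colon is just an alternative delimiter for sed's s command; it behaves exactly like the usual slash, and is handy when the pattern itself contains slashes:

```shell
# Same substitution, two delimiters -- both print /usr/bin/new:
echo '/usr/bin/old' | sed 's/\/usr\/bin\/old/\/usr\/bin\/new/'   # slashes, escaped
echo '/usr/bin/old' | sed 's:/usr/bin/old:/usr/bin/new:'         # colons, readable
```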
A: After that... I'm still waiting for this cluster to come up; GKE is slow today. So after it comes up, I can kubectl get nodes, and then you create...
A: It's the default CNI interface that people use. I think it's the delimiter, Vivek; I don't know, it's like a good homework assignment for somebody trying to learn Linux: what does sed with a colon do? I have never used it. So, are you still trying to learn Linux? Yeah, I guess, I mean, it looks like it. So this command will deploy a single replica of the Antrea Controller in the GKE cluster, and deploy an Antrea Agent onto every node. So yeah, then you just, afterwards, you just deploy this.
A: Looks like it to me... let's look at the images it's running: okay, antrea-ubuntu:latest and the other Antrea image at latest, so we're running... so afterwards, I guess you just run the regular Antrea CNI, but as I recall, I think you run kindnet for certain things, so I don't know if there's anything GKE-specific... so, and then admin...
A
I
don't
know,
I
don't
know
how
that
arbit
gets
applied.
So
if
anybody
on
the
entrance
side
can
tell
us
I'm
curious,
I
see
that
this,
our
back
rule
is
getting
created
here,
damn
it.
Where
is
it?
I
see
that
we're
creating
this
our
back
rule
up
here
where
we're
talking
about
this
container.admin,
but
I
don't
know
what
that
container
admin
is
actually
referencing.
A: I don't know. I see it. I think it's used a lot in different contexts, but I guess it's probably a Google thing, a GKE thing: there's a container.admin role that you get for free in a GKE cluster, and then what you do is...
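For context, a common GKE prerequisite before applying manifests that create RBAC rules is granting your own account cluster-admin; a hedged sketch (the binding name is arbitrary, and the account comes from your gcloud config):

```shell
# GKE users need cluster-admin to create cluster-scoped RBAC objects
# like the ClusterRoles/ClusterRoleBindings in the Antrea manifest:
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user="$(gcloud config get-value account)"
```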
A: So when you run kubenet... I don't know where that lives. Oh, here it is: GitHub, kubenet. Oh, it's like a whole... what is this, like a whole... no, that's...
A: Yeah, because it uses Kubernetes, so all the cloud CNIs use kubenet, right? I guess they have their own cloud overlay and then they've got kubenet underneath. So, by default, AKS clusters use kubenet with an Azure virtual network, okay, so we're doing the same thing in GKE, and then I think...
A: Here's the picture, yeah. Okay, so kubenet's here, and then it's doing local routing. So you hit this machine, and then kubenet routes you from the ingress of the machine to an individual pod. But anything that goes out of this gateway is going to go into your cloud network, which then routes you to the other places, and that's why cloud CNIs are so damn fast, right? So GKE is doing the same thing. So in here...
A: ...and Antrea have the best network policy implementations that you'll ever find anywhere, and so, like, you're going to want to use a commercial... or not a commercial, but a sort of established upstream CNI provider that actually conforms to all of the network policy APIs and stuff. And, that said, I think you can use Cilium for network policies too, and they have a whole bunch of APIs and other things.
A: Well, you know, I'm kind of wondering: when I spin up my thing in Google Cloud, are they doing some kind of weird service proxy or another? Well, I know Google Cloud supports Cilium, and I assume people running Cilium will maybe want to use the Cilium proxy, because that's one of the things people buy Cilium for, or whatever. I mean, if I...
A: I don't know, does anybody there have one? I mean, Antrea Proxy works for GKE and AKS and EKS. Actually, we always enable Antrea... I think he means we always enable Antrea Proxy for those, yeah, right. So that's interesting, I didn't know that. So then, when I access, like, an external LB in Antrea on GKE, it's actually forwarding me through the OVS datapath into the... okay, wow.
A: That's cool, maybe we can... okay. So we have this repository here where we show people how to install Antrea from scratch on Windows, and anybody can try this: you can grab it and spin it up at home, you can vagrant it up. So you can git clone sig-windows-dev-tools, and you can see my friend Friedrich here, from the Antrea community. He helped us fix one of the files in here, because we needed the latest version of Antrea.
A: So now we're pulling that down from here. This latest version fixes one of the bugs and allows us to hard-code one of the NICs, to hard-code the NIC that Vagrant uses, so that Vagrant puts the interface on the right NIC. So you can vagrant up this, and then what happens is it pulls down all these Antrea installation files, and it installs OVS via a PowerShell script.
A
That's
the
first
thing
that
we
need
to
do
you
install
ovs
and
then
once
you
install
ovs,
you
get
those
services
that
luther
just
showed
you
and
then
once
you
have
those
services,
there's
other
scripts,
there's
helper
scripts,
that
like
restarting
entry
on
the
windows,
node
and
so
on,
and
then
after
you
get
all
that
you
and
then
we're
we're.
We've
got
like
two
minutes
left
after
that
you,
let
me
put
this
in
the
show
notes.
A
I
just
can't
believe
josh,
I
just
didn't
know.
Josh
was
coming
today
and
I
was
so
happy
to
see
him
gosh.
Are
you
gonna
put
the
show
notes
in
you?
Can
you
can
like
be
a
part
of
the
team?
Again,
if
you
just
copy
paste
this
into
the
github
repo
you'll,
be
you'll,
be
like
a
first
class
show
member
again
that's
so
we
pull
all
this
in
and
then
we
start
these
things
up,
and
we
start
this
android
agent.
Conf
is
directly
pulled
down
right.
A
So
this
is
the
yaml
file
that
is
pulled
down
normally
as
like
a
config
map
in
the
linux
environment.
But,
like
you
know
here,
this
is
the
configuration
you
pull
this
directly
down
into
the
windows
node
on
the
host
and
it's
the
same
thing
that
it's
just
it's
just
the
generic
configuration
of
it
so
like
once
you
pull
all
this
stuff
down
right
then,
and
you
ends
up
you
unzip
the
cni
plug-ins
onto
the
host,
and
we
run.
A
This
is
the
host
local
plug-in,
so
the
host
local
plug-in
is
what
gets
the
ipam
gets,
the
ip
addresses
for
you
and
stuff
and
allocates
them
on
your
note,
once
you
unzip
all
that,
then
you
then
you
go
in
here
and
you've
got
another
powershell
script,
and
these
are
all
in
the
android
repo
as
well.
By
the
way,
this
is
just
a
completely
automated
way
of.
A
So
then,
at
that
point,
you've
got
see,
you've
got
all
these
folders
and
you
can
see.
Windows
is
very
complicated
and
I'll
be
the
first
person
to
admit
it.
So
it's
not
that
bad,
it's
it's
like.
So
you
get
all
this
and
then.
A
Defender
windows
defender,
I
didn't
even
know
we
had
windows
defense,
I
don't
know
who
did
that
it
wasn't
me?
Oh
he's
there.
I
think
it's
because
I
copied
this
from
the
entry
repository.
So
it's
still
in
there
yeah
so
and
then
you
start,
then
we
start
the
coup
proxy.
We
don't
need
to
do
that
now,
because
andrea
proxy
all
doesn't
require
coproxy
anymore.
It
can
proxy
everything,
but
when
we
originally
did
this
we
used
to
do
we
used
to
start
the
coupe
proxy.
A
So
we
should
probably
file
an
issue
to
get
rid
of
all
this.
We
don't
need
the
coup
proxy
in
the
andrea
installation
anymore,
but
this
starts
the
user
space
coupe
proxy
and
then.
A
Yeah,
when
coupe
proxy,
so
I
think
the
rules
are
redundant
so-
and
I've
asked
this
to
june
jim
before
and
he
said
I
was
right.
So
I'm
right,
the
rules
are
redundant,
so
you've
got
routing
rules
that
are
user
space,
routing
rules
that
are
created
and
you've
got
andrea,
proxy
routing
rules,
and
I
don't
think
it's
like.
I
don't
know
how
it's
defined.
Who
who
does
what?
A: You match this first, then you go here, but if you don't match that, I assume there's some kind of fall-through that happens, because I think any OS probably allows you to have redundant routing rules, right? I don't know. So it would be interesting, again, another question for the Antrea engineers on the call: do we...
A
How
does
that
work
if
you're
running
both
how
is
precedence
and
how
does
the
data
path
calculated?
If
you
have
two
different
things
that
have
routing
rules?
Okay,
so
then,
finally,
we
just
nssm
all
this
right,
so
we
do
an
nssm
step
and
that's
a
windows
thing
where
you
basically
install
like
a
systemd
service.
So
we
do
an
nssm
step
where
we
give
it
the
name
of
the
executable
and
we
can't
do
wins
luther
because
we're
running
container
d.
So
that's
why
we're
not
doing
wins
here.
So
when
that.
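Registering a binary with NSSM looks roughly like this (run in an elevated prompt on the Windows node; the paths and service name are illustrative, not the script's exact values):

```shell
# NSSM wraps an executable as a Windows service, much like a systemd unit:
nssm install antrea-agent "C:\k\antrea\bin\antrea-agent.exe"
nssm set antrea-agent AppParameters "--config C:\k\antrea\etc\antrea-agent.conf"
nssm start antrea-agent
```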
A: A proxy? That's a feature being added to wins? Okay, that's actually pretty good; maybe we can start using it, I don't know. So, install antrea-agent: we install the antrea-agent as a service, and we also have this thing that installed the OVS stuff as a service. And so, now that NSSM has installed all those things, then we do a re... I think we do a reboot somewhere in here. Yeah, we do do a reboot, so we have Antrea...
A
Oh
no,
we
don't
do
a
reboot
actually,
so
we
actually
just
wait
until
kube
entry
and
ovs
are
running
and
then,
after
that
we
just
poll
for
everything
being
running
and
then
we're
done
and
we
just
write
them
out
a
final
time.
So
you
know
we
start
up.
A
Kubernetes
we
make
sure
kubernetes
is
running,
we
make
sure
the
ovs
stuff
is
running
and
once
the
lbs
stuff
is
running,
we
also
want
to
make
sure
the
entry
stuff
is
running
and
at
that
point,
you're
good
you've
got
it
all
working
so
and
my
cluster
is
still
creating
on
gke
and
it's
been
an
hour,
so
we
can't
actually
show
gke
or
windows.
We
could
only
walk
you
through
those,
but
we
could
do
demos
of
those
next
time.
If
folks
want
to
see
it,
let
us
know,
there's
a
show.
You
go
to.
A
Live
and
you
can
file
issues
there
on
on
my
repository.
Where
is
it
here?
It
is
okay,
you
can
file
issues
here.
If
you
want
and
maybe
someday,
we
will
put
that
in
a
put
that
as
a
actual
link
on
the
andreas
stuff
and
then
yeah,
okay,
cool,
that's
it.
It's
been
an
hour.
The
show's
over.
B: Hasta luego, everybody, bye-bye.