Description
In this episode we'll look at ways to stress-test k8s clusters and, maybe, see if we can reproduce any interesting etcd race conditions... like https://github.com/kubernetes/kubernetes/issues/65517 (and https://github.com/kubernetes/kubernetes/issues/109399). Maybe this will result in a new Kubernetes e2e test, or a cool test we can run on the side! BTW, did you know there's a little bit of a race between CRD creation and CRD request availability in the apiserver? Maybe we can find out why!
Here we go. I'm gonna see if we can create, like, a race condition type thing today. So I've — kubectl get — let me see here: docker ps. Okay, so I just created a — welcome to Antrea Live! So it is. I kind of just did this show last minute, and I don't even — I think I announced it yesterday, really late, on Twitter. So I'm not sure how many people are gonna show up, but all right.
Let's do — I've got our k8sprototypes kind cluster up, and what I'm going to do is attempt something Nikita showed me yesterday. So I saw yesterday that, if I did — where is it? I'm going to look it up in the show notes. If I did — there's an issue with CRDs where, if you...
Let me see — I think if I go to here... beautiful, playlist. I actually put it in the show notes — yeah, here. And if I go — well, let me open this up, right. If I go to...
Right here, right: getting the new CRD name. And then there's another issue that I'm going to try to look at, which is this one, right. This is one I filed, and I think Mustafa might join us, but I kind of sent him the link last minute, so I don't know. We filed this issue about a week and a half or two weeks ago, about how we needed this disruptive test for upstream — so, maybe, something that pushes the API server beyond the limits you'd normally push it to.
Where, you know, etcd 3.5.0 up to 3.5.2 is not recommended for production. Now that has been fixed, and there's a new version of etcd out — there's a new version that's out, and let's see where the release is, right. So this is out: 3.5.4 is out, 3.5.3 is out — whatever "critical correctness issue"... oh, this is a different one, I think, that I don't think we had talked about yet.
Was that the core issue that was filed? I don't know — was it called the "critical correctness issue"? I don't know. Anyways — actually, two days ago, they just — yeah, 3.5.3. And the 3.4 data inconsistency issue was addressed, right. So this issue came out, and the next two point releases of etcd fixed it. I know VMware Tanzu was going to have this soon, and we're going to put this into our next release soon, and I'm sure all the other k8s providers are going to do the same thing.
So thank you to everybody in the etcd community — hey Vivek, what's going on — so thank you to the etcd community for 3.5.3-plus, right. That was a big win, and you saved us. I wasn't able to reproduce this issue, though, right. So we looked at some of these things last week, and at how you can do the etcdctl perf check stuff. But now, let's see: if I make a cluster — so I have a cluster now, and if I say kubectl get nodes, I've got three.
kubectl — okay, so I'm gonna do kind delete cluster, right. There's a cluster in there called kind. kind get clusters; kind delete cluster --name=calico; kind delete cluster --name=antrea. I'm going to delete both of these, and then I'm gonna create a new cluster. kind get clusters.
docker ps — okay, there's nothing. docker ps — yeah, there's nothing in here. All right, so I think — I just sort of hacked this up. I figured we'd do Calico this time, because we haven't done it in a while — check if it still works — but anyways. So I found something yesterday wherein, if I...
Okay: where, if I continuously create the same CRD, like, over and over and over again, right, and then I check whether it's up and running — like, if I do kubectl get on that CRD every once in a while — I found that I could get into a state where, for a split second, the API server didn't know that that CRD existed. And I kind of thought that was interesting when I saw it yesterday, because I didn't know that was the case.
So Nikita showed me this issue yesterday, right. She thought this was probably the issue that I was running into, and it looks like people are still asking whether folks have been able to fix this. So I thought it might be a nice little quick, interesting thing: go through and see if we could reproduce this, and then read through this issue together and see where it's coming from. So, all right.
Actually, I guess the good news is that maybe there's a stable reproducer that we can use here, because he's creating this CRD in his cluster. Who did this? Brian — Brian made this CRD, and then, after Brian makes this CRD, he's saying this. This is not, in and of itself, I don't think, a very obvious reproducer, because I had to do this hundreds of times in order to — but maybe it's just that kubectl apply is fast enough that this works. So let's try this, all right. Let's see what happens. So, let's see if my cluster is up: kubectl get nodes.
Okay — vim. Okay, if I go in here — I think I meant to have multiple of these, but somehow I only have one.
A
Plane
and
then
I'll
do
roll
control,
plane
right
and
let
me
see
where
I
vim
local
up
yeah
here.
It
is
it's
because
we
do
it
in
here.
So
let
me
fix
this,
so
I'm
gonna
do
roll
control
plane,
so
I'm
gonna
make
multiple
control
plane
nodes
here
right.
So
if
you're
doing
this
in
kind,
you
can
make
multiple
control
plane
nodes
and
I
think
we
only
need
one
worker.
I
don't
think
we
need
like
a
million
workers
right.
So
if
folks
want
to
use
this
recipe,
it's
not
like
it's.
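The kind recipe being edited here can be sketched as a shell snippet. The file path and cluster name are illustrative, and it assumes the `kind` CLI is installed:

```shell
#!/usr/bin/env bash
# Sketch: write a kind config with three control-plane nodes and one worker.
set -euo pipefail

write_kind_config() {  # $1 = path to write the config to
  cat > "$1" <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
EOF
}

write_kind_config /tmp/kind-multi-cp.yaml
# To actually create the cluster (needs kind + docker):
#   kind create cluster --name calico --config /tmp/kind-multi-cp.yaml
```

Three control-plane nodes give you a stacked three-member etcd, which is what makes the "more chances for things to go wrong" experiments below possible.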
Why are you being so nice to me today? How's it going? I love you too. So, here we go. If I — damn it, Ricardo — all right, I'm making a new one. Okay!
But I really didn't have any time to prepare for today, so we're just gonna be kind of winging it, more than usual, right. Okay, so this is hopefully coming up now — joining more control-plane nodes, okay. So let's wait for the control-plane nodes to come up. Scott's here — Scott!
Okay, Scott: so I want to make a DaemonSet, and I want the DaemonSet to spam etcd, and I haven't really thought of how to do that yet. I mean, I have some ideas, but do you have any ideas? So I filed this issue upstream, and I haven't gotten a chance to look at it yet. So I thought maybe we could try to do this today. But the idea would be — let me see the original... yeah.
I have it — so this was the original idea. I don't know if this is any good, though: you have a DaemonSet that's continuously just updating annotations or something like that — I mean, that's going to hit etcd, right — and then continuously poll it for changes; and then, if you can ever detect a difference between the nodes... you know, that's kind of what I'm wondering. And so I don't know how far we'll get, but that's my — that's my plan. We'll see whether we can at least maybe get some pseudocode for that sketched out, I don't know. Okay — yeah, here it is.
Let's see if I have multiple nodes. Okay, the cluster is up, so now I want to see: do I have etcd running everywhere? Do I have multiple etcds, or one? I don't know — yeah, okay, we have multiple etcds, so that's good: more chance for things to go wrong, right. [From chat:] "Either that, or create and delete random ConfigMaps and Secrets aggressively."
Yeah — so the first one I thought — in other words, the overall theme for this is, like, race conditions at the API server level, right. So the first one will be the one that I saw yesterday, and the one I saw yesterday was this, right. What was it? Yeah, okay — Brian Pursley; maybe we can get him to... There we go — now I can hear myself again. Okay, yeah, okay, so.
So let's try this guy's idea. He's going to do this, and then, right afterwards — let's make a shell script that does this, right. So we'll call this — vim crd-race.sh — and I kind of want to see why this happens. So, in order for this to happen, you have to do kubectl get foos, kubectl get foos, kubectl get foos, kubectl get foos, right — so, echo 3, echo 2, echo 1, right.
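A hedged sketch of that crd-race.sh idea, combining the reproducer from issue 65517 with the "put an integer in the name" suggestion from chat, so every iteration exercises a never-before-seen short name. The group `example.com` and the `fooN` naming are made up for illustration, and it assumes kubectl pointed at a running cluster:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Emit a minimal v1 CRD manifest whose names all carry an integer suffix.
crd_manifest() {  # $1 = integer suffix
  cat <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foo${1}s.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foo${1}s
    singular: foo${1}
    kind: Foo${1}
    shortNames: ["f${1}"]
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
}

run() {
  for i in $(seq 1 100); do
    echo "=== attempt $i ==="
    crd_manifest "$i" | kubectl apply -f -
    # Immediately query by the brand-new short name; the race shows up as
    # "error: the server doesn't have a resource type ...".
    kubectl get "f${i}" || echo "RACE HIT on attempt $i"
  done
}

# Guarded so sourcing the file doesn't touch the cluster.
if [[ "${1:-}" == "--run" ]]; then run; fi
```

Run it with `./crd-race.sh --run` against a throwaway kind cluster.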
There we go. Okay, so — [from chat:] "seeing the short name fail on the first get is fun." Oh, you see it all the time, Scott? Okay — so yeah, you're right, it's consistent. And so does it happen all the time? I guess — Scott, I don't know, like — so let's find out. So it happened our first time.
I mean, this is a totally new CRD — yeah... so how long does it take? In other words, I have to wait between tests, then. So I have to do this, do that, and then I have to, like, sleep. Let's see what the cache clearance is, right. Oh gosh, this is so hard.
And then maybe, in order to reproduce this, we have to look in there. So yeah, I like your — okay: do it in a loop, and every time make the CRD name and short name have an integer in the name. Okay.
Interesting — this is weird. [Reading the issue:] "If I used a different short name and re-ran it, it reproduces the problem. You can also run kubectl api-resources; it has the same effect." Oh, so really? So if I run — so, if I do ls — first of all, I didn't know this: there's a thing called the HTTP cache, and that cache — what is this? Okay, so there's a thing in here and it's caching all these requests. I guess it's making some kind of a hash per cluster, specifically — doing a loop every time.
Yeah, that's a different cache — you're right, it's not the cache that you're talking about. It's not the API server caching the CRD resource; this one is on the client. But he's saying that if he deletes this and then re-runs it, he could not reproduce it — it's only the first time the short name has ever run, I guess is what he's saying. Maybe what he's saying here is that it's not happening on the client side. Oh, wait!
Delete CRD foo3s — kubectl delete — let's delete these.
We may have to do Mike Zappa's idea, where we do this in, like, a for loop or something, right — where we say, vim crd-race, and we say name — and we don't declare name... Crossplane? What — Crossplane? Crossplane, Ricardo, Crossplane. So, for folks that joined — I see there are, like, new people that seem to have joined — we're trying to reproduce this issue, 65517.
We kind of reproduced it the first time, but, I don't know, I screwed up and I couldn't reproduce it again. But if you run this little snippet here on issue 65517 in upstream k8s — where is it, come on, where'd it go — yeah, this. If you run this, you can reproduce this issue, at least on Kubernetes 1.19, and I saw it on a 1.22 — I saw it on a 1.22.3 cluster yesterday. So this isn't something that's uncommon.
Where you create a CRD and then you can't see it right after you create it — you have to, like, wait a little while before you can see it. But one thing that I found out was that there is a — there's a...
I mean, I'm making a million of these. Oh — I'm doing your thing that you suggested, Mike; isn't this what you asked for? Like — I can't reproduce this again. I got to see it once, and I couldn't do it again. Maybe I'm doing something stupid, I don't know. So yeah, anyways — but I have seen this before, and so that's that. Now, here — if folks want to try to make this something that reproduces every time, I'll put my code up here. Okay, I'm gonna put my code...
...on this issue. Okay — so I'll put this in the — here's the hack.md, okay. This is your LeetCode for the week, Scott. Okay, so here we go — what happened? What's going on? [From chat:] "Many k8s will be going down with this one CVE", and "next CVE will be making Windows jokes" — shout out to Daniel Mangum; indeed, the Crossplane folks did nice work.
Oh — oh, I see: because Scott made an ingress joke. Scott's making fun of Ricardo, okay. All right — so, if folks want to try this, I'm going to put this in the show notes. So I'm going to put this file — I'm going to put it in here, okay. I'm going to put it right here.
I just made the README, okay — so it's super easy to remember where the show notes are: you just go to jayunit100/k8sprototypes, antrea-live, 4-27, right. And see if you can reproduce it — and the bash snippet that I was using is this one, for those of you that don't know bash.
This is the snippet, okay — this is the test snippet, right. And then see if you can make that crd-race script, and then see if you can get that to make that thing do that thing, so everybody knows. Okay — and anybody who can get this to work, we're going to send you another Antrea t-shirt. Another one, because I'm sure you all have t-shirts by now. So, all right!
Where were we? So, the other thing — what was it? It's almost — we only have 20 minutes left, so I don't know how we're going to crash etcd with the DaemonSet in 20 minutes, but let's see. So, I mean, the first thing I'm going to do is: can I google for a solution here? So if I search for, like, "daemonset with..."
Yeah — "run kubectl" — here's one I can borrow. I don't know what — yeah, here, I found one at itnext.io. Here we go — here's one I can borrow, right. So it looks like — this is 2020; I'm assuming this might work. So let's see if this works — can I do this? So if I do — that thing's probably — I don't know if I'll get rate limited or not. Let me — okay, git fetch dash...
...a name for your hackathon — now you've got it; now you've got a project, yeah. Well, you and Ricardo. And then after this — I know Mike's here, and we were talking about Windows stuff, so we figured we'd look at some stuff at the end of the episode: what is going on in upstream Windows — maybe look at what issues are open. So, someone in — oh, let's see, what did Ricardo say? Scott, you don't have a t-shirt?
If you can solve the coding challenge, we're gonna send you one t-shirt — and if you can't, too, because you're one of our favorites. So we're gonna have to talk to Susan, and we're gonna have to say: Susan, I need you to tell me why Scott does not have a t-shirt. kubectl get pods -A — did we — okay, so we got it; we got that internal kubectl pod. So, kubectl... is there a — let's see here.
I see that somebody once made this — oh no, that's stress-testing the API server. Anybody ever use kaboom?
Does it push etcd — that is the question. "Running scale, launching..." — let's see: these aren't real; these aren't testing etcd. This is a thing — I don't think it is; I think it's just seeing whether it can create and delete. I'm just going to do kubectl patch a million times. kubectl — here we go: "kubectl patch example". Let's just grab a snippet, okay — let's just grab a kubectl patch command, right; let's grab one of these. Here we go.
Yeah, I can use this. So if I get this deployment, right — if I make this deployment — I like this example. vim patch — vim deployment.yaml, okay. Uh-oh, I screwed up — :set paste — here we go. Okay: kubectl create -f d.yaml. Let's see if that comes up now.
What I'm going to try to do here is: I'm going to see if this comes up, and if so, I'm gonna see if I can just continuously patch it. But I don't know — this makes me start thinking about caches, because I really wonder now, with this caching thing, like: how do you even know? kubectl — what is it — patch: kubectl patch deployment patch-demo --patch-file=...
...so that we can just run, like, a bash script thing that will continuously, forever and ever, randomly patch a deployment. But I suppose you'd want to make a new deployment for every container or whatever — I guess you'd probably just want to do a single Go program that spawned a bunch of threads. But then, I don't know — you'd probably want to run it in a DaemonSet, since you could distribute the load. [Error:] "looking for an object of type key string."
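As a sketch, the "bash script that randomly patches a deployment forever" could look like this — the deployment name `patch-demo` comes from the kubectl patch example above, and the annotation key `spam` is made up:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Build a strategic-merge patch that sets a pod-template annotation to a
# given value; bumping the pod template forces a write through to etcd.
annotation_patch() {  # $1 = value for the annotation
  printf '{"spec":{"template":{"metadata":{"annotations":{"spam":"%s"}}}}}' "$1"
}

spam() {
  while true; do
    v="$RANDOM-$(date +%s)"
    kubectl patch deployment patch-demo \
      --type=strategic -p "$(annotation_patch "$v")"
  done
}

# Guarded so sourcing the file doesn't start hammering the cluster.
if [[ "${1:-}" == "--run" ]]; then spam; fi
```

In the DaemonSet version, each node's pod would run this same loop, multiplying the write load.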
Maybe I have, like, some hidden characters in here that I didn't see — I don't know. Okay, so I'm gonna end this in a second, and then we're gonna see if we can look at — maybe I'll show — we can look at Windows.
Okay, let's see — let me ch... oh, it worked! Okay, cool. So, all right: we have a little tiny kubectl patch command, at least, that we can at least use as a little template. Did I — where did I put it? I put it in here, right. So now let me put this in here, right, okay. And now, the second Antrea Live coding challenge ever, right:
Patch those deployment objects, and then, as you're doing that, monitor to see if, before every patch — have every node... Oh, I don't know how you do that — I don't know how you're detecting consistency, right. So you'd have to think of a way to detect an inconsistency, like — you'd have to — yeah. It would be a DaemonSet, and eventually you would detect, I think, an inconsistency... and to detect an inconsistency, you'd have to get to a point where one of the nodes — one of the nodes would...
...print the current value of an annotation, and sort of continue printing that out — come to think of it, like: see if you can figure out a way to compare the object, as seen in the script, over time, depending on what node in the DaemonSet was running a get on that resource that is constantly being mutated — see if things get out of sync. So, I don't know — it's my first attempt at trying to reason about how you would reproduce this. Anyways.
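One way to make that detection concrete, as a sketch: have the writer stamp a monotonically increasing counter into the annotation instead of a random value, and have each DaemonSet pod flag any read that goes backwards, since a value lower than one already observed means stale data was served. The object name and annotation key here are the same hypothetical ones as above:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Pure comparison: a current counter lower than a previously seen counter
# means we were served stale data.
check_monotonic() {  # $1 = previously seen counter, $2 = current counter
  if [[ -n "$1" && "$2" -lt "$1" ]]; then
    echo "STALE READ: saw $2 after $1"
  else
    echo "ok"
  fi
}

watch_annotation() {
  prev=""
  while true; do
    cur=$(kubectl get deployment patch-demo \
      -o jsonpath='{.spec.template.metadata.annotations.spam}')
    check_monotonic "$prev" "$cur"
    prev="$cur"
    sleep 1
  done
}

# Guarded so sourcing the file doesn't start polling the cluster.
if [[ "${1:-}" == "--run" ]]; then watch_annotation; fi
```

Each pod's logs would then show exactly when (and on which node) a stale read happened.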
So that's that. Jun Jen — hey! So, Jun Jen — did you get a t-shirt? I'm sure Jun Jen will be giving out KubeCon t-shirts at KubeCon — Antrea t-shirts. The API server — in case you folks don't know, Antrea leverages the k8s apiserver build. Yeah, it does. So, yes, it does. So Antrea...
...does that. And then — guess what, you know what else, Jun Jen? One time we were talking to the Carvel folks, and it turns out that kapp also does the same thing: kapp also leverages the same approach, to sort of leverage the API server in the same way and extend it. Cool. So, this is just, like, my first step of trying to think: okay, here's a patch command — but is there a way we can script this?
We want to put, like, a variable in here, kind of the way we did before, right, and we'd want to just keep patching it, over and over again, until maybe we were able to see an inconsistency. But there's some kind of thing where we'd have to record the state — or we would have to do the thing that I showed you all how to do last time, when we played around with running etcdctl. And what was that trick? It was — oh, what was it — etcdctl, right, yeah?
It was "endpoint status" — this one, right. So you might be able to do something like this, right, and you could see whether things — you know, check whether things get out of sync with that, right. I don't know. I think somebody wrote a blog post about it, like, with the snippet on there. No — no, I didn't write that up.
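A sketch of that etcdctl trick for an HA kind cluster — the pod name, cert paths, and endpoint discovery are assumptions about kubeadm/kind defaults, so adjust them for your setup:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Pure helper: given "endpoint raftIndex" lines, report whether the raft
# indexes agree across members.
compare_raft_indexes() {
  awk '{print $2}' | sort -u |
    awk 'END { if (NR > 1) print "DRIFT"; else print "IN SYNC" }'
}

endpoint_status() {
  # Collect every etcd member's client endpoint from the static pods.
  endpoints=$(kubectl -n kube-system get pods -l component=etcd \
    -o jsonpath='{range .items[*]}{.status.podIP}:2379,{end}' | sed 's/,$//')
  # Run etcdctl inside one of the etcd pods (name assumed for kind).
  kubectl -n kube-system exec etcd-kind-control-plane -- etcdctl \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --cert /etc/kubernetes/pki/etcd/server.crt \
    --key /etc/kubernetes/pki/etcd/server.key \
    --endpoints "$endpoints" \
    endpoint status -w table
}

# Guarded so sourcing the file doesn't touch the cluster.
if [[ "${1:-}" == "--run" ]]; then endpoint_status; fi
```

The table shows each member's revision and raft index; feeding "endpoint index" pairs through `compare_raft_indexes` is one crude way to script the out-of-sync check.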
Sorry — yeah, okay, cool. So, what we were trying to do was reproduce some etcd consistency issues and look at how we could do that, and we were able to do it.
The first time, we were able to see — well, that's not an etcd issue — the first time, we were able to reproduce this CRD-related issue, where the first time you create a CRD, the short name sometimes is not available for, like, a split second in time. We were able to reproduce that once, but we were never able to reproduce it again, which is weird — but, so, yeah. Good.
The first two Antrea Live coding challenges ever. So, Mike Zappa joined also, and we were talking about what's going on in the Windows world. So: issues, Windows — like, let's take a look and see if we can do some triage really quickly. Cool — github.com...
...kubernetes — and let's see what we've got for SIG Network issues, and let's look for ones that are assigned to nobody, right. Let's look at these, because these are the ones — so, folks want to get involved, right? Here's how you do it: you look for things that aren't assigned, right. So some of these might be tagged as, like, good first issues, but if we look briefly, we've got 60 open Windows ones and 112 SIG Network ones. So, I don't know — like, let's look at this one.
That must be something in the build that got broke. I want to see if there's anything networking-related: soak tests for Azure k8s clusters with Windows nodes (Marius); disks become read-only on Windows node (Divyen Patel) — this is an interesting one; I think Divyen saw this when testing vSphere CSI, when you rebooted. So that's an interesting one if folks are using vSphere CSI. "Improve network policy test reliability" — so, is this a meme? You opened this one — what is this? What, did you fix this?
This is a good one, I think, Mark — this would be an interesting one to start with. This is a good getting-started issue, I think.
If you wanted to learn, any of us that work on this stuff could show you this. So these print out these tables — these connectivity tables — and you could go in here and you could look at how this works. This goes off and it probes: you have agnhost pods on different nodes, and they probe each other. And so you could go in here and you could sort of look at this Windows issue, where it looks like we have some flaky network policy tests.
So we wrote — we updated these tests to support Windows recently, so you could spin these up on a Windows node and they'll probe each other. But we don't do UDP, because that somehow crashes everything, and I think there's another issue. It would be interesting to try to reproduce this and see if you can solve the reliability issues — or reproduce the reliability issues — that folks are seeing when they rerun these. So, to run these tests, you would compile Kubernetes, and then you would do make — you do make...
Where is it? antrea-live... no — where'd I put it? Where'd I put it... antrea-live — yeah, here it is. So, where is it — here it is. So I'll put this here, right. So now you could go in here, and I'm gonna put the snippet of, you know, roughly what you would do to run those tests. And you'd need a Windows cluster, and for that you can use sig-windows-dev-tools, and then you can — yeah, you can go and you can...
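Roughly what running those NetworkPolicy e2e tests looks like, as a sketch from memory — the checkout path and flags are assumptions, so double-check them against the upstream e2e docs (and use sig-windows-dev-tools to stand up the Windows cluster itself):

```shell
#!/usr/bin/env bash
set -euo pipefail

run_netpol_e2e() {
  # Build the e2e binary from a kubernetes checkout (path assumed).
  cd "$HOME/go/src/k8s.io/kubernetes"
  make WHAT=test/e2e/e2e.test
  # Run only the NetworkPolicy specs against the current kubeconfig.
  ./_output/bin/e2e.test \
    --kubeconfig "$HOME/.kube/config" \
    --ginkgo.focus='NetworkPolicy' \
    --node-os-distro=windows   # drop this flag on Linux-only clusters
}

# Guarded so sourcing the file doesn't kick off a build.
if [[ "${1:-}" == "--run" ]]; then run_netpol_e2e; fi
```

Rerunning the same focus a few times is how you'd surface the flakiness discussed above.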
This could be a good getting-started issue: getting started on advanced Windows e2e networking policy test improvements. And these are fun tests to run, because you learn a lot about Kubernetes networking. Let's see — okay, cool. So yeah, that's today's show. Thanks, folks, for coming — I see four people hung out. Oh wait — I mean, we're fixing the timeout... close.
I need to drop off — yeah, thanks, Vivek. Thanks, everyone. Yeah, for sure — cool. I'm going to drop — David, thanks, man. Thanks — yeah, everybody, thanks for coming.
So let's catch up next time, and, who knows, next time I do this, maybe we'll try the etcd thing again. But if folks are interested in trying that little coding challenge — like, I was only half joking about it — I think it would be a fun thing to try to build a thing that detected those etcd inconsistencies, and spammed the API server, all the way through to etcd, continuously, in the DaemonSet.
The DaemonSet might have to use Go functions, though, because you may need to have, like, multiple threads running — I don't know. Okay, see y'all — Antrea Live, thanks for coming. If you all are having fun when you come to the show and you're learning stuff — if you enjoyed this show, like and subscribe, so we can justify doing it more. Okay, cool — right. Thank you, everybody; see you next week with Chinchi.