From YouTube: Kubernetes SIG Windows 20201110
A: All right everybody, welcome to the SIG Windows meeting. It's the 10th of November and we have quite a few topics on our agenda, so let's dive in right away — let me share my screen here. You should be able to see this. The first item, really quickly, is a little bit of bookkeeping. We had some trouble with our Zoom meeting for the backlog prioritization meeting. I don't know how many of you are attending that — definitely a few of us, like Mark, James, myself, and Amaz, do attend — but if you're interested, feel free to attend as well. It happens bi-weekly on Thursdays at 12:30 Eastern, the same time as this meeting. We have a new invite for that meeting, including the standard passcode that Kubernetes meetings have, which is the five sevens. So if you're interested and want to attend, let us know. It's also on the Kubernetes community calendar, so if you follow that, the meeting is set there too. That covers that. So, Aravind, are you and the team ready to give us a demo of the operator?
B: Awesome. So folks, what I'm going to show as a demo today is enabling Windows workloads on an OpenShift 4.6 cluster, which is basically Kubernetes 1.19.
B: I've brought up a cluster here. The way OpenShift in general recommends its users add new services or enable day-2 operations on a cluster is using the operator model, for which we have this thing called OperatorHub. OperatorHub is actually sort of a front end into the Operator Lifecycle Manager, which will then help you install the operator, install its dependencies, and also manage its lifecycle, including upgrades.
B: So a user who wants to enable Windows workloads would come to OperatorHub and search for the Windows operator — and here it is. Here you're going to get some instructions on the prerequisites we need. The main prerequisite is that we only support clusters running with hybrid OVN-Kubernetes networking, so your cluster needs to be in that mode. Once the operator is installed, the other prerequisite we require is a private key secret.
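On OpenShift, that private-key prerequisite is supplied as an ordinary Secret. A minimal sketch — the `cloud-private-key` name, namespace, and `private-key.pem` key follow the operator's documentation, but verify them against your operator version:

```yaml
# Sketch of the private-key Secret the Windows operator consumes; the
# name, namespace, and key here are taken from the OpenShift docs and
# may differ between operator versions.
apiVersion: v1
kind: Secret
metadata:
  name: cloud-private-key
  namespace: openshift-windows-machine-config-operator
type: Opaque
stringData:
  private-key.pem: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
```

Equivalently, something like `oc create secret generic cloud-private-key --from-file=private-key.pem=<key> -n openshift-windows-machine-config-operator`.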
B: What I'm now going to do is switch to another cluster that has the operator already installed and a node attached, because bringing up a node using the operator and configuring it typically takes around 15 to 20 minutes. So I will switch to another OpenShift 4.6 cluster. The only difference here is in the Windows namespace for the operator.
B
Look
at
the
pods
and
if
you
look
at
the
logs,
you
will
see
that
I've
already
added
a
windows
vm
to
the
cluster
as
a
worker
node.
So
if
you
look
at
the
nodes
now,
you
will
see
that
there
is
a
note
that
if
I'll
go
and
look
at
the
details,
it
indicates
that
the
operating
system
is
windows.
There's
a
windows
server,
I'm
using
the
windows,
server,
2019
data
center
image,
the
kernel
version,
so
it's
an
actual
windows
note
and
then
I've
also
deployed
a
workload.
B
B
B
So we are now basically playing chess on a Windows container — I think it's using IIS for some of its back-end purposes and things like that. That's the workload that's running. I do want to go back and talk a little bit about what happens next: a customer wants to add more Windows nodes to the cluster — so what do they do? Our entry point into adding Windows workers is basically a machine set.
B: It's not very different from a Linux machine set. Based on the cloud provider, you would specify things like an instance type. The key thing to call out here is that the image ID being used needs to be a Windows image that has the Docker runtime enabled in it. Now, say the user wants to bump it up to more machines — two, for example.
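As a rough illustration of that entry point: a Windows MachineSet differs from a Linux one mainly in the provider image and the Windows label. The field names below follow the OpenShift Machine API for AWS, and every name and ID is a placeholder, not taken from the demo:

```yaml
# Hypothetical sketch of a Windows MachineSet (OpenShift Machine API,
# AWS provider); all names, IDs, and sizes are placeholders.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: demo-windows-worker
  namespace: openshift-machine-api
spec:
  replicas: 1                      # bump this to scale out Windows workers
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: demo-windows-worker
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: demo-windows-worker
    spec:
      metadata:
        labels:
          machine.openshift.io/os-id: Windows   # marks the machine as Windows
      providerSpec:
        value:
          instanceType: m5a.large
          ami:
            id: ami-0123456789abcdef0   # must be a Windows image with the
                                        # Docker runtime enabled in it
```

Scaling out is then just editing `spec.replicas`, as described next.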
B: All they have to do is bump this up to two, and then, if I go back to the operator, you will see that it has kicked in and is going to start handling the new node that's being added. So let's go and look here.
B: If you look at the logs, you'll start seeing it notice a new machine appearing, and it's now going to start configuring it. I also want to call out — Michael had asked whether we're using Cluster API. Even though our entry point is machine sets, the resulting Machine API actually uses Cluster API internally.
B
We
actually
had
to
introduce
a
set
of
fixes
to
get
windows
working
on
all
the
providers,
mainly
azure
and
vsphere,
and
we
are
at
the
moment
trying
to
upstream
those
patches
and
I'll
I'll
ping
people
on
on
sync
windows.
To
do
this.
B
The
other
major
work
that
we
have
down
the
pipeline
is
a
way
for
us
to
look
at
windows
service
event
logs
from
the
cubelet
itself,
so
we're
discussing
that
internally
and
there's
a
pr
open
up
stream.
Regarding
that,
I
can
give
more
details
on
another
day
regarding
that,
but
if
anybody
else
has
any
questions
at
this
point,
that's
the
end
of
the
demo.
Please
ask
away.
A: Yeah, I have a couple quick things, and I know Miles has a question as well. So yes, absolutely, I'd love to see the changes upstream, especially around Cluster API. We have a few folks here working on that, like Jay, Nadir, James, and more, who are driving the upstream Cluster API work on Windows. The earlier you get them your work, the more it could boost that effort.
A
So
we
definitely
want
to
kind
of
see
that
collaboration,
then
thank
you
guys
for
for
starting
on
this
journey
earlier
than
some
of
us.
The
second
thing
is
around
the
logs.
I
don't
know
how
familiar
you
are
with
the
log
manager
capability
that
microsoft
has
for
containers
that
could
help
you
with
gathering
a
lot
of
the
details
that
are
both
logs
and
events
inside
the
the
windows
container.
B: I see. So, Michael, will that also help us get the Docker runtime logs, which are written to the — what do you call them — Windows event logs?
A: But — Mark, is there the possibility — obviously it would be hard, but you could write an agent inside the container to pipe that out to somewhere it can be consumed, right? I mean, that could be the only way.
D: Possibly, yeah. Let's start a conversation on Slack about this — I'm also very interested in getting event logs off of the host.
E: Yeah, I'm interested as well. And Michael, the agent approach you were talking about — I've seen that implementation done in multiple places, but it's harder. The intent of the tooling you're talking about, the log tooling, is specifically the event logs within the container and bridging them out to stdout — so the host is still a problem.
A: Event logs — so it's more than I initially thought. I mean, I thought you were talking about the event logs inside the container, but if you're also talking about the ones on the host, then privileged containers — which we don't have — come into play, as well as having a tool that can pipe out and gather all that info.
B: Yeah. So what I'll do, Michael and Mark, is post the PR that we opened upstream, plus some of the downstream work we have done — also to make this work even on the Linux side — and then we can discuss and see what the best possible solution is.
A: All right, I'm typing notes really quickly. Code freeze is coming up on the 12th — in two days. If you haven't seen the calendar: November 12th, week nine, is code freeze, and then test freeze is right before Thanksgiving for the folks in the United States. That matters for the two big efforts we're trying to drive now. Obviously Cluster API doesn't adhere to the Kubernetes dates, but our containerd work does, so let's do a quick sync at the end of this meeting on whether we're on track for everything. Sounds good? All right, cool. So, James — node density tests for Windows.
F: Yeah. One of the issues we found with some of our customers is that we're seeing them oversubscribe their nodes and the node would lock up. This would happen in 1.18, with no changes, when they upgraded from 1.17. We have an open issue where we've tracked this, and one of the things we found that works well to resolve it is to use a system reserve or a kubelet reserve to set aside a little extra CPU for the system services. On top of that, Ravi — I think his name is Ravi — put a PR in for 1.20 that sets the priority of the kubelet, so you can set a higher priority for the kubelet along with the system reserve. During some of our initial testing while trying to resolve this issue—
F
We
noticed
that
we
need
to
kind
of
bump
that
system
reserve
as
the
node
and
number
of
pods
get
deployed
onto
the
cluster.
So
right
now
we
have
an
example
where
we
put
500,
we
reserve
500
millicourse
for
cpu
and
tester
passing
and
things
are
working,
but
we
needed
to
scale
that.
So
as
part
of
that,
I
was
looking
into
various
ways
to
figure
out
how
much
cubelet
uses
on
a
windows
node-
and
I
came
across
this
node
perf
dashboard
for
for
linux,
and
I
guess
I've
dropped
it
in
slack.
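The reservation being described can be expressed in the kubelet's config file. A minimal sketch using the upstream KubeletConfiguration type — the 500m figure just mirrors the example quoted above, not a recommendation, and the kubeReserved value is a made-up illustration:

```yaml
# KubeletConfiguration fragment reserving CPU for system services on a
# Windows node; 500m matches the example mentioned in the discussion.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: 500m      # held back for OS/system services
kubeReserved:
  cpu: 250m      # hypothetical extra reservation for the kubelet itself
```

The equivalent `--system-reserved=cpu=500m` kubelet flag also exists.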
G: Overall, on GCE, we have been running some load tests for node density. On an eight-core node we can run up to about 80 or 90 Windows pods, something like that — I don't remember the specific limits we reached.
G: I don't have the link handy, but after the meeting I'll try to pull it up. I believe we upstreamed that to some TestGrid somewhere, along with details of what we've done and what we have running — though I haven't actually looked at it in a few months.
A: Thank you, James. James, as you go through this, what's the outcome going to be? Are you thinking of creating a blog post to let folks know the scale or density they could expect from Windows, or is this more about driving performance improvements for Windows? I'm assuming both, but I just want to get your thoughts.
F: Yeah, it's both. I haven't really settled on an outcome I'm looking for; currently we're trying to figure out if we're going to have time to do this type of work or not. But I think overall, long term, we'd like to be doing this. I just wanted to see if other folks in the community are already doing it.
A: Sounds good, thank you. Who added the Docker Hub rate-limit update?
D: I added it. I don't know — Claudio or Adelina, do you have more context, or do you want me to go over what I'm aware of?
H: The rate limit, yeah. There were some updates on that issue regarding the Docker Hub rate limiting. One of the proposals we had was to ask Docker Hub for the open-source account, which they were willing to give out. But apparently it had a couple of strings attached, and some of them were not really in our — how to say — domain or scope, and would most likely implicate the whole Kubernetes community, not just SIG Windows.
H: Yeah, that's the one.
A: I've seen that play out in a couple of different threads as well. This is a recorded conversation, so I won't go into any details beyond saying that it created some problems for us.
I: But does the Azure Container Registry option work? Will that satisfy all the needs? Because if so, we don't have to worry about the free account and all that, right?
H: Ideally, we'll be able to just use the Kubernetes k8s.gcr.io registry in the near future.
H: Yeah, I think that's the best way forward.
I: Yeah, another data point — I was discussing this with Claudio — is that all the rate limiting that's been happening so far is done on a per-IP basis. So if IPs are not getting recycled too often, hitting the limit should be pretty infrequent. Then the number of variations, and the corresponding increase in the number of clusters, should not matter a huge deal, because every IP would be different and the limit is calculated against each one. So even aside from shifting to Azure, it may not actually affect the runs a whole lot.
A: Cool. I want to spend a little bit of time on the next issue, which is the Windows device plugin API. If you all remember, this is a PR that tried to make it into 1.18 and didn't, then tried to make it into 1.19 and didn't. There were a couple of things there, right — we got approval at some point from the SIG Node folks, but then the tests and the test infrastructure weren't there.
So then we said we want to make this into 1.20, but it needs to have end-to-end tests. So who's bringing that today? I'm not sure I'm saying the name right—
C: However you say it, that's my name — okay. So, I think the big question is whether it can go in before the code freeze. I would summarize the current situation as: basically everything should be ready, except one open question — I'm just not sure how to build the image for the device plugin itself.
That has been an open issue so far. Otherwise the test seems to be running and providing the required output, and it does the required check of whether the GPU is accessible; we had some discussion about that on the pull request. So this part should be fine, and it's currently running a test again, I think. But this was also a reason to add the item here, because I want to avoid this remaining issue — how to build the device plugin image — being a reason for it to fail.
A: Mark, I saw you commented on the PR as well. Do you have any thoughts on how we should proceed on this?
D: So I enabled a PR test job, which you see getting triggered there, that will run on — I think — the NC6 SKU in Azure, which has assignable GPUs. Like we just discussed, the test is running on those VMs, and I believe right now it's just calling dxdiag to dump output. But in that same PR—
D: There is code for a test image — a Dockerfile and a Go program — which will actually try to acquire the GPU and query its status through WMI calls, to make sure you can actually make calls into the GPU instead of just relying on the dxdiag output. The question is: how do we merge that into the current image building and image promotion workflows for the rest of the Windows test images?
So, on the base images we need to build the test image off of — I had commented in there that the documentation for how to test GPU workloads in Windows containers (let me pull up the Windows documentation) says it needs to be based off of mcr.microsoft.com/windows instead of windows/nanoserver or windows/servercore.
C: To clarify: the test is indeed using two images. For the test itself — the GPU test running dxdiag — it's using a standard, plain Windows Server image, whatever version it was.
That test image is just a standard, plain Windows Server image, and we've added this new repository so that Docker can access the standard Windows Server image — and this is working.
The second image, used for testing successful access to the GPU, is a standard Windows Server image which we're already using; it is successfully downloaded and executed. This is the container where we execute the dxdiag tool, which already provides some proof that the GPU was accessible. Of course — as Mark mentioned — it would also be possible to create another image that really exercises the GPU, but this is what dxdiag is already doing.
A: Yeah, but essentially that means that in our test infrastructure we need to add two Windows images, right? One is the full Windows OS, like Mark mentioned and like you mentioned, and that's where you're calling dxdiag. The other is an image that will host the device plugin. Are we shipping the image with the device plugin, or is it something every customer needs to recreate?
Indeed,
one
motivation
to
also
include
this
full
x
device
plug-in
for
testing
into
the
kubernetes
project
was
also
one
idea
as
unbeknown,
basically
to
also
get
it,
let's
say
a
maintained
by
the
kubernetes
project,
because
if
it
has
to
be
maintained
for
the
test,
it
would
be
implicitly
also
maintained,
but
indeed,
although
it
would
be,
of
course,
very
great
if
it
would
be
somehow
part
as
a
more
official
part
of
the
kubernetes
project.
But
so
far
it's
separate.
D: I think a good place for the device plugin image — we have the kubernetes-sigs org, and we have a sig-windows-tools repo under the Kubernetes org. It seems like that would be a good place, at least for now, to store the image that contains the device plugin that's intended to be run as a DaemonSet on the nodes.
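For context on what that device plugin enables once deployed: the plugin advertises a GPU resource that workloads request like any other. A hedged sketch of such a pod — the `microsoft.com/directx` resource name follows Microsoft's DirectX device-plugin documentation and may differ for other plugins, and the image tag is a placeholder:

```yaml
# Hypothetical pod requesting the GPU resource advertised by the Windows
# DirectX device plugin; resource name per Microsoft's docs, tag is a
# placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: dxdiag
    image: mcr.microsoft.com/windows:1809        # full Windows base, as discussed
    command: ["cmd", "/c", "dxdiag", "/t", "C:\\dxdiag.txt"]
    resources:
      limits:
        microsoft.com/directx: "1"               # one assignable GPU
```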
A: I agree 100 percent — we should put it under sig-windows-tools. So can you follow up with Arnold on that? James added the link to the chat; I'll put it in the notes.
D: What I think we would probably do is move the source code in there and then just have a CI job that would create the image and push it to the sig-windows-tools Docker Hub repository, or wherever the container registry for that is.
A: So let's put the source code and the image there, ask Arnold to do that, and then we'll work with Adelina and Claudio to see if we can enable CI that builds that image.
C: Sounds good. One question: do we get this working before the code freeze?
C: Or indirectly, yes — because the point is, there are two things here: GPU access, obviously, via the device plugin; but the core issue is this device plugin API port from Linux to Windows, which was a pull request from Arnold that has been waiting there for a year or whatever it was. The dependency chain is that this device plugin API pull request didn't get in because the Kubernetes core people wanted to have these e2e tests.
A: We can get that done next week, for example. This week, please work with Claudio and Adelina to make sure we get the full Windows image — the one that exercises and runs dxdiag — into the test suite, so that the e2e tests you have can finish and pass. At that point we should be good to merge the device plugin.
D: Yeah, I was just taking another look at the PR. Some updates look like they came in in the last day or two, and it looks like the PR adds a new image repository in the e2e test code that's just for pulling images off of mcr.microsoft.com. I think that probably just needs a review to help get it in — it looked like it made sense; I think I commented.
C: Yeah, just to clarify the intention: for testing the GPU access itself, we're using the standard Windows Server image, which is indeed exactly what you have shown just now. This is also working and successfully executed, everything is fine, and it's also a public registry.
D: And in order to be able to target those tests at the nodes, we need a more permanent spot to host it, correct? For the device plugin, yes. Claudio, Adelina — would it be possible to manually push that image to the e2e team's repo or the prow registry right now, because we are kind of time-sensitive? Then we can work on getting CI to build that image from source.
C: Okay, sounds good. As additional information: originally our understanding was that this container image would also be built as part of the tests, so the source code of the device plugin is already in the e2e test directory. But that's probably not used at the moment and could even be removed — I don't know.
D: I think, for the purposes of the test coverage we want to put in for 1.20, we probably don't need to build that test image as part of the e2e tests. We can just push it to the e2e team's Docker Hub repo and mirror it to the k8s prow registry, so that you can access it like you would access the standard busybox or agnhost images in the tests. Then we can work on cleaning that up.
A: No, no — the image you're using for the device plugin, the one Arnold created: you need to send it to Claudio, so that Claudio can push it to the Docker Hub repo they're using for the end-to-end — sorry, to the GitHub repo they're using for the end-to-end tests. That will get mirrored, so that when we're running the e2e tests they have access to that image.
Me and Mark can approve that check-in, or James can. That PR can happen today, tomorrow, a day later — that's fine; don't lose the source code and we can check it in. The most important thing right now is that the image you're using for the device plugin — that reference implementation — needs to be given to Claudio so that Claudio can push it in. The biggest thing is that this test needs to pass.
C: Right, yeah. As I mentioned, this source code is anyway only a copy-paste of the original source code from Arnold's repository, so it's always available there in any case. So the short-term solution is that we provide Claudio this built image containing the device plugin, and then Claudio provides us a link for the image, which we can then include in the e2e test for pulling it.
D: Then I think you're going to need to include in your PR an update to where the images are registered — the map from image name to what it points to.
C: Honestly, I don't use Slack a lot yet — I just registered to join the Kubernetes discussion. But yeah, Claudio, can you give me your contact somehow? — You can find me on Slack under the same name. — Okay.
H: That will be faster than communicating through GitHub comments.
A: Claudio is in Europe as well, so it looks like tomorrow — tonight US time — is when you guys are going to work on this. All right, we'll do it as fast as possible. All right, everybody, we need to close this meeting; we're running late and a lot of us have to leave for other meetings. Thank you all so much — have a great day.