From YouTube: Kubernetes SIG-Windows 20211116
Description
A
All right, hello everybody, and welcome to the November 16, 2021 iteration of the Kubernetes Windows community meeting. As always, these meetings are recorded and uploaded to YouTube, so be sure to adhere to the CNCF code of conduct. I'll start with some announcements.
The code freeze for 1.23 is today. It's not really fixed when the release team will start to remove enhancements from the release, but it's usually kind of after business hours in whatever the time zone is for the release lead.
A
So I think it's best to try and get those in as early as possible. I think most of the PRs that we want, the standalone PRs that we wanted for the release, have been in. I believe the OS field and HostProcess containers enhancements are in, and it looks like the log viewer one is probably not going to make it. Is that correct?
A
I think I saw we have them for the HostProcess containers and the OS field KEP, so we should be good with that. And then docs and release would like those PRs to be reviewable, or out of draft mode, by November 23rd, which is next Tuesday, and then they want at least subject-matter-expert owners to review or approve by the following Tuesday.
A
So let's just stay on top of those, so we don't jeopardize any of the enhancements. Yeah, and we also said that next week, I know it's a U.S. holiday and things are kind of light, we said that if there's no agenda then we'll just cancel the meeting by 5 p.m., so we'll take care of that.
A
Okay, first up is Jason, who wanted to do a demo of Tekton running on Windows. You want to introduce yourself? And then it should be open to sharing your screen. I'm going to stop sharing, and then it's your turn.
C
All right, go ahead. Hi, let me start sharing. Is that working? Yep. Okay, hi. So my name is Jason Hall, I work at Red Hat. I mainly work on Tekton and OpenShift Pipelines. I am just now noticing a bug in this: I was supposed to present this two weeks ago, so pretend that says November 16th instead of the actual date today. I wanted to come and share the saga, the journey, that Tekton went through to get workloads working on top of Windows.
C
I think we came up with a novel approach to the problem. I'm not sure if it's a good approach to the problem, but it works. And mainly I just wanted to sort of get some feedback from this group: specifically, whether this was a good way to solve it or a bad way to solve it, whether other people are having the same sort of problems, and how we can all work together to do what we're trying to do together.
C
I will hopefully be very quick, because I know this is only half an hour and we already have some stuff. So, Tekton is a CI/CD workflow orchestration platform on top of Kubernetes. It lets you build and test and deploy your code on Kubernetes, literally running as pods on the cluster. The idea behind it, well, one of the ideas behind it, is that you should manage and secure your build infrastructure the same way you manage and secure your prod infrastructure. Jenkins is great and a lot of people use it, but it means you have, like, a Jenkins cluster that is differently secured, and sometimes unsecured, compared to your regular production serving infrastructure, and it would be nice to have the same expertise and the same tools available to both of those. Really, really, really briefly: Tekton is a bunch of stuff, a bunch of sub-projects, but the main core one is called Pipelines.
C
We define a number of CRDs, one of which is Tasks. Tasks execute a number of steps that run as containers, and they run in order. Pipelines orchestrate Tasks that execute in a DAG: you can have two things deploy at the same time, or run unit tests and scan an image at the same time. And when executed, Tasks execute as pods on the cluster.
C
So I mentioned that steps operate in order inside the pod. This turns out to be a huge source of complexity for Tekton, because Kubernetes really, really, really wants to run all of these all at once, right? It wants to start them all and run them all; that's what it's there for. And Tekton wants to run, you know, fetch the git repo, and then unit test, and then build a container image.
C
It doesn't want to have all those run at the same time, because that would be insane. So the way that we settled on a solution for this was to inject an entrypoint binary into every step and have that binary be responsible for sort of scheduling and controlling the order of these things. So when you specify a Tekton TaskRun, it might be these steps: busybox, sleep 10 seconds and then echo hello, and then the second one starts and says cat hello.
C
If these both happened at the same time: utter chaos, nothing would work, because cat hello would run before echo hello wrote into hello.
C
When we take this TaskRun, we generate a pod that does the same thing. It has one init container that has this entrypoint image; it copies the binary out of that entrypoint image into a shared directory, and then we update each step's command to actually run that entrypoint and then, you know, do all the rest.
C
Then, with that injected entrypoint, we have another set of shared volumes that basically say: watch for this file to exist before you start the actual command. After the first step is done, it writes run-1; the second step waits for run-1 and writes run-2, etc., etc. And the good news is this mostly works most of the time. This actually works shockingly well. But back to Windows.
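The ordering trick described above, where each step's injected entrypoint waits for a marker file, runs the real command, and then writes its own marker, can be sketched roughly like this. This is a minimal illustration in Python, not Tekton's actual entrypoint binary (which is written in Go); the file names are made up:

```python
import os
import subprocess
import time

def run_step(wait_file, post_file, command, poll_interval=0.1):
    """Wait for the previous step's marker file (if any), run this
    step's real command, then write our own marker file so the next
    step's wrapper can proceed."""
    if wait_file is not None:
        while not os.path.exists(wait_file):
            time.sleep(poll_interval)
    result = subprocess.run(command)
    # Record the exit code in the marker; the next step only cares
    # that the file exists.
    with open(post_file, "w") as f:
        f.write(str(result.returncode))
    return result.returncode
```

The first step gets no wait file; each later step waits on the marker its predecessor wrote, which is what serializes containers that Kubernetes would otherwise start all at once.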
C
So this has worked really well on Linux for three years or so, but we had some users who wanted to start running these in Windows containers. The controller components can all still run on Linux, that's fine, but we need to run these pods on Windows, and the way that we were injecting entrypoints and using this entrypoint binary made that a little difficult, among the challenges that we faced.
C
We had some users who knew about Windows but didn't know anything about Tekton, and we had Tekton folks who know a lot about Tekton but none about Windows. Also, practically all of the people that were interested in Windows in this problem space were in Australia, and I am in New York, so we also had the latency issue of, you know, back-and-forth communication across the world. So we wanted to solve this similar to how we support multiple architectures.
C
Today, Tekton works on amd64 and ARM and PowerPC and s390x, and potentially maybe others in the future. The way that we got that to work so easily was that everything in Tekton is written in Go, and the images are built using a tool called ko, based on top of a distroless base image, the same one that Kubernetes uses. ko makes this really easy for us: when you ko publish or ko apply, or there's a number of things...
C
Essentially what it does is: go build whatever you need, tar that up, and put it on top of that base image in the registry. It doesn't require a Docker daemon, it doesn't do anything in containers; it literally just shells out to go build, tars it, and pushes it to a registry, because Go makes cross-compilation so easy with GOOS and GOARCH, or "goose" and "gorge" as I have started calling them. It's really fun, you should try it.
C
Essentially, if your base image is a multi-platform image, you can just say: for each platform in the base image, GOOS and GOARCH go build whatever, and then it's just more registry operations to assemble that into a multi-platform image. But ko couldn't build Windows images.
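The per-platform loop just described, one cross-compile per entry in the base image's manifest list, can be illustrated like this. A hedged sketch: the manifest-list structure follows the registry format, but `platform_matrix` is a made-up helper, not ko's actual API:

```python
def platform_matrix(base_manifest_list):
    """Return the (GOOS, GOARCH) pairs to cross-compile for, one per
    entry in the base image's manifest list (parsed JSON)."""
    pairs = []
    for m in base_manifest_list["manifests"]:
        p = m["platform"]
        pairs.append((p["os"], p["architecture"]))
    return pairs

# A ko-style tool would then run, for each pair:
#   GOOS=<os> GOARCH=<arch> go build ...
# and append each resulting binary to the matching base image entry.
```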
I didn't know anything about Windows images. I'm sure this group knows at least something about Windows images, and that's part of the reason I'm here to talk to you, to learn more. ko assumed Linux for everything, right?
C
It assumed the tar layout that Linux container images wanted and didn't know anything about Windows, and so we had to build Windows support into ko. This means adding that weird Hives directory that I still don't understand, having a magical incantation of a PAX records header or something, symlinks don't work the same, and probably a dozen other things that I know nothing about yet. But with some help from the Buildpacks folks, who have already sort of gone down these paths, we got ko to work with Windows, and it mostly works.
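From that description, a Windows layer tar needs an empty `Hives/` directory, file paths rooted under `Files/`, and a raw security descriptor attached as a PAX record. A rough sketch with Python's stdlib `tarfile`; the `MSWINDOWS.rawsd` key and the base64 value are my assumption of the "magical incantation" he mentions (ko and hcsshim use something like it), so verify against the real sources:

```python
import io
import tarfile

# Hypothetical raw security descriptor (base64) attached to each entry;
# ko embeds a constant like this so the files are readable in-container.
RAWSD = "AQAAgBQAAAAkAAAAAAAAAAAAAAABAgAAAAAABSAAAAAgAgAAAQIAAAAAAAUgAAAAIAIAAA=="

def windows_layer(binary_name, binary_bytes):
    """Build a Windows-style layer tar: empty Hives/, a Files/ root,
    and a PAX record carrying the raw security descriptor."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w", format=tarfile.PAX_FORMAT) as tw:
        for d in ("Hives", "Files"):
            info = tarfile.TarInfo(d)
            info.type = tarfile.DIRTYPE
            info.pax_headers = {"MSWINDOWS.rawsd": RAWSD}
            tw.addfile(info)
        info = tarfile.TarInfo("Files/" + binary_name)
        info.size = len(binary_bytes)
        info.pax_headers = {"MSWINDOWS.rawsd": RAWSD}
        tw.addfile(info, io.BytesIO(binary_bytes))
    return buf.getvalue()
```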
C
We even have an end-to-end test running on a GitHub Actions Windows runner, which was nice. So ko will build for every OS and architecture provided by the base image, and distroless, which we use for our base image, only provides Linux. As far as I know, there are no images in the world that exist as suitable base images for both Linux stuff and Windows stuff.
C
So we stitched one together. We wrote a simple script that sort of takes all of the images pointed to by distroless static:nonroot and all of the images pointed to by this Windows nanoserver image, stitches them together, and pushes that, and then we use that as our base image. And after we do that, everything works, as you can see here. This is a released image from the latest Tekton release.
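The stitching script described above (later said to be about 100 lines of Go) does roughly: pull each manifest list, check for platform conflicts, merge. A rough Python sketch; the field names follow the Docker manifest-list format, and the conflict key including `os.version` is my reading of "conflicts in the platforms":

```python
def stitch(*manifest_lists):
    """Merge the `manifests` entries of several manifest lists into one,
    refusing duplicate (os, architecture, os.version) platforms."""
    seen = set()
    merged = []
    for ml in manifest_lists:
        for m in ml["manifests"]:
            p = m["platform"]
            key = (p["os"], p["architecture"], p.get("os.version", ""))
            if key in seen:
                raise ValueError("platform conflict: %r" % (key,))
            seen.add(key)
            merged.append(m)
    return {
        "schemaVersion": 2,
        "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
        "manifests": merged,
    }
```

A real version would also push the merged list (and any cross-repo blob mounts) to the registry; this only shows the merge-and-check step.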
C
If you do crane manifest, you can see there are Linux entries and Windows entries, and it mostly kind of works. And so now, an updated image of our Venn diagram: there is, I think, exactly maybe one, or maybe almost one, person who understands both Windows and Tekton. That's me... I'm realizing now I shouldn't put myself in the list of people who understand Windows.
C
We still have to sort of set expectations with users and potential clients and users of this. We still don't have experience running this; we don't have CI testing for any of this in Tekton. So we went back and forth on the wording for it: Tekton doesn't officially support Windows; we provide images that allow Tekton workloads to run on Windows nodes. What you do with that is sort of up to you. We make no promises about, you know, future breaking changes or compatibility or anything like that. But it works.
C
I mean, the users who were looking for this, the friendly Australians that we talked to, are, as far as I know, completely happy with this, which is really great. Next steps are to get some CI involved and actually end-to-end test this sometimes; to write more, or have more examples and more usage and more experience with running Windows nodes and multi-OS stuff.
C
This will probably involve improving ko for building Windows images, and actually I see that Jamie is also on this call; Jamie and I talked yesterday about improving some of the underlying libraries to support building for Windows even better, for all kinds of other things. So the sort of model that ko has, of "build a thing, slap it on top of an image", is not unique to ko. Other things could just build a thing, slap it on top of an image, and should be Windows-aware if they can be.
C
The questions I bring to this group are: is this a model we should consider reusing outside of Tekton? Is it something that other people have encountered? How did they solve it, how have they tried to solve it, and did it work? I'm not married to the solution: if there's a better way out there, I'm totally happy to use that instead, but I didn't really find one. And yeah, the question before is whether ko's model is one we can reuse outside of ko: build an executable and slap it on top of an image. All things I'd love to chat about and discuss, either here or offline, or in the Slack, or anywhere.
C
You can learn more about Tekton at tekton.dev. You can learn more about ko at github.com/google/ko, and about distroless at distroless.dev. But yeah, other than that, I have nothing else to present. Thank you.
A
Thanks. I think that, yeah, the only other agenda item was Jay's, so I think we could probably spend some time discussing this, sure, and kind of answering your questions. Yes, that's... it's interesting.
C
I guess my main question is: has anyone had this problem before, whether or not they have needed this solution? Like, I can't imagine I am the first person to ever need an image built for both Linux and Windows.
A
We have that problem quite a bit, and we actually have that solved a little bit differently in the Kubernetes org, for all of the images that we use to run the e2e tests, and the pause image itself. What we actually did: so Docker BuildKit now supports that with docker buildx.
A
We use that to build Windows images on Linux machines with builders, and it's pretty similar: if you specify the output as a type of registry, and you have, I think, some experimental settings turned on, you can do something similar, where we do exactly what you did. They're mostly Go projects, so we cross-build, and then, yeah, we target the nanoserver image layer.
A
We stick the binaries that we need on there, update some environment variables, like for paths, and then stick the binaries on there and push that to the registry. And I think we had some of the same challenges that it looks like you already solved, where the Windows image needs to have the metadata in the manifest, especially for the OS version, which I believe has been addressed with the docker manifest tools. That's kind of what I'm the most familiar with here, and Claudiu, actually, who's on the call, pioneered most of that work.
F
Yeah, well, there have been some updates to docker buildx since then, and you can now simply use docker manifest to add OS versions to Docker images as well, so we don't have to hack around that ourselves.
F
So adding OS versions to the manifest images is directly supported by Docker. As for the other challenges, I have seen some of them... I'm sorry, am I on mute?
F
Strange. So I'll go again, then. I was mentioning that docker manifest finally supports adding OS versions to images as well; there was a workaround for that previously, but we don't have to rely on that anymore.
F
Indeed, we do use docker buildx to basically slap binaries and other things on top of Windows images, as you do. But I am curious about different things: when you mention that ko slaps the binaries on top of the image and so on and so forth, does ko require pulling the image first and then adding the... it doesn't?
C
No... well, it does pull the manifest, to know what, you know, the digests of each of those images, the constituent manifests in the list, are. But then it just does registry operations: it never pulls the blobs, it never has to, like, pull the full contents of the layer underneath it, because it doesn't care what's in the layer underneath it; it is just slapping something on top. So it's all registry operations: just pull manifests, modify manifests, push manifests. It never touches blobs.
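That append-without-pulling idea can be sketched as pure manifest manipulation: only the manifest JSON gains a new layer entry, and existing layers stay referenced by digest. A simplified illustration, not go-containerregistry's actual API; a real append also updates the image config's rootfs.diff_ids, which is omitted here:

```python
import copy
import hashlib

def append_layer(manifest, new_layer_bytes):
    """Return a new image manifest with an extra layer appended.
    Existing layers are referenced by digest only and never
    downloaded; that is what makes ko-style appends cheap."""
    digest = "sha256:" + hashlib.sha256(new_layer_bytes).hexdigest()
    new = copy.deepcopy(manifest)
    new["layers"].append({
        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
        "digest": digest,
        "size": len(new_layer_bytes),
    })
    return new
```

Pushing the result then needs only the new layer blob upload plus manifest PUTs, regardless of how big the base layers are.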
F
Yeah, that's, in my opinion, fantastic, because some Windows images are quite huge. For example, the Windows Server Core images: they are two gigabytes in size compressed, and if you decompress that it's going to be even larger, and if you have to build for quite a few different images, that's going to take a while. So I think that's very nicely done.
C
Right. I didn't have a demo, but I could show you a demo of building this, and it takes, you know, maybe five minutes to do all of the builds for all of the platforms and push them all. The images that you're talking about: are they only running go build on something, or are they doing other stuff? Do they need stuff from the base layers?
F
So the way we've done those images is we built some multi-stage Dockerfiles.
A
Okay. I think we do also copy... we do have kind of an image helper that caches all of the PowerShell, like, offline install files, and then from another Windows container we do copy those onto... I think there's an installer setup that we do in the nanoserver.
D
A few slides back, you mentioned something about sharing the base image. Would you expand upon what you meant there? This one? Yeah... I think it was this one, or the one right before it. Sure.
D
Okay, so maybe this one. Yeah, it might have been this one. So I guess, could you talk a little bit more about this? Because I don't think I caught exactly what you were saying.
C
So ko, by default, when you try to build an image, you say: base it on this thing. And if it's a multi-arch image or manifest list, it will figure out what all the platforms of the things underneath it are, run go build with GOOS and GOARCH, and then slap the things on top of those things. That works fine for distroless, or when there's sort of one image that contains all of the base stuff you need and care about. But we didn't, or I didn't, find an image that was a suitable sort of distroless-plus-Windows base.
C
So I built... it's a tiny, I think 100-line, Go script that just pulls these image manifests, makes sure that there aren't any conflicts in the platforms, and then stitches them together and pushes it to a registry. I think this is essentially what the docker manifest stuff could do, or does do, and maybe I can get rid of all of this by just running docker manifest to stitch together these things. But yeah, I would be totally fine removing this as a step.
C
Yeah, so currently it only stitches together these two images. If there were, you know... distroless provides all the Linux-based images, and if there were five Windows-based images I needed to stitch together, with different OS versions, this is the tool that would do that.
C
This is an area where either this tool will become more sophisticated and stitch together more images, or it will be completely erased and replaced with the docker manifest stuff, sort of similar to, it sounds like, what you all are doing with your base image.
A
Yeah, it is similar. Our base images are kind of inverted: most of them have, like, one Linux flavor and then multiple Windows flavors, like Windows Server 2019 and Windows Server 2022, because those are generally not compatible with each other. But kind of very similar, yeah.
G
But the other thing we always hate is when we actually want to build an image that needs to be built on a Windows OS. So did you have any experience of actually using real Windows nodes in the cluster, where, yeah, these build steps can be scheduled? And then, yeah, go ahead. Yeah, mainly for this scenario, because otherwise, I think, using BuildKit or buildx will mostly get it, yeah, with some hacks, if you are not actually having any RUN commands or, yeah, something that requires the operating system.
C
Yeah, so none of this is ever built on a Windows node. This is all run from... I mean, in theory it could be; ko runs on Windows if you want it to. But all of our developers build on Linux, and our CI runs on Linux, including our release process, so Windows nodes are never involved in the process.
C
One thing I wanted to... so buildx can do this, but I think if you are using buildx and Dockerfiles to express, like, a bunch of scaffolding to build up an environment where you end up running go build, that is hugely wasteful of resources and time. You have to pull the whole base image to get... you know, go build doesn't care what's in the rest of that image underneath it, but Docker, you know, buildx and Dockerfiles...
C
...need all of that before you can, you know, go build. So that's effectively what ko is trying to do: to say, if we can assume you don't care what's underneath you, which is again not every case, but in our case we know we don't care what's underneath us, just run go build, do a cross-compilation from Linux.
C
If you can, or, you know, since cross-compilation works no matter what the host OS is, do all of that and then just append it to that image up in the registry, and never have to pull that stuff. So, like, buildx will work, and it sounds like buildx is totally working for you.
C
I'm not trying to advocate, like, changing your whole build process, but I think if you know you don't need it, if you know you don't care about what's underneath you, you don't have to pull that. But buildx isn't smart enough to know that, and I don't think there's any way to tell buildx: don't pull that three-gig image, just run go build and put the thing on top of it. It's going to go and pull that three-gig image.
G
Yeah, yeah, yeah, I totally see that as a benefit of this. I was just curious because, yeah, the other problem is something that we face with a lot of customers. Even in some cases, yeah, they don't care about Linux, but, as others were mentioning, with Windows you have different flavors, so, yeah, someone will need to build the multi-arch image that supports the SAC, the LTSC, and the new Windows Server 2022, and they look for some kind of a way to orchestrate this.
C
Yeah, no, that's an area... like, I still don't know anything about the many different flavors and varieties and OS versions of Windows stuff. If Go cross-compilation works on top of those, then I never need to know, right? I hopefully never will learn. But that's only because we know we don't care about what's underneath us; we're just going to slap something on top. Yeah, okay.
A
Yeah, Jamie, thanks. I'm just saying I'm going to drop real quick, sorry, to go to SIG Node, but this is really interesting, feel free to carry on this conversation, or we could bring this up again. Jay, I'm going to make you host, just remember to stop the recording before... no, no, okay.
I
So I stumbled across ko, like, a couple weeks ago. We were already using crane to pull images and things on Linux for Windows, and to create... to package up, you know, air-gapped tarballs and stuff like that.
I
It took AWS a little while to get an AWS base image published, but if we were using something like ko to do what we package up, then we could have built 2022 on Linux way before and not have had to wait on anybody. So that's where my big interest is: the fact that we can just streamline our workflow. We don't have to worry about making Windows runners; we don't have to worry about, like... Drone currently doesn't support 2022.
I
Well, we don't have to think about Drone needing to support 2022 with the runner, because we can do it all on Linux. So, just for other people that might have questions: that's my big interest in it, and our interest in general, because we package a lot of things inside of containers and, like, use that to ship software, and we were already using crane to pull and extract stuff out, so we were already doing that by default.
C
Right, yeah, that's another good reason: aside from having to pull the whole base image, that whole base image has to exist and be runnable in your environment, or whatever. Like, ko doesn't care; ko is just going to, like, go build and then do registry stuff on top of it. So, yeah, I have nothing after this. I'm completely happy to take more questions or talk more about this. You can also hit me up offline on Slack or Twitter or anything else.
B
Jason, this is cool. Do you... we always do this SIG-Windows pairing thing, like, at this time. Do you want to drive and just, like, show us stuff? It wouldn't have to be a demo, it's just, like, if you could just show us stuff, all the tips, and wait for me.
F
One question remaining, just to make sure: when you build the image with ko, it does inherit the OS image version for Windows images, right?
C
Yeah, yeah, yeah. Actually, I can show it in the slides again.
C
Right, so it's combine. This actually demonstrates two things. One is that combine is able to copy over the OS version stuff from the stuff it combined, that it doesn't drop it; and that ko, so ko built this image, that ko...
C
It doesn't drop that on its way through either. So, yeah, both of those. If combine took multiple bases, or multiple things to stitch together, which is a relatively easy change, this could be, you know, 10 different OS versions of the base image, and then this would show up with 10 different OS versions. And to Jamie's point, this can also be Windows versions that don't exist yet, that aren't runnable yet, or that aren't, you know, rolled out on AWS or something. Yeah.
C
I don't know what people are interested in seeing, but I can show you the pull request to ko that added Windows support, because I think it's actually one of those PRs that takes a very long time to work on, but at the end of it there's only, you know, a couple hundred lines of actual changes. One of those really, really fun PRs where choosing what the right hundred lines are was the hard part.
C
It does the go build and just says: here is the path to the binary that was built. And then this function just tars it up into a tar file. Before, we were just doing, like, tar header directories; it just adds that binary to the thing. And after this, if we're building for Windows, if the base image platform is Windows, we create empty directories for Hives and Files and do this thing, which I would love for somebody who understands what this is to tell me what it means, because it's this, you know, eldritch horror of a screaming random opaque string. But this was necessary to get it to run on Windows.
C
Yeah, I guess that's mainly what we do: when we are tarring the file to put it into that new layer, we have to create these empty directories. We don't put anything into Hives; we don't put anything else into Files except for the binary that we write in this section here.
C
We write: this is the binary, and then copy the binary file into it. So the actual change for Windows here was relatively small. It was just, like: if we think we're building for a Windows layer, create these empty paths. I guess Hives is the only one that ends up empty; we create files in ko-app, and then set this magic header.
D
I wonder if the Hives... maybe Claudiu knows this, but, like, if you pull down an image and you extract it and look at it, there are the hive files; they're just a bunch of registry files, and those come from the base image. So I wonder how that interacts with the way... because you're not actually pulling the image, and so you're not uploading those files, right? Right.
C
So my completely dumb understanding of what Hives is: it's like being able to layer registry tweaking, right? So this layer doesn't need to do any tweaks; in theory it shouldn't have a Hives directory at all, but for some reason it's required.
C
So it's just going to take whatever is in that base, the same way that, like, if there was a file at path/foo/bar/baz.txt, this layer doesn't care; it just needs to sit on top of that layer.
C
Yeah, so that's, I mean, that's the main stuff we do. ko also does this thing where, if you have a path in your repository, or in your local workspace, that is a kodata directory, then it will also tar up everything in that directory and make it available as a layer in the final image.
C
This is just doing that for Windows; it's the same sort of thing. So we're creating a new layer; we have to have this Hives thing there, but other than that it's... I guess, what else is interesting? We have: if the platform is Windows, the entrypoint is C:\ko-app\filename instead of whatever, and we update the PATH to be C:\ whatever.
C
I will delete these lines of code after this meeting, because that would be great, to have fewer and fewer things you have to do.
D
You have to be careful with the exe thing: we found that with the HostProcess containers work. There are special edge cases where you do need to have the .exe.
C
Okay, yeah, sure. I mean, the nice thing also about this is that it induced us to have end-to-end tests that run on Windows, so we hopefully won't regress on anything, hopefully. But yeah, the rest of this is pretty, you know, standard and easy; I don't think there's actually that much to it. It was mainly, you know, fumbling around in the dark to figure out what this was. I straight up copied this from Buildpacks, so I still don't understand what it is.
K
It should be ContainerAdministrator, slash, ContainerUser, or something like that. That's not technically base64; that's a security descriptor, so those letters and things mean something, supposedly. I did some research.
C
Yeah, I mean... everything works. The good news is everything works; the bad news is I have no idea if I am, like, you know, invoking some terrible, you know, belly-of-the-beast terror by using this string, right? Like, if I'm not just making everything executable by default. I have no idea. But if anybody has a lead, I'm all ears.
F
I have placed in chat a link to a Windows Dockerfile in which we don't even use that .exe at the end of it. This is... okay.
F
And you mentioned that you had some issues with symlinks, right?
C
Yeah, I actually don't remember exactly what they were. Oh, in the kodata thing that I mentioned before: it's very common for people to put in their kodata directory a symlink to, like, you know, .git/HEAD, so that in your image you always have a file that tells you what the git commit was. This is sort of a lazy, hacky way of doing the ldflags thing, you know, "inject my git commit into there", which I think we'll probably stop recommending. But very, very often in kodata you will see symlinks, and I hit some issue with it near the end of being able to get this merged, and so I just gave up and added a TODO. But if I, you know, get enough activation energy, I'll go try it again and see what happened. There was some issue with chasing symlinks and putting symlinks into this that, frankly, I don't remember what the issue was. But yeah.
F
Because, technically, you can add symlinks to those tar files and submit them. I mean, that's how docker buildx does it as well. The only issue with docker buildx is that it prepends Files/ to every single symlink target, which can break the symlink itself: you'd have something like Files/C:/, which makes no sense, right?
C
Right, maybe that was the issue: that I was canonicalizing the path and then trying to add Files to it, and that made it nonsense. But yeah, I mean, this is a relatively small thing in the grand scheme of things, and Tekton doesn't even really use that much, at least on Windows, so I was more than happy to just add a TODO instead. I can also share a...
C
A work in progress, too. So ko is built on top of a library called go-containerregistry, which does most of the registry operations, like, just append a layer to this manifest or whatever and then push it. This repository also comes with a command called crane, and crane append basically does that in the CLI, so you can say, like: take this tar file, add it to this base image, push it to here.
C
This is necessary to select the Windows image out of here, but this code path didn't do the windowsification stuff; ko had the logic for it, not go-containerregistry. So this is an attempt to sort of move that up in the dependency tree and do it there. So we have a mutate... a layer that is just a regular tar file in the example here. I can show you the example again.
C
This is a tar file that I created on Linux, so it's Linux-formatted and everything. This just takes that layer tar and does the same thing: adds an empty Hives/, prepends Files/, and sets the PAX records to that eldritch horror string I mentioned before. Basically, this is moving that logic up and up and up into this package, so that you can crane append any tar file to the top of an image. This could be useful to you: you could...
C
...simulate what ko does by basically doing go build, tarring that binary, and then, sorry, you just do crane append on top of this Windows image with that tar file, and you could effectively do what ko does, without ko. The interesting thing about this is that you can do it for languages that aren't Go: anything that can cross-compile. If you can do cross-compilation of .NET binaries, cross-compilation of Python, or cross-compilation of anything else, you should be able to extend this to do that with anything.
F
My
minor
question,
technically
speaking,
that
nanoserver
image
is
actually
a
manifest
list
which
contains
both
amd
64
and
arm
64
images.
I
think
yeah.
C
Oh okay, so for this example you would just do crane append with the platform set to amd64 and the OS version set to this one, or whatever. Yeah, this was just merged last week, so these two together would basically give you that ability. I haven't tested it, but it should give you that ability.
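Picking one image out of a manifest list by platform, the way a platform flag on an append would, can be sketched like this. The index types are simplified stand-ins for the OCI image index, and the digests and OS versions are made up for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Platform mirrors the OCI platform object on an index entry.
type Platform struct {
	OS           string `json:"os"`
	Architecture string `json:"architecture"`
	OSVersion    string `json:"os.version,omitempty"`
}

type indexEntry struct {
	Digest   string   `json:"digest"`
	Platform Platform `json:"platform"`
}

type imageIndex struct {
	Manifests []indexEntry `json:"manifests"`
}

// selectManifest returns the digest of the first index entry matching
// the requested platform; an empty OSVersion matches any version.
func selectManifest(rawIndex []byte, want Platform) (string, bool) {
	var idx imageIndex
	if err := json.Unmarshal(rawIndex, &idx); err != nil {
		return "", false
	}
	for _, m := range idx.Manifests {
		if m.Platform.OS == want.OS &&
			m.Platform.Architecture == want.Architecture &&
			(want.OSVersion == "" || m.Platform.OSVersion == want.OSVersion) {
			return m.Digest, true
		}
	}
	return "", false
}

func main() {
	// Hypothetical manifest list with amd64 and arm64 Windows images.
	raw := []byte(`{"manifests":[
	  {"digest":"sha256:amd64img","platform":{"os":"windows","architecture":"amd64","os.version":"10.0.17763.2300"}},
	  {"digest":"sha256:arm64img","platform":{"os":"windows","architecture":"arm64","os.version":"10.0.17763.2300"}}
	]}`)

	d, ok := selectManifest(raw, Platform{OS: "windows", Architecture: "amd64"})
	fmt.Println(d, ok)
}
```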
C
So yeah, I don't know; I have no direction, but I'm more than happy to keep vamping and talking about this stuff. If people have questions, I can answer them, take them.
C
It
sounds
like
this
is
at
least
a
novel
way
of
approaching
this,
and
maybe
even
a
useful
one.
So
you
know
feel
free
to
keep
this
in
mind
in
the
future.
C
If
you're,
if
you're,
hitting
problems
like
this
one
of
the
real
like
tenets
of
co
is,
if
you
don't
care
about,
if
you
don't
care
about
having
the
whole
operating
system
available
to
you
when
you
build
stuff
like
why
are
you
pulling
all
these
images
and
pulling
all
these
layers
down
just
to
hydrate
an
os
around
you
that
you
just
run,
go
build
inside
of
and
destroy
like
you
should
just
go
run,
go
build
in
your
regular
environment,
where
we
assume
it
has
no
side
effects,
we
assume
it
can
cross
compile.
C
You know, it's pretty fast, it shares your build cache, a lot of good stuff; and then you just take that and slap it on top of an image using registry operations. It's pretty nice, yeah.
C
Yeah,
so
so
one
thing
because
go
binaries
are
static
and
don't
care
about.
What's
underneath
them,
we
tend
to
use
distroless
static
as
our
base
image,
which
is
very,
very
small
and
has
almost
nothing
in
it.
If
you
do
care
or
do
need
something
from
your
base
image,
you
can
absolutely
do
that.
But
code
is
like
you
know,
super
optimized
for
go
that
doesn't
care
about
it,
so
we
don't
tend
to
worry
about
it,
but
to
jaime's
point
you
can
absolutely
build
something
that
does
need
dlls
from
somewhere
else.
F
Yeah,
we
usually
cache
those
dlls
into
a
different
linux,
based
images
or
rather
scratch
based
images,
and
we
just
pull
that
image,
which
will
be
something
like
one
megabyte
in
size.
F
Yeah
yeah,
for
example,
we
do
have
windows,
server,
core
cache,
which
does
exactly
this
thing.
As
I
mentioned
there
are
so
there
are
certain
applications
that
require
some
dlls
like
net
api
32
dll.
F
In
order
to
run
some
http
servers,
those
are
not
available
in
nano
server,
they
are
available
on
server
core
and
the
solution
would
be
to
just
copy
them
over
to
run
a
server
but
again
pulling
the
entire
server
core
image
would
be
extremely
painful,
so
we
just
cached
those
dls
into
an
image,
a
scratch
image
and
we
use
those
in
later
builds.
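The DLL-caching flow described here is conventionally expressed as a multi-stage build along these lines. This is a hypothetical sketch, not the actual windows-server-core-cache build: the image tags and the exact DLL list are assumptions.

```dockerfile
# Stage 1: the huge Server Core image, used only as a source of DLLs.
# (Tag is an assumption; pick whichever Server Core version you target.)
FROM mcr.microsoft.com/windows/servercore:ltsc2022 AS core

# Stage 2: a scratch-based cache image holding just the DLLs some
# servers need (e.g. netapi32.dll). Later builds pull this ~1 MB image
# instead of the multi-gigabyte Server Core image.
FROM scratch AS dll-cache
COPY --from=core /Windows/System32/netapi32.dll /Windows/System32/
```

A later build would then COPY the DLLs out of `dll-cache` on top of a Nano Server base, keeping pulls small.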
C
Cool
yeah
I
mean
I
I
I
still
know
almost
nothing
about
windows,
but
the
the
more
I
learn
about
it.
The
more
I
think,
like
this
model,
is
really
good.
Just
because
your
base
images
are
so
huge
and
you
probably
don't
need
them
for
most
of
the
stuff
you're
doing.
I
think
it
can
be
a
real
real
benefit
and
even
just
being
able
to
like,
like
techton's
release
process
was
hardly
changed
throughout
this.
C
All
we
needed
to
do
was
add
that
combined
step
and
then
build
on
top
of
it,
and
then
you
know
our
release.
Infrastructure
still
runs.
Linux
doesn't
need
to
care
about
windows,
we'll
eventually
need
to
add,
like
windows,
nodes
for
ci
testing,
this
stuff,
just
to
be
able
to
make
sure
we're
not
regressing
on
windows
in
some
way,
but
it
was.
It
was
super
nice
to
be
able
to
just
yeah
dude,
like
multi,
multi-arch
and
multi-os
base
image
stuff
super
nice.
C
Yeah,
I
guess
hit
me
up
on
slack
or
or
anywhere
if
you
have
questions
or
or
want
any
pointers
to
any
of
this
stuff
later
I'll.
Add
the
the
slides
to
the
agenda
I'll
link
to
the
slides
to
the
agenda
if,
if
anyone's
curious,
but
yeah
thanks.
C
Yeah,
I
I
yeah,
I
think,
I'm
in
there,
I'm
pretty
sure,
I'm
in
there,
if
not
at
jason
hall.
You
should
be
able
to
find
me.
F
Great
sounds
amazing.
Thanks
for
the
information
and
presentation.
C
Thank
you
thanks
for
having
me
yeah
talk
to
you
later,
everyone.
Thank
you.