Description
Current progress and challenges encountered switching CNG images to multi-arch.
Lots of thanks to Serena He for kickstarting MRs:
* https://gitlab.com/gitlab-org/build/CNG/-/merge_requests/1250
* https://gitlab.com/gitlab-org/build/CNG/-/merge_requests/1368
* https://gitlab.com/gitlab-org/build/CNG/-/merge_requests/1377
A: Okay, today is July 5th, 2023, and this is the Distribution team demo for the CNG multi-arch builds, so without too much ado, let me start sharing my screen.
A: First, we split this up into several components. The first component went in already; that was just the infrastructure for the multi-arch builds, where we were not yet building multi-arch, we only adjusted the Dockerfiles so they are able to build multi-arch. The portion I have been working on lately is basically a takeover of the community contribution from Serena, where she adjusted the pipeline to build the actual multi-arch images. But because a lot of the pieces she doesn't have access to are the infrastructure and everything else, I had to take it over and push it forward.
A: That turned into the 1425 merge request, and that's where things are happening at the moment. So that's the overall description. The overall effort for the entire multi-arch build is tracked in this epic right here.
A: If anybody wants to go and take a look, feel free. Jumping right into it, the current state of 1425 is that it's a functional beta, in my opinion, meaning that things are building and deploying. But there are some unresolved issues on the back end: for example, we're not always cleaning up pods, or sometimes the jobs get stuck, things that don't happen today. Basically, these are net-new failures that we didn't have before, which is not nice.
A: So that's the outline of the challenges; I'm just looking at the documentation here for the notes. We now depend on buildx and the Kubernetes driver for buildx, and that, in combination with KAS (the GitLab agent for Kubernetes), which was the option we went with for the initial implementation, introduced quite a few interesting things.
A: To begin with, buildx was lacking support for a Bearer token, meaning we couldn't authenticate with our own cluster. So I submitted a patch upstream, with the help of folks from the KAS team; the patch is still being processed. In the meantime we are monkey-patching our own buildx build into our builder images so the bearer token gets processed properly. That needs to be known, but the 418 image is already there, I believe. That's the... yeah.
A
That's
the
merge
request.
That
did
it.
So
if
anybody
is
curious
to
go
and
take
a
look
so
going
lower,
so
this
is
the
overall.
How
do
we
try
to
do
it?
Because
the
alternative
to
the
cast
was
to
actually
add
to
the
Ruby
Docker
container
G
Cloud
binary?
A: Well, the gcloud distribution, basically the whole thing, so that we can generate the kubeconfig and be able to access the cluster. Because as of 1.24, I believe, it relies on the gcloud plugin for kubectl to actually authenticate against the cluster, and if that is not there, no dice. That was a heavier solution, so we opted for the lighter solution with KAS, but that, as I mentioned earlier, introduced some issues.
A: The second consideration was: how do we do builders? Because of the way buildx does things. Let me switch to the code for a little bit, and let me turn on wrapping here.
A: What's happening is that with buildx you can create builders for your needs, and then you have to attach those builders. This is the bootstrap right here.
A: Then, when you build and buildx notices that the target platform is any of the ones a builder covers, it will use that builder to build the necessary image, and more than that, it will actually combine them all together into a single multi-arch image in the end. Just to remind those who do not know or forgot: a multi-arch image is not one big fat image including everything on Earth; it's an index pointing to the images for each particular platform.
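To make the flow above concrete, here is a minimal sketch of what such a build invocation looks like; the builder name, image name, and tag are illustrative placeholders, not necessarily the exact ones used in MR 1425:

```shell
# Assumes a builder covering both platforms has already been created and
# bootstrapped (see further down); names and tags here are placeholders.
docker buildx build \
  --builder "cng-${CI_COMMIT_REF_SLUG}" \
  --platform linux/amd64,linux/arm64 \
  --tag "${CI_REGISTRY_IMAGE}/gitlab-base:${CI_COMMIT_SHORT_SHA}" \
  --push \
  .
```

A single invocation like this builds the per-arch images and pushes them together with the manifest index that points at them.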
A: When an image needs to be pulled for a particular platform, the client pulls that specific image; it doesn't pull all of them at the same time. We'll come back to that later, because it was a gotcha in our pipelines. Also, to use those buildx builders and the pods they run in, one has to go through this on every job, basically to reconnect to the pods: they get spawned once, but to reconnect to them you have to do the entire dance again, and that includes stopping; even to stop the builder you first have to connect to it, and then you can stop it.
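As a rough sketch of that per-job dance, assuming a per-branch builder name and a dedicated buildx namespace (both hypothetical here, and the exact flags in the MR may differ):

```shell
BUILDER="cng-${CI_COMMIT_REF_SLUG}"

# Declare the builder; if it already exists in the buildx state this fails
# harmlessly and we simply reuse it.
docker buildx create --name "${BUILDER}" --driver kubernetes \
  --driver-opt namespace=buildx --platform linux/amd64 || true

# Bootstrapping is what actually spins up (or reconnects to) the builder pods.
docker buildx inspect --bootstrap "${BUILDER}"

# ...build jobs run against ${BUILDER}...

# Even tearing the builder down needs the connection first; rm then removes
# the Kubernetes deployment behind it.
docker buildx rm "${BUILDER}"
```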
A
You
have
to
first
connect
it,
and
then
you
have
to
stop
it
so
going
back
to
the
notes
so
shared
with
versus
individual
shared
Builder.
If
we
just
spin
up
like
the
idea
could
be
that
we
spin
up
the
builders
like
we
normally
do
currently
have
a
bunch
of
Builders
like
10,
15
whatever,
and
then
just
let
every
job
just
address
those
Builders
and
that's
it,
but
that
doesn't
quite
scale
and
it's
much
harder.
It's
not
impossible,
but
it's
harder
to
gauge
when
to
scale
and
how
to
scale
those
builders.
A
So
instead
I
opted
for
the
individual
per
Branch
Builders.
So
basically,
each
one
each
branch
HMR
will
create
its
own
Builder
pool
two
replicas
per
platform
and
it
seems
to
work
just
fine.
So
far,
I
didn't
see
any
bottlenecks
based
on
that.
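A sketch of what a per-branch builder pool with two replicas per platform could look like with the kubernetes driver; the builder name, namespace, and node selectors are assumptions for illustration:

```shell
BUILDER="cng-${CI_COMMIT_REF_SLUG}"

# amd64 node: a Deployment with two BuildKit replicas on amd64 cluster nodes.
docker buildx create --name "${BUILDER}" \
  --driver kubernetes \
  --driver-opt namespace=buildx,replicas=2,"nodeselector=kubernetes.io/arch=amd64" \
  --platform linux/amd64

# arm64 node: appended to the same builder, again with two replicas.
docker buildx create --append --name "${BUILDER}" \
  --driver kubernetes \
  --driver-opt namespace=buildx,replicas=2,"nodeselector=kubernetes.io/arch=arm64" \
  --platform linux/arm64
```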
A
So
with
the
two
replicas,
it
works,
but
I
have
not
been
able
to
find
my
documentation
and
sense
of
how
do
you
gauge
or
how
do
you
properly
scale,
this
kind
of
deployment
either
way
it
seems
to
be
working
and
we
can
tear
down
the
unnecessary
pods
with
environments
at
the
end
and
environments
is
actually
another
thing
that
happened
here.
That
is
slightly
artificial,
but
it
was
a
necessary
evil.
So
to
speak.
So
previously
we
did
not
do
like
when
we
do
builds.
A
We
did
not
create
any
environments
because
there's
no
need
to,
but
to
use
Cas
we
need
to
create
environment
and
do
deployment
into
that
environment.
So
we
have
to
artificially
create
the
environment
with
the
name,
build
X
and
then
whatever
so.
Basically,
cash
is
configured
that
any
environment
with
the
main
build
X.
Yes
cubeconfig
for
the
build
X
cluster
and
for
the
moment,
I
made
it
fairly
aggressive
to
stop
in
four
hours
so
that
we
have
nice
scaling
happening
there
and
you
know
not
over
utilization
of
resources,
which
I
can
ask.
A
So
right
now
there
are
no
running
pods
for
the
build
X,
which
is
a
good
thing,
which
means
that
you
know
they
all
worked
out
in.
They
all
disappeared
in
the
end.
So
because
of
that,
because
we
create
for
each
merge
request
or
the
Branch
the
same
set
of
Builders
I
actually
created
a
singular,
stop
build
X,
even
though
for
each
job
we
create
the
new
build
X
or
we
seem
to
create
the
new
buildax
environment.
A
But
we
don't
we
just
reconnecting
to
the
existing
one,
because
it's
the
same
name,
so
it
just
notices
that
it's
there
and
it
just
reconnects
to
it
rather
than
bootstrapping
it
completely
and
creating
brand
new
pods.
So
there
is
a
singular
stop
build
as
job
that
does
the
trick
at
the
very
end
of
the
pipeline.
So
if
we
go
back
and
take
a
look
at
the
pipeline
itself,
okay,
let's
take
a
look
here.
A
It's
right
at
the
tail
end
here
at
the
cleanup
stage,
so
once
everything
is
complete
and
every
single
job
here
is
attempting
to
create
the
build
x.
Builders
in
the
cluster
I
create
the
pods
and
if
they
do
exist,
just
skip
right
on.
A
We
can
actually
go
and
take
a
look
at
any
one
of
them
and
take
a
look
at
what's
happening
at
the
start,
because
that's
the
most
visible
part.
A: Yeah, so this is the output of what's actually happening in the background, and this is why this bootstrapping is important: otherwise buildx records the intent to have the builders in the Kubernetes cluster, but it never spins them up and never connects to them. Well, at least in my experimentation it didn't. So the bootstrapping is a necessary part of this entire thing. It takes a little bit of time, but it's not critical to the overall pipeline execution.
C: I have a couple of immediate concerns. First off, "a little bit of time": define "a little bit of time".
C: We will need, at some point, to quantify what is normal, what is acceptable, and what is a P95 failure, right? At what point does it take a minute? A minute on every single job is a serious problem, right? If it should only take a few seconds per job once the idempotent behavior has actually kicked in, we need to know that.
A: So this could be what you're referring to, and where we could put the check and put a timeout on this job, saying that this job should time out in, you know, whatever time we define.
C: Timeouts are not what I'm concerned about at all. I'm literally asking: how much overhead does this add to every single job, and why, and what counts as abnormal? We're adding a complex piece of infrastructure; we need to know its behaviors, and the whys and the hows should at least start to be documented. That would indicate to us where there is a failure, why there is a failure, what's causing long poles, things like that.
A: From my observations so far, just working with this MR: if you take a look, there were 67 pipelines run on it, so that's 67 instances. I did not see the initiation of the builders in the cluster ever be an issue. There are other issues, and they're still there, but that was not one of them so far. Just to put some context here.
A: On efficiency, I'll flip it around a little bit: our main goal was to build the multi-arch images, not to gain efficiency or speed up the pipeline. That was not the goal here to begin with. That goal was achieved with the introduction of buildx, and buildx, because of how it works and how things are organized, lets us scale and gives us an easier path into the multi-arch builds.
A
We
could
have
done
it
with
the
builder,
for
example,
and
have
the
or
even
with
the
standard
Docker
build
and
do
the
runners
on
different
platforms
and
kind
of
combine
the
images
later
ourselves
and
kind
of
publish
that,
so
the
efficiency
comes
in,
build
decks
castigating
most
of
it.
For
us,
we
just
created
the
infrastructure.
Everything
else
is
done
for
us.
A
That's
what
I
was
talking
about
earlier
when
I
was
saying
the
talking
about
the
shared
versus
individual
Builders.
A: That's the portion where, at the moment, it's an unknown entity: how to scale the buildx deployment in a shared kind of scenario. There is not much documentation on the Docker website, and there's not much documentation in the community. It's just mentioned that you can create as many replicas as you want, and that's about as far as it goes. So how many replicas, and when is it going to croak? It's an unknown.
A: When it can't... sorry, what?
A
The
build
X
cluster
is
set
to
Auto
scale
so
and
it
will
be
Auto
scaling
based
on
the
both
numbers,
rather
than
anything
else
so
I'm,
because
we
didn't
put
any
limits
or
anything
else
on
it.
It's
honestly
hard
to
judge
the
resource
consumption
and
there
is
there
are
no
guidelines
that
I
could
find
so
far
on
what
how
the
measure
resources
needed
by
the
build
X
pod
period.
A: You can pass quite a few settings to it. Let me take a look right here.
A
So
if
we
go
up,
we
can
say
you
know
how
much
memory,
how
much
CPU
and
so
forth
and
try
to
curate
this
a
little
bit
more,
but
nowhere
throughout
this
documentation
does
it
say
this.
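For reference, these are the kinds of settings meant here: the kubernetes driver does accept resource requests and limits as driver options, it just offers no guidance on values. The numbers below are placeholders, not recommendations:

```shell
# Hypothetical resource knobs on builder creation; values are placeholders.
docker buildx create --name "${BUILDER}" \
  --driver kubernetes \
  --driver-opt namespace=buildx,replicas=2 \
  --driver-opt requests.cpu=2,requests.memory=4Gi \
  --driver-opt limits.cpu=4,limits.memory=8Gi \
  --platform linux/arm64
```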
C: The one thing we want to keep in mind is: if we define nothing, it will be purely best-effort, right? That's just a thing about how Kubernetes works. From an administrative perspective, the documentation can't tell you "I need this much CPU and memory", because it has no idea what you're doing.
C
It's
the
same
problem
with
RCI,
if
you
run
it
on
a
runner,
that's
only
got
two
gigs
and
you're
trying
to
compile
Chrome
yeah,
that's
gonna
work
if
we
set
limits
as
an
example
here
right,
if
you
say,
I
want
to
never
use
more
than
two
gigs.
Most
of
our
code
will
probably
run
for
the
CNG,
at
least,
but
anytime.
A
Okay,
I
noted
it
down,
it's
I,
think
one
way
of
for
us
to
do.
It
would
be
to
crank
up
the
monitoring
for
that
cluster
and,
as
we
run,
the
jobs
just
monitor
how
much
resources
are
being
consumed
overall
and
kind
of
go
from
there.
That's
going
to
be
the
best
estimate,
I
could
see
because
I
was
trying
to
kind
of
get
into
those
pods
and
see
what
they're
doing
it's,
not
that
easy.
A
It's
kind
of
tucked
away
neatly
somewhere
in
the
background,
so
I,
it's
hard
to
say
what's
happening
and
what
are
the
requirements.
But
yes,
it's
like
I,
totally
agree.
It's
a
valid
point.
We
kind
of
walking
into
a
big
great
unknown
with
the
build
X.
As
is
we.
We
were
kind
of
I.
A
A
A
D: Yeah, yeah. Do you know if builders work like runners, in the sense that if one builder is occupied, then the other jobs that want to use it have to wait, like with a runner where you have to wait? Does it depend on the number of pods? For instance, is one pod one builder that can take one build request, or is it parallel and it can take many requests? How does this work?
A: Just from observation, it looks like it does it in parallel, like when I was running this pipeline. Let's go back to this guy right here.
A: So, going back to the challenges and changes. Because the build is happening away from the GitLab Runner itself, here is the other one: things happen remotely, you have less control over what's going on, and you are leaving a lot to the builder's logic on how to pick an image or which cache gets applied to it. Our initial trouble was that we were cross-polluting amd64 and arm64 builds with binaries from either one, depending on which one got published, which one buildx pulled, and which was in the context. So that becomes really, really important there.
A: And on skopeo, just from my personal experience: somewhere between buildx and our registry, sometimes the media type for the image did not change for some reason when I pushed it with buildx alone, but when I did it with skopeo, it changed the media type, and the media type was proper, the index rather than the singular image, and so on. Even though you see the full manifest when you pull, the media type is wrong, and that screws up a lot of things.
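For context on why skopeo behaves better here: copying with --all carries every per-arch image plus the index itself, so the media type stays an index rather than collapsing to a single-arch manifest. A hedged sketch, with placeholder image references:

```shell
# Copy the whole manifest list, not just the image matching the local platform.
skopeo copy --all \
  "docker://${CI_REGISTRY_IMAGE}/gitlab-base:${SOURCE_TAG}" \
  "docker://${CI_REGISTRY_IMAGE}/gitlab-base:${TARGET_TAG}"
```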
A
So
as
a
note,
I
opted
for
using
Scorpio
for
any
of
the
manipulation
of
the
image
just
because
it
seemed
to
be
more
reliable
and
because
of
how
Kaz
is
working
and
I.
A
I
did
not
know
that
it
wasn't
documented
much,
but
there's
a
timeout
built
into
the
cars
on
when
it
times
out
the
jobs
running
against
the
cluster
and
the
initial
timeout
was
30
minutes
and
thanks
to
DJ's
intervention,
the
folks
from
I
think
it
was
the
intra
that
bumped
us
to
two
hours
now,
but
I
still
see
the
timeouts
like
I,
still
see
the
job.
Hang
and
I
have
no
explanation
for
that.
A
For
the
moment,
I
just
have
to
restart
it
and
sometimes
like
going
back
to
what
geologists
mentioned
like
is
that
a
you
know,
concurrency
thing
no
I
had
one
job,
just
single
job.
The
gitlab
rails,
ee,
for
example,
and
I-
had
to
restart
it
three
times,
while
no
other
job
was
running
for
it
to
complete
so
and
it
would
hang
Midway
through
way
before
two
hours
expired,
and
it
still
does
it
and
I.
A
Don't
know
why
it's
a
little
harder
to
troubleshoot
that
one
so
for
now
the
other
catch
I'll
I'll
probably
have
to
go
a
little
faster
here,
just
to
wrap
it
up
to
be
on
time,
but
I've
hard
coded
for
the
two
arches
the
arm
and
AMD
64..
A
We
probably
will
need
to
abstract
it
away
if
we
are
to
pick
up
risk
builds
later
on.
The
good
news
is
with
the
risk
builds
and
the
with
the
build
X
to
build
X.
You
can
just
add
another
Builder
with
the
specific
art.
So
if
we
get
some
platform
that
provides
us
with
the
risk
Builders,
we
can
actually
run
there
and
just
add
it
in
those
jobs.
The
same
way.
I
do
with
AMD
and
arm64,
and
we
can
just
get
going
with
the
more
architectures
yeah.
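If another architecture does show up later, the change should mostly be another appended node on the same builder. A purely illustrative sketch, using riscv64 as a stand-in:

```shell
# Hypothetical: add a node for an extra architecture to the existing builder.
docker buildx create --append --name "${BUILDER}" \
  --driver kubernetes \
  --driver-opt namespace=buildx,replicas=2,"nodeselector=kubernetes.io/arch=riscv64" \
  --platform linux/riscv64
```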
A
The
media
type
I
just
mentioned
before,
like
when
we
publish
first
thing
to
check,
is
the
media
type
and
that's
you
will
see
that
in
the
pipeline.
It's
actually
I've
made
a
modification
to
check
for
the
media
type
to
be
proper.
To
conclude
whether
image
exists
or
not.
If
images
are
the
wrong
media
type,
I
just
conclude
that
it
doesn't
exist.
It
also
helps
with
the
initial
migration
when
we
will
migrate
from
existing
images
to
the
multi-arc
images,
because
all
of
the
existing
images
are
going
to
be
not
media
type.
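A minimal sketch of that media-type check, assuming skopeo and jq are available in the job image and using a hypothetical IMAGE variable; the actual script in the MR may differ:

```shell
# Treat only a manifest list / OCI index as "the multi-arch image exists";
# a plain single-arch image manifest counts as missing and triggers a rebuild.
media_type=$(skopeo inspect --raw "docker://${IMAGE}" | jq -r '.mediaType')

case "${media_type}" in
  application/vnd.oci.image.index.v1+json|application/vnd.docker.distribution.manifest.list.v2+json)
    echo "multi-arch index found for ${IMAGE}"
    ;;
  *)
    echo "no multi-arch index for ${IMAGE} (got: ${media_type})"
    exit 1
    ;;
esac
```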
A
Just
even
I
can't
remember
the
full
name
of
it,
but
it's
just
an
image,
not
the
index,
and
we
did
split
out
the
Ubi
and
fips
modifications
related
to
those
two
into
a
separate
merch
request,
because
those
are
quite
a
bit
more
involved
and
there
are
other
things
happening
there
that
need
to
be
addressed.
So
we
figured
it'll
be
easier
to
get
things
done
as
a
first
iteration.
A
Second,
technically,
to
get
this
done
and
then
move
on
to
the
fips
and
Ubi,
especially
that
there
is
no
particular
demand
for
those
to
be
available
as
as
of
now,
but
we
do
need
to
produce
just
a
normal,
the
canonical
images
in
the
multi-arc
or
our
partners,
all
right.
So
any
more
questions
or
shall
we
wrap
up.
C: Are there a few other pieces, like documentation or presentations that we've got from the past, either from KubeCons or from other people that were following the same steps, just for everybody to read up on?
A
Get
that
for
the
1425
pushed
out.
That
would
be
the
implementation
okay
pushed
out.
First
of
all,
we
have
to
iron
it
out,
and
then
we
can
push
it
out
like
once.
We
sort
out
the
issues
with
jobs
hanging
or
we
just
conclude
that
this
is
the
Lesser
evil
and
we
can
live
with
it,
which
I
don't
think
we
can.
But
hey.
A
You
know
priorities.
So
it's
what
I
don't
know
like
I'm.
Probably
didn't
quite
answer
your
question,
but
I
don't
have
a
better
answer.
No.
C: The only additional question I have is: what can we do to improve the situation where we have to push and pull, push and pull? Either a cluster-internal registry or a shared cache that's in the same region, so that we're not building in one region and then pulling and pushing back and forth across regions, because we're talking about a lot of data back and forth.
A: Yeah, especially since that would affect FIPS and UBI, because there we do try to fetch assets from the pre-existing images: we pull the image and extract the assets out of it, whereas we don't do that with the canonical images as much. So there's a lot more pulling in UBI and FIPS for sure, and we have to pull both images to extract both arches; well, at least right now it's both, and if it becomes three arches, then three, four, whatever.
A
The
alternative
to
it,
of
course,
is
to
scrape
the
entire
buildx
implementation
and
go
with
Lessons
Learned
and
try
to
do
this
with
a
buildup.
Then
everything
becomes
much
more
local
and
much
more
controllable.
In
that
way,
certain
things
will
go
away.
Certain
things
will
surface
that
we
don't
have
to
deal
with
right
now,
like
how
do
we
manage
the
fleet?
How
do
we
deploy
like
build
that
build?
That
will
require
the
full
whole
new
Fleet
of
builders.
C
Well,
sorry,
not
quite,
but
we
can
sort
of
sort
that
out
go
ahead
and
show
out
before
I
ask
anything
else.
A: Well, because... okay, we could have done that, but that means we are now in the business of managing the runner VMs, because we need the arm runners as well. So now you have to spin up the runner, the arm runner, probably on GKE in GCP, and that's the cost of doing it that way.
A
Cost
to
doing
this,
kubernetes
seemed
at
the
start
of
this
before
we
walked
into
all
the
problems
we've
seen
at
the
start
of
it.
It
seemed
like
the
most
elegant
solution
because
it
scales
nicely.
It's
got
all
the
things
that
we
don't
have
to
code
built
in
and
that's
why
we
went
with
the
kubernetes
to
begin
with.
D
I
I
asked
this
because
I
know
of
other
projects
that
we
use
view
decks
and
we
haven't
been
using
kubernetes,
but
we
have
been
building
multiple
Arch
like
arm
and
AMD
so
and
we're
using
like
the
gitlab,
shared
Runners,
I.
Think.
C
So,
when
you
do
that,
basically
all
the
operations
are
working
through
quemu
or
a
native
compile
chain
that
can
compile
to
the
Target
the
the
long
and
short
of
it
comes
down
to
telling
go
to
build
an
arm.
64
binary
is
super
easy
telling
Ruby
and
everything
that
it
knows
and
everything
that
it
relies
upon
and
all
of
its
external
modules
AKA
see
lost
Etc
that
hey
by
the
way
you're
not
actually
on
the
platform
that
you're
building
for
is
less
reliable.
C
Okay.
So,
when
you're
doing
that
cremeu
that
that
emulation
by
definition,
it's
not
instruction
translation,
it's
literally.
This
is
the
instruction.
Let
me
figure
out
how
to
do
the
thing.
The
we
have
done,
the
timings
on
extremely
large
items
right
and
golang,
it's
great,
that
it's
awesome
and
fast
for
that
language.
But
if
you
take
an
Omnibus
as
an
example,
okay
and
you
run
it
on
a
64
core,
Intel
64
course-
and
you
say,
build
me
this
for
arm-
do
you
know
what
happens.
C: One second. The evidence to the contrary basically being "well, it worked for me": well, I have maintained a distribution targeted at four arm architectures for the better part of a decade. Trust me, my experience tells me there are edge cases you don't want to find. It's easier to just go around them entirely, and since the capability is now perfectly available to us, let's make use of it.
C: BuildKit, so yeah, buildx effectively uses BuildKit behind the scenes, and that can deploy into Kubernetes. When you say "I want to do this on arm", it will say "oh, you're supposed to be on arm, and here's the selector", so it puts that job on an arm node. So we end up saying: here's the runner for arm, here's the runner for x86, and then we can put them onto the same cluster, a single cluster that has different node pools.