From YouTube: 20200319 SIG Arch Code Org
A
All right everyone, this is the Code Organization sub-project under SIG Architecture, and today is March 19th, 2020. Folks can add their names to the agenda. Great, and yeah, let's just skip straight to it. Wait, am I recording? I am. So I added two PRs that need review; they are largely for the work that SIG Cloud Provider is doing to move the cloud-controller-manager binary to staging. I think the first one is probably going to be the most contentious one.
D
Sounds good. So just to give you a little bit of background: there are different SIGs in the community. One of the ones that we work under is called SIG Architecture, and SIG Architecture has a few sub-projects; this one is the Code Organization sub-project. In Code Organization we talk a lot about maintenance, moving things around, and refactoring things, because we've accumulated things over time in different places, and there are several long-term initiatives involved in the sub-project.
D
We were talking a couple of months ago, and Matt had expressed interest in helping with taking dockershim out of kubelet. He has been involved in reviewing a bunch of things and wanted to do a little bit more in SIG Node, so between the two of us we thought this would be good long-term work that he could take on. The first thing we did was: okay, we have providerless tags for extracting the cloud stuff.
D
So can we have a dockerless tag similar to that? That got us down the path of: when we add a dockerless tag, what are the things in Kubernetes that still reference docker/docker? We went down that experimental path and realized that SIG Network uses IPVS; that was one major thing. Then the CLI uses docker's pkg/term package for terminal functions.
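To make the tag mechanism concrete, here is a minimal sketch of how such a build constraint works in Go, mirroring the providerless convention described above. The package and helper names are hypothetical, not the actual Kubernetes source; only the tag idea comes from the discussion.

```go
// +build !dockerless

// Files that import github.com/docker/docker carry a "!dockerless" build
// constraint, so building with `go build -tags=dockerless` drops both these
// files and the docker/docker dependency from the resulting binary.
package dockershim

import (
	"context"

	dockerclient "github.com/docker/docker/client" // the dependency being fenced off
)

// pingDocker is a hypothetical helper that exists only in builds
// compiled without the dockerless tag.
func pingDocker(ctx context.Context) error {
	cli, err := dockerclient.NewClientWithOpts(dockerclient.FromEnv)
	if err != nil {
		return err
	}
	_, err = cli.Ping(ctx)
	return err
}
```

A companion file carrying the inverse constraint (`// +build dockerless`) would supply a stub, so the rest of the tree compiles either way.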
D
That was another one, and there were a few other references, especially from the Windows code, to sysinfo, which drew in a bunch of packages. Then we also went through the exercise of: can we do the same tag in cAdvisor as well, so that cAdvisor is in a position to be built without docker/docker?
D
So we went to both repositories, ripped out docker, and basically saw what we ended up with; then we said, okay, there's a bunch of things we need to do outside. So we went to the Moby/Docker community: we got them to externalize IPVS in a new repository, moby/ipvs, and we got them to extract term into moby/term.
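As an illustration of what the term extraction means for consumers, here is a minimal sketch assuming moby/term kept the same functions as docker's old pkg/term (IsTerminal, GetWinsize); for callers, the change is essentially the import path.

```go
package main

import (
	"fmt"
	"os"

	"github.com/moby/term" // previously github.com/docker/docker/pkg/term
)

func main() {
	fd := os.Stdout.Fd()
	if !term.IsTerminal(fd) {
		fmt.Println("stdout is not a terminal")
		return
	}
	// GetWinsize queries the terminal dimensions, one of the "terminal
	// functions" the CLI needed from the old docker package.
	ws, err := term.GetWinsize(fd)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("terminal size: %dx%d\n", ws.Width, ws.Height)
}
```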
D
Yes, yes, definitely. A lot of background work happened behind the scenes before we brought some of these here. The JSON log one is probably the easiest: we just had to take a data structure out, so we didn't bother extracting any code from docker; we just pulled out the data structure with a few fields. That was really easy.
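A sketch of what "pulling out the data structure" looks like; the field names are assumed from Docker's json-file log format ("log", "stream", "time") rather than quoted from the PR. Copying the struct avoids importing docker's logger packages entirely.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// JSONLog mirrors one line written by Docker's json-file logging driver;
// copying these few fields is all a log reader needs to parse container logs.
type JSONLog struct {
	Log     string    `json:"log,omitempty"`    // the log payload
	Stream  string    `json:"stream,omitempty"` // "stdout" or "stderr"
	Created time.Time `json:"time"`             // entry timestamp
}

func main() {
	line := []byte(`{"log":"hello\n","stream":"stdout","time":"2020-03-19T00:00:00Z"}`)
	var entry JSONLog
	if err := json.Unmarshal(line, &entry); err != nil {
		panic(err)
	}
	fmt.Printf("%s [%s] %s", entry.Created.Format(time.RFC3339), entry.Stream, entry.Log)
}
```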
Then the runtime NumCPU one was a little more confusing, in the sense that we first had to figure out why we were using NumCPU from docker's sysinfo package instead of the runtime package.
D
So we did some research to see what APIs it was using under the covers. Docker's sysinfo.NumCPU just uses runtime.NumCPU for Linux, so the specialization was only for Windows. On Windows, apparently, CPU hot-plug was not supported a while ago, and then they added support for it, so the number of CPUs changes if one of the CPUs is enabled, disabled, or pulled out. So it was basically a straight replacement, because runtime.NumCPU also supports the same thing that sysinfo was supporting.
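A minimal sketch of the replacement described here; the old call is shown only in a comment, and the claim that the standard library covers the Windows hot-plug case is taken from the discussion above.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Before: sysinfo.NumCPU() from github.com/docker/docker/pkg/sysinfo,
	// whose only specialization over the standard library was the Windows
	// CPU hot-plug handling discussed above.
	// After: the standard library call.
	fmt.Println("logical CPUs:", runtime.NumCPU())
}
```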
D
There is an enhancement request, there is a KEP, and there is a WIP proof of concept. I sent an email out to SIG Node asking for approvers and reviewers, and that's where we are right now. Once that KEP gets approved and SIG Node likes the approach in the proof of concept, we will be able to get to a point where we can do a dockerless tag, especially with kind, because kind uses containerd; so kind would probably be the best place to prove it out.
C
It's worth noting that until we actually drop dockershim, we will still carry the cost of the docker/docker dependency in our dependency tree. So I think it's worth pushing on the deprecation and removal timeline, especially because that's likely to be long. Even if we just reached agreement on a timeline to start that process, I think that would be worthwhile.
D
Right, so I did a bunch of things around that, but I'm not ready to suggest deprecation just yet. For example, I want to be sure that we are testing containerd properly in our CI system, and we didn't have any CI jobs that use containerd on the master node; we were just using containerd on the worker nodes, but not on the master node. So I added a CI job which runs Ubuntu with containerd on the master as well, and that's release-informing now.
D
Yoga
master
knew
I,
wanted
a
realistic
scenario
right
kind:
I,
don't
I,
don't
treat
it
as
a
realistic
scenario,
because
we
need
multiple.
You
know
we
need
one
master
with
multiple
workers
that
that's
what
I
would
consider
a
real
scenario:
I
not
and
not
kind.
Ok,
so
I
started
with
cluster.
You
know
cuba
script,
which
can
deploy
Ubuntu
and
either
docker
or
continuity,
so
two
variations.
That
was
some
of
the
things
that
we
did
last
time
so
that
got
done.
The
next
thing
I
want
to
do
is
I
want
to
do
this.
D
I
think
we
do
have
some
with
cubane
iam
with
master
or
no
bun
too,
but
then
you
know
the
number
of
jobs
that
we
have
any.
Let's
not
go
go
down
that
path
yet
so
at
some
point
we
have
to
get
rid
of
Q
cluster.
So
that's
that's
a
whole
another
thing
that
we
need
to
chase,
but
then
now
the
main
question
that
I
have
before
we
can
suggest
something
is
do.
Do
we
consider
container
D
to
be
a
replacement
for
what
we
are
doing
or
do
we
have
to
come
up
with?
C
I think it's less about containerd and more just proof that all the CI scenarios are exercised through the CRI, a CRI integration; containerd happens to be the one we're leaning on in CI. The blocker before was that pretty much all of our CI stuff was using dockershim alone, and we address that by demonstrating CRI integrations. If someone wants to volunteer to set up other backing CRI integrations, I don't think we would be upset, but I don't think we need to prove out n integrations.
D
So, there's a class of jobs that do that, especially in Cluster API, because Cluster API depends on just containerd and not docker. In the image builder we chose containerd as the CRI implementation and we skip docker entirely, so based on that, any Cluster API job will end up having just containerd and not docker. So I ought to dig into those jobs to make sure we are able to throw the tag in there, for sure.
A
The job is me spinning up a kind cluster at the end of every release and running all the e2e tests against it. So no, as far as I know, there are no CI jobs that configure IPVS and run tests against it, but we really should get that running.
A
Yeah, I think that's all we require. Once you have the IPVS modules installed, you're good to go, and I'm pretty sure kind already has them installed, so we could very well have a kind job that tests IPVS, and just make it optional.
A
Yeah, but what sucks is that there are some very, very small differences between iptables and IPVS that might break some of the e2e tests; I've seen it before. So I don't know, we'd have to deal with that, but we don't have to talk about that here. But related to this PR that updates to the new repo coordinates: I have it on my…
C
We really want at least a periodic one, so that we know within a day or two if a change that breaks IPVS lands. It doesn't necessarily have to be a presubmit; I don't know how long it takes or how complex it is to set up, but we should have periodic tests, at the very least, for every feature we say we support.