From YouTube: 20200402 SIG Arch Code Org

A
We move it to staging as-is first, and that way whoever is pulling in our e2e testing framework can at least benefit from not having to pull in k/k. Then, once we do that, we can have a broader discussion around how we want to clean up the code in there. That's at least what we decided. I wanted to get feedback from the group and see if that priority makes sense.

B
My assumption is that vendoring it is going to be a real pain, so unless we have the actual e2e tests out, it doesn't make sense to vendor it. I think staging makes sense, because I'm sure changes to the framework will be really coupled to the people who are writing the tests. But that's just my opinion.

A
So we definitely know CSI uses this in some repos — they pull in this framework to run their own e2e suite, and it was actually Patrick from Intel, working on CSI, who asked for this first. But then also some folks at VMware working on — oh boy — they run a bunch of e2e suites for the conformance suite, and there's been pain pulling in k/k for that as well. And also we still have cloud provider dependencies in there, and so I figured.

A
We want those pulled out, because we want the e2e framework to be separate anyway. One middle step between here and getting the provider code out of the framework could be to stage the framework, to make it easier for providers to run e2e suites in external repositories, and that kind of eases the transition to removing the provider dependency from the framework. Does that make sense?

A
Part of the motivation is: we want the provider-specific tests living in the provider-specific repos, and by staging the framework we make it easier for the providers to import the existing framework from k/k — just import whatever the staged, external repo is — and use the exact same framework for their e2e suite, without having to duplicate whatever provider framework is within k/k. But.

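To make the import pain concrete: an external suite today has to require k8s.io/kubernetes itself, whose staged dependencies are pinned to v0.0.0, which forces a replace directive per staging repo. The go.mod excerpt below is an abbreviated illustration of that pattern, not a literal file; a staged framework module would be importable without any of it.

```go
// go.mod of a hypothetical external e2e suite (abbreviated)
module example.com/provider-e2e

require k8s.io/kubernetes v1.18.0 // drags in all of k/k

// k/k pins its staging repos to v0.0.0, so every consumer must replace them:
replace (
	k8s.io/api => k8s.io/api v0.18.0
	k8s.io/apimachinery => k8s.io/apimachinery v0.18.0
	k8s.io/client-go => k8s.io/client-go v0.18.0
	// ...one line per remaining staging repo
)
```
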
C
So the framework currently has cloud provider dependencies, yeah. If we publish that, and then people start using it with its provider dependencies, won't that make it harder to drop the provider dependencies from the framework? I guess I'm not following the order of operations where, at the end, we don't have provider dependencies in the framework.

A
Yeah, that's a good point; I hadn't thought of that. My assumption was that it's better to have one source of truth for where it comes from — so, like, stage it and keep one source of truth, versus having providers updating the framework internally and having a copy of it externally. But yeah, I hadn't thought about it seeming harder to remove once there are more consumers of it. My assumption is that if there are consumers of it, they're already doing it through k/k, so there's not much difference in creating the staging repo. But.

A
Yeah, one thing we could do instead is stage the framework and keep the provider plugins in-tree, and then tell all the providers to move that package — the testing framework package — into their external repositories, and then we can import from there. But at least keep the actual cloud provider interface in there, so that other providers can just import the staging repo and then import their own framework. Yeah. Okay.

C
My
masters
or
I
know
how
to
like,
whatever
the
random
things
are,
that
we're
doing
inside
the
time,
work
that
are
clouds
specific
that,
like
pulling
those
bits
out,
could
be
done
in
place
and
then
moving
just
the
e2e
claimed
work
stuff
to
its
own
thing
and
moving
the
clouds
specific
things,
maybe
even
under
legacy
cloud
providers
like
that
one
already
has
all
of
these
crazy
dependencies.
Yeah
I
mean
kind.
If
you
have
like
legacy
cloud
providers,
/gc
e
/,
you
know
you
need
helpers
or
whatever
there's
some
packages.
A
Okay,
yeah
that
make
sense
so
so
first,
we
superstition
make
sure
that
any
call
into
the
provider
framework
is
done
through
an
interface
and
then
make
sure
that
and
then,
when
we
do
the
move
to
staging
where
we're
not
moving
it.
Along
with
the
provider
packages,
we're
just
going
to
move
that
into
wherever
the
provider
specific
vehicles
are,
and
then
yeah
just
a
my
is
that
is
that
what
you're
saying
pretty
much?
Yes,.
D
This is because today I don't think there's anybody really owning or maintaining this framework. SIG Testing has it, because a lot of people who used to be in SIG Testing helped write it back in the day, but it's not really actively maintained anymore. As Jordan said, there's lots we could do to fix it.

A
And so that's why I wanted to do the move to staging ahead of time, so that when they import both the testing framework and the cloud-controller-manager framework, they're both staged. So yeah, that's a good point — I will talk to George and we'll see if we have the bandwidth to put that together. Can I—

C
I'd
be
curious.
What
the
uses
of
the
ete
framework
are
like
a
I
had
actually
thought
it
was
moving
more
in
the
direction
of
being
agnostic
to
the
cluster.
It
was
running
on
so,
like
you
bring
up
a
cluster.
However,
it
is,
and
then
you
run
the
e
to
e
suite
against
it
yeah.
It
seems
to
be
more
returning
to
the
like,
oh
yeah,
the
guts
of
the
ete
test.
A
I don't follow. So, like, the framework does all the Ginkgo and Gomega setup for you, right? And then you just tell it, look at these packages for where all my tests are. And then it has a bunch of utilities for, you know, managing clients and checking nodes and all that stuff. So all that stuff — when the cloud providers go ahead and add their specific tests, we don't want them rewriting all of that. I—

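Here is a rough sketch of the consumption pattern being described, assuming the framework's current import path under k/k (a staged module would change only the import); the test itself is illustrative:

```go
// External provider suite reusing the k/k e2e framework (illustrative).
package provider_test

import (
	"github.com/onsi/ginkgo"

	"k8s.io/kubernetes/test/e2e/framework" // today this import drags in all of k/k
)

var _ = ginkgo.Describe("[provider] basic connectivity", func() {
	// NewDefaultFramework does the Ginkgo/Gomega setup: it creates a
	// namespace per test and hands you a ready-to-use client set.
	f := framework.NewDefaultFramework("provider-e2e")

	ginkgo.It("should reach the API server", func() {
		_, err := f.ClientSet.Discovery().ServerVersion()
		framework.ExpectNoError(err)
	})
})
```
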
C
I guess I'm not understanding. So the helper functions for, like, "wait for this deployment" and "start a pod" — we can publish e2e helper functions without, you know, doing all of the test setup. You don't need anything cloud-specific to wait for a pod to be ready, or to wait for a deployment to be ready. Most of the cloud-specific stuff that I was aware of was around, like, wait—

A
There's two kinds of that I've been seeing: the one you just described, where you need to verify specific behavior in that platform, but then there's also, like, generic tests for Kubernetes that depend on an external action that only certain cloud providers can do — like, kill a node to test whether we're handling a node going down properly, things like that.

A
We have one test that's not run anywhere — we actually have a ginkgo feature tag for cloud-specific behaviors — so I don't think any external providers do that today. But what we want them to do is call into the framework — like, find wherever that external package for the cloud-specific e2e behaviors is — and then they would pull that into their own repositories and then kick it off as part of CI. Okay.

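The tagging convention referred to here: k/k marks such cases with a bracketed label in the test name, and CI jobs select them with a Ginkgo focus expression. The tag and test below are illustrative, not the actual in-tree test:

```go
package provider_test

import "github.com/onsi/ginkgo"

// A cloud-specific case gated behind a feature tag (tag name is illustrative).
var _ = ginkgo.Describe("[cloud] node lifecycle", func() {
	ginkgo.It("should recover when a node is deleted [Feature:CloudProvider]", func() {
		// Runs only when the CI job passes a focus flag such as:
		//   --ginkgo.focus='\[Feature:CloudProvider\]'
	})
})
```
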
A
I mean, I could — yeah, while Jordan does that, we can definitely look into it. But my gut feeling is that, because one of the use cases is running the current e2e tests as-is, which heavily depend on the framework — like the cloud providers' e2e tests — I don't think that adopting a new thing is gonna be doable. But I'd have to dig deeper to be sure.

C
This was targeted specifically at, like, the e2e test manifests.

G
I mean, you could, in theory, move all the cloud-provider-specific code into, like, an internal package and only allow — basically create an entry point for all of that functionality, like a higher-level exposed endpoint, and that would sort of force it, I think, at compile time.

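What Go's internal-package rule buys here, sketched with hypothetical package names rather than the actual k/k layout: code under an internal/ directory can only be imported from within that directory's parent tree, so the compiler itself rejects outside consumers of the provider code.

```go
// test/e2e/framework/providers.go (hypothetical layout)
package framework

// Legal only for code rooted under test/e2e/framework/; any import of an
// .../internal/... path from outside that tree fails at compile time.
import "k8s.io/kubernetes/test/e2e/framework/internal/provider"

// SetUpProvider is the single exported entry point; everything behind it
// stays invisible to external importers. (Hypothetical function.)
func SetUpProvider(name string) error {
	return provider.SetUp(name)
}
```
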
C
I think the question about extracting the framework for use has more to do with what Aaron was saying about, like, supportability of it long-term, and what our plan is for people to run e2e tests. Is it to take our full e2e suite, layer on or inject their own implementation of this interface, build an e2e test binary, and then run that? And, yeah — so, like, how much work is involved in getting what we have today into a consumable format for that, I guess.

G
I thought we could try — I don't see anyone from Red Hat on the call, but I do know that OpenShift runs the conformance suite with a bunch of OpenShift stuff on top, with an openshift-tests binary. I don't know if that helps anything — certainly, I know OpenShift already has a k/k dependency, so I don't think they care about that. But I don't know if there's something we can learn from that approach to help others, because I think the requirement there was run—

D
The word "provider" in the test framework kind of doubles for both cloud provider and also cluster provider. Like you guys were talking about — how do we tear down nodes and stuff — that's maybe more cluster-provider-specific. But how does the cloud, you know, magically bring nodes back? That's up to the cloud provider.

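One way to picture the distinction being drawn is as two separate interfaces; the names and methods below are purely illustrative — neither exists in k/k under these names:

```go
package framework

// ClusterProvider covers "how the cluster under test is manipulated",
// e.g. disruption tests tearing a node down. (Illustrative.)
type ClusterProvider interface {
	StopNode(name string) error
	StartNode(name string) error
}

// CloudProvider covers cloud-level behavior a test may rely on,
// e.g. whether the platform brings deleted nodes back. (Illustrative.)
type CloudProvider interface {
	RecreatesDeletedNodes() bool
}
```
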
A
It depends on the scope. From my discussions before with George, if we were gonna just move it as-is, I don't think it's that much work, because it's just a matter of shifting code around to make the dependencies not depend on k/k. But if we're gonna have to change up the framework to, you know, cover everything we just discussed, then it might take longer than one release, given that it's just me and George. Okay.

B
So part of the thing that we were trying to update during the last cycle was the combination of hcsshim, containerd, and cAdvisor. We got the containerd and the cAdvisor ones done, and hcsshim had a similar problem to cAdvisor, where it was dragging in a bunch of dependencies that the usage of hcsshim as a library was not actually using.

B
So it turns out that they were doing a lot of Go dependencies just for their test framework, which was unrelated to how we use hcsshim as a library. So I proposed a split — you know, multiple go.mod files in their repository — and that got accepted by them. As a result, the root go.mod file in hcsshim has come down quite significantly, and it is kind of ready for us to use. So this PR was updated to prototype it and make sure that we don't drag in anything else.

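The multiple-go.mod split being described looks roughly like this; the layout is illustrative of the pattern rather than the literal hcsshim tree:

```
hcsshim/
├── go.mod        // module github.com/Microsoft/hcsshim — lean library deps only
├── ...library code...
└── test/
    ├── go.mod    // separate module: the heavy test-framework deps live here
    └── ...tests...
```

Because test/ declares its own module, `go get github.com/Microsoft/hcsshim` resolves only the root go.mod, so library consumers never pull the test-only dependency graph.
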
B
There's like one package that gets added, but other than that it looks really good. I requested them to cut a release, and whoever I was talking to promised to get back to me in a few days. So that's the update on this. When that gets done, we'll be able to update this dependency without taking on a bunch of Go modules that we don't use. Yeah.

B
Thanks. OK, so then the next one was the CRI split. Let me say where I started. Where I started was the containerd repository: there is a lot of cross-dependency between containerd and containerd/cri, so I was trying to research how to break the chain there, and one of the things that stood out was the CRI API use, because we—

B
So
we
have
to
it's,
not
just
the
t.c.r.I
PA.
It's
also,
you
know
the
usage
of
Kate's
I/o.
Some
of
the
repositories
in
Kate's
I/o
is
being
used
in
continuity,
so
that
was
complicating
the
situation
there.
But
then,
if
you
look
at
cryo
or
if
you
look
at
it's
a
shim,
they
were
not
using
Kate's
Cuban
otters
as
extensively
as
is
being
done
in
the
continuity
repositories,
so
cleaning
up
the
container
upholstery.
B
So
that
was
one
of
the
threads
that
came
out
of
that
analysis
and
what
I
think
that
is
stopping
us
from
actually
doing.
That
is
because
that's
not
enough
just
breaking
the
CRI
API
is
not
enough,
because
there
is
a
streaming
package
in
cubelet
cubelet,
/
service,
/
streaming.
That
is
also
being
used
by
various
people,
and
there
is
a
cap.
Also
there
is
a
you
know.
Can
you
go
back
to
the
dock
and
click
on
the
cap
for
the
streaming
library
and.
B
So this was the other problem that we will face: the people who are using the CRI API end up using the streaming library as well, so they essentially need both — they need the CRI API and the streaming library. And then, if you scroll down to the bottom, there is the discussion there about: okay, we need to actually not publish the streaming library as such.

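For reference, the coupling being described: a CRI runtime that serves exec/attach/port-forward today needs the streaming helper that lives only in k/k, alongside the (already staged) CRI API. A minimal sketch using those real packages; the listen address is illustrative:

```go
package main

import (
	"k8s.io/kubernetes/pkg/kubelet/server/streaming" // only importable via k/k
)

// newStreamingServer shows the piece that keeps CRI consumers tied to k/k:
// the runtime implements streaming.Runtime, and the returned server speaks
// the kubelet's exec/attach/port-forward streaming protocol.
func newStreamingServer(rt streaming.Runtime) (streaming.Server, error) {
	cfg := streaming.DefaultConfig
	cfg.Addr = "127.0.0.1:10010" // illustrative listen address
	return streaming.NewServer(cfg, rt)
}
```
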
B
As
such,
we
should
fold
that
into
GRCC
based,
you
know,
handshake
by
adding
stuff
into
the
CRA
API
itself,
so
that
basically
puts
the
ball
back
in
the
coat
of
signal
to
see
if
they
want
to
take
this
on
at
some
point.
So
at
this
point,
I'm
going
to
stop
doing
this,
you
know
stop
looking
at
the
CR
API
and
streaming
leave
it
up
to.
B
You
know
signal
for
them
to
think
about
what
they
want
to
do
and
if
they
want
to
rev
the
CRA
API,
because
updating
the
CRA
API
gets
into
other
sorts
of
discussions.
Where
you
know
is
the
current
CRA
APA
is
enough.
What
does
it
not
support
and
there
is
like
a
sandbox
API
that
is
being
proposed
in
another
forum,
so
it
gets
into
those
kinds
of
details.
So
I
am
basically
trying
to
put
a
closure
on
this
work
and
say:
okay,
I'm
not
gonna,
touch
this
anymore.
Then
we
already
have
this.
D
So I feel like there's another, less satisfactory option, which is: you are free to pull kubeadm out into its own repo and just not be included as part of the regular Kubernetes release, because today the Kubernetes release doesn't include anything that's not from the k/k repo. So that wouldn't block you from moving the repo out; it just means you haven't, like—

C
I know, and I've said for a long time, that letting kubeadm have a release cadence independent from Kubernetes makes a lot of sense. There have been many releases where something was discovered, you know, the day of release, in something like a kubeadm config file, right — and it didn't actually require any changes to any Kubernetes binaries or any artifacts, right.

C
All
it
was
was
a
install
configuration,
but
it
required
scrambling
an
entire
kubernetes
point
release
to
rev
an
installer
tool
I'm,
rather
than
doing
a
lot
of
work,
to
try
to
figure
out
how
to
pull
cube
Adam
out.
But
then
let
its
release
remain
coupled
to
the
kubernetes
release.
I
actually
see
it
as
a
benefit
to
let
cube
Adam
release
on
whatever
cadence
at
once
to
let
it
react
much
quicker
and
faster
to
bugs
or
features
or
things
that
need
to
be
done.
B
That
was
my
first
instinct
to
Dartmouth.
In
fact,
at
that's
what
I'd
mentioned
to
Lumiere
a
long
time
ago,
saying
it's
a
deployer.
It
just
deploys
Kuban
it
is,
it
should
have
its
own
cadence.
You
can
like
make
a
release
on
the
same
day
as
cuban
at
us
to
support
the
latest
cuba
at
us.
So
it's
it
doesn't
need
to
be.
B
But
then
the
question
that
comes
back
to
us
is
maybe
not.
This
sub
project
itself
would
be
like
how
many
people
expect
Cuba
idiom
to
be
present
in
in
their
workflow
from
the
packages
that
we
publish,
right
and
I.
Don't
know
how
how
much
of
a
hassle
it
is
for
everybody
to
pull
Q
medium
from
somewhere
else
from
other
than
us.
B
The difference is: this is a clean split. There is no dependency in the code; the only dependency is that it lives in the same tree. Other than that, there is literally nothing that binds kubeadm to Kubernetes. The rest of the binaries, you know, have some sort of dependency, or they have to be brought up in a certain order, or they depend on each other because they call each other — you know, there are those things there, right. kubeadm — literally nobody else needs it in-tree. Concentrating the—

C
C
C
Is
that
couples
like
the
build
release
processes
but
still
gets
the
benefits
of
bundling
so
people
who
are
affected
by
a
cube,
ATM
bug?
You
know
the
day
of
release.
You
can
report
it
to
keep
medium.
You
can
fix
it.
You
can
get
a
patch
release
out.
They
can
update
and
be
get
their
problem.
Fixed
people
who
don't
particularly
care
about
a
particular
issue
can
just
continue
consuming
stuff
from
the
mega
bundle.
B
Right, Lubomir — then, basically, what you are telling the SIG folks is: okay, we're gonna pull the repository out, we're gonna have our own independent processes in the new repository for putting out releases, and they can choose — if they are so insistent on bundling kubeadm with Kubernetes, they can do the work to pull in whichever version they want and package it. So that's the split of responsibility.

C
The goal is not to take the heavyweight Kubernetes process and push it out to these other repos; it's to give them their own, really lightweight ones that produce exactly the minimal artifacts required — like a tag and the artifacts for every architecture — and then let the big Kubernetes process consume that.

E
I've seen the heavy process being applied to multiple repositories — like the modules in a different big project, a wider distribution — so all the modules follow the same versioning. So this is not something new. I think I have, like, a general question: do you think that kubeadm having its own versioning cadence is going to confuse the consumers further?

C
—work around the problem by, like, building it all from the same tree, so everything at the same commit works great together. But if you drift at all — like, do we test that? Sometimes; not always. And sometimes we do test it, and then we ignore the results. So I think we already have this problem; this will make it more visible and hopefully make it better.

C
You
can
pick
the
current
version
or
you
can
pick
the
n
minus
one
version
or
the
n
minus
2,
or
if
you're,
cute
control
and
three
or
four
like,
whereas
all
the
test
scripts
that
are
in
current
IDs
today,
not
all
of
them,
but
the
majority
of
them
don't
even
have
any
concept
of
the
last
version.
It's
just.
This
is
the
code
in
the
repo
and
I'm
gonna
test
it,
and
so
recognizing
that
this
is
a
client,
and
that
is
a
server
and
they're,
not
always
at
the
same
version.
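A minimal sketch of the kind of skew awareness being asked for, using client-go's discovery client; the helper below is illustrative, not an existing k/k utility:

```go
package e2eutil

import (
	"fmt"
	"strconv"
	"strings"

	"k8s.io/client-go/discovery"
)

// ServerMinor asks the cluster for its version instead of assuming the
// server was built from the same tree as the test binary.
func ServerMinor(dc discovery.DiscoveryInterface) (int, error) {
	v, err := dc.ServerVersion()
	if err != nil {
		return 0, fmt.Errorf("fetching server version: %w", err)
	}
	// Some platforms append a suffix to the minor version (e.g. "18+").
	return strconv.Atoi(strings.TrimRight(v.Minor, "+"))
}
```
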
C
It is — we just need to have our eyes open to the fact that this is the work that has to be done to actually make sure this gets tested and works. It's gaps that we have today — it's not worse — but we can't magically decouple these and expect things to just continue the way they have been if we don't close those gaps. I—

C
To me, that means that the 1.18.0 code is distinct from the release artifacts. Once we have a tested level and we say, "this is the Kubernetes code that's gonna be 1.18.0," that gets tagged — the git tag gets added, all the libraries get published — like, the code is fixed. We have not released the release artifact yet, okay. And then the things that we say are so important that they have to be included in the Kubernetes release—

C
—artifacts: I think that list should be small; we shouldn't, as a matter of course, be pulling in additional components. Those things are on the hook to build against the 1.18.0 libraries and tag a stable version that includes those libraries, and then we can release. I don't know a way to guarantee that those versions of libraries round-trip all the way through those external components that—