From YouTube: 20220120 SIG Arch Code Org
A: Hi everyone, today is Jan 20. This is the meeting for code organization under SIG Architecture. It's just the three of us, so let's get started.
A: The main topic we had lined up for discussion today was: do we need a separate GitHub org just for borrowed code? Currently we have a few places where we take code from other people and stick it in.
A: One is kubernetes/utils; the other is the third_party directory in various repositories, including kubernetes/kubernetes. It's getting hard to maintain these, especially when we want to adopt something. How do we preserve history? There is some context there, and what we end up with is not just a copy — the history from the original project is lost. So those are some of the challenges. The other challenges are around:
A: How do we make sure that it's just us using this, so we don't break other people? And are there any new things coming down the pike in the Go language itself — hooks, extensions, whatever is available — that will be useful for us? So, let's get started now. Tim?
B: I've always been big on structure, so when I look at the GitHub kubernetes org as a flat repo of hundreds of — sorry, as a flat org of hundreds of repos — I get stressed, because I don't know how to find things.
B: If you look at kubernetes/kubernetes versus kubernetes/utils, they're not nearly the same size or importance, and they're managed differently. Especially if we kick more things out of kubernetes/kubernetes, it's hard to tell which are the important repos and which are the less important ones. You also touched on something that has been bothering me.
B: We end up supporting people who use our libraries, which forces us into uncomfortable positions when we would prefer that only we were using our stuff — not all of it, but some of it. And then, Go said from the beginning: don't make util packages with lots of random crap inside. We did that anyway. kubernetes/utils was supposed to be a bunch of really small, standalone, generic libraries, because maintaining a bunch of independent repos was a pain in the butt.
B: Well, maintaining a big omnibus repo is now a pain in the butt too, as evidenced by importing some third-party thing that drags in a bunch of dependencies. If I just want to use kubernetes/utils/pointer, which is a nice little utility library, I have to vendor — or at least be aware of — all these dependencies, and Go modules are a little bit opaque in this regard.
B: They sort of do what they're going to do and you don't have a choice about it. So this got me thinking: maybe we're really just doing it wrong, and we should be doing more, smaller repos. I feel like it would be easier to manage things like history. But, as I said before — I'll pick on iptables, because it's a library that I touch a lot.
B: I don't want to say github.com/kubernetes/iptables belongs in the same conversation as kubernetes/kubectl or kubernetes/kubernetes — or, eventually, kubelet or kube-proxy or whatever repos we put there. It seems like noise. GitHub, of course, doesn't have any structure; there are no folders in GitHub. If there were, I'd probably suggest we use those, but we do have orgs.
B: So a long time ago I started squatting on a bunch of kubernetes-dash-something orgs, on the thought that maybe one day we'll need them. I wanted to bring up the topic here: maybe we're at an inflection point where it makes sense to start doing that. I know there's a lot of machinery — Prow and everything else — to add a new org, but it's not impossible; we have a couple of them already. And then I had this thought. Go ahead.
A: So, two questions there. One is: how badly are we breaking people who shouldn't be using our internal packages but do — like k3s and k0s, and folks like that who end up, for better or worse, importing k8s.io/kubernetes today? Are we going to break them?
B: To compile? Oh yeah, they'll still be able to compile, but a package at k8s.io/internal/fubar would only be usable by other packages under k8s.io/ — k8s.io/kubernetes/something-something. They wouldn't be able to use it themselves: you couldn't take company.com/mypackage and import k8s.io/internal.
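The behavior described here is Go's `internal` path rule; a rough sketch of how it would apply — the paths below are hypothetical:

```go
// Go refuses an import of any path containing the element "internal" unless
// the importer's own path shares the prefix before "internal".

// From k8s.io/kubernetes/pkg/foo — allowed, the importer is under k8s.io/:
//     import "k8s.io/internal/fubar"

// From company.com/mypackage — rejected at build time with:
//     use of internal package k8s.io/internal/fubar not allowed
```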
B: Yeah, that we don't worry about, because we say: hey, here is the license, and you can do whatever you want within our license. But if they take specific action to work around the barriers that we've put in place, and then they complain that they cut their fingers on the sharp edges, I don't feel as bad. I do feel bad today — I would not want to break kubernetes/utils, because we've said those are a bunch of generic libraries and we're going to support the API.
A: So, as long as there are no downstream implications for people who are building and shipping kubernetes itself, we don't really need to worry about folks who are breaking the glass, so to say, to use the internal stuff. We told them: this is internal, you shouldn't be using it anyway.
A: And we won't have to worry about getting asked for LGTM for doing that.
A: What else is there? One slightly related question: go-logr — who owns go-logr at this point?
B: It's not a kubernetes project, no. Okay, that's fine. But it was created as a separate org — go-logr — so that I can add extra people. Patrick stepped up; he's doing all the logging stuff within kubernetes, and he's been really super awesome about helping with that. I'm totally welcoming to other people if they want to also step up, though I hope, knock on wood, that go-logr is more or less done. I was working on a PR this morning to handle—
B: It was supporting your nil-Stringers patch — same idea. But more or less, I think it's done; I don't want to be adding a million things to it. Yeah, okay, so that's fine.
B: Internal — I have k8s-internal. Somebody else has kubernetes-internal, which seems like a dead org, and we can maybe try to get it back. But I'm squatting on, like, dozens we can use.
B: At this point we're good with k8s-internal, right? Exactly. So I think there are actually three questions here, and they can be answered independently. One: should we start doing more independent, smaller repos? Two: should we put those in a separate org, just for organization's sake? And three: should we facade them with k8s.io/internal? They're additive questions.
A
Right
so
my
thought
process
usually
is:
let's
not
make
it
too
easy
for
people
to
do
this,
then
there's
going
to
be
an
explosion
of
these
things
and
we
need
to
strictly
strictly
regulate
how
many
of
these
go
in,
because
you
know
people
you
know,
there's
a
lot
of
the
folks
who
are
doing
it
for
better
or
worse
gaming
statistics
and
those
kinds
of
things
happening.
B: Yeah, that is the downside of having lots of repos: you get lots of touch points, and you don't know who's doing what and where, and it becomes a hassle.
A: But yeah, very few of us do it, for sure. Okay — those are all secondary concerns, not primary concerns as such. So I think we should open up an issue and work through what we need to try.
B: And I'm happy to drive just the testing of it. We'd have to decide, ultimately, whether we want to move things like kubernetes/utils. We could take kubernetes/utils and break it up into ten — or however many — libraries there are in there, and then depend on those from the existing kubernetes/utils library, so that they stay exposed but are no longer the primary interface. You get what I'm saying? We can choose to do that as the proof of concept.
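The shape being described — split a helper out into its own tiny module and keep the old package as a thin forwarder — might look like this; the new module path `k8s.io/ptr` is an assumption for illustration:

```go
// In the new, standalone module (assumed path k8s.io/ptr):
//
//     package ptr
//     // To returns a pointer to v.
//     func To[T any](v T) *T { return &v }
//
// The existing k8s.io/utils/pointer package then just forwards, so current
// importers keep compiling, but it is no longer the primary interface:
package pointer

import "k8s.io/ptr" // assumed new module

// Int32 is kept for compatibility; new code would use ptr.To directly.
func Int32(i int32) *int32 { return ptr.To(i) }
```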
A
One
of
the
problems
that
we
had
in
the
past,
even
for
like
six
storage
to
adopt
k
utils,
they
tried
to
put
something
in
chaotics
and
it
broke
some
use
cases.
So
if
we
need
to
find
a
way
to
or
run
some
kind
of
kubernetes
test,
this
is
true
for
c
advisor
too
right,
like
we
don't
know
what
we
break
when
we
make
a
change
in
c
advisor
until
we
try
to
bring
it
into
a
kk.
A
So
the
scenario
is
exactly
the
same,
so
we
need
to
figure
out
a
way
to
be
able
to
test
some
of
these
things
under
kubernetes
master,
so
to
say
and
yeah
well.
This
is
that
that
will
be
the
prop
thing
right
like
they
will
have
independent
tests
for
sure.
But
you
know
typically
it
breaks
when
we
integrate
it
into
cumulatives.
B: Yeah — and this has always been the problem with having things in multiple repos: how do we integration-test all of the components? If you're doing kubectl and kube-proxy independently of each other, how do you test them? Or kubectl and kube-apiserver — how do you integration-test those two? I think we can draw the scope of this as things that have APIs at the Go level — Go APIs, not REST APIs — and so things like apidiff help.
B
It
has
been
really
useful
in
gologer
of
warning
me
hey
like
little
subtle
things
that
I
didn't
get
this.
The
importance
of
has
warned
me
about
right,
so
we
should
run
api
diff
on
kutils
and
we
should
have
a
hard
line
of
like
hey.
No,
no
new
dependencies
like
we're
not
going
to
take
in
cloud
libs.
I
remember
the
mount
one
had
a
bunch
of
libraries
right,
yeah,
we're
not
gonna!
Do
that.
B: Yeah — and I think, ultimately, the rule kind of comes down to: either you make a case for why a new package is in k/k, or you make a case for why it's not a new repo. Show me why it belongs with something else, not why we have to make an excuse to create a new repo. I think we should flip that on its head.
A
The
other
example
that
came
just
yesterday,
you
know
recent
symbiosis.
You
know,
for
example,
in
kubernetes.
You
know
when
we
update
golang
as
soon
as
a
public
release
is
made
on
the
golan
side.
We
adopt
it
immediately
and
then
we
kind
of
like
make
sure
that
we
vendor
in
things
or
for
the
json
encoding.
We
pulled
the
internal
json
encoding
stuff
from
go
117
which
depends
on
go,
17,
features
and
apis.
A
So
the
continuity
folks
were
trying
to
update
to
newest
version
of
kubernetes
and
they
saw
that
and
they
were
not
able
to
build
in
116..
So
they
went
back
to
a
previous
package
previously,
you
know
so
they
said.
Okay,
we
are
going
to
stop
at
122,
5
or
6.,
we're
not
going
to
go
to
123.,
so
we
have
to
add
guarantees
around
compilability
under
various
versions.
Generics
are
coming
down
the
pike
and
like
if
you
have
a
mixture
of
libraries
with
generics
and
without
getting
yeah.
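One conventional way for a library to handle this is per-file build constraints, so the package keeps compiling on older toolchains while newer ones get the newer code path; the file names and package below are hypothetical:

```go
// file: encode_go117.go — only built on Go 1.17 and newer
//go:build go1.17

package encoding

// ...implementation that may use Go 1.17-only APIs...

// file: encode_fallback.go — built on anything older
//go:build !go1.17

package encoding

// ...implementation limited to what older standard libraries offer...
```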
B
There's
gonna
be
a
lot
of
demand
for
using
them
because
they
solve
a
lot
of
the
problems
that
we
bump
our
heads
on
correct.
But
if
we
care
about
compilability
with
older
go
versions,
then
we
will
have
to
be
careful
there,
especially
in
libraries.
Kk
is
cool.
We
can
do
whatever
we
want
in
there,
because
it's
not
supposed
to
be
used
by
anybody
else
and
if
they
use
it,
I
just
don't
feel
bad
right
and
I
agree.
A
On
the
rest,
this
morning
was
hey.
I
was
we
were
looking
at
a
issue.
James
john
howard,
and
I
were
looking
at
an
issue
where
there
was
a
init
method
in
go:
go
open,
census,
dot,
io,
you
know
sergey
might
be,
might
know
this.
They
were
using
an
init
method
to
start
a
go
routine.
A
Even
if
you
just
import
the
package
and
like
the
thing
was,
we
are
doing
the
same
in
k.
Log
dot
go
so
yeah.
I
didn't
have
an
answer
for
that.
We
were
telling
people
not
to
do
something,
but
we
are
doing
the
same
thing.
A: That's why we need to write these things down — it'll serve as a guideline, so we are not trying to cargo-cult this, and there is a set of things that people can check off.
B: So, one of the things that Go 1.18 is introducing is workspaces, and it's funny because in the release notes for the alphas and betas there's like a one-line mention of it, but it's actually hugely powerful. I've started converting kubernetes/kubernetes to use this mechanism, but it has some implications. First of all, it allows you to have multiple modules and work on them atomically, so you can do a go list and have it actually work across modules.
B
So
here's
an
opportunity
for
doing
things
like
better
integration,
testing
right,
you
can
have
a
mechanism
that
actually
checks
out
a
whole
bunch
of
different
repos
has
a
single
work
file
for
them
and
then
integration
tests
them
all.
But
it's
I
lost
the
train.
I
thought
I
was
going
somewhere
with
this
and
I
forgot
it.
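A Go 1.18 workspace is driven by a go.work file listing the module directories the toolchain should treat as one unit; for kubernetes/kubernetes and its staging modules, the shape would be roughly this (module list abridged):

```
go 1.18

use (
	.
	./staging/src/k8s.io/api
	./staging/src/k8s.io/apimachinery
	./staging/src/k8s.io/client-go
)
```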
A
You
must
be
thinking
about
the
testing.
Breaking
compiler
changes,
those
kinds
of
things
right.
B
Yeah
I
lost
where
I
was
coming
around.
Yes,
it
it
was
related
to
the
like.
This
is
part
of
18,
and
so
you
know
if
we,
if
we
depend
on
it
too
much,
it
will
break
for
older
compilers
right
yeah
anyway,
I
I'll
if
it
comes
back
I'll,
bring
it
back
up.
But,
okay,
I
I'm
I'm
happy
to
try
to
help
out
here
my
time's
pretty
stretched
for
the
next
couple
of
weeks
with
kept
freeze
and
everything
else
coming
up,
but
I
don't
think
it's
a
very
difficult
experiment.
A
It's
easy
for
me
to
like
give
me
one
of
your
repositories
and
I'll
create
something
which
we
can
buy
and
yeah.
The
only
other
thing
I
can
think
of
is
I'll
need
a
pr
against
case
io
repository.
You
know
that
that's
probably
a
few
couple
of
lines
or
three
lines:
yeah.
B
And
in
truth,
you
could
probably
just
deploy
it
without
even
a
pr
and
say:
oh
yeah,
this.
This
works,
here's
the
pr
for
it
right
and
roll
it
right
quickly.
If
you
so
desire.
Give
me
a
second-
and
I
will
tell
you,
let's
pick
a
repository-
that
we
we
want
to
or
an
organization
that
we
want
to
go
with
all
right.
B
Tell
me
when
you
hear
one
you
like
kate's
internal
kate's,
util,
kubernetes,
add-ons
kubernetes,
approvers,
kubernetes,
charts
communities,
community
communities,
contributors,
communities,
controllers
communities,
demos,
communities,
developers,
kubernetes,
ecosystem,
extensions,
graveyard,
incubator,
retired
ingress,
lib,
kubernetes,
maintainers,
communities,
owners,
communities,
playground,
communities,
providers,
reviewers,
side,
cars,
cigs,
test,
testing
tools,
universe,
util,
utils
x.
I
kind
of
liked
x.
B
And,
of
course,
I
missed
kubernetes
dash
internal,
so
somebody
else
got
it.
So
we
could
try
to
claw
that
back.
I
like
x,
but
x,
feels
like
go
x.
Tools
like
it's
actually
things
we
are
okay
with
people
using
versus
like
kate's
internal
would
be
things
we're
not
okay
with
people
using
yeah,
but
vm
direct
or.
B
We
could
also
that's
a
good
one.
We
could
do
kubernetes,
vendors
or
kubernetes
third
party,
but
if
we're
going
to
present
this,
if
we're
going
to
assume
we're
going
to
present
this
as
kubernetes
internal
sorry,
catestadio
slash
internal
slash
other
stuff,
I
would
suggest
we
just
go
with
kate's
internal
for
the
test.
Is
that
fair.
B: If there's no way — well, back up. Let's assume that we want to use the internal semantic of Go — the name "internal" has special meaning. If we wanted to do that, we then have to list, somewhere in our redirector config, which repositories are internal and which are not. And if you go to github.com/kubernetes, you can't see that at all. So putting it in a separate org means: hey, I'm looking at the k8s-internal org.
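For context, the redirector works because `go get` fetches the vanity import path over HTTPS and reads a go-import meta tag, so marking a repo internal is just a matter of which path the config maps it to; the org and repo names below are hypothetical:

```html
<!-- Served at https://k8s.io/internal/foo?go-get=1.
     content is "<import-prefix> <vcs> <repo-root>". -->
<meta name="go-import"
      content="k8s.io/internal/foo git https://github.com/k8s-internal/foo">
```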
A: internal has been around since, like, 1.12 — okay, then, yeah: 1.16, 1.17, 1.18. As long as it works across those three, I think we're good. Yup. Okay, so we need, like, a checklist. I'll defer to you on that, if you can write something up as: these are the rules for this GitHub org.
B
Part,
let's
do
the
experiment.
First,
I'm
not
gonna,
I'm
just
about
to
write
it
on
my
to-do
list,
but
I
realize
I
have
to
turn
the
page.
So
that
is
a
bad
signal
so
like,
if
you
can
do
the
just
a
proof
of
concept,
just
put
a
trivial
repo
there
yeah
and
set
up
the
redirect
and
try
to
use
it
from
a
library
and
get
an
error
from
go
that
says
hey.
This
is
an
internal
repository.
Then
we'll
declare
victory
on
the
poc
and
we'll
write
principles.
B: We need to — you know, the truth is, this Go workspaces stuff makes the staging model infinitely less awful. It's actually pretty powerful and it mostly just works, and I'm working with some folks on the Go team to hammer out some of the issues. They were really looking at what we do when they designed it, so it really does make some of these problems better. Okay.
B
Like
k,
three
k,
zero,
I
think
the
I
think
the
staging
approach
is
an
interesting
I
mean
I
hate
the
word
staging,
but
it's
an
interesting
way
to
have
our
cake
and
eat
it
too.
We
can
make
our
atomic
changes
to
our
components
and
we
can
publish
them
into
individual
repos
that
people
can
consume,
and
now
we
don't
have
to
like
fight
all
the
tooling
in
order
to
make
it
work.
Like
the
vast
majority
of
this
pr,
I'm
working
on
for
workspaces
is
deleting
code
that
assumes
gopath.
B
Looking
forward
to
that
for
sure
it's
a
hell
of
a
pr
I'll
tell
you
that,
as
long
as
it
removes
sport
more
than
it
adds
I'll
assign
it
to
you
dims.
Thank
you
all
right.
I
have
to
run
to
another
meeting.
So
if
we're
done
with
this.
D: Nothing super critical. The first one is a continuation of the last topic. I mentioned that there are non-trivial changes between these types that we call internal and staging, and it's all expected to have turned out that way. I'm still trying to wrap my head around whether it is worth it to clean it up and synchronize — because once we synchronize, there is literally no way to... I mean, I can't come up with a way to make sure that PRs will not diverge these files again.
A: Yeah, I don't think there is any value in trying to make it look the same, because it is meant to diverge. It's already diverging, so let's go with it — unless you want to do a one-time pass, adding some comments where something has diverged. So there is an older copy, and there is a newer thing that has some delta of differences.
A: So if you want to, comment them — since you had to find them out for yourself — we can add a comment at each of those instances where you saw a change. We should come up with a pattern that other people are going to follow down the line.
D: Cool, okay. The next topic is the CRI API. I created an issue: we need to come up with a policy for how we version the CRI API. I started writing a document, but it's not completed yet, so I wanted to bring it up here and understand whether there is any precedent, or anything already done before, for similar APIs.
D
Like
first
is
like
I
pasted
the
pr
that
adds
a
new
method
to
cri
v1
and
this,
this
method
only
needed
will
only
be
implemented
in
v1,
container
d
or
v1
cryo,
because
I
mean
nobody
will
but
pour
this
feature
into
1.5
container
d
at
this
point,
but
we
still
need
to
add
it
in
both
places,
because
we
want
to
have
binary
compatibility
because
v1
and
v1
alpha
2,
mostly
to
prevent
situations
when
we
cannot
convert
one
api
to
another
api,
so
yeah,
I
don't
see
big
like
huge
problem
with
that,
but
it's
kind
of
indication
where
cri
api
go
into.
D
So
whenever
we
add
a
method,
we
will
need
to
make
sure
that
we
work.
This
method
is
compatible
backward
and
forward
with
all
the
runtimes,
because
we
may
have
continuity.
D
I
will
use
continuity
in
this
example,
but
because
cryo
is
a
little
bit
more
like
versions
with
kubernetes,
so
you'll
have
containers
that
can
go
ahead
of
kubernetes.
Just
because,
like
I
mean
1.6
is
released
and
people
still
using
like
120
of
kubernetes,
so
they
should
be
compatible.
I
mean
we
don't
block
this
compatibility,
yet
we
never
declare
that
it's
not
supported
and
opposite
is
possible
when
earlier
version
of
continuity
is
being
used
with
a
newer
version
of
kubernetes
and
also
being
like
was
never.
D
Forbidden
even
furthermore,
there
are
projects
like
there
is
a
project
for
faster
image,
download
from
like
there's
an
open
source,
one
that
creates
a
proxy
between
couplet
and
continuity.
It
just
inserts
itself
to
hijack
some
image
secrets
and
like
store
it
and
somehow
process
them,
and
it's
like
it
creates
a
weird
situation.
When,
like
half
of
sierra
api
has
been
v1
and
half
sierra
api,
we
won
alpha
2,
and
then
I
mean
when
we
only
have
v1.
D
We
also
will
have
this
situation
you,
you
cannot
predict
which
methods
support
which
are
not
so
it
creates
a
situation.
We
need
to
somehow
test
skew
of
versions.
Like
I
mean
we
can
declare
that
all
the
version
changes
must
be
backward
and
forward
compatible.
D
If
you
create
some
non-backward
for
compatible
versions
and
you
need
to
bump
the
major
version,
so
it's
like
v2
or
something
yeah.
It
also.
D: —raises the question of what the minimal supported version is. At this stage we will need to create some tests, I think — or at least run all the conformance tests against a runtime implementing that minimum. But we may not have such a runtime, so maybe we can create some proxy runtime that will only implement v1, or the earliest supported version. And I wonder, from a code-organization perspective, how hard or easy it will be to do that — to import a specific version of the CRI API alongside the full API, so you can create this proxy in the k/k repository to use in tests.
D: So let's say I'm implementing end-to-end tests, and I want to run all the conformance tests against a runtime implementing the earliest supported version of the CRI. To do that, I don't have a runtime that supports the earliest supported version, because containerd 1.5, or 1.6, may not be supported at this point. So instead of using a specific runtime, I need to implement a fake runtime.
A
I
don't
I
don't
see
where
we
will
end
up
getting
pulled
in
it's,
you
know
other
than
like.
I
I
don't
know
who's.
I
don't
know
if
we
know
that
when
we
have
a
rest
api
question,
we
go
to
apm
machinery,
we
don't
have
anything
for
grpc
right
at
this
point.
So
from
the
scenario
that
you're
saying
it
seems
almost
like,
we
need
a
binary
that
listens
on
a
stock
and
it
forwards.
A
Things
and
it'll
implement
a
bunch
of
these
interfaces,
and
you
talk
to
one
of
them
and
it'll
turn
around
and
delegate
it
back
to
a
version
of
container
d.
You
know
that
kind
of
thing,
so
I
don't
so.
I
think
it
would
be
best
left
to
signal
on
how
to
do
this
and
use
you
and
the
problems
that
you've
seen
as
a
way
to
like
write
down
what
we
should
be
doing
and
what
we
should
not
be
doing.
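A delegating test runtime of the kind sketched here could start from the published CRI bindings; this is only a rough sketch (a real CRI runtime service has dozens of methods, and the image service, error handling, and socket paths are omitted or assumed):

```go
// Sketch: serve CRI v1 on a local socket and forward each call to a real
// runtime (containerd / CRI-O). An "earliest supported version" test proxy
// could instead answer codes.Unimplemented for newer methods.
package main

import (
	"context"
	"net"

	"google.golang.org/grpc"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

type proxyRuntime struct {
	runtime.UnimplementedRuntimeServiceServer
	backend runtime.RuntimeServiceClient // connection to the real runtime
}

// Version is a pure pass-through; every forwarded method looks the same.
func (p *proxyRuntime) Version(ctx context.Context, req *runtime.VersionRequest) (*runtime.VersionResponse, error) {
	return p.backend.Version(ctx, req)
}

func main() {
	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock", grpc.WithInsecure())
	srv := grpc.NewServer()
	runtime.RegisterRuntimeServiceServer(srv, &proxyRuntime{backend: runtime.NewRuntimeServiceClient(conn)})
	lis, _ := net.Listen("unix", "/tmp/cri-proxy.sock")
	_ = srv.Serve(lis)
}
```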
A: Yeah — it was the CRI API, right? It was the CRI API, and testing internally on GKE: when you were trying to do that, you ran into a problem, and we still haven't written that down anywhere — how to do this — nor do we have a CI job, for that matter, to make sure that we don't run into exactly the same problem that you encountered. So we should probably start there.
A: Exactly. So, you need to figure out — again, we have to think about the CRI API: should we just break it out and then start doing the proxy for the CRI API, which will turn around and hand things off to a known CRI-O or containerd — that kind of stuff. That might be a good thing to shoot for in 1.25: to see if we can break things apart, add a proxy, and do some additional testing, in preparation for a newer version of the CRI API.
D
Okay,
yeah,
I
I
will
bring
it
back
to
signal
to
discuss
okay,
but
there
is
no
presence
of
well.
I
mean
I
know.
B
D
Other
six
doing
something
similar,
but
there
is
no
written
guidance.
I
think
right.