From YouTube: 20220106 SIG Arch Code Org
A
Hi everyone, today is January 6th, and this is the bi-weekly meeting of SIG Architecture Code Organization. So let's get started. I don't have any specific agenda as such, and it looks like Sergey has added one, so let's get started with that.
B
Okay, yeah. I don't know whether it deserves to be discussed in a meeting, but since it's an agenda item I'll go ahead. I will try to start attending this meeting; I want to be more involved in this. Over the holidays I was looking into different files, comparing them, especially ones that are supposed to be the same, and I have a question about this: two types.go files that are almost the same, but not exactly the same.
B
Can we do something about it? I started comparing fields, and it's not that easy, because even the ordering of the declared types is different. And I found a couple of missing fields: one types.go has a field and the other types.go doesn't have it. That doesn't sound right, and maybe we can do something about it; maybe one of them can be removed. I just don't know all the history. And the second topic is the CRI.
B
Specifically, I looked at the CRI comparison, and I wonder: if in this release we remove CRI v1alpha2, it will be okay, but if we don't remove it, we need to keep the files as closely matched as possible. So I wonder if there is any precedent, like using a tool to compare the files and keep them in sync.
C
I mean, the internal API type and the external API type are intentionally able to diverge, though in practice we keep them mostly in sync.
C
The reason they have to be able to diverge is so that if there are multiple external versions that have differences, the internal one can be, sort of, the union of the two. Also, the documentation, the godoc, on the internal type doesn't matter nearly as much as on the external one. The external ones are the ones that are generated into OpenAPI and are used by end users.
C
So,
ideally,
if
we
made
a
typo
fix
in
one,
we
would
fix
it
in
the
other
in
practice.
It's
it
doesn't
really
matter
that
much
the
the
important
thing
that
they
have
to
be
convertible
to
each
other.
That's
verified
by
ci
scripts
like
there's
a
test
to
make
sure
they're
convertible
and
verify
conversions
and
round
tripping,
so
that
is
tested,
but
the
doc,
the
documentation
and
stuff
like
that
is
sort
of
best
effort.
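The round-trip guarantee described here can be sketched in a few lines of Go. This is a toy illustration, not the real Kubernetes conversion machinery: `PodV1`, `PodInternal`, and the conversion helpers are hypothetical stand-ins for the generated `Convert_*` functions that the CI fuzz tests exercise.

```go
package main

import "fmt"

// External v1 type: the user-facing, documented shape.
type PodV1 struct {
	Name     string
	Replicas int
}

// Internal type: may be a superset (union) of all external versions.
type PodInternal struct {
	Name     string
	Replicas int
}

// Hand-written stand-ins for the generated conversion helpers.
func v1ToInternal(in PodV1) PodInternal { return PodInternal{Name: in.Name, Replicas: in.Replicas} }
func internalToV1(in PodInternal) PodV1 { return PodV1{Name: in.Name, Replicas: in.Replicas} }

// roundTrip converts external -> internal -> external; the CI tests
// assert the result equals the input for every reachable value.
func roundTrip(in PodV1) PodV1 { return internalToV1(v1ToInternal(in)) }

func main() {
	orig := PodV1{Name: "web", Replicas: 3}
	if roundTrip(orig) != orig {
		panic("round trip lost information")
	}
	fmt.Println("round trip ok")
}
```

The key property is that the internal type can carry extra fields without breaking this: a field present only internally simply round-trips to its zero value when the external version does not express it.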
A
How big are the differences that you've seen, Sergey? Is it, you know, just the comments, or...?
B
A few fields. I can try to compare them; it's a little bit of a tedious process, but yeah, I can file an issue and send a comment. Yeah.
C
Differences
I'd
be
interested
in
the
fields,
especially
sometimes
that's
intentional,
like
I
said,
because
the
internal
one
was
bridging
between
two
different
external
versions
that
had
differences
but
and
there
are
verification
scripts
so
that
when
there
are
differences
in
the
fields
they
get
flagged
during
review.
So
I
suspect
that
the
ones
that
are
there
are
intentional.
But
if
you
see
something
that
doesn't
look
right,
definitely
bring
it
up
and
we'll
check
it.
A
Right, I guess we do this manually once now, and if we end up doing this again, at that point, instead of doing a manual thing, we'll have to write some code to compare: you know, parse out the Go files and do a comparison that way. Does that sound okay?
B
Yeah, absolutely. I was just not aware of this difference and why this difference exists, but this explanation totally makes sense.
C
Yeah, it makes more sense if you're looking at a package that has multiple external versions. Like apps: it has an internal version and then, like, three external versions, and so that makes it easier to see why there are different files, and why there's the ability to have them be different. When you only have one internal and one external, it seems pretty...
A
Silly. Okay, so the next one is the CRI v1 and v1alpha2 API. I did see this briefly, but I didn't go through it.
B
Yeah, in this PR I just synced up the comments. It turned out that, with people sending PRs, we've been adding fields, a few in both files, and the comments were different from one another. So I was wondering if you want to keep them exactly the same.
A
Yeah,
I
think,
for
now
at
least
you
know,
through
whatever
we
are
doing
in
the
big
note
stuff,
I
think
they
should
be
in
sync.
There
should
be
no
differences
between
the
two,
but
I
think
over
a
period
of
time,
v1
will
be
make.
We
will
be
making
changes
in
the
v1
api
at
that
point.
So,
let's
see
once
and
then
after
that,
we
can
leave
it
to
the
wins.
A
So maybe we have to write down some policy, depending on what Jim Hawkins comes back to us with.
B
Oh sorry, yeah. I found the issue; we discussed this before. We wanted to have this policy as part of SIG Node, and I filed it specifically after a comment on Slack: the person commenting there was having trouble compiling the definitions we provide out of the gRPC files. We don't version the definitions, we just add fields, and that will be backward compatible if you consume it through gRPC services, because theoretically you can add fields freely without changing the version.
A
Okay, Jordan says we version the repos, so you can get k8s.io/cri-api at v0.22, etc. So that leads to the next question. I remember Bobby Page having trouble with...
A
Can
we
right
now
cri
api
is
in
staging,
should,
should
it
be
a
standalone
repository
so
that
then
both
kubernetes
and
you
know
c
addresser-
can
both
pick
it
from
there.
B
I think Dawn has been warning us a lot about the versioning story there, because we would need to coordinate three repositories in terms of versions, and that may be a little bit tedious. I think she gave CSI as an example, and she said that the CSI API had some difficulties with its versioning story.
A
Yeah
one
difference
between
csi
and
us
is
that
we
are
not
changing
anything
at
all
here,
because
csi,
there
was
a
working
group
and
they
were
trying
to
do
something
for
us.
It's
almost
like
you
know
it
is
static
at
this
point
and
I
don't
think
we've
made
any
changes
for
a
really
long
time.
A
The
only
change
that
I
remember
somebody
wanted
to
do
was
to
for
cubelet
to
be
able
to
query
the
cri
implementation
on
what
the
pause
image
that
has
been
configured
with
it
so
other
than
that.
I
I
don't
know
of
any
other
changes.
You
also.
A
Okay, did that need cri-api changes?
B
I think, again, the security structure was added, yeah.
A
So will we freeze the v1 API at that point, at the 1.24 boundary, and if we need any more changes, we will go to a v2 API at that point?
A
Okay, so the next one, which I think Jordan and I were talking about on Slack, was the docker distribution dependency.
A
So we tried going from docker/distribution to the other one, which is, I think, called distribution/distribution, which is a CNCF project, I guess, or somewhere else, I don't remember anymore. And that was a v3, I guess.
A
The
problem
was
this
specific
thing
that
we
were
trying
to
pull
in
the
new
one
that
moved
out
of
docker
repository
to
a
public
repository
is
pulling
in
many
more
dependencies
that
we
don't
really
use
jordan.
Do
you
want
to
add
some
more
context
here
sure
so.
C
Sure. So I think the main change was actually that the new version specifies a go.mod file, which means that all the dependencies are exposed to Go, even ones in packages that we're not using. And so, even though our use is very small, it was greatly complicating our dependency tree, like adding an additional 20-ish transitive dependencies.
C
It looks like we're only using a single package from this library, and we're only using it to parse image spec strings. That's the only thing we're using it for, so that seems like a really good candidate for just extracting the package we use and forking that one package, like the one or two functions that we use.
C
Another
good
reason
for
that
is
that
we
actually
don't
want
to
pick
up
changes
to
that
image.
Parsing
function,
if
the
distribution
distribution
dependency
made
additions
to
that
or
changes
to
that.
That
could
be
a
breaking
change
for
us
because
we
expose
this
as
api
surface,
so
we
actually
don't
want
to
pick
up
any
changes
there.
So
I
think
this
is
a
good
candidate
for
just
pulling
out
the
function
or
two
that
we
use
and
then
dropping
the
dependency
entirely.
A
So I'm typing into the doc what we talked about on Slack. The options are: a third-party directory in k/k, and the problem there is that some of the code that uses this dependency is in staging, and whatever is in staging can't depend on the third-party directory. And we couldn't find any other good place, already in staging, where we could put this stuff.
A
So
the
second
option
that
we
talked
about
was
kate,
cio,
slash,
utils
looks
like
kcio.utels.
We
already
picked
up
a
third-party
ford
repository
into
inotify,
so
we
could
do
exactly
the
same,
so
I'm
so
dim.
Let
me
add:
dimsas
on
deck,
dim,
swivel,
prototype,
so
I'll
prototype
this
and
I'll
see.
A
If
we
need
to
preserve
history
and
things
like
that
and
and
look
at
various
options,
one
one
way
to
do
this
is
just
plonk
the
the
files,
as
is
the
second
choice,
is
to
see
if
we
can,
you
know
prune
the
git
history
and
limit
it
to
just
the
files
that
modified
and
then
see.
If
we
can,
you
know
somehow
mount
that
into
the
existing
stuff.
In
case
you
utils.
So
that's
what
I
I'm
going
to
try.
A
First,
so
we'll
have
some
history,
you
know
and
we
don't
if
we
need
to
go
back
in
time
sort
of
thing,
but
our
usage
of
this
hasn't
changed
at
all
and
I
think
that
should
be
easy
to
do
so.
That's
update
from
me
on
that.
C
Is
the
next
one?
So
this
is
continuing
the
to
burn
down
the
list
of
things
that
would
block
us
from
building
kubernetes
in
module
mode.
C
So
I
looked
at
this
a
while
in
the
last
release,
we
thought
that
go
was
dropping
gopath
support
in
117,
and
so
this
seemed
pretty
urgent
that
actually
got
deferred,
partly
because
we
reported
to
them
that
a
bunch
of
our
make
files
and
code
generators
didn't
work
well
in
module
mode
or
we're
like
100
times
slower.
C
So
we
tim-
and
I
have
been
talking
with
the
go
team
and
they're-
definitely
aware
of
like
the
impact,
so
they
added
one
thing
in
go
118,
which
is
called
workspace
mode,
which
makes
it
easier
to
indicate
you
have
a
group
of
related
modules
and
makes
some
of
the
go
tooling
work
better
for
doing
things
like
querying
across
those
modules.
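Workspace mode is driven by a `go.work` file at the root of a checkout. As a sketch of what that could look like for a tree with nested modules, a hypothetical file grouping k/k's root module with a couple of its staging modules might read:

```
go 1.18

use (
	.
	./staging/src/k8s.io/api
	./staging/src/k8s.io/apimachinery
)
```

With this in place, tooling invoked from the root resolves all the listed modules together instead of treating each one in isolation; the specific paths above are illustrative, not a statement of how k/k actually configured it.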
C
We
got
some
feedback
from
the
go
team.
They
pointed
us
to
the
the
golin
tools
packages,
library,
which
makes
is
sort
of
the
next
generation
way
to
query
stuff,
that
works
well
across
module
mode
or
go
path
mode,
but
to
use
that
we
have
to
sort
of
rework
our
generators
to
work
more
in
a
batch
mode
instead
of
iterating
through
like
100
or
200
packages
and
doing
things
one
at
a
time
to
really
take
advantage
of
the
benefits
there.
We
have
to
say
here
are
the
200
packages
that
I
want.
C
Please
load
them
all
and
then
work
on
the
results,
so
tim
has
been
digging
on
that.
I
don't
know
if
he
has
a
tracking
issue.
I
I
had
the
I
can't
type
and
talk
at
the
same
time,
module
mode.
C
So
that
was
the
issue
that
I
had
opened
about:
building
in
module
mode.
I
haven't
updated
that
with
some
of
the
recent
info,
so
I
will
see
if
I
can
get
tim
to
link
to
anything
in
progress
he
has
or
or
sort
of
his
plans.
I
know
he
was
poking
at
it
over
the
holidays,
so
I
think
there
are
so
the
things
that
are
good.
It looks like Go 1.18 will help us, and it looks like there were a couple of things that Tim found that might come in with Go 1.19 that would help us more, like honoring vendor mode in workspace mode, stuff like that. So that's good. It's also good that the Go team is aware of the impact, and so I don't think they will pull the rug out from under our feet and drop GOPATH support until we've demonstrated we can successfully use module mode.
A
Yep, got it. Thank you. So there is a 1.18 beta 1 PR in k/k, to update to 1.18 beta 1 of the compiler.
A
Now
the
question
that
comes
there
is,
you
know:
do
we
know
if
we
can
coordinate
landing
that
pr
and
hoping
that
the
go
team
will
release
one
eight
by
the
time
we
need
to
ship
in
time
for
our
code
freeze
and
feature
freeze
and
whatnot
right.
C
Yeah,
I
I
think
it's
fine
to
update
to
118
beta
1..
I
think
the
timeline
for
118
is
like
in
the
next
few
weeks.
So,
okay,
that's
well
ahead
of
124
plans.
Okay,.
A
Okay, we need to codify that somewhere; I don't know where. I poked the release team and left a note saying exactly the same thing. We will have to figure out somewhere to do this.
A
Okay,
what
else
so
I
did
go
through
the
prs
and
issues
that
we
had
in
our
buckets.
They
didn't
seem
anything
that
was
gonna,
make
a
lot
of
churn
for
us,
but
there
is
one
thing
that
I'm
waiting
on,
which
is
continuity
1.6
when
that
gets
released.
You
know
we'll
be
able
to
clean
up
much
more
of
dependencies
from
kk,
so
just
waiting
on
that
there
is
a
rc
that
is
going
to
come
out
for
container
d16.
A
Once we pull in containerd 1.6, we'll be able to get rid of more things from our dependencies, because in containerd 1.6 the API is in a separate Go module, and we will depend only on the API on the containerd side. So first I'll need to update cAdvisor to containerd 1.6, and then update k/k to that version of cAdvisor, so that it transitively picks up cAdvisor's containerd version as well.
A
Thank
you
yeah,
so
that's
all
we
had
for
today
cc
oh
merrill
bambi.
Would
you
like
to
introduce
yourself-
and
you
know,
see,
tell
us
a
little
bit
about
yourself
and
anything
that
is
was
interesting.
E
To
you,
okay,
so
first
off
hi,
it's
my
first
time
joining
this
sig
meeting
a
very
new
contributor,
pretty
much
just
jumping
onto
things
trying
to
figure
out
which
one
I'm
interested
in
I'm
interested
in
the
good
level
of
things.
So
this
seemed
to
be
a
good,
a
good
place.
To
be
my
background,
I'm
a
consultant
devops
engineer.
Actually
I
work
in
a
company
in
california,
but
I'm
based
on
toronto.
E
So
my
time,
my
time
zone
is
esd,
pretty
much
been
working
with
kubernetes
for
three
years,
almost
four
years
now
and
been
working
from
startups
to
fortune
500
financial
companies.
Here
so
pretty
much
any
insurance
company
you
can
think
of
of
canada.
I've
touched
their
production
systems
and
that's
pretty
much
it
so
nice
to
be
here,
and
I
hope
I
can
contribute.
A
Absolutely
so
just
some
general
stuff
about
what
we
are
doing
here
in
this
specific
thing.
So
we
are
under
sig
architecture
and
a
sub
project
of
sig
architecture
called
code
organization
and
basically,
what
we
end
up
doing
is
we
look
at
like
you
heard
what
we
were
talking
about
right
like
this
is
a
one
place
where
we
look
across
the
whole
kubernetes
kubernetes
code
base
and
we
don't
care
which
sig
it
falls
under.
A
You
know
we
think
about
how
do
we
update
dependencies?
How
do
we
do
a
bunch
of
things
that
is
across
the
whole
code
base?
How
do
we
like
sanitate
sanity
checks
for
code
linters?
You
know
updating
golang
versions,
for
example
118
when
it
comes
out
we'll
be
thinking
about.
How
do
we
add
generics,
for
example
right
like
so
we
need
to
and
then
or
we
we
probably
tell
people
like,
don't
use
generics
until
xyz
right,
like
so
one
one
way
or
the
other
so
we'll
have
to.
A
We
talk
about
those
kinds
of
challenges,
and
you
know
so
welcome
abroad.
A
A lot of the work is grunt work, which takes a really long time, so you don't get a quick hit. Like the cAdvisor and containerd stuff that we were talking about: you know, there is a dependency chain that goes across multiple open source projects, so we end up touching other projects and telling them, hey, if you do this, it's easier for us, and it'll be easier for you; or maybe you take on more work, so it's easier for us.
A
So
there
is
negotiations
like
that
happening
and
we
will
push
pr's
to
see
advisor
container
d,
run
c
and
other
projects
as
well
depending
on-
and
you
know,
people
will
come
to
us
saying:
oh
jinko,
we
are
updating
to
the
next
version.
You
know
what
you
know.
Will
it
work
for
you
and
then
we
kick
the
tires
and
we
get
back
to
them
with
some
feedback.
So
things
like
that,
so
some
of
these
things
take
a
really
long
time.
A
So
it's
like
lots
of
patience
and
lots
of
experimentations
and
work
in
progress
so
little
by
little
over
a
period
of
time,
things
get
better.
Okay,.
A
We won't mind, so don't worry; any time you can carve out for us is good for us. We try to do most of the things on Slack.
A
We
haven't
had
a
really
good
meeting
like
this,
for
for
a
long
time
now
so
and
if
not
in
in
a
zoom
meeting,
we
end
up
providing
some
status
updates
on
on
the
slack
channel
and
usually
that
works
out
cc.
G
Yeah
hi,
this
is
cece
and
I'm
currently
work
at
google.
As
a
software
engineer
and
I
joined,
I
started
work
in
kubernetes
area
2020
and
I
started
in
cloud
providers,
a
sig
cloud
provider
and
helping
like
moving
the
external
cloud
providers
out
from
kiki
private,
and
I
went
the
2020
kubernetes
contributor
award
and
after
that
I
moved
to
the
sig
api
machinery
world.
In
last
release.
G
I
worked
in
the
cell
validation
if
you
heard
about
it,
just
the
alpha
feature
now,
but
we
hope
to
expand
it
of
course,
and
also
I
work
in
the
sikh
really
I
work
in
the
release
team
for
this
release.
I
I
will
be
acting
as
the
lead
shadow,
so
say,
hi
from
the
release
team
and
we
are
like
just
formatting
the
teams
this
week
and
fixing
the
timelines
for
the
coming
videos.
So
if
you
have
any
opinion,
please
feel
free
to
it
in
the
pr
before
it
emerged.
G
Otherwise,
I
guess
it
will
be
fixed.
Thank
you
for
raising
the
118
like
the
beta
issue
there.
We
will,
of
course,
keeping
eyes
on
that
yeah.
That's
it
actually,
this
meeting
sitting
in
my
canada
for
a
long
time,
but
it's
somehow
post
like
in
the
middle,
because
every
time
when
I
try
to
join
it's
like
seems
not
happening
and
yeah.
Thank
you
for
keeping
it
going.
I
will
try
to
like,
enjoy
it
awesome
yeah
more
often.
Thank
you.
Welcome
again.
D
Okay, see you all in two weeks. Bye, see you, thank you, bye.