From YouTube: 20200611 SIG Architecture Community Meeting
A
All right, hello everyone. This is the bi-weekly code organization subproject meeting under SIG Architecture, and today is June 11th, 2020. We have a pretty light agenda. The first item is something that has been raised; I think I'm missing context on it, but I'll let them just walk through it.
B
Yeah, so before Ben starts complaining, I'll give a quick overview. What happened was we were iterating on cAdvisor. We usually pull it in just before a code freeze, typically: we make a tag on cAdvisor and then pull it in toward the end, just before the freeze. But recently what we've been doing is periodic updates of cAdvisor.
B
That's because we were trying some things out and there were major changes going in, so we were trying hard to test it, because we don't have adequate test coverage in cAdvisor itself. We have only a few operating systems and combinations of environments where things run, so we try to make the updates more frequent.
B
cAdvisor lacks enough hands to test other things as well, so the best way to do it is to throw it into the mix in Kubernetes and let people try it out everywhere they usually try Kubernetes. That was our way of opening cAdvisor up to more environments. But what happened was, when the klog request came in, the request was for structured logging, so we ended up updating a bunch of repositories to a newer version of klog.
B
Now we can't go back to a previous version of cAdvisor, so that's the loop cAdvisor got stuck in, because we found that we broke things. We did indeed break some scenarios where CPUs were not being found, plus some other assorted problems. So what happened is we basically broke kind from master running against Kubernetes master. That was the main thing, and there's a bunch of people who were waiting for the fixes.
B
We haven't been able to move to a newer cAdvisor because, for one thing, the CPU fix only landed yesterday, and there's probably one PR still in flight. So that's where we are currently.
B
So we couldn't go back to a previous version of cAdvisor, which used to work, because we updated klog in k/k. That's the whole story. Ben, do you want to add anything?
C
I would add: we discovered it in kind, but it turns out it's all Kubernetes of any sort that's broken, on any system with hyper-threading disabled. It was also broken on ARM, though I believe we did get that patch in now. So now we're running into: okay, we can't roll backwards because of the klog thing, but we also still can't roll forward because of all of the cgroup v2 stuff and, I think, gRPC incompatibilities. So I'm not sure there's any one individual problem here.
C
We've got this new shiny NUMA detection stuff, but it doesn't work on all systems yet, and we can tell from the bug fixes that we weren't really fully aware of the filesystem layout we were trying to read through in cAdvisor. So the stuff was not stable yet, it wasn't ready, and we couldn't roll it back, and we are still unable to roll forward, and it's been over a month. We should not let our dependencies get into that state.
C
I don't know how we can coordinate that better, but we've got a number of these changes going on that are all: oh, you have to roll forward, you have to do this new thing. We're changing the libcontainer interface, we're changing gRPC, we're changing klog.
C
I think there's also something of a testing issue here, in that we missed some of these, but it didn't take us very long to detect it. It's just that we said: okay, we want to get this big sweeping change in, so we don't want to roll back. But we never managed to roll forward, and we still haven't. The fix is in cAdvisor, but I was attempting to update that today, and it looks like we've got a few more organizational messes that are the new problem with that, yeah.
B
Plus, things got more complicated because we got sidetracked with the containerd/runc problem as well, so that's a whole other thread we need to unscrew. But one bright spot here is that at least we know we caught the problem early.
B
It would have been even worse if we had done what we used to do before, where we cut cAdvisor toward the end and actually shipped it, because not many people would have had a chance to try it at that point. So that's the flip side of it.
C
Well, and we're lucky there. We had a member of the Istio team who tends to test with Kubernetes head on their corporate workstation at Google, where hyper-threading is disabled, and they caught this. But it took us a bit to pin down, because it creates a strange failure mode where you have zero CPUs and nothing schedules. I think my problem is I don't understand why we got into a situation where that wasn't a simple revert.
C
A simple revert of this completely new behavior of reading CPUs, I mean. We're using a completely different Linux mechanism to detect them: before, we were reading /proc/cpuinfo, and now we're reading this whole virtual filesystem that has different directories for different nodes and things. I understand the intention to pick up the new NUMA work; I don't understand why we weren't able to, even if we couldn't, you know, revert klog. How did we get into a state where we kept just trying to roll forward for this long?
B
So I filed a revert, and David said: let's try to move forward. Okay, and Katrina was doing a bunch...
B
Typically, you know, I would do a knee-jerk revert and roll things back, but this was cAdvisor, which is not a community project as well.
D
It's a much stronger case. Those are the things that usually spawn a "we need to roll this back": it was good on day three, it was bad on day five, let's roll back to day three. Those are the ones.
C
I think we need something to cover this situation, because the fact that we somehow managed to break these particular CPU platforms doesn't seem like something we can scale testing of. I don't think we're going to run special cAdvisor tests on every CPU, with every kernel configuration, with hyper-threading on and off. I think we need a way to say: okay, we don't have a CI signal here, but we know this is broken.
B
So, two things here, right? One is to treat cAdvisor as a special project, so you know it has a different set of rules. The other one is gLinux, Ben. I don't really care about gLinux, but if there's something else in public that we broke, then I would like to add a CI job for it.
B
It doesn't matter who it is; if there is somebody else we are breaking, we add a CI job for that system. We do have that for Fedora CoreOS, but we don't have upstream CI; there's nobody in the release...
B
Right, and Kubernetes has room to roll back, right? But again, this hasn't come at a release boundary. We are not trying to make a release and rolling back; we haven't even reached code freeze, so I don't think it's a fair comparison. If this had happened after code freeze, then I would say: okay, you've got to dial things back. But we haven't reached code freeze yet, and we still don't have a broken platform, right? We haven't reached code freeze, so we haven't really broken anything.
B
We haven't broken anybody yet, anyway. But yes, in general I think we should be more careful. At this point, though, we didn't even have the choice of rolling all five projects back to klog v1 either. So that's actually what I would like to talk about.
D
The rollout is what I'm more interested in here, because I think that has lessons we can take. Putting ourselves in a position where we have no way to go forward and no way to go backward is really problematic. So: thinking about how we did the v2 rollout, and whether there are ways we could have done it differently that would have preserved our options.
B
We did have an option, Jordan, which we still do: go back to the old cAdvisor and use the klog v1 plus v2 hack that we had for making sure we don't lose the logs. So that is still on the table. I just don't want to do it, that's all.
B
cAdvisor's fix is in, but we're unable to pick it up. When was it fixed? Yesterday, Ben?
D
I think the way I would maybe have approached it is switching to a logging interface, and saying: let's have an interface that we can connect up to klog v1 or to klog v2, and that way we're not trying to coordinate five or six downstreams.
D
All at once. And once it's in, rolling back one means rolling back everything, yeah. There aren't a lot of dependencies where we control the diamond dependency and then try to propagate it out; this is the main one I'm aware of. But at least being aware of the situation, and trying to avoid getting into the same situation in the future, would be good. What were you going to say, David?
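On the diamond-dependency point: in Go modules terms, a consumer can at least pin a misbehaving dependency to a last-known-good revision while a fix is pending. A sketch (the module path is real, the versions are illustrative, and the fragment is hypothetical, not taken from kubernetes/kubernetes):

```
// Illustrative go.mod fragment.
module example.com/consumer

go 1.14

require github.com/google/cadvisor v0.36.0 // hypothetical broken version

// Pin back to the last known good revision while rolling forward is
// blocked. Note a replace directive cannot cross major versions:
// k8s.io/klog and k8s.io/klog/v2 are distinct module paths, which is
// exactly why the klog v2 migration could not be undone this way.
replace github.com/google/cadvisor => github.com/google/cadvisor v0.35.0
```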
E
I was going to say that I find the idea of having CI jobs for the CPU configurations, or whatever it is here, the various Linux flavors, reasonable. I saw a reference to ARM: if we want those things to work, I think having CI jobs for them is pretty reasonable.
E
"It's broken on my laptop" is not something that is easy to objectively consider and say: it was broken on his laptop, so we reverted. I understand, sure, different people are different, and maybe this particular person we have a lot of confidence in, but I do struggle with having this dependency. If we hadn't owned all the projects using klog, we wouldn't have tried to update the dependency this way, would we? Right? Like, we would have Jordan's on mu, but I'm confident.
C
The CPU detection thing: it's a serious regression that it doesn't work on some potential platform. And I still disagree: I do not think it is reasonable for us to try to fund and maintain CI for every possible CPU out there. But I mean, we do support them; they were functioning perfectly fine before this change. The way we detected things before was pretty robust: we were just counting the processor entries in /proc/cpuinfo. We switched to reading this under-documented virtual filesystem, and it still probably has bugs.
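As a rough illustration of the "counting /proc/cpuinfo" approach being contrasted with the new sysfs-based detection (a simplified sketch, not cAdvisor's actual code; the new path instead walks directories under /sys/devices/system/cpu):

```go
package main

import (
	"fmt"
	"strings"
)

// countProcessors counts logical CPUs the old way: one "processor : N"
// stanza per logical CPU in /proc/cpuinfo. Simplified sketch; cAdvisor's
// real machine-info code handles many more fields and architectures.
func countProcessors(cpuinfo string) int {
	n := 0
	for _, line := range strings.Split(cpuinfo, "\n") {
		// Only the per-CPU stanza header starts with "processor".
		if strings.HasPrefix(line, "processor") {
			n++
		}
	}
	return n
}

func main() {
	// A trimmed-down, hypothetical two-CPU /proc/cpuinfo.
	sample := "processor\t: 0\nmodel name\t: cpu-x\n\nprocessor\t: 1\nmodel name\t: cpu-x\n"
	fmt.Println(countProcessors(sample)) // prints 2
}
```

The appeal of this approach is that the file format is stable and uniform; the sysfs layout, by contrast, varies by architecture and topology, which is where the zero-CPU failure mode came from.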
D
If we release an ARM version, it seems like there should be a test that tests against ARM. Like David, I tend to agree about every last CPU oddity: I don't think that's reasonable, it's an infinite number of combinations. But an entire architecture? That seems egregious to me.
D
I think that conversation probably belongs more to SIG Node than to us. I think our concerns should be around what we are doing in the way we manage dependencies.
B
So the problem was, we didn't want to go back to those five or six projects where we updated from v1 to v2, roll them all back, including cAdvisor, and then go forward with the revert.
D
I think Ben is asking about where we are right now. Going forward with cAdvisor means pulling in a runc version that's mid-development; even picking up the new version of cAdvisor with the fix doesn't put us in a great place. Or, if we rolled cAdvisor back to the last known good version, that would be before cAdvisor picked up klog v2, and we could shim klog v1 to klog v2. That would...
C
I mean, it seems like it's been our call about the thing. I'm not talking about changing cAdvisor itself; I'm talking about rolling back the cAdvisor dependency, yeah.
D
I think when the cAdvisor thing was going on, we were saying we can't pick up the new fix, because this is in progress. So if there's a way for them to pick up the new fix, I think some of the SIG Node folks would have rolled it back and shimmed it. So we should give them the information about how to do that, so they can decide.
D
If that's what they want to do. And then we should consider ways to improve the rollout of things like klog v2, and whether there are different approaches, like an interface-based approach, that would mean we don't have such tight coupling between all these dependencies, right?
B
Or just use the basic logging stuff from golang.
C
There is an interesting point there about the architectures we don't test. That's probably not a discussion for this meeting, but if we are going to say that, that is a much bigger call to make, and there are many architectures for which we've never had any testing that we've released to date, and we do normally pull in fixes for them, such as s390x and PowerPC.
B
What we tell them is: it is as-is. It may work, it may not work; we are not guaranteeing that it works, other than on amd64. I think that's the approach we've taken. I'm not talking about bugs; this is a general statement of support for those architectures, right? It's use at your own risk.
B
It seems kind of... there's a balance there, right, Ben? We can't say no to some of these platforms, because otherwise they'll never be able to make progress.
B
Well, it wasn't my call, but I would have preferred to tolerate the risk. But then they had another problem which stopped them: they had some changes in golang which weren't in the version of golang that we were using, so it wouldn't have worked anyway initially.
D
We should test what we release. If someone wants to contribute a fix that allows Kubernetes to build on a platform, and it doesn't add complexity or problems, I don't have a problem with that. But changing our release scripts and saying, all right, here's the release for, you know, WebAssembly, and we're like: does it work?
D
The thing with releasing it is: if we release one version and someone consumes it and it works, they say, great, you guys support this architecture. Then we release the next version, and maybe something broke, and we don't know, because we don't have any tests. They upgrade and say: look, I was working and you broke me. That's a reasonable thing for them to say, and then we're saying: well, I don't know, we just built it, we don't know if it works.
D
So, but I agree, these are questions for SIG Node and/or SIG Release, not really us. Our place is: how are we helping the project be able to pick up fixes, or roll things back, or roll things forward? Are there things that we're doing that are causing problems, or recommendations we can make? So, Dims?
B
Yeah, so can we spend a few more minutes on the moving-forward bit, just in case that's something we could entertain as well? For moving forward, the issue right now is that we need to pick up a newer etcd, because one of the dependencies fails otherwise. So we need a newer dependency on etcd, and gRPC, to get picked up.
B
If we are able to do that quickly, then we can rev up cAdvisor. That's what I spent some time on yesterday, and that's where I ended up.
D
The "where are we now" bullet is rolling forward to the currently fixed version of cAdvisor?
B
Yeah. So on the runc thing, I pushed back majorly on updating runc to an intermediate SHA in cAdvisor, and then they came back. They looked at it, they came back and said: oh, you're not using cgroups v2 in Kubernetes, so it's not going to affect you. And, you know, they added a caveat, and in the end it wasn't my call, because the change was going into cAdvisor, not k/k. Sure.
C
Isn't that also going to...?
B
Yeah, so that's what the PR that I pointed you to, from GCP, updates.
B
So, Ben and Jordan, between the two of you, can you talk to David, please? Yeah, because that's where this needs to get worked on a little bit more.
B
We have tracking issues for some of them, and people have been promising stuff and haven't been showing up. So if we want to remove architectures, then we need to start the conversation to get rid of them. On the same side, you know: Windows, right?
B
That's a can of worms, yeah. I'm happy to, you know, start that.
B
In the same way, somebody brought this to my attention: look, Dims, we don't even do any conformance testing for Windows nodes. Why is that even something that we are allowing as a project? Why do we have an architecture where you can't run conformance tests with only Windows nodes in a Kubernetes cluster, you know?
C
You asked how manageable it is to get CI with hyper-threading off. I'm not sure if that's a thing that any provider lets you toggle, and I don't know if we have access to any that have it off, but I'll look around. Sounds good.
A
Yeah, I could do that after the call; we don't have to wait around.