From YouTube: Kubernetes SIG Azure 2019-07-10
A
Hello, hello, everyone. This is the July 10th edition of the SIG Azure meeting. This is a meeting that is recorded and will be available on the internet, so please be mindful of what you say; please be sure to adhere to the Kubernetes code of conduct and, just in general, be excellent to each other.
A
There was an advisory that went out for people who are on Macs that Zoom has installed a, like, secret web server. What that does... it does not remove itself and kind of runs forever, even if you happen to uninstall Zoom, so that leads to lots of nice, fun security issues. Anyhow... yeah, yeah, I also don't use a Mac, so I'm fine!
G
Well, we have breakage right now; I'm still trying to swim upstream. So I should have been aware of this, but unfortunately people told me about it. It boils down to: we made a change so that now we require apps/v1, not apps/v1beta, and so, for some Deployments and DaemonSets, we had to catch up with that change. I didn't realize that the selector was no longer implicit, so we had to catch up with that change; that hasn't been merged.
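For reference, the shape that change requires: under apps/v1 the selector is mandatory and no longer defaulted from the pod template labels. A minimal sketch (names here are illustrative):

```yaml
# apps/v1 Deployment: the selector must be spelled out explicitly;
# the older apps/v1beta1 and apps/v1beta2 APIs defaulted it from
# the pod template labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example            # illustrative name
spec:
  replicas: 2
  selector:                # mandatory in apps/v1, immutable once set
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example       # must match the selector above
    spec:
      containers:
      - name: example
        image: nginx:1.17
```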
G
True. The other big issue for us is, you know, a lot of labels under kubernetes.io are no longer allowed and are causing problems, and we've been using those for a while by convention, kind of the same way kubeadm does, so we thought, you know, that was legit. It turns out they're causing errors now, and we're kind of scrambling to find a fix for that, because it seems like just not using the labels is not going to be adequate for us, given how deeply we're using them... using them incorrectly there.
G
The intent is that we're not supposed to be using labels for scheduling like this, but anyway. So those are two points of breakage, I think. There's also something with the metrics server that just cropped up and then faded out, that I don't really have a handle on, but probably the labels is the thing that's creating the job for most of us, so I'm trying to fix that now.
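For illustration, the usual shape of the fix (label names here are hypothetical) is moving custom labels off the reserved kubernetes.io prefix onto a prefix the provider owns:

```yaml
# Hypothetical before/after: recent releases reject unknown labels
# under the reserved kubernetes.io prefix, so custom labels move to
# a provider-owned prefix instead.
metadata:
  labels:
    kubernetes.io/role: agent        # rejected: reserved prefix
---
metadata:
  labels:
    node.example.com/role: agent     # allowed: owned prefix
```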
A
Let's see, if I put it in... okay, page is frozen, super. Anyhow, there were changes to our SDK for Go as well as go-autorest. In, I believe, eleven dot four dot something, maybe, of go-autorest, they introduced a dependency... a new dependency on opencensus-proto, and that has basically, like, torn us up, right. I don't think that we are able to update our SDKs right now for either Cluster API for Azure or Kubernetes... Kubernetes, right, because essentially what happens is that an opencensus-proto update leads to a protobuf bump, which is a protobuf bump for the entire project, right. So I filed... I filed... and the main thing, it's totally broken now. I filed an issue in go-autorest saying that I need to gather more data, but essentially, like, we're locked to versions right now, right, which is bad because we're locked to old releases of the Azure SDK. Then we need to be... In fact, I think the last bump that we did was a... was a bump for the network... network.
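A common workaround for a transitive break like this, sketched here with illustrative versions rather than the actual ones in play, is pinning the offending module in go.mod:

```
// go.mod fragment (illustrative versions): hold the transitive
// opencensus-proto dependency at a known-good release so it cannot
// drag in a protobuf bump for the whole project.
replace github.com/census-instrumentation/opencensus-proto => github.com/census-instrumentation/opencensus-proto v0.1.0
```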
A
All right, so we'll move into... say, that's something to kind of... kind of track for the long term. We're... I mean, I think that if we can fix it on the Microsoft side, that's probably more advantageous, because, well, what it's sounding like to me, at least, is that we're going to be stuck in-tree for a little bit until they detangle some other blockers. So there are some blockers around... so, again, now we're shifting into the Kubernetes 1.16 work.
A
So we have three things on... three things on the plate for 1.16. Azure availability sets will be going GA, as well as cross-region, cross-resource-group nodes going GA, so just beefing up some of the testing on that stuff, making sure everything is good to go. There should be a lightweight... lightweight graduation for those two. The... the bigger one is tracking for the out-of-tree cloud controller manager, so that is still in beta. Some of the things that are blocking us right now around that: there's an API throttling issue.
A
So when node initialization happens, especially for larger clusters, it would be nice to have some protections around how we... how we submit those to the Kubernetes APIs. So, I don't know... I will speak very vaguely about that, because I don't know the entirety of the problem, but it is linked in that blocker section that's in the agenda. There's also the credentials provider, which I think should be a lot simpler to do, and we're going to kind of action on that, if I'm correct, right, for... for 1.16.
A
The CSI... the... the volume plugins: so Azure File and Azure Disk have CSI plugins. There are still some entangled bits in... in the Kubernetes... Kubernetes code that have to deal with the... the in-tree cloud provider, the in-tree volume plugins, so those are still wired up. So for us, for the in-tree... for the in-tree cloud provider to use the CSI plugins, we have to untangle some of that stuff. I don't know what the exact progress is on that, but I did ping... ping Pengfei, and I...
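The untangling described here is what the upstream CSIMigration feature gates cover; a sketch of switching them on in a kubelet configuration (these gates were alpha around Kubernetes 1.15):

```yaml
# KubeletConfiguration fragment: route the in-tree Azure volume
# plugins to their CSI drivers once migration is enabled.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigration: true
  CSIMigrationAzureDisk: true
  CSIMigrationAzureFile: true
```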
A
Perfect, yeah, and he... he recently swept all the issues. There are a few more issues for tracking within the kubernetes-sigs cloud-provider-azure repo, so yeah, feel free to check them out there as well. All right, so the big task that's coming up is the... so this is more administrivia for myself and, I mean, Craig and Cal and Pengfei: that the... that SIG Azure will be folding under SIG Cloud Provider as a sub-project, right. So I mocked up a quick, like, phase zero.
A
Of these... these phase zero things are going to be, like, quick repo PRs, so I can... I can handle them; shouldn't be too bad. The last few weeks we've had, like, release engineering stability issues for Kubernetes... Kubernetes, so I've been working on stabilizing that stuff: so, lots of failing tests, lots of things with, like, our push and build process. So that's cleaned up, so now I'm kind of, like, freer to work on the Azure stuff.
A
But essentially we want to make sure that people who come in to SIG Azure, or are interested in joining these meetings and participating in general, they know where to go, right. So the first place they would check is kubernetes/community, right. So: small cosmetic changes, they're basically putting up a banner saying that we're moving... from... we are no longer SIG Azure.
A
We are now the Azure... the provider-azure sub-project of... of SIG Cloud Provider. Also making sure that we migrate the owners and the projects and sub-projects that are listed in the Azure... in the Azure landing page over to the cloud provider's page. From there, there are also GitHub teams that are defined within our Peribolos config on the kubernetes org. So Peribolos is this tool that we use across Kubernetes to essentially manage the config of the... the GitHub orgs, right. So that includes users...
A
Teams... ah, I was just throwing out some... some terms that no one's heard of, right. So that's... that's users, teams, the permissions that the teams have, what you can do on the org level, if you can create projects, different things like that: all of that is contained in a set of YAML configs in the kubernetes org.
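For a sense of what those YAML configs look like, a rough Peribolos-style fragment (team name and member are illustrative, not real entries):

```yaml
# Peribolos org config fragment (illustrative values): the tool
# reconciles the GitHub org's teams and membership against this file.
teams:
  provider-azure-admins:            # hypothetical team name
    description: Azure provider sub-project admins
    privacy: closed
    maintainers:
    - some-github-handle            # illustrative member
```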
A
So... so the chairs and technical leads will become sub-project owners, right. So sub-project owners are supposed to... sub-projects basically dictate ownership of code, right. So, as such, we should make sure that the SIG chairs and technical leads are owners of all the code. I think that's mostly true today, but this is just, like, a sanity check. Great.
A
We've essentially... we've essentially split ownership with them, or they've been, like, secondary owners during this kind of, like, stand-up time for... for Cluster API Azure, but now that we're shifting over to a sub-project, it makes sense to shift that into... into their governance model as well. So there are a bunch of things I haven't quite figured out about this, because I think this is the first time that the Kubernetes community is actually doing this.
A
We're, like, migrating sub-projects, spinning down a SIG, and, you know, and all of the cruft that's involved in, like, cleaning up mailing lists and making sure our owners are in certain places, and then, like, little things like... so I'm calling all the previous stuff phase zero, right. So phase zero, like, let's hit all the... the critical points, which I think, covering the OWNERS files and making sure people know where to go, is the first step; from there, hitting more of the user-facing changes.
A
If this thing looks like a Git repo... then, you know, change this from this, change this to this, and then there are... there are also rewrites for Go imports, right, so that part should be handled. And then the... the second part of that is also how links change, right. So, if you're aware, when you change a repo in GitHub, as long as you only change it once, it will keep a redirect around from the old repo to the new, right. So between the vanity URL...
A
So we have the setup for Kubernetes and the redir... and the redirects that we have for imports... for imports and... sorry, the vanity URLs we have in Kubernetes, for example k8s.io and sigs.k8s.io, as well as the GitHub automatic redirect: we should... we should be fine there. Okay, I'm not super concerned.
A
The future, yeah... that's... that's kind of the answer right now. So it's... it's supposed to be at some point... I think it was 1.16; I think right now it's 1.18, based on the blockers that we have right now. I don't know... I don't know when that's actually going to happen. I think that API throttling issue needs to be solved first until they can start... until they can start to untangle some of that stuff.
A
It's 4 p.m.... it's... it's 4 p.m. Eastern, so 1 p.m. on... on Pacific, if you were there. Okay, I may or may not be able to attend, so having coverage there would be great. I will... there are... so, I mean, and then little things like the channel rename, so we're going from sig-azure to provider-azure on Slack; mailing list renames; it will be... I don't know.
A
The community updates: I think what we... I think what we landed on was having a combined community update, right. So usually what happens is three SIGs will do their updates for that week. I think what we can end up doing is, when the SIG Cloud Provider update happens, we just... we all pile on, right. Every cloud provider will have a... a section, and that will be the only SIG that does the update for that week. I think that's what the plan was, but I can clarify with Andrew later, and then, outside of that, I...
A
So you can take a look at that, and the... so I think, once we nail down some of this stuff, we need to circle back and talk about the meeting time, because this is a lowly attended meeting. It is nice to see Microsoft faces pretty consistently, and a Red Hat face now, but this is a... it's not a well... well-attended meeting, and also some of the technical talent that we have, like Pengfei and Andy, are in... you know, are in time zones where this is not convenient, right. So we need to do...
A
So we, you know, we try our best to let people know when the meetings were canceled, shoot out a quick email or something, but I think, between, you know... the last few months have been KubeCon, release breakages, release team cuts, things like that... that... that have stolen attention. I think Craig was also... you were at... so we were at... at Barcelona and Shanghai, so kind of a lack of availability over the last few months. So we will... we'll be more present about that.