From YouTube: Kubernetes SIG Azure meeting 20171213
B: Do you know, status-wise, how we could help or contribute? VMSS seems to be one thing we definitely need to move towards, not even for educating but just to get rid of availability sets, because we're getting issues provisioning big clusters, with capacity and so on. They've recommended that using VMSS with the large-cluster option would probably allow us to create bigger clusters. Now I'm trying to see if this is the right place to get the status of what's left, yeah.
A: We can definitely look into it. I think Khalid can probably go into some detail here, but part of the difficulty is with the nodes and what volumes they can or can't support, and also that you can't pick which nodes get deprovisioned when it scales down. So there are some architectural challenges in the crossover between VMSS and the way Kubernetes handles scaling. So, Khalid, over to you; you know a little bit about this.
C: Started talking while muted like an idiot. Good morning, guys. The first thing is the unique storage profile per node on a VM scale set. This is currently in a preview mode, and we're not sure yet that everything works; we've been testing it. The second important thing is that when you scale a VM scale set down, you don't have a lot of control over which node gets taken off the scale set. Both of them have a couple of guys working on them.
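(As context for the scale-set discussion: the in-tree Azure cloud provider selects its back end through the vmType field of its azure.json config file. A minimal sketch, assuming the camelCase field names used by the releases that added scale-set support; all values here are illustrative:)

    {
      "cloud": "AzurePublicCloud",
      "resourceGroup": "my-cluster-rg",
      "location": "westus2",
      "vmType": "vmss"
    }

(With "vmType": "vmss" the provider resolves node, disk, and load balancer operations against scale-set instances rather than availability-set VMs; "standard" keeps the availability-set behavior.)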
C: I have a meeting, probably early next week, where I can get more details about this and try to open up the work for everybody to see and get early feedback on it. By next meeting I should have more details on how to get more contributions and more work done on it. Does this make sense so far?
B: Where is the place I could get statuses? Let's say we put aside auto-scaling, because, let's say, I don't care about that; I just want to be able to provision a cluster with X many nodes using a VM scale set. I've seen people trying it and having success now that they added the disk support. Is there anything else from the list in the link that I pasted? It mentions things like provisioning of service resources: LB rules, LB probes, LB frontend IPs, NSG rules. Does it, really?
C: Look at it this way: every touchpoint between Kubernetes and the cloud via the cloud provider (and I would say anything that sits on top of the cloud provider as well) will need to be reviewed once we change to VM scale sets, because the way we do load-balancing rules is different, the way we do these resources might be different, and so forth, you know.
B: I mean, for sure after the Christmas holidays or New Year's we'll probably reassess and maybe put at least one person on it. So, you know, I think I'll ping you about putting together a place where we could follow up on what's left, where we can help, and whatnot, because I think it's going to be something very important.
C: Alright. And along with the out-of-tree provider, okay, which is a big, big change, we're trying to get this ready by 1.10 for everybody, because, frankly speaking, everybody needs it, along with Microsoft. Oh, and the 1.10 code freeze is somewhere in March, early March really. Okay.
A: I've got an off-site at that time, but I think we can take an hour; that's a pretty important thing. So, okay, perfect. Let me make a note on my calendar. I also have the trouble that a lot of my SIG stuff is in one calendar, the Google Calendar, and then also in Outlook, the Microsoft calendar.
A: Okay, sorry, I'm just adding this to my calendar so I don't forget it. Okay, all right. So I'll send out a notice on the mailing list to let people know that that's going to be planning time, and hopefully we'll get more of our folks from Microsoft and elsewhere to show up, because we're kind of lightly populated at the moment.
A: Okay, so a quick bit of news: I had some great meetings with people at KubeCon regarding the cloud provider breakout, so we're actually mobilizing a pretty strong group effort on that. I've got contacts now across all the clouds, so hopefully we're going to be able to do more in the working group, and it looks like people are going to try for at least some sort of alpha/beta implementations in 1.10. I would love to have us be ahead of the pack.
A: So, as Khalid mentioned, we should have a pretty good working version by the end of 1.10, and we're just trying to synchronize on the Microsoft side about what that looks like, who's working on it internally, and so forth. I'll be meeting with the Google team during the week of the off-site; I'm actually going to be staying in Mountain View just so I can facilitate some spot meetings with people at Google, to talk about how they've implemented it and some of the anti-patterns they've run into. That will ideally save us some time.
A: I also found out, and this is really good news, that if we have any trouble with the internal hosting of the repo, the Cloud Native Computing Foundation has offered to give us repos in the CNCF org and give us a hundred percent ownership and management of them. So essentially we can bypass having to do all the controls and stuff that we might have to do if we had it hosted inside Microsoft. That's sort of a plan B there, and I'm kind of on the fence: in some ways I'd like to have it under Microsoft control, but it also feels like it might be better if it's in CNCF, so we'll see how that rolls out.
A: Well, the idea is simply decoupling totally. If we did that under the Kubernetes org, it would mean you'd have dozens of cloud providers in there. I think basically they want to treat this like any other extension point, like, for example, CSI, CNI, and so forth, where the hand-off point for the Kubernetes core is the cloud provider controller. So the controller manager is basically handing off to some external repo, and they don't necessarily want that living in the community org.
A: I think it's a little of both. I mean, the last I heard, CELA, the Microsoft legal team, was reviewing the current implementation. Google is also having the same problem, where the concern is all sorts of indemnity around taking code in-house that, you know, we didn't write. So I understand why; that's a pretty unusual move, to do that kind of transfer, so they're just trying to look it over and make sure it makes sense. But if that's a blocker, then we can simply go for a CNCF repo.
A: So the good news is we're not blocked, right; we can move forward however we need to. For 1.9, I have been so out of it that I have not looked: is there anything we need to call out from all this work, anything with the Azure load balancers and whatnot, that needs to go in the release notes? No? We... oh, yes?
C: I sent you an email about this. A bunch of the guys I work with did modifications for the load balancer and NSG work, so, just to keep everybody in sync: the first thing we did is solve a bug which was leaking resources. Sometimes when you remove a service, if the cloud provider controller is under stress, it would leak resources, and then your configuration is not completely removed.
C: So we fixed that, and it's currently merged. The second thing we did is we enabled people to use multiple load balancers, and that's crucial. I don't mean internal and external for better availability; I mean people can have multiple availability sets, and then the cloud provider will use them all to expose your service to the outside world.
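(The per-service side of the multiple load balancer support: a minimal sketch of a Service manifest, assuming the service.beta.kubernetes.io/azure-load-balancer-mode annotation that the in-tree provider documented for this feature; __auto__ asks the provider to pick the least-loaded load balancer, and a comma-separated list of availability set names pins specific ones:)

    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {
        "name": "frontend",
        "annotations": {
          "service.beta.kubernetes.io/azure-load-balancer-mode": "__auto__"
        }
      },
      "spec": {
        "type": "LoadBalancer",
        "selector": { "app": "frontend" },
        "ports": [{ "port": 80, "targetPort": 8080 }]
      }
    }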
C: For people using a large number of services, this scales linearly: more services add more rules, okay. The third thing we did is the NSG rule consolidation, around the number of possible NSG rules and all of the addresses. There was a bit of discussion here, but the idea is that NSG rules are now cleaner, more concise, and consolidated: if a rule is already there, you don't need to create another rule that does the same thing, and hence it improves the NSG situation. Okay, all of this has gone in; I think people will like it.
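(For context, the rules being consolidated are ARM network security group rules of roughly this shape, sketched here with the standard ARM securityRules schema; the name and values are illustrative. Consolidation means that when an equivalent rule already exists, the provider reuses it instead of appending a duplicate:)

    {
      "name": "allow-service-80",
      "properties": {
        "protocol": "Tcp",
        "access": "Allow",
        "direction": "Inbound",
        "priority": 500,
        "sourceAddressPrefix": "Internet",
        "sourcePortRange": "*",
        "destinationAddressPrefix": "10.240.0.4",
        "destinationPortRange": "80"
      }
    }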
A: Okay. So, CJ?
C: I will bring this up; it's obviously not hidden information that we use Kubernetes in Microsoft heavily, yeah. We actually run some of the very large clusters out there, and we have discovered a bunch of things in the cloud provider, at least around throttling and the way it talks to ARM. For 1.10 we have work, of course, going through the calls to ARM and whether they are needed or not. It's every touchpoint, like the email you sent, the link you shared on that change.
C: Jack's code is perfect because it helps when the throttle happens, all right, but should we be throttled at all is the other question. Like, should we really be throttled? Then why do we have, for example... sorry, I'm hijacking the meeting, but one of the interesting things is that there's a call every two seconds to get the machine IP. Why would the machine IP change every two seconds, right? That's just how Kubernetes acts, and it's death by a thousand cuts.
B: Right, yeah. By the way, the throttling configs actually saved us. I think we're around 300 nodes now. The API could handle it; it was just that we didn't think so. As soon as we applied all those settings, even with the node status checks and everything, everything has gone fine so far. It's good!
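(The throttling configs mentioned here are the client-side rate-limit and backoff knobs in the cloud provider's azure.json. A minimal sketch with the commonly documented field names; the values are illustrative and, as noted below, may need adjusting to cluster size:)

    {
      "cloudProviderRateLimit": true,
      "cloudProviderRateLimitQPS": 3,
      "cloudProviderRateLimitBucket": 10,
      "cloudProviderBackoff": true,
      "cloudProviderBackoffRetries": 6,
      "cloudProviderBackoffExponent": 1.5,
      "cloudProviderBackoffDuration": 5,
      "cloudProviderBackoffJitter": 1
    }

(The rate limiter caps outgoing ARM calls with a token bucket of the given QPS and bucket size; the backoff settings govern retries once ARM starts returning 429s.)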
B: So that's something where it would be nice if we could get this in an easier way, like a status in terms of limitations for ARM or API requests, because right now we're sort of running in the dark; we have no clue what the current status is. You know, we applied those throttling settings, and we've been told you might need to adjust them to better match your cluster size, but we don't have visibility. The only thing we have is: oh, it fails, you're being throttled.
A: His head probably exploded, I'm sure, yeah. So, about Jack, while he's getting back: he did a lot of work on this, and frankly this is something we have to fix, because it's embarrassing that we can't support the 5,000-node limit. So yeah, we need to fix that.
G: Sorry, didn't mean to keep you waiting there; thanks for bringing up instance metadata, Khalid, yeah. What I was going to say is that the news actually got a lot better since the last time I spun this up, which was roughly 200 nodes. I mean, there are a lot of vectors, but the key vector I was concerned with was the sort of raw noise that comes out of a single node, ignoring all the subtlety that goes with building different VM SKUs.
G: I was just getting node-count effects in terms of ARM throttling, and around 200 is where things started to go south. That should be a lot better now because of instance metadata which, as Khalid said, is enabled by default, but I don't think that's been fully tested; give me a week to mess around with that.
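(The instance metadata switch is a single azure.json flag; a minimal sketch, assuming the useInstanceMetadata field from the in-tree provider:)

    {
      "useInstanceMetadata": true
    }

(When enabled, the kubelet reads node addresses and instance details from the local Azure Instance Metadata Service endpoint at 169.254.169.254 instead of querying ARM, removing a steady source of per-node ARM calls.)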
C: Also, for those running large clusters, things get compounded, like hammered really badly, when you start using services, load balancer services, and the reason behind that is that the service controller in the controller manager tends to be a bit chatty and just goes and polls the load balancer every so often to try to sync information.
C: So if you're running a large cluster, I would say 200 nodes or more, with external load balancer services, you need to be careful, because it will heat up quite quickly. Which goes back to what I was trying to say about a longer-term goal: over the next two releases we'll probably need to tackle this heavily and push well past that number; a number like 200 is nice, but not enough.
A: Interesting, too. So I've been looking a lot at Cloud Foundry, because Cloud Foundry did some interesting things with how they implemented key-value storage. They basically ran into some issues with scale that I want to fully understand; they switched the architecture of their Diego runtime environment over to SQL, and I want to find out more. And actually, this is a segue: I'm going to be, unfortunately, and I don't know how this happened, but I am now in the etcd working group. So yeah, etcd is going to become increasingly important, especially now that nobody really specifically is trying to own it, so that may be something we look at as well. Needless to say, there are some scaling issues that we're going to have to look at, and this all plays into it.
A: We need to actually start working on creating some end-to-end test suites and make those portable, and we also need to be able to publish our test results to TestGrid inside the Kubernetes ecosystem so that we can do blocking tests, because in the future I would consider some sort of correctness test for Azure e2e to be a blocking test, just like it is for GKE. So there's a lot to do there, and that segues into the next thing.
A: The reason we want to do this is because there are going to be times where we have impacting changes that need to land faster than we can manage through the community process, but a key tenet of how we're going to handle that is to do a per-milestone reconciliation. So, for example, if we can't get something in through the community within the milestone, then we're going to drop it or, worst case, carry it another milestone, but essentially we're not going to drift away from head any further than one point release. So we're basically going to make a best effort to do things through the community, and only in cases where we can't bring those patches in, you know, if they can't get through fast enough, will we apply them asynchronously. And it's all public, all aboveboard: no crazy, no shenanigans.