From YouTube: SIG Azure Community Meeting 11-1-2017
A: Good morning, everybody. This is the SIG Azure community meeting; it's Wednesday, November 1st, 2017. I'm your moderator, Jaice Singer DuMars, and I work at Microsoft. If you want to follow along after the fact, the agenda and notes will be available at http://bit.ly/sig-azure. I bit my tongue a couple days ago, so I might sound like I'm not speaking clearly; my apologies in advance.
A: All right, I'll give an update on that myself. Essentially, most of the work has been done from a technical perspective to pull us out; we're currently working with Microsoft legal to look at what is required around the licensing and whatnot of the code that is porting over to Azure. So essentially we're waiting on the legal team there to get all of that lined up. The delivery date that we're looking for is by the end of the year.
B: I can talk to that, since [inaudible]. Right now, the version out there is conformant, except with regards to the kubectl proxy feature. Due to the way ACS specifically implements the API server [inaudible], the team has worked together and there is another binary that carries the better implementation, one that will be conformant. We've already built and tested that internally, and it now passes conformance. So it should be soon; I would say weeks.
B: So, a big discussion has happened over the past few weeks with regards to the Azure cloud provider and its implementation of networking-specific features, specifically the LB. We have confirmed that we have a problem with regards to the load balancer implementation. This problem happens only when the cluster is under stress in terms of the number of services it tries to expose concurrently to the world via the Azure load balancer. The problem manifests as dangling resources on the load balancer itself.
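The dangling-resource symptom described here is essentially a reconciliation gap: load balancer rules created for a Service can outlive the Service when a concurrent update fails partway. A minimal sketch of the cleanup idea, using simplified made-up types rather than the actual Azure SDK or cloud-provider code:

```go
package main

import "fmt"

// LBRule is a simplified stand-in for an Azure load balancer rule;
// the real cloud provider works with Azure SDK types instead.
type LBRule struct {
	Name string
	Port int
}

// splitDangling separates the rules some Kubernetes Service still wants
// from dangling ones left behind by failed or interrupted reconciles.
func splitDangling(existing []LBRule, wanted map[string]bool) (keep, dangling []LBRule) {
	for _, r := range existing {
		if wanted[r.Name] {
			keep = append(keep, r)
		} else {
			dangling = append(dangling, r)
		}
	}
	return keep, dangling
}

func main() {
	existing := []LBRule{{"svc-a-80", 80}, {"svc-b-443", 443}, {"orphan-8080", 8080}}
	wanted := map[string]bool{"svc-a-80": true, "svc-b-443": true}
	keep, dangling := splitDangling(existing, wanted)
	fmt.Println("keep:", keep, "delete:", dangling)
}
```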
B: We've also observed behavior where the network security group rules, or NSGs as we refer to them, are created in a way that consumes a lot of resources, and there can be better ways to implement this by just collapsing them. By collapsing them I mean: if a rule doesn't need to be there, then it should not be there. The same feature set from the user perspective, similar capabilities from the user perspective, but implemented more cleanly on the underlying platform.
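As a rough illustration of the collapsing idea (the rule shape below is invented for the sketch; real NSG rules have many more fields), a rule that doesn't need to exist can simply never be created by deduplicating first:

```go
package main

import "fmt"

// nsgRule is a simplified stand-in for an Azure network security group rule.
type nsgRule struct {
	Protocol string
	DestPort int
	Source   string
}

// collapseRules drops exact duplicates, giving the same user-visible
// behavior with fewer resources created on the underlying platform.
func collapseRules(rules []nsgRule) []nsgRule {
	seen := make(map[nsgRule]bool)
	var out []nsgRule
	for _, r := range rules {
		if !seen[r] {
			seen[r] = true
			out = append(out, r)
		}
	}
	return out
}

func main() {
	rules := []nsgRule{
		{"TCP", 80, "Internet"},
		{"TCP", 80, "Internet"}, // duplicate created by a second service
		{"TCP", 443, "Internet"},
	}
	fmt.Println(collapseRules(rules))
}
```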
B: The third problem we observed: right now, from a cloud perspective, Azure recognizes machines as groups, availability sets or scale sets. At the end of the day, it's just a group that represents a set of machines that acts as a scale unit. The platform will make sure that a fault doesn't touch all the machines at once; it just touches some of the machines, not the rest. When a cluster runs with multiple of these availability sets, each availability set essentially gets a load balancer right now.
B: If a user runs a cluster with multiple availability sets and starts exposing services, all the services are exposed on the first availability set, not on all of the availability sets, and not on, say, the least-utilized set in terms of balancing routing rules or load-balancing rules. So that's another thing; again, it happens only in fairly complex, active clusters. So those are the three behaviors. There is a small team actively engaged on the three problems, and we expect to fix them by 1.9.
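A toy sketch of the selection policy being described, spreading new load-balancing rules to the least-utilized availability set instead of always using the first one (the set names and counts are illustrative only):

```go
package main

import "fmt"

// pickLeastUtilizedSet chooses the availability set with the fewest
// load-balancing rules already attached. ruleCounts maps availability-set
// name to current rule count; ties are broken arbitrarily because Go map
// iteration order is not deterministic.
func pickLeastUtilizedSet(ruleCounts map[string]int) string {
	best, bestCount := "", -1
	for name, n := range ruleCounts {
		if bestCount == -1 || n < bestCount {
			best, bestCount = name, n
		}
	}
	return best
}

func main() {
	fmt.Println(pickLeastUtilizedSet(map[string]int{"as-1": 12, "as-2": 3, "as-3": 7}))
}
```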
B: We have an internal code freeze around the 15th; I expect it to slip by a day or two, but we should be able to reach 1.9. The last behavior, again with regards to the Azure cloud provider (it's not really the cloud provider, but the scheduler), has a reference to it somewhere in the PRs; please review over there. The scheduler right now, when you say "I have a pod that has disks", Azure disks to be specific...
B: The scheduler doesn't recognize the fact that Azure has different VM sizes that allow different types of disks and different numbers of attached data disks. A small VM size usually gets four or eight disks; large VM sizes get up to 64, if I remember correctly. The problem happens when the scheduler assigns the pod to a VM that doesn't have a free slot for a new data disk.
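A minimal sketch of the missing scheduler check, assuming an illustrative per-VM-size disk-limit table (the sizes and counts below are examples, not an authoritative Azure list):

```go
package main

import "fmt"

// maxDataDisks maps Azure VM size to the maximum number of attached data
// disks. The entries here are illustrative; real limits come from Azure.
var maxDataDisks = map[string]int{
	"Standard_D1_v2":  4,
	"Standard_D4_v2":  32,
	"Standard_D15_v2": 64,
}

// fitsDiskCount is the predicate the scheduler lacks today: only admit
// the pod onto a node whose VM size still has a free data-disk slot.
func fitsDiskCount(vmSize string, attached, requested int) bool {
	limit, ok := maxDataDisks[vmSize]
	if !ok {
		return false // unknown size: refuse rather than risk over-attaching
	}
	return attached+requested <= limit
}

func main() {
	fmt.Println(fitsDiskCount("Standard_D1_v2", 4, 1)) // false: no slot left
	fmt.Println(fitsDiskCount("Standard_D4_v2", 4, 1)) // true
}
```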
B: The other problem the scheduler has is the fact that Azure recognizes tiered storage. You have the standard storage, where you get, say, X I/O, and then you have premium, which is, let's say, 10X I/O plus other performance features. The catch here is that this doesn't work on all the VMs; you have to have a premium-capable VM. So again, the scheduler can place the disk on the wrong VM.
B: You simply cannot put a premium disk on a VM that's not premium level, and the only way to correct that, of course, is again to delete the pod and hope for the best. The third scheduler issue is managed versus unmanaged disks, and I'm quite sure everybody is aware of it: only managed VMs can take managed disks, and the other way around.
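Both compatibility rules can be captured in one placement check. A sketch with simplified node and disk descriptions (not the real scheduler or cloud-provider types):

```go
package main

import (
	"errors"
	"fmt"
)

// node is a simplified view of what the scheduler would need to know
// about an Azure VM before binding a pod with disks to it.
type node struct {
	PremiumCapable bool // e.g. DS/GS-series VMs
	ManagedDisks   bool // VM was created with managed disks
}

type disk struct {
	Premium bool
	Managed bool
}

// canAttach encodes the two rules from the discussion: premium disks need
// a premium-capable VM, and managed disks only go to managed-disk VMs
// (and vice versa).
func canAttach(n node, d disk) error {
	if d.Premium && !n.PremiumCapable {
		return errors.New("premium disk on a non-premium VM")
	}
	if d.Managed != n.ManagedDisks {
		return errors.New("managed/unmanaged disk mismatch")
	}
	return nil
}

func main() {
	fmt.Println(canAttach(node{PremiumCapable: false, ManagedDisks: true}, disk{Premium: true, Managed: true}))
}
```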
C: [inaudible question about labeling nodes or using an admission controller instead]
B: The only reason I would push back against labeling, sorry, against an admission controller, is that the admission controller doesn't really know about the number of disks currently attached to the machine, because it's an interesting race condition: a pod can pass the admission controller while the machine has three disks, but by the time we place it on the VM [inaudible]; all right, we placed it on the only VM that can take this disk because it had three disks.
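The race being described is a time-of-check versus time-of-use problem: an admission-time disk count can be stale by binding time. A small sketch of the difference, contrasting a stale count with an atomic reservation taken at placement time:

```go
package main

import (
	"fmt"
	"sync"
)

// vmSlots tracks attached-disk slots with an atomic reservation, which is
// what an admission-time check cannot give you: between admission and
// binding, another pod may have taken the last slot.
type vmSlots struct {
	mu       sync.Mutex
	attached int
	limit    int
}

// reserve succeeds only if a slot is still free at the moment of binding,
// closing the window the admission controller would be exposed to.
func (v *vmSlots) reserve() bool {
	v.mu.Lock()
	defer v.mu.Unlock()
	if v.attached >= v.limit {
		return false
	}
	v.attached++
	return true
}

func main() {
	vm := &vmSlots{attached: 3, limit: 4}
	fmt.Println(vm.reserve()) // true: takes the last slot
	fmt.Println(vm.reserve()) // false, though a stale admission-time count would have said yes
}
```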
B: The way I wanted to approach this, and to tell you the truth it will mean modifying GCE-specific code, is one unified disks predicate that sits on top of a ConfigMap in the system namespace that says disk type, node type, and so on; all of that logic can be represented in configuration. Because the GCE guys will probably add new VM types, with different disk types and different disk counts, and we will on our side too.
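A sketch of what such a configuration-driven predicate could look like; the JSON payload stands in for the proposed ConfigMap, and every node type, disk type, and count in it is invented for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// diskLimits is the kind of data the proposed ConfigMap in the system
// namespace would carry: per node type, a disk type -> max count table.
type diskLimits map[string]map[string]int

// configMapData stands in for the ConfigMap payload; a real predicate
// would read this from kube-system via the Kubernetes API.
const configMapData = `{
  "Standard_D4_v2": {"azure-disk": 32},
  "n1-standard-4":  {"gce-pd": 16}
}`

// fits is the single, provider-agnostic predicate: no GCE- or
// Azure-specific code, just configuration lookups.
func fits(limits diskLimits, nodeType, diskType string, attached int) bool {
	byDisk, ok := limits[nodeType]
	if !ok {
		return false
	}
	limit, ok := byDisk[diskType]
	return ok && attached < limit
}

func main() {
	var limits diskLimits
	if err := json.Unmarshal([]byte(configMapData), &limits); err != nil {
		panic(err)
	}
	fmt.Println(fits(limits, "Standard_D4_v2", "azure-disk", 31)) // true
	fmt.Println(fits(limits, "n1-standard-4", "gce-pd", 16))      // false
}
```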
C: There are also what I would call much smaller-scale polish features from great customers, things like "hey, we'd really like to have a public IP that is outside of the resource group", like bring-your-own public IP outside of the resource group of the cluster. There's a bunch of little cleanups like that. I think if people have that kind of pain, this small-scale pain, please throw it into the doc, and we'll just try and knock those sorts of small fixes off also.
A: There is a backlog, but it hasn't been updated in a while, because the engineering around this is very distributed, so it didn't make sense to try and keep it as a special backlog. We probably need to talk about how we do that as a group; there should be a SIG Azure-specific backlog so that anybody can contribute.
A: Cool, all right. So, PRs needing review. We're not going to go through these in this meeting, but I'm pointing them out to you, and I definitely need people to read these PRs to figure out what's going on and how we move these forward. Most specifically 47849: it doesn't look like anybody from Azure has looked at it, so please take a look at that and see what's going on there. 54177: we need to get this over the finish line. Same with 54674.
C: A couple of the cherry-picks into the 1.7 branch, I think, just need Anthony to bless them, to LGTM them. They look totally non-controversial, but if you can just ping Anthony and get him to cherry-pick-approve the original PRs, then those are ready to go; they're LGTM'd, they just need cherry-pick approval on the original PR. Okay.
A: Okay, lastly, release updates. The 1.9 feature freeze has already happened. Code slush begins on 11/20, with code freeze formally starting on 11/22, so please target your PRs to be reviewed and ready to go by 11/22. So start that process, the milestones and labeling and all that stuff, by 11/xx at the very, very latest. 1.8.2 is out; we don't have an ETA on 1.8.3 yet, but hopefully that'll be out by next Thursday at the latest. And no updates on the 1.7 branch.
E: So, a new release of ACS Engine will include 1.8.2 and, I believe, 1.7.9. Let me just check my draft... yeah, 1.7.9 and 1.8.2 will be going out with the next minor release of ACS Engine. I'm gonna try to put that through. We've been kind of a little passive on that because of all the excitement around AKS, and I didn't want to introduce more moving parts, but that's ready to go, and we'll deliver those two Kubernetes releases, among some other fun changes. Okay.